Radiotherapy is one of the main treatment methods for cancer, and the delineation of the radiotherapy target volume is the basis and premise of precise treatment. Artificial intelligence technology, represented by machine learning, has been the subject of extensive research in this area, improving the accuracy and efficiency of target delineation. Following the procedure doctors use to delineate the target volume, this article reviews the applications and research of machine learning in medical image registration, normal organ delineation, and treatment target delineation, and gives an outlook on development prospects.

Based on the cancer incidence and mortality information provided by the International Agency for Research on Cancer in GLOBOCAN 2020 [1], there were an estimated 19.3 million new cancer cases worldwide in 2020 (18.1 million excluding non-melanoma skin cancer) and nearly 10 million cancer deaths (9.9 million excluding non-melanoma skin cancer). The number of new cancer cases worldwide is expected to reach 28.4 million by 2040, a 47% increase from 2020. Malignant tumors will surpass all other chronic diseases and become the “number one killer” threatening human life and health. Radiotherapy is one of the main treatments for malignant tumors. Its principle is to use high-energy ionizing radiation to kill tumor cells. About 60%-70% of tumor patients need to receive radiotherapy. According to statistics, the current average progression-free survival rate of malignant tumors is about 55%, of which radiotherapy contributes 40% of the tumor cures [2], and its therapeutic effect has been widely recognized in clinical practice. The rapid development of artificial intelligence, represented by machine learning, can be applied to all aspects of the clinical practice of radiotherapy [3-6], making radiotherapy decision-making more simplified, individualized, and precise, and improving the automation of the entire radiotherapy process. The precise determination of the radiotherapy target volume is the basis and premise of precision radiotherapy. Automatic delineation of the radiotherapy target volume based on machine learning is essential in the research of artificial intelligence in the field of radiotherapy, as it greatly improves the efficiency and accuracy of target volume delineation [7]. This article reviews medical image registration, normal organ delineation, and treatment target delineation.

Recently, with the development and progress of medical and computer technology, radiotherapy has entered a new era of precision radiotherapy, and more and more precision radiotherapy technologies have entered clinical tumor treatment practice. Precision radiotherapy plays an increasingly important role in improving curative effect, delaying disease progression, and improving prognosis and patients’ quality of life [8, 9].

Radiation technology has been used to treat tumor patients since the 1930s [10], and medical linear accelerators came into widespread use in the 1960s [11]. However, X-ray simulation was used for tumor localization during radiotherapy in this period: the doctor obtained the location of the tumor from the patient's fluoroscopic image, marked the irradiation range on the patient's body surface according to the localization image, and performed treatment through the body-surface projection field. Because the tumor and normal tissue could not be clearly distinguished and the uniformity of the radiation dose distribution was poor, it was easy to miss the tumor or to irradiate normal tissue with a higher dose, resulting in lower cure rates and more complications. In 1959, Takahashi et al. [12] proposed the concept of three-dimensional conformal radiation therapy (3D-CRT). The prototype was based on the three-dimensional morphological structure of the tumor, using lead blocks to shape the field from multiple radiation directions so that the irradiated area matched the shape of the tumor target while reducing the radiation dose received by the blocked area. In the 1970s, the widespread application of computer systems and the emergence of computed tomography (CT), magnetic resonance imaging (MRI), and other equipment moved radiotherapy into three-dimensional space, enabling 3D-CRT to be realized.

In recent years, three-dimensional digital precise radiotherapy technology has gradually replaced traditional two-dimensional radiotherapy and has become an important development direction of tumor radiotherapy in the 21st century. It focuses on precise positioning and precise treatment, performing conformal or intensity-modulated radiotherapy at the three-dimensional level through dose fractionation, so that the irradiation dose inside the target lesion is maximal, the dose to the surrounding normal tissue is minimal, and the dose distribution is uniform; it offers high precision, high efficacy, and low damage [13]. In addition to 3D-CRT, currently recognized precision radiotherapy techniques include stereotactic body radiotherapy (SBRT), intensity-modulated radiotherapy (IMRT), and image-guided radiation therapy (IGRT). The technical system of precision tumor radiotherapy is gradually being perfected, and treatment accuracy continues to improve.

At present, precise radiotherapy proceeds as follows: first, anatomical images of the patient on the treatment couch are obtained by simulated positioning; then the doctor manually delineates the target volume and organs at risk; next, parameters such as radiation dose, number of fields, and field angles are configured to generate a radiotherapy plan suited to the shape and dose of the tumor target. Finally, after the plan is verified, treatment can be carried out. Among these steps, target delineation is the core work of radiotherapy physicians. Accurate target delineation is the premise and a crucial step of precise tumor radiotherapy, and the quality of delineation has a great impact on the treatment effect and the occurrence of complications [14]. If the treatment target volume is too large, it increases the radiation dose received by the surrounding organs and thereby the probability of complications [15]. Conversely, if the tumor area is not completely covered, the dose will be insufficient to kill all cancer cells, greatly increasing the possibility of recurrence after treatment [16].

Currently, the therapeutic target volumes that need to be manually delineated by radiotherapy physicians mainly include the gross tumor volume (GTV), which is visible on the image, and the clinical target volume (CTV), which is delineated based on knowledge of tumor pathology, the range of tumor invasion, and lymph node metastasis pathways. In addition, the organs at risk (OARs) within the irradiation range also need to be accurately delineated to avoid over-irradiation of the OARs, which causes serious side effects and complications of radiotherapy [17]. The quality of the above delineation of the therapeutic target volume and OARs depends entirely on the professional knowledge and experience of the doctor, so certain errors will occur. Moreover, radiotherapy physicians delineate these large-scale structures manually, layer by layer, at a very high time cost. With the development of artificial intelligence technology, deep learning methods based on big data from radiotherapy patient images can automatically delineate the therapeutic target volume and OARs. Speed and accuracy are greatly improved, which helps reduce the workload of doctors and the uncertainty of manual delineation, further improving the precision of radiotherapy [18, 19].

As the main approach in the field of artificial intelligence, machine learning can be divided into supervised learning, unsupervised learning, and semi-supervised learning, which combines the two [20-22]. In the field of radiotherapy specifically, supervised learning is mainly used [23]. Combining multiple simple machine learning models into an ensemble learning model with better performance allows a combination scheme to be designed for a specific machine learning problem to obtain a better solution [24]. Neural networks are a form of machine learning inspired by the way the brain works, referencing the connection structure of neurons [25-27]. When a neural network has many hidden layers, it is called a deep neural network. Deep learning methods use deep neural networks to solve various classification and prediction problems. Compared with traditional machine learning methods, deep learning methods can automatically learn features from data and avoid manual feature selection. The accumulation of large amounts of data and improvements in hardware computing power have made deep learning methods increasingly common in the medical field, where they have shown better performance than traditional machine learning methods [28-31].

The CT number of a voxel is linearly related to the electron density of the corresponding tissue, so CT images can be used directly to calculate the radiation dose, and CT has become the most commonly used positioning modality in radiotherapy. CT is well suited to observing bone and lung tissue, while MRI provides better soft-tissue contrast, and PET images can indicate regions of strong metabolism. Therefore, multimodal image registration is often used in clinical assessment of disease. Medical image registration seeks the optimal spatial transformation between a source image and a target image so that all feature points, or at least all corresponding points of diagnostic significance, on the two images are matched, providing doctors with richer clinical information. Common registration methods include rigid registration and non-rigid registration.

3.1 Rigid Registration

Rigid deformation can be described by a small number of transformation parameters. In the field of radiotherapy, rigid registration is very common and highly accepted; clinicians use this transformation to fuse images of different modalities and obtain more information about regions of interest. The method aligns two images by finding the rotation-translation transformation matrix between the fixed image and the moving image [32]. It uses linear transformations such as translation and rotation, which ensure that the overall structure of the image and the parallelism of lines remain unchanged after spatial transformation. It also has the advantages of simple calculation and low time complexity, and is suitable for images with little deformation.
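As an illustration, the following minimal sketch performs mutual-information-driven rigid registration with SimpleITK. The file names and parameter values are placeholders for illustration, not a validated clinical configuration:

```python
import SimpleITK as sitk

# Read the planning CT (fixed) and the image to be aligned (moving);
# file names are placeholders
fixed = sitk.ReadImage("fixed_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("moving_mr.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)

# A 6-parameter rigid transform (3 rotations + 3 translations),
# initialized by aligning the geometric centers of the two volumes
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial)

rigid = reg.Execute(fixed, moving)

# Resample the moving image onto the fixed grid for fusion or overlay
aligned = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0)
```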

Rigid registration not only provides a prerequisite for further non-rigid registration and saves the computation time of image optimization iterations, but also intuitively displays the anatomical differences between images of different modalities, assisting doctors in accurate delineation. Traditional registration methods include surface-based methods, point-based methods (usually based on anatomical markers), and voxel-based methods [33]. Among them, voxel-based methods have been widely used thanks to the rapid development of computer technology. The goal of this class of methods is to obtain geometric transformation parameters by computing the similarity between the two input images, without pre-extracting features [34]. However, these traditional registration methods often require iterative calculation of similarity measures such as mean squared error, mutual information, and normalized mutual information. Due to the non-convexity of these similarity measures in parameter space, the registration process is computationally expensive and sometimes lacks robustness [35]. Besides, other methods such as intensity-based feature selection algorithms perform image registration by extracting intensity-related image features; however, the extracted features are difficult to match well anatomically [36].
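For intuition, the mutual information between two images can be estimated from their joint intensity histogram. A minimal NumPy sketch (the bin count is an arbitrary illustrative choice) follows:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based estimate of MI between two intensity arrays."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint probability
    px = pxy.sum(axis=1, keepdims=True)      # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of img_b
    nz = pxy > 0                             # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())
```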

3.2 Non-rigid Registration

Since medical images are affected by factors such as imaging time, imaging equipment, and patient posture, it is difficult to spatially register multimodal images. In addition, the internal tissue structure of the human body is complicated and time-varying; for example, the tissues and organs in lung scan images move with the patient's breathing. For deformations that differ greatly in each direction, rigid registration cannot meet the requirements. In such cases, non-rigid registration is needed, in which the same parts of different images are brought into correspondence by means of a spatial deformation field. The registration process also introduces varying degrees of registration error depending on the chosen optimization method.

Non-rigid transformation includes translation, rotation, scaling, and affine transformation based on an affine matrix, as well as other linear and nonlinear transformation forms. Compared with rigid transformation, it achieves better deformation accuracy, but computation is slower. Gu et al. [37] proposed a B-spline affine transformation registration method, using an affine transformation to replace the traditional displacement at each B-spline control point and a two-way distance cost function to replace the traditional one-way distance cost function, achieving bidirectional registration of two images. Pradhan et al. [38] used a P-spline function, a B-spline with an added penalty, for brain image registration. Physical-model-based methods regard the deformation of the floating image as a physical change caused by an external force: taking the original image as input, they compute the deformed image under physical rules through the model. The physical models used are mainly viscous fluid models and optical flow field models. Wodzinski et al. [39] applied an optical-flow (demons) registration algorithm to breast cancer tumor localization, compared it with the B-spline method, and obtained a better registration effect.
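The following is a minimal SimpleITK sketch of B-spline (free-form) deformable registration in this spirit; the mesh size, metric, and optimizer settings are illustrative assumptions, not the configurations used in [37, 38]:

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("exhale_ct.nii.gz", sitk.sitkFloat32)   # placeholder names
moving = sitk.ReadImage("inhale_ct.nii.gz", sitk.sitkFloat32)

# Free-form deformation parameterized by a grid of B-spline control points;
# a coarser mesh gives smoother, more constrained deformations
bspline = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()
reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
reg.SetInitialTransform(bspline, inPlace=True)
reg.SetInterpolator(sitk.sitkLinear)

deform = reg.Execute(fixed, moving)
warped = sitk.Resample(moving, fixed, deform, sitk.sitkLinear, 0.0)
```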

With the development of deep learning technology, significant progress has been made in this area, mainly using unsupervised or self-supervised deep learning to compute deformation parameters and similarity measures. For example, Sokooti et al. [40] trained on a large number of artificially generated displacement vector fields to integrate image content from multiple scales and directly estimate the displacement vector field from the input images. Li et al. [41] proposed a new non-rigid image registration algorithm based on a fully convolutional network, optimizing and learning the spatial transformation between images through a self-supervised learning framework. However, non-rigid registration algorithms are still not as mature as rigid registration algorithms, and their acceptance is not yet sufficient [42].
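A minimal sketch of this self-supervised idea, assuming a small PyTorch network that predicts a dense displacement field and is trained with an image-similarity plus smoothness loss (2D for brevity; this is not the architecture of [41]):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRegNet(nn.Module):
    """Predicts a dense displacement field from a fixed/moving image pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),   # two channels: (dx, dy)
        )

    def forward(self, fixed, moving):
        # Predict the displacement field from the stacked image pair
        flow = self.net(torch.cat([fixed, moving], dim=1))
        n, _, h, w = fixed.shape
        # Identity sampling grid in grid_sample's normalized [-1, 1] coords
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=fixed.device),
            torch.linspace(-1, 1, w, device=fixed.device), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        # Warp the moving image with identity grid + predicted displacement
        warped = F.grid_sample(moving, grid + flow.permute(0, 2, 3, 1),
                               align_corners=True)
        return warped, flow

def self_supervised_loss(warped, fixed, flow, lam=0.01):
    """Image similarity plus a smoothness penalty on the flow field."""
    similarity = F.mse_loss(warped, fixed)
    smoothness = flow.diff(dim=2).abs().mean() + flow.diff(dim=3).abs().mean()
    return similarity + lam * smoothness
```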

4.1 Atlas Based Automatic Contouring

After multimodal image registration, clinicians delineate contour information on the planning CT. The delineated targets mainly include therapeutic targets and OARs. The shape of OARs is relatively definite, and their locations generally do not change much. For automatic delineation of OARs, the most widely used clinical approach is automatic segmentation based on an atlas library [43]. An atlas consists of medical images and their corresponding binary delineation results; even among different groups of people, the relative spatial positions and shapes of normal organs in the body are similar, and the image textures share the same characteristics. The delineation principle is to pre-establish one or several sets of OAR templates, with machine learning methods automatically matching the appropriate templates [44].

Delineation methods based on atlas libraries can be divided into two categories: those based on a single atlas and those based on multiple atlases [45]. Delineation based on a single atlas can be regarded as a deformable registration problem. First, the atlas is registered to the image to be delineated, yielding a transformation matrix and deformation field. All delineated organs in the atlas are then deformed and mapped with the same transformation parameters, and the result of this mapping is the delineation result. However, with a single-atlas method the input patient images may differ greatly from the average atlas, leading to unsatisfactory delineation results.
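A minimal SimpleITK sketch of single-atlas label propagation under these assumptions (placeholder file names; rigid registration stands in for the full deformable pipeline for brevity):

```python
import SimpleITK as sitk

atlas_img = sitk.ReadImage("atlas_ct.nii.gz", sitk.sitkFloat32)
atlas_lab = sitk.ReadImage("atlas_oar_labels.nii.gz")   # pre-delineated OAR labels
patient = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)

# Register the atlas image to the patient image (rigid here for brevity;
# a deformable stage would normally follow, as described above)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    patient, atlas_img, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)
tx = reg.Execute(patient, atlas_img)

# Map the atlas delineation with the same transform; nearest-neighbour
# interpolation keeps the label values discrete
auto_contour = sitk.Resample(atlas_lab, patient, tx, sitk.sitkNearestNeighbor, 0)
```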

The accuracy of the single-atlas method depends heavily on the accuracy of image registration. When the atlas differs greatly from the image to be delineated, it is difficult for the registration algorithm to achieve good results, significantly reducing delineation accuracy. To address this, Aljabar et al. [46] proposed a multi-atlas method, which registers and fuses multiple sets of reference atlases with the image to be delineated, obtains multiple candidate delineation schemes, and uses an algorithm to synthesize the candidates into the final delineation. The performance of the multi-atlas approach is often more stable than that of the single-atlas approach, because poor mapping results from some atlases are corrected by other, better-performing atlases, so that each part can be delineated relatively reasonably. While multi-atlas methods improve the robustness of delineation compared with single-atlas methods, they are prone to topological errors, because voxel voting does not necessarily produce closed surfaces. Such topological errors greatly affect the formulation of radiation therapy plans and are also difficult to detect, requiring time-consuming review and manual editing by clinicians [47].
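A sketch of per-voxel majority voting, the simplest fusion rule; the comment notes why closed surfaces are not guaranteed:

```python
import numpy as np

def majority_vote(candidate_masks):
    """Fuse binary delineations from several atlases by per-voxel voting.

    Note: voting is done independently per voxel, so the fused mask is
    not guaranteed to form a closed surface, which is the source of the
    topological errors discussed above.
    """
    stack = np.stack(candidate_masks)            # (n_atlases, z, y, x)
    return (stack.sum(axis=0) > len(candidate_masks) / 2).astype(np.uint8)
```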

4.2 Deep Learning Based Automatic Contouring

The atlas library approach is essentially an operation of registering the target image and template image through morphological features, that is, a search for the most similar shape in the atlas library. But if the shapes of the OARs in the template images differ too much from the target, if the organ volume is too small, or if the deformation algorithm is poorly chosen, registration accuracy will suffer [48]. A multi-atlas library can improve delineation accuracy, but the amount of computation and the time consumed increase, so a balance between accuracy and speed must be struck.

Automatic delineation based on deep learning does not require the above trade-off. Since the key advantage of deep learning is to automatically extract labeled features by learning generalized features in training samples to recognize new scenes, the more input templates there are, the more accurate the learned features [49]. Dolz et al. [50] used the support vector machine (SVM) algorithm to successfully achieve automatic segmentation of the brainstem on MRI images of brain tumors, and then used a deep learning algorithm to automatically segment small organs such as the optic nerve, optic chiasm, pituitary, and pituitary stalk, with similarity coefficients reaching 76-83% [51]. They also combined hand-extracted features with unsupervised stacked denoising autoencoders for brainstem segmentation; classification was about 70 times faster than the SVM-based method, reducing segmentation time [52]. Liang et al. [53] performed automatic segmentation on CT images based on deep learning, with a sensitivity of 0.997-1 for most organs, which can effectively improve nasopharyngeal carcinoma radiotherapy planning.
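To make the classical voxel-classification setup concrete, here is a toy sketch in the spirit of [50]: each voxel is described by hand-crafted features, and an SVM labels it. The feature values and labels are invented purely for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Toy training set: each row is one voxel described by two hand-crafted
# features (normalized intensity, local-neighbourhood mean); labels mark
# brainstem (1) vs. background (0). Values are invented for illustration.
X = np.array([[0.20, 0.25], [0.30, 0.20], [0.80, 0.75], [0.90, 0.85]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.85, 0.80]]))   # -> [1]: voxel classified as brainstem
```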

Currently, deep learning networks, especially convolutional neural networks (CNNs), have become a common method for medical image analysis [54]. A CNN can process multi-dimensional, multi-channel data and capture complex nonlinear mappings between input and output, giving it advantages for image processing and classification. A Stanford University study used a CNN model to automatically segment head and neck OARs for the first time; for organs such as the bone, pharynx, larynx, eyeball, and optic nerve, it was better than or equivalent to the best available technology, but for organs whose boundaries are hard to identify on CT images, such as the parotid gland, submandibular gland, and optic chiasm, the delineation results were not satisfactory [55]. Lu et al. [56] used a 3D CNN to automatically segment the liver, combined with a graph cut algorithm to refine the segmentation; the advantage is that no manual initialization is required, so the segmentation can be performed by non-professionals. Also using a 3D CNN for liver segmentation, Hu et al. [57] combined deep learning with global and local shape prior information; evaluated on the same dataset, all error indicators were significantly reduced. In a follow-up study, the target was extended to abdominal multi-organ segmentation, using a 3D CNN to perform pixel-to-pixel dense prediction with higher accuracy and shorter segmentation time [58].
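A minimal PyTorch sketch of a 3D CNN that makes dense voxel-wise predictions, in the spirit of the encoder-decoder networks above (the layer sizes are illustrative, not any published architecture):

```python
import torch
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    """Toy 3D encoder-decoder producing voxel-wise class scores."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                                     # halve D, H, W
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),  # restore size
            nn.Conv3d(16, n_classes, 1),                         # per-voxel scores
        )

    def forward(self, x):          # x: (N, 1, D, H, W) CT patch
        return self.decode(self.encode(x))

logits = Tiny3DSegNet()(torch.randn(1, 1, 32, 64, 64))
print(logits.shape)                # torch.Size([1, 2, 32, 64, 64])
```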

Thus, contouring OARs is a complex undertaking, and a single set of models often cannot achieve the expected accuracy across different body sites or different modalities. In practice, deep neural networks must be adapted to the specific circumstances.

5.1 GTV Automatic Delineation

As with normal tissue delineation, deep learning-assisted tumor target delineation helps improve efficiency. However, since the boundary between the tumor and the surrounding tissue is often difficult to distinguish, the patient's clinical information, pathological sections, and images become reference data for GTV delineation, and various techniques are used to aid identification. In the 2013 Multimodal Brain Tumor Image Segmentation Challenge (BraTS), Pereira et al. [59] used a CNN to automatically segment brain tumor MRI images, improving accuracy and ranking first.

Since then, Kamnitsas et al. [60] have proposed a dual-channel 3D CNN for brain lesion segmentation (including traumatic brain injury, brain tumor, and ischemic stroke), the first use of a fully connected conditional random field on medical data. Both of the above studies used neural networks with small convolution kernels to make the network structure deeper without increasing computational cost. Men et al. [61] used big data to train a deep dilated residual network (DD-ResNet) for breast tumor segmentation, and the results were better than those of deep dilated convolutional neural networks (DDCNN) and distributed deep neural networks (DDNN): the Dice similarity coefficient (DSC) was 91%, higher than the result hand-drawn by experts [62].

In addition, for the above-mentioned basic network types, studies have shown that improved networks such as the anatomically constrained network of [63] can increase segmentation accuracy and robustness. Lin et al. [64] trained a 3D CNN to delineate the GTV of nasopharyngeal carcinoma on MRI images; agreement with the expert-delineated GTV was high, with a DSC of 79%. With the help of machine learning, doctors reduced their delineation time by 39.4% and improved their accuracy. A 3D CNN uses not only the per-slice CT image information extracted by a traditional CNN but also the information between slices, so information utilization is higher and accuracy improves to a certain extent. Qi et al. [65] used convolutional neural networks to delineate the target volume of nasopharyngeal carcinoma based on multimodal imaging (CT and MRI), and the results show high delineation precision. Li et al. [66] used U-Net to automatically delineate the target volume of nasopharyngeal carcinoma on CT images, with high segmentation accuracy. Li et al. [67], working with four-dimensional computed tomography data of patients with non-small cell lung cancer, used transfer learning to automatically delineate the tumor area, which improved accuracy and shortened the retraining time of the network; when the breathing range was 5-10 mm, the matching index improved by 36.1% on average compared with the comprehensive elastic deformation registration technique. In a recent study [68], the authors used fuzzy c-means clustering (FCM), artificial neural network (ANN), and SVM algorithms to automatically segment the GTV of solid, ground-glass, and mixed lung cancer lesions, respectively; the FCM model was judged more accurate and efficient, and can be reliably applied to SBRT.

Delineating the GTV based on deep learning can improve the work efficiency of clinicians, but this method cannot completely replace manual delineation. On the basis of automatic delineation, manual correction is still required to achieve accurate delineation [69].

5.2 CTV Automatic Delineation

According to the requirements of radiobiology and the factors governing tumor occurrence and metastasis, the CTV covers the subclinical foci formed by infiltration around the primary tumor and the pathways of regional lymph node metastasis, which should receive a certain radiation dose. It is the basis for regional tumor radiotherapy to control recurrence and metastasis. Its delineation must be judged in combination with the specific pathology and the possible range of invasion or metastasis of the diseased tissue, and the delineation results for different tumor types and stages are completely different.

Specifically, Men et al. [70] used a DDCNN model for automatic segmentation of the CTV and OARs in 218 rectal cancer patients, with accurate and efficient results: the DSC of the CTV reached 87.7%, and the DSCs of the bladder and bilateral femoral heads exceeded 90%, while delineation of the small intestine and colon was less accurate, with DSCs of 65.3% and 61.8%, respectively, possibly because both are air-containing hollow organs. Shi et al. [71] segmented the CTV in cervical cancer CT images with a deep learning model called RA-CTVNet, which uses an area-aware reweighting strategy and a recursive refinement strategy. Their experiments show that RA-CTVNet improves the DSC compared with different network architectures; compared with three clinical experts, it performed better than two and comparably to the third. Shen et al. [72] modified the U-Net model by incorporating the contours of the gross tumor volume of lymph nodes (GTVnd) and designed the DiUnet model for automatic delineation of lung cancer CTV; the DSC for most lymph node regions reached 70%, not significantly different from manual delineation.

In addition, our team [73] collected CT images of 53 cervical cancer patients and, by modifying the U-Net model and its training process for the task, realized automatic segmentation of the cervical cancer CTV region and normal tissue. The results were evaluated in terms of recall, precision, DSC, and Intersection over Union (IoU), among other metrics (a sketch of these metrics follows below). The proposed model performed well on all indicators for the target area. Compared with commonly used deep learning models such as the mask region-based convolutional neural network (Mask R-CNN), the speech enhancement generative adversarial network (SegAN), and U-Net, the segmentation boundary of the proposed model is clearer and smoother, and its recall is clearly better. Moreover, because the model is very lightweight, it can be adapted to cases where the dataset size is limited.
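As referenced above, a sketch of how these overlap metrics are computed from binary masks (NumPy; assumes both masks are non-empty):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Recall, precision, DSC and IoU between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()          # true-positive voxels
    recall = tp / gt.sum()                       # fraction of truth recovered
    precision = tp / pred.sum()                  # fraction of prediction correct
    dsc = 2 * tp / (pred.sum() + gt.sum())       # Dice similarity coefficient
    iou = tp / np.logical_or(pred, gt).sum()     # Intersection over Union
    return recall, precision, dsc, iou
```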

Because the CTV involves subclinical lesions and lymphatic drainage areas, its automatic delineation is relatively more difficult, and the performance of deep learning delineation still falls far short of experts [74-76]. In the future, by relying on disease-specific big data platforms to integrate multimodal radiotherapy data, imaging, genetic, and other multi-omics data, as well as the experience of senior radiotherapy physicians, physicists, and technicians, it is expected that individualized CTV range decisions can be provided, guided by predictions of efficacy and complication risk.

Research on machine learning methods in the field of radiotherapy has been fully rolled out and has achieved staged results, among which the automatic delineation of normal tissues and tumor target volumes has always been a research hotspot [77-79]. Most existing deep learning models are based on natural images; there is a lack of deep learning models dedicated to medical images, especially those related to radiation oncology. Medical images differ from natural images in that they are grayscale and generally continuous [80, 81]. In image segmentation, not only the regional structure of an image but also the spatial structure of 3D data must be considered [82]. In addition, local and global prior information must be incorporated before further contributions can be made to the segmentation of OARs and therapeutic target volumes [83]. Moreover, multimodal image registration is often required to further identify the extent of tumor invasion [84, 85].

Besides, radiotherapy is only one link in tumor treatment. Determining the appropriate radiotherapy target range and irradiation dose is a complex issue requiring system integration: factors such as disease characteristics and the overall treatment mode, cross-scale issues from molecular cells to tissues and organs, and the spatio-temporal relationships of biomolecules all need to be comprehensively analyzed, so that the resulting radiotherapy plan better conforms to the principle of precise individualized treatment. The integration of automatic radiotherapy target delineation with artificial intelligence knowledge graphs and causal analysis may play an important role in the formulation of clinical radiotherapy targets [86].

At present, most applications are in the preclinical research stage, and some problems remain in clinical application. First, high-quality clinical data is the basis on which artificial intelligence learns and judges, but the standardization of the relevant medical data for automatic target delineation is currently low: labeling quality is uneven, and the data of major medical centers lack a joint construction and sharing mechanism. These data barriers seriously hinder the effective use of data and product development. Second, it is still difficult to accurately define the treatment target volume. With current CT, MRI, PET-CT, and other means, determining the GTV is generally not difficult, but some lesions remain hard to identify, such as soft-tissue invasion and the degree and scope of bone destruction. The doses for the CTV differ according to the risk of recurrence and metastasis, and there is no relevant research on how to determine high-, medium-, and low-risk CTVs. In addition, the clinical application of artificial intelligence bears directly on life and health and faces many ethical and legal challenges. Nevertheless, the automatic delineation of the radiotherapy target volume based on machine learning will be an important development direction for artificial intelligence in the medical field.

This work was financially supported by

  1. Scientific Research Project of Anhui Provincial Health Commission (No. AHWJ2022b058)

  2. Joint Fund for Medical Artificial Intelligence of the First Affiliated Hospital of USTC (No. MAI2022Q009)

  3. Student Innovation and Entrepreneurship Fund of USTC (No. WK5290000003)

  4. China Scholarship Council (No. 202206340057)

Zhenchao Tao ([email protected], 0000-0001-8142-9164): Methodology, Writing—original draft preparation, Funding acquisition. Shengfei Lyu ([email protected], 0000-0002-1843-6836): Writing— original draft preparation, Writing—review and editing.

[1] Sung, H., Ferlay, J., Siegel, R.L., et al.: Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA: A Cancer Journal for Clinicians 71(3), 209–249 (2021)
[2] DeVita, V.T. Jr., Rosenberg, S.A.: Two hundred years of cancer research. The New England Journal of Medicine 366(23), 2207–2214 (2012)
[3] Avanzo, M., Stancanello, J., Pirrone, G., et al.: Radiomics and deep learning in lung cancer. Strahlentherapie und Onkologie 196(10), 879–887 (2020)
[4] Howard, F.M., Kochanny, S., Koshy, M., et al.: Machine Learning-Guided Adjuvant Treatment of Head and Neck Cancer. JAMA Network Open 3(11), e2025881 (2020)
[5] Xingyu, W., Zhenchao, T., Bingbing, J., et al.: Domain knowledge-enhanced variable selection for biomedical data analysis. Information Sciences 606, 469–488 (2022)
[6] Yang, Z., Olszewski, D., He, C., et al.: Machine learning and statistical prediction of patient quality-of-life after prostate radiation therapy. Computers in Biology and Medicine 129, 104127 (2021)
[7] Smith, A.G., Petersen, J., Terrones-Campos, C., et al.: RootPainter3D: Interactive-machine-learning enables rapid and accurate contouring for radiotherapy. Medical Physics 49(1), 461–473 (2022)
[8] Baskar, R., Lee, K.A., Yeo, R., et al.: Cancer and radiation therapy: current advances and future directions. International Journal of Medical Sciences 9(3), 193–199 (2012)
[9] Zhenchao, T., Jun, Q., Yangyang, Z., et al.: Endostar plus chemoradiotherapy versus chemoradiotherapy alone for patients with advanced nonsmall cell lung cancer: A systematic review and meta-analysis. International Journal of Radiation Research 19(1), 1–12 (2021)
[10] Coutard, H.: Principles of X-ray therapy of malignant disease. The Lancet 224(5784), 1–4 (1934)
[11] Thariat, J., Hannoun-Levi, J.M., Sun-Myint, A., et al.: Past, present, and future of radiotherapy for the benefit of patients. Nature Reviews Clinical Oncology 10(1), 52–60 (2013)
[12] Takahashi, S.: Conformation radiotherapy. Rotation techniques as applied to radiography and radiotherapy of cancer. Acta Radiologica Diagnosis 242, 11–17 (1965)
[13] Chang, J.Y., Senan, S., Paul, M.A., et al.: Stereotactic ablative radiotherapy versus lobectomy for operable stage I non-small-cell lung cancer: a pooled analysis of two randomised trials. The Lancet Oncology 16(6), 630–637 (2015)
[14] Basson, L., Jarraya, H., Escande, A., et al.: Chest Magnetic Resonance Imaging Decreases Inter-observer Variability of Gross Target Volume for Lung Tumors. Frontiers in Oncology 9, 690 (2019)
[15] Sun, Y., Shi, H., Zhang, S., et al.: Accurate and rapid CT image segmentation of the eyes and surrounding organs for precise radiotherapy. Medical Physics 46(5), 2214–2222 (2019)
[16] Batra, R., Kuecuekkaya, A., Zeevi, T., et al.: Proof-Of-Concept Use Of Machine Learning To Predict Tumor Recurrence Of Early-Stage Hepatocellular Carcinoma Before Therapy Using Baseline Magnetic Resonance Imaging. Transplantation 104(S3), S43–S44 (2020)
[17] Wong, J., Fong, A., McVicar, N., et al.: Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning. Radiotherapy and Oncology 144, 152–158 (2020)
[18] Men, K., Geng, H., Cheng, C., et al.: Technical Note: More accurate and efficient segmentation of organs-at-risk in radiotherapy with convolutional neural networks cascades. Medical Physics 46(1), 286–292 (2019)
[19] Fu, Y., Mazur, T.R., Wu, X., et al.: A novel MRI segmentation method using CNN-based correction network for MRI-guided adaptive radiotherapy. Medical Physics 45(11), 5129–5137 (2018)
[20] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
[21] Jiang, B., Wu, X., Zhou, X., et al.: Semi-Supervised Multiview Feature Selection With Adaptive Graph Learning. IEEE Transactions on Neural Networks and Learning Systems (2022)
[22] Li, L., Yan, M., Tao, Z., et al.: Semi-Supervised Graph Pattern Matching and Rematching for Expert Community Location. ACM Transactions on Knowledge Discovery from Data (TKDD) (2022)
[23] Valdes, G., Simone, C.B., Chen, J., et al.: Clinical decision support of radiotherapy treatment planning: A data-driven machine learning strategy for patient-specific dosimetric decision making. Radiotherapy and Oncology 125(3), 392–397 (2017)
[24] Guo, H.Y., Wang, D.Z.: A Multilevel Optimal Feature Selection and Ensemble Learning for a Specific CAD System-Pulmonary Nodule Detection. Applied Mechanics and Materials 380, 1593–1599 (2013)
[25] Zhenyu, L., Chaohong, L., Haiwei, H., et al.: Hierarchical Multi-Granularity Attention-Based Hybrid Neural Network for Text Classification. IEEE Access 8, 149362–149371 (2020)
[26] Zhao, X., Chen, H., Xing, Z., et al.: Brain-Inspired Search Engine Assistant Based on Knowledge Graph. IEEE Transactions on Neural Networks and Learning Systems (2021)
[27] Huang, B., Zhu, Y., Usman, M., et al.: Graph Neural Networks for Missing Value Classification in a Task-driven Metric Space. IEEE Transactions on Knowledge and Data Engineering (2022)
[28] Zhao, X., Chen, L., Chen, H.: A Weighted Heterogeneous Graph-Based Dialog System. IEEE Transactions on Neural Networks and Learning Systems (2021)
[29] Yuan, B., Chen, H., Yao, X.: Toward efficient design space exploration for fault-tolerant multiprocessor systems. IEEE Transactions on Evolutionary Computation 24(1), 157–169 (2019)
[30] Lyu, S., Tian, X., Li, Y., et al.: Multiclass probabilistic classification vector machine. IEEE Transactions on Neural Networks and Learning Systems 31(10), 3906–3919 (2019)
[31] Yu, K., Liu, L., Li, J., et al.: Mining Markov Blankets Without Causal Sufficiency. IEEE Transactions on Neural Networks and Learning Systems 29(12), 6333–6347 (2018)
[32] Rahunathan, S., Stredney, D., Schmalbrock, P., et al.: Image registration using rigid registration and maximization of mutual information. The 13th Annual Medicine Meets Virtual Reality Conference (2005)
[33] Viergever, M.A., Maintz, J.B.A., Klein, S., et al.: A survey of medical image registration—under review. Medical Image Analysis 33, 140–144 (2016)
[34] Oliveira, F.P., Tavares, J.M.: Medical image registration: a review. Computer Methods in Biomechanics and Biomedical Engineering 17(2), 73–93 (2014)
[35] Mahapatra, D., Antony, B., Sedai, S., et al.: Deformable medical image registration using generative adversarial networks. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 1449–1453 (2018)
[36] Kearney, V., Haaf, S., Sudhyadhom, A., et al.: An unsupervised convolutional neural network-based algorithm for deformable image registration. Physics in Medicine and Biology 63(18), 185017 (2018)
[37] Gu, S., Meng, X., Sciurba, F.C., et al.: Bidirectional elastic image registration using B-spline affine transformation. Computerized Medical Imaging and Graphics 38(4), 306–314 (2014)
[38] Pradhan, S., Patra, D.: RMI based non-rigid image registration using BF-QPSO optimization and P-spline. AEU-International Journal of Electronics and Communications 69(3), 609–621 (2015)
[39] Wodzinski, M., Skalski, A., Ciepiela, I., et al.: Application of demons image registration algorithms in resected breast cancer lodge localization. 2017 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 400–405 (2017)
[40] Sokooti, H., Vos, B., Berendsen, F., et al.: Nonrigid image registration using multi-scale 3D convolutional neural networks. International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Cham, 232–239 (2017)
[41] Li, H., Fan, Y.: Non-Rigid Image Registration Using Self-Supervised Fully Convolutional Networks Without Training Data. IEEE 15th International Symposium on Biomedical Imaging, 1075–1078 (2018)
[42] Rong, Y., Rosu-Bubulac, M., Benedict, S.H., et al.: Rigid and Deformable Image Registration for Radiation Therapy: A Self-Study Evaluation Guide for NRG Oncology Clinical Trial Participation. Practical Radiation Oncology 11(4), 282–298 (2021)
[43] Zhang, T., Yang, Y., Wang, J., et al.: Comparison between atlas and convolutional neural network based automatic segmentation of multiple organs at risk in non-small cell lung cancer. Medicine (Baltimore) 99(34), e21800 (2020)
[44] Lustberg, T., Soest, J.V., Gooding, M., et al.: Clinical evaluation of atlas and deep learning based automatic contouring for lung cancer. Radiotherapy and Oncology 126(2), 312–317 (2018)
[45] Cabezas, M., Oliver, A., Lladó, X., et al.: A review of atlas-based segmentation for magnetic resonance brain images. Computer Methods and Programs in Biomedicine 104(3), e158–e177 (2011)
[46] Aljabar, P., Heckemann, R.A., Hammers, A., et al.: Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy. Neuroimage 46(3), 726–738 (2009)
[47] Sharp, G., Fritscher, K.D., Pekar, V., et al.: Vision 20/20: perspectives on automated image segmentation for radiotherapy. Medical Physics 41(5), 050902 (2014)
[48] Valerio, F., René, F.V., Fedde, V.D.L., et al.: Tissue segmentation of head and neck CT images for treatment planning: a multiatlas approach combined with intensity modeling. Medical Physics 40(7), 071905 (2013)
[49] Li, Y., Zhang, H., Xue, X., et al.: Deep learning for remote sensing image classification: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8(6), e1264 (2018)
[50] Dolz, J., Laprie, A., Ken, S., et al.: Supervised machine learning-based classification scheme to segment the brainstem on MRI in multicenter brain tumor treatment context. International Journal of Computer Assisted Radiology and Surgery 11(1), 43–51 (2016)
[51] Dolz, J., Reyns, N., Betrouni, N., et al.: A deep learning classification scheme based on augmented-enhanced features to segment organs at risk on the optic region in brain cancer patients. arXiv preprint arXiv:1703.10480 (2017)
[52] Dolz, J., Betrouni, N., Quidet, M., et al.: Stacking denoising auto-encoders in a deep network to segment the brainstem on MRI in brain cancer patients: A clinical study. Computerized Medical Imaging and Graphics 52, 8–18 (2016)
[53] Liang, S., Tang, F., Huang, X., et al.: Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. European Radiology 29(4), 1961–1967 (2019)
[54] Litjens, G., Kooi, T., Bejnordi, B.E., et al.: A survey on deep learning in medical image analysis. Medical Image Analysis 42, 60–88 (2017)
[55] Ibragimov, B., Xing, L.: Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Medical Physics 44(2), 547–557 (2017)
[56] Lu, F., Wu, F., Hu, P., et al.: Automatic 3D liver location and segmentation via convolutional neural network and graph cut. International Journal of Computer Assisted Radiology and Surgery 12(2), 171–182 (2017)
[57] Hu, P., Wu, F., Peng, J., et al.: Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution. Physics in Medicine and Biology 61(24), 8676–8698 (2016)
[58] Hu, P., Wu, F., Peng, J., et al.: Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets. International Journal of Computer Assisted Radiology and Surgery 12(3), 399–411 (2017)
[59] Pereira, S., Pinto, A., Alves, V., et al.: Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Transactions on Medical Imaging 35(5), 1240–1251 (2016)
[60] Kamnitsas, K., Ledig, C., Newcombe, V.F.J., et al.: Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis 36, 61–78 (2017)
[61] Men, K., Zhang, T., Chen, X., et al.: Fully automatic and robust segmentation of the clinical target volume for radiotherapy of breast cancer using big data and deep learning. Physica Medica 50, 13–19 (2018)
[62] Mourik, A.M.V., Elkhuizen, P.H.M., Minkema, D., et al.: Multiinstitutional study on target volume delineation variation in breast radiotherapy in the presence of guidelines. Radiotherapy and Oncology 94(3), 286–291 (2010)
[63] Oktay, O., Ferrante, E., Kamnitsas, K., et al.: Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation. IEEE Transactions on Medical Imaging 37(2), 384–395 (2018)
[64] Lin, L., Dou, Q., Jin, Y.M., et al.: Deep Learning for Automated Contouring of Primary Tumor Volumes by MRI for Nasopharyngeal Carcinoma. Radiology 291(3), 677–686 (2019)
[65] Qi, Y., Li, J., Chen, H., et al.: Computer-aided diagnosis and regional segmentation of nasopharyngeal carcinoma based on multi-modality medical images. International Journal of Computer Assisted Radiology and Surgery 16(6), 871–882 (2021)
[66] Li, S., Xiao, J., He, L., et al.: The Tumor Target Segmentation of Nasopharyngeal Cancer in CT Images Based on Deep Learning Methods. Technology in Cancer Research & Treatment 18, 1533033819884561 (2019)
[67] Xiadong, L., Ziheng, D., Qinghua, D., et al.: A novel deep learning framework for internal gross target volume definition from 4D computed tomography of lung cancer patients. IEEE Access 6, 37775–37783 (2018)
[68] Kawata, Y., Arimura, H., Ikushima, K., et al.: Impact of pixel-based machine-learning techniques on automated frameworks for delineation of gross tumor volume regions for stereotactic body radiation therapy. Physica Medica 42, 141–149 (2017)
[69] Chen, M., Wu, S., Zhao, W., et al.: Application of deep learning to auto-delineation of target volumes and organs at risk in radiotherapy. Cancer Radiotherapie 26(3), 494–501 (2022)
[70] Men, K., Dai, J., Li, Y.: Automatic segmentation of the clinical target volume and organs at risk in the planning CT for rectal cancer using deep dilated convolutional neural networks. Medical Physics 44(12), 6377–6389 (2017)
[71] Shi, J., Ding, X., Liu, X., et al.: Automatic clinical target volume delineation for cervical cancer in CT images using deep learning. Medical Physics 48(7), 3968–3981 (2021)
[72] Shen, J., Zhang, F., Di, M., et al.: Clinical target volume automatic segmentation based on lymph node stations for lung cancer with bulky lump lymph nodes. Thoracic Cancer 13(20), 2897–2903 (2022)
[73] Yizhan, F., Zhenchao, T., Jun, L., et al.: An Encoder-Decoder Network for Automatic Clinical Target Volume Target Segmentation of Cervical Cancer in CT Images. International Journal of Crowd Science 6(3), 111–116 (2022)
[74] Chen, X., Sun, S., Bai, N., et al.: A deep learning-based auto-segmentation system for organs-at-risk on whole-body computed tomography images for radiation therapy. Radiotherapy and Oncology 160, 175–184 (2021)
[75] Ju, Z., Guo, W., Gu, S., et al.: CT based automatic clinical target volume delineation using a dense-fully connected convolution network for cervical cancer radiation therapy. BMC Cancer 21(1), 1–10 (2021)
[76] Unkelbach, J., Bortfeld, T., Cardenas, C.E., et al.: The role of computational methods for automating and improving clinical target volume definition. Radiotherapy and Oncology 153, 15–25 (2020)
[77] Min, H., Dowling, J., Jameson, M.G., et al.: Automatic radiotherapy delineation quality assurance on prostate MRI with deep learning in a multicentre clinical trial. Physics in Medicine and Biology 66(19), 195008 (2021)
[78] Robert, C., Munoz, A., Moreau, D., et al.: Clinical implementation of deep-learning based auto-contouring tools-Experience of three French radiotherapy centers. Cancer Radiotherapie 25(6-7), 607–616 (2021)
[79] Piazzese, C., Evans, E., Thomas, B., et al.: FIELDRT: an open-source platform for the assessment of target volume delineation in radiation therapy. British Journal of Radiology 94(1126), 20210356 (2021)
[80] Ogura, A., Kamakura, A., Kaneko, Y., et al.: Comparison of grayscale and color-scale renderings of digital medical images for diagnostic interpretation. Radiological Physics and Technology 10(3), 359–363 (2017)
[81] Huang, Y., Hu, G., Ji, C., et al.: Glass-cutting medical images via a mechanical image segmentation method based on crack propagation. Nature Communications 11(1), 5669 (2020)
[82] Pizer, S.M., Fletcher, P.T., Joshi, S., et al.: Deformable M-Reps for 3D Medical Image Segmentation. International Journal of Computer Vision 55(2-3), 85–106 (2003)
[83] Mansoor, A., Bagci, U., Foster, B., et al.: Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends. Radiographics 35(4), 1056–1076 (2015)
[84] Perkuhn, M., Stavrinou, P., Thiele, F., et al.: Clinical Evaluation of a Multiparametric Deep Learning Model for Glioblastoma Segmentation Using Heterogeneous Magnetic Resonance Imaging Data From Clinical Routine. Investigative Radiology 53(11), 647–654 (2018)
[85] Li, L., Zhao, X., Lu, W., et al.: Deep Learning for Variational Multimodality Tumor Segmentation in PET/CT. Neurocomputing 392, 277–295 (2020)
[86] Lei, L., Xun, D., Zan, Z., et al.: Fuzzy-Constrained Graph Pattern Matching in Medical Knowledge Graphs. Data Intelligence 4(3), 599–619 (2022)
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.