Purpose: Current phantoms used for the dose reconstruction of long-term childhood cancer survivors lack individualization. We design a method to predict highly individualized abdominal three-dimensional (3-D) phantoms automatically.
Approach: We train machine learning (ML) models to map two-dimensional (2-D) patient features to 3-D organ-at-risk (OAR) metrics, using a database of 60 pediatric abdominal computed tomography (CT) scans with liver and spleen segmentations. Next, we use the models in an automatic pipeline that outputs a personalized phantom given the patient’s features, by assembling 3-D imaging from the database. A step to improve phantom realism (i.e., avoid OAR overlap) is included. We compare five ML algorithms in terms of predicted OAR left–right (LR), anterior–posterior (AP), and inferior–superior (IS) positions, and surface Dice–Sørensen coefficient (sDSC). Furthermore, two existing human-designed phantom construction criteria and two additional control methods are investigated for comparison.
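The model-comparison step described above can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the paper's setup: the real 2-D features, 3-D OAR targets, and the five compared ML algorithms are as described in the abstract. Here a leave-one-out mean absolute error (MAE) of an ordinary least-squares model is compared against a predict-the-mean control, mirroring the per-metric evaluation against control methods.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 60 patients, four 2-D features (e.g.,
# abdomen width/depth measured on radiographs), one 3-D target in mm
# (e.g., a liver position metric). Invented for illustration only.
X = rng.normal(size=(60, 4))
y = X @ np.array([3.0, -1.5, 0.5, 2.0]) + rng.normal(scale=2.0, size=60)

def loo_mae_linear(X, y):
    """Leave-one-out MAE of an ordinary least-squares model."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        A = np.c_[X[mask], np.ones(mask.sum())]  # add intercept column
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.r_[X[i], 1.0] @ coef
        errs.append(abs(pred - y[i]))
    return float(np.mean(errs))

def loo_mae_mean(X, y):
    """Control model: always predict the training-set mean."""
    errs = [abs(np.delete(y, i).mean() - y[i]) for i in range(len(y))]
    return float(np.mean(errs))

maes = {"linear": loo_mae_linear(X, y), "mean-baseline": loo_mae_mean(X, y)}
print(maes)  # the feature-based model should beat the control
```

The same loop structure extends to any regressor with fit/predict semantics, which is how multiple ML algorithms can be compared on identical splits.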
Results: Different ML algorithms result in similar test mean absolute errors: ∼8 mm for liver LR, IS, and spleen AP, IS; ∼5 mm for liver AP and spleen LR; ∼80% for abdomen sDSC; and ∼60% to 65% for liver and spleen sDSC. One ML algorithm (GP-GOMEA) performs significantly best for 6/9 metrics. The control methods, and the human-designed criteria in particular, generally perform worse, sometimes substantially (+5-mm error for spleen IS, −10% sDSC for liver). The automatic step to improve realism generally results in limited metric accuracy loss, but fails in one case (out of 60).
Conclusion: Our ML-based pipeline leads to phantoms that are significantly and substantially more individualized than currently used human-designed criteria.
Machine learning (ML) is proving extremely beneficial in many healthcare applications. In pediatric oncology, retrospective studies that investigate the relationship between treatment and late adverse effects still rely on simple heuristics. To capture the effects of radiation treatment, treatment plans are typically simulated on virtual surrogates of patient anatomy called phantoms. Currently, phantoms are built to represent categories of patients based on reasonable yet simple criteria. This often results in phantoms that are too generic to accurately represent individual anatomies. We present a novel approach that combines imaging data and ML to build individualized phantoms automatically. We design a pipeline that, given features of patients treated in the pre-3D planning era when only 2D radiographs were available, as well as a database of 3D computed tomography (CT) imaging with organ segmentations, uses ML to predict how to assemble a patient-specific phantom. Using 60 abdominal CTs of pediatric patients between 2 and 6 years of age, we find that our approach delivers significantly more representative phantoms compared to using current phantom building criteria, in terms of the shape and location of two considered organs (liver and spleen), and the shape of the abdomen. Furthermore, as interpretability is often central to trusting ML models in medical contexts, among other ML algorithms we consider the Gene-pool Optimal Mixing Evolutionary Algorithm for Genetic Programming (GP-GOMEA), which learns readable mathematical expression models. We find that the readability of its output does not compromise prediction performance, as GP-GOMEA delivered the best-performing models.
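To illustrate what a "readable mathematical expression model" of the kind symbolic regression produces can look like, here is a hypothetical example. The coefficients, feature names, and target are invented for illustration; the actual learned expressions are reported in the paper and are not reproduced here.

```python
# Hypothetical readable expression model of the kind GP-GOMEA can learn:
# a short closed-form formula a clinician can inspect directly.
# All coefficients and features below are invented for illustration.
def liver_ap_mm(abdomen_depth_mm: float, abdomen_width_mm: float) -> float:
    """Predict a (fictional) liver AP position from two 2-D features."""
    return 0.5 * abdomen_depth_mm + 0.25 * abdomen_width_mm - 20.0

print(liver_ap_mm(180.0, 220.0))  # -> 125.0
```

The appeal over a black-box regressor is that every term can be checked against anatomical plausibility, which is the interpretability argument made above.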
Performing large-scale three-dimensional radiation dose reconstruction for patients requires a large amount of manual work. We present an image processing-based pipeline to automatically reconstruct radiation dose. The pipeline was designed for childhood cancer survivors who received abdominal radiotherapy with anterior-to-posterior and posterior-to-anterior field set-up. First, anatomical landmarks are automatically identified on two-dimensional radiographs. Second, these landmarks are used to derive parameters to emulate the geometry of the plan on a surrogate computed tomography. Finally, the plan is emulated and used as input for dose calculation. For qualitative evaluation, 100 cases of automatic and manual plan emulations were assessed by two experienced radiation dosimetrists in a blinded comparison. The two radiation dosimetrists approved 100%/100% and 92%/91% of the automatic/manual plan emulations, respectively. Similar approval rates of 100% and 94% hold when the automatic pipeline is applied to another 50 cases. Further, quantitative comparisons yielded an average difference of <5 mm in plan isocenter/borders, and <0.9 Gy in organ mean dose (prescribed dose: 14.4 Gy), between the automatic and manual plan emulations. No statistically significant difference in dose reconstruction accuracy was found for most organs at risk. Ultimately, our automatic pipeline results are of sufficient quality to enable effortless scaling of dose reconstruction data generation.
3D dose reconstruction for radiotherapy (RT) is the estimation of the 3D radiation dose distribution patients received during RT. Large amounts of dose reconstruction data are needed to accurately model the relationship between the dose and the onset of adverse effects, to ultimately gain insights and improve today’s treatments. Dose reconstruction is often performed by emulating the original RT plan on a surrogate anatomy for dose estimation. This is especially essential for historically treated patients with long-term follow-up, as solely 2D radiographs were used for RT planning, and no 3D imaging was acquired for these patients. Performing dose reconstruction for a large group of patients requires a large amount of manual work, where the geometry of the original RT plan is emulated on the surrogate anatomy by visually comparing the latter with the original 2D radiograph of the patient. This is a labor-intensive process that for practical use needs to be automated. This work presents an image-processing pipeline to automatically emulate plans on surrogate computed tomography (CT) scans. The pipeline was designed for childhood cancer survivors who historically received abdominal RT with anterior-to-posterior and posterior-to-anterior RT field set-up. First, anatomical landmarks are automatically identified on 2D radiographs. Next, these landmarks are used to derive parameters needed to finally emulate the plan on a surrogate CT. Validation was performed by an experienced RT planner, visually assessing 12 cases of automatic plan emulations. Automatic emulations were approved 11 out of 12 times. This work paves the way to effortless scaling of dose reconstruction data generation.
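The landmark-to-parameter step can be sketched as a simple geometric fit. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes paired anatomical landmarks are available in both the 2D radiograph and a projection of the surrogate CT, and estimates a per-axis scale and translation that maps plan geometry (e.g., field borders) from one to the other. Landmark coordinates below are invented.

```python
import numpy as np

def fit_scale_translation(pts_radiograph, pts_ct):
    """Fit a per-axis affine map t = s*x + b from paired landmarks
    via least squares. Returns a list of (s, b) per axis."""
    p_r = np.asarray(pts_radiograph, dtype=float)
    p_c = np.asarray(pts_ct, dtype=float)
    params = []
    for ax in range(p_r.shape[1]):
        A = np.c_[p_r[:, ax], np.ones(len(p_r))]
        (s, b), *_ = np.linalg.lstsq(A, p_c[:, ax], rcond=None)
        params.append((s, b))
    return params

def map_point(pt, params):
    """Map a radiograph point (e.g., a field border corner) onto the CT."""
    return [s * x + b for x, (s, b) in zip(pt, params)]

# Hypothetical paired landmarks (coordinates in mm), e.g., vertebral points
# identified on the radiograph and on the surrogate CT:
radiograph_landmarks = [[100.0, 50.0], [160.0, 210.0]]
ct_landmarks = [[90.0, 40.0], [150.0, 200.0]]

params = fit_scale_translation(radiograph_landmarks, ct_landmarks)
print(map_point([130.0, 130.0], params))  # a field-border point, mapped
```

In practice more landmarks and a robust fit would be used; the point is only that once landmarks are identified automatically, deriving the emulation parameters reduces to a small geometric estimation problem.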
Optical coherence tomography (OCT) is used to produce high-resolution three-dimensional images of the retina, which permit the investigation of retinal irregularities. In dry age-related macular degeneration (AMD), a chronic eye disease that causes central vision loss, disruptions such as drusen and changes in retinal layer thicknesses occur, which could be used as biomarkers for disease monitoring and diagnosis. Due to the topology-disrupting pathology, existing segmentation methods often fail. Here, we present a solution for the segmentation of retinal layers in dry AMD subjects by extending our previously presented loosely coupled level sets framework, which operates on attenuation coefficients. In eyes affected by AMD, Bruch’s membrane becomes visible only below the drusen, and our segmentation framework is adapted to delineate such a partially discernible interface. Furthermore, the initialization stage, which tentatively segments five interfaces, is modified to accommodate the appearance of drusen. This stage is based on Dijkstra's algorithm and combines prior knowledge of interface shape, gradient, and attenuation coefficient in the newly proposed cost function. This prior knowledge is incorporated by varying the weights for horizontal, diagonal, and vertical edges. Finally, quantitative evaluation of the accuracy shows good agreement between manual and automated segmentation.
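The Dijkstra-based initialization with direction-dependent edge weights can be sketched as follows. This is a minimal toy version, not the paper's cost function: it traces one interface as a left-to-right shortest path through a per-pixel cost image, where separate weights for horizontal, diagonal, and vertical moves encode the shape prior (smooth, mostly horizontal interfaces are cheaper). The cost image and weight values are invented for illustration.

```python
import heapq
import numpy as np

def trace_interface(cost, w_h=1.0, w_d=1.4, w_v=2.0):
    """Shortest left-to-right path through `cost` (rows x cols) via
    Dijkstra, with per-direction edge weights biasing toward
    horizontal paths. Returns the path as (row, col) tuples."""
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    # Any row in the first column may start the interface.
    pq = [(float(cost[r, 0]), r, 0) for r in range(rows)]
    for r in range(rows):
        dist[r, 0] = cost[r, 0]
    heapq.heapify(pq)
    moves = [(0, 1, w_h), (-1, 1, w_d), (1, 1, w_d), (-1, 0, w_v), (1, 0, w_v)]
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc, w in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + w * cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, nr, nc))
    # Backtrack from the cheapest node in the last column.
    node = (int(np.argmin(dist[:, -1])), cols - 1)
    path = [node]
    while node in prev:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy cost image: a low-cost (interface-like) band along row 2.
cost = np.ones((5, 6))
cost[2, :] = 0.1
path = trace_interface(cost)
print([r for r, c in path])  # -> [2, 2, 2, 2, 2, 2]
```

In the actual framework the per-pixel cost combines gradient and attenuation-coefficient terms; here a constant-contrast band stands in for that, and the direction weights alone illustrate how the shape prior enters through the graph edges.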