Purpose: Automating organ segmentation with convolutional neural networks (CNNs) is key to facilitating the work of medical practitioners by ensuring that an adequate radiation dose is delivered to the target area while avoiding harmful exposure of healthy organs. The drawback of CNNs is that they require large amounts of data transfer and storage, which makes image compression a necessity. Compression affects image quality, which in turn affects the segmentation process. We address the dilemma of handling large amounts of data while preserving segmentation accuracy.
Approach: We analyze and improve the robustness of 2D and 3D U-Nets against JPEG 2000 compression for male pelvic organ segmentation. We conduct three experiments on 56 cone beam computed tomography (CBCT) and 74 CT scans, targeting bladder and rectum segmentation. The experiments have two objectives: to compare the compression robustness of the 2D and 3D U-Net, and to improve the compression tolerance of the 3D U-Net via fine-tuning.
Results: We show that a 3D U-Net is 50% more robust to compression than a 2D U-Net. Moreover, by fine-tuning the 3D U-Net, we can double its compression tolerance compared to a 2D U-Net. Furthermore, we determine that fine-tuning the network at a compression ratio of 64:1 keeps it usable at any compression ratio equal to or lower than 64:1.
Conclusions: We reduce the potential risk of using image compression in automated organ segmentation. We demonstrate that a 3D U-Net can be fine-tuned to handle high compression ratios while preserving segmentation accuracy.
For prostate cancer patients, large organ deformations occurring between the sessions of a fractionated radiotherapy treatment lead to uncertainties in the doses delivered to the tumour and the surrounding organs at risk. The segmentation of those structures in cone beam CT (CBCT) volumes acquired before every treatment session is desired to reduce those uncertainties. In this work, we perform a fully automatic bladder segmentation of CBCT volumes with U-Net, a 3D fully convolutional neural network (FCN). Since annotations are hard to collect for CBCT volumes, we consider augmenting the training dataset with annotated CT volumes and show that it improves the segmentation performance. Our network is trained and tested on 48 annotated CBCT volumes using a 6-fold cross-validation scheme. The network reaches a mean Dice similarity coefficient (DSC) of 0.801 ± 0.137 with 32 training CBCT volumes. This result improves to 0.848 ± 0.085 when the training set is augmented with 64 CT volumes. The segmentation accuracy increases both with the number of CBCT and CT volumes in the training set. As a comparison, the state-of-the-art deformable image registration (DIR) contour propagation between planning CT and daily CBCT available in RayStation reaches a DSC of 0.744 ± 0.144 on the same dataset, which is below our FCN result.
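The Dice similarity coefficient used to evaluate the segmentations above is a standard overlap measure, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch of how it can be computed for binary masks (the toy masks below are hypothetical, not data from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: DSC = 2 * |A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total

# Toy 3D masks standing in for a predicted and a reference segmentation
pred = np.zeros((4, 4, 4), dtype=bool)
truth = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True   # 8 voxels
truth[1:3, 1:3, 2:4] = True  # 8 voxels, 4 of which overlap with pred
print(dice_coefficient(pred, truth))  # → 0.5
```

A DSC of 1.0 indicates perfect overlap and 0.0 indicates no overlap, which is why the 0.848 FCN result above outperforms the 0.744 DIR baseline.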
External beam radiation therapy (EBRT) treats cancer by delivering daily fractions of radiation to a target volume. For prostate cancer, the target undergoes day-to-day variations in position, volume, and shape. For stereotactic photon and for proton EBRT, endorectal balloons (ERBs) can be used to limit variations. To date, patterns of non-rigid variations for patients with ERB have not been modeled. We extracted and modeled the patient-specific patterns of variations using regularly acquired CT images, non-rigid point cloud registration, and principal component analysis (PCA). For each patient, a non-rigid point-set registration method called Coherent Point Drift (CPD) was used to automatically generate landmark correspondences between all target shapes. To ensure accurate registrations, we tested and validated CPD by identifying parameter values leading to the smallest registration errors (surface matching error 0.13±0.09 mm). PCA demonstrated that 88±3.2% of the target motion could be explained using only 4 principal modes. The most dominant component of target motion is a squeezing and stretching in the anterior-posterior and superior-inferior directions. A PCA model of daily landmark displacements, generated using 6 to 10 CT scans, could explain the target motion well for the CT scans not included in the model (modeling error decreased from 1.83±0.8 mm for 6 CT scans to 1.6±0.7 mm for 10 CT scans). The PCA modeling error was smaller than the naive approximation by the mean shape (approximation error 2.66±0.59 mm). Future work will investigate the use of the PCA model to improve the accuracy of EBRT techniques that are highly susceptible to anatomical variations, such as proton therapy.
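The PCA shape model described above can be sketched as follows: stack the corresponding landmarks of each daily shape into row vectors, subtract the mean shape to obtain displacements, and take the principal modes from an SVD of the centred matrix. The data below is randomly generated for illustration only; the landmark counts and mode choices are assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 10 daily target shapes, each described by 50
# corresponding 3D landmarks (as CPD would produce), flattened to rows.
n_shapes, n_landmarks = 10, 50
shapes = rng.normal(size=(n_shapes, n_landmarks * 3))

mean_shape = shapes.mean(axis=0)
displacements = shapes - mean_shape  # daily landmark displacements

# PCA via SVD of the centred data matrix; rows of Vt are the modes
U, s, Vt = np.linalg.svd(displacements, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()  # variance fraction per mode

k = 4  # number of principal modes retained, as in the abstract
print(f"variance explained by {k} modes: {explained[:k].sum():.1%}")

# Approximate a shape from its first k mode coefficients
coeffs = displacements @ Vt[:k].T      # project onto the retained modes
approx = mean_shape + coeffs @ Vt[:k]  # back-project to landmark space
```

Comparing `approx` against the held-out daily shapes, as the abstract does, quantifies how much better the k-mode model is than using `mean_shape` alone.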