Magnetic Resonance Imaging (MRI) is increasingly used to localize prostate cancer, but the subtle differences between cancer and normal tissue render MRI interpretation challenging. Computational approaches have been proposed to detect prostate cancer, yet variation in intensity distributions across different scanners, and even on the same scanner, poses significant challenges to image analysis with computational tools such as deep learning. In this study, we developed a conditional generative adversarial network (GAN) to normalize intensity distributions on prostate MRI. We evaluated our GAN normalization in three ways. First, we qualitatively compared the intensity distributions of GAN-normalized images to those of statistically normalized images. Second, we visually examined the GAN-normalized images to ensure that the appearance of the prostate and other structures was preserved. Finally, we quantitatively evaluated the performance of deep learning holistically nested edge detection (HED) networks at identifying prostate cancer on MRI when trained on raw, statistically normalized, and GAN-normalized images. The detection network trained on GAN-normalized images achieved accuracy and area under the curve (AUC) scores similar to those of the networks trained on raw and statistically normalized images. Conditional GANs may hence be an effective tool for normalizing intensity distributions on MRI and can be used to train downstream deep learning tasks.
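The abstract does not specify the GAN architecture or its training target; the sketch below is a minimal pix2pix-style conditional GAN in PyTorch, assuming 2D slices as input and, as one plausible but unstated choice, statistically normalized slices as the supervision target. All class and function names here are hypothetical, not from the paper.

```python
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Maps a raw MRI slice to an intensity-normalized slice (inputs
    and targets assumed scaled to [-1, 1] to match the final Tanh)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the raw input slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, raw, normalized):
        return self.net(torch.cat([raw, normalized], dim=1))


def train_step(G, D, g_opt, d_opt, raw, target, l1_weight=100.0):
    """One conditional-GAN step: the raw slice is the condition; the
    (assumed) statistically normalized slice is the training target."""
    bce = nn.BCEWithLogitsLoss()
    fake = G(raw)
    # Discriminator update: real pairs vs. generated pairs.
    d_opt.zero_grad()
    d_real = D(raw, target)
    d_fake = D(raw, fake.detach())
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    d_opt.step()
    # Generator update: fool D while staying close to the target (L1).
    g_opt.zero_grad()
    d_fake = D(raw, fake)
    g_loss = (bce(d_fake, torch.ones_like(d_fake))
              + l1_weight * nn.functional.l1_loss(fake, target))
    g_loss.backward()
    g_opt.step()
    return g_loss.item(), d_loss.item()
```

The L1 term keeps the generator anatomically faithful to the target while the adversarial term pushes the output intensity distribution toward the normalized domain, which is what the visual-preservation check in the abstract is probing.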
Prostate MRI is increasingly used to help localize and target prostate cancer. Yet the subtle differences in MRI appearance between cancer and normal tissue render MRI interpretation challenging. Deep learning methods hold promise for automating the detection of prostate cancer on MRI; however, such approaches require large, well-curated datasets. Although existing methods that employ fully convolutional neural networks have shown promising results, the lack of labeled data can reduce the generalization of these models. Self-supervised learning provides a promising avenue for learning semantic features from unlabeled data. In this study, we apply the self-supervised strategy of image context restoration to detect prostate cancer on MRI and show that it improves model performance for two different architectures (U-Net and Holistically Nested Edge Detector) compared to their purely supervised counterparts. We trained our models on MRI exams from 381 men with biopsy-confirmed cancer. Self-supervised models outperformed randomly initialized models on an independent test set across a variety of training settings. In three experiments, training with 5%, 25%, and 100% of our labeled data, the U-Net-based pre-training and downstream task outperformed the other models. The largest improvements came when training with 5% of the labeled data, where our self-supervised U-Nets improved per-pixel area under the curve (AUC) from 0.71 to 0.83 and the Dice similarity coefficient from 0.19 to 0.53. When training with 100% of the data, our U-Net-based pre-training and detection achieved an AUC of 0.85 and a Dice similarity coefficient of 0.57.
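In image context restoration, patches of an image are swapped so that intensities are preserved but spatial context is scrambled, and a network is pre-trained to restore the original image before fine-tuning on the labeled task. The sketch below illustrates that pretext task; the patch size, number of swaps, and reconstruction loss are assumptions for illustration, not values from the study.

```python
import torch


def corrupt_context(img, patch=16, n_swaps=10, generator=None):
    """Context-restoration pretext task: repeatedly swap two randomly
    chosen square patches. Pixel intensities are preserved but spatial
    context is scrambled; a network is then trained to undo the swaps.
    (Checks preventing patch overlap are omitted for brevity.)"""
    corrupted = img.clone()
    _, h, w = corrupted.shape  # (channels, height, width)
    for _ in range(n_swaps):
        y1, y2 = torch.randint(0, h - patch, (2,), generator=generator).tolist()
        x1, x2 = torch.randint(0, w - patch, (2,), generator=generator).tolist()
        a = corrupted[:, y1:y1 + patch, x1:x1 + patch].clone()
        b = corrupted[:, y2:y2 + patch, x2:x2 + patch].clone()
        corrupted[:, y1:y1 + patch, x1:x1 + patch] = b
        corrupted[:, y2:y2 + patch, x2:x2 + patch] = a
    return corrupted


# Pre-training sketch: an encoder-decoder (e.g. a U-Net) learns to map
# corrupt_context(slice_) back to slice_ under an L2 loss; its learned
# weights then initialize the downstream cancer-detection network.
slice_ = torch.randn(1, 256, 256)  # one T2-weighted slice (toy data)
corrupted = corrupt_context(slice_, patch=16, n_swaps=10)
```

Because no labels are needed to generate the corrupted/original pairs, the pretext task can exploit every unlabeled exam, which is why the gains in the abstract are largest in the 5% labeled-data regime.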
The use of magnetic resonance imaging (MRI)-ultrasound fusion targeted biopsy improves the diagnosis of aggressive prostate cancer. Fusion of ultrasound and MRI requires accurate prostate segmentations. In this paper, we developed a 2.5-dimensional (2.5D) deep learning model, ProGNet, to segment the prostate on T2-weighted MRI. ProGNet is an optimized U-Net model that weighs three adjacent slices in each MRI sequence to segment the prostate in a 2.5D context. We trained ProGNet on 529 cases in which experts annotated the whole gland (WG) on axial T2-weighted MRI prior to targeted prostate biopsy; in 132 cases, experts also annotated the central gland (CG). After five-fold cross-validation, ProGNet achieved, for WG segmentation, a mean Dice similarity coefficient (DSC) of 0.91±0.02, sensitivity of 0.89±0.03, specificity of 0.97±0.00, and accuracy of 0.95±0.01. For CG segmentation, ProGNet achieved a mean DSC of 0.86±0.01, sensitivity of 0.84±0.03, specificity of 0.99±0.01, and accuracy of 0.96±0.01. We then tested the generalizability of the model on the 60-case NCI-ISBI 2013 challenge dataset and on a local, independent 61-case test set. We achieved DSCs of 0.81±0.02 and 0.72±0.02 for WG and CG segmentation on the NCI-ISBI 2013 challenge dataset, and 0.83±0.01 and 0.75±0.01 on the local dataset. Model performance was excellent and outperformed state-of-the-art U-Net and holistically nested edge detector (HED) networks on all three datasets.
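The abstract says ProGNet weighs three adjacent slices but does not detail the mechanism. A common way to realize a 2.5D input, sketched below under that assumption, is to stack each slice with its two neighbors as three input channels, replicating the first and last slice at the volume boundaries; `make_25d_stacks` is a hypothetical helper, not the paper's code.

```python
import torch


def make_25d_stacks(volume):
    """Turn a (slices, H, W) T2-weighted volume into 2.5D inputs:
    each sample stacks a slice with its two neighbors as 3 channels,
    replicating the first/last slice at the volume boundaries."""
    padded = torch.cat([volume[:1], volume, volume[-1:]], dim=0)
    stacks = torch.stack(
        [padded[i:i + 3] for i in range(volume.shape[0])], dim=0
    )  # (slices, 3, H, W), each window centered on its slice
    return stacks


# Example: a 24-slice, 256x256 volume becomes 24 three-channel inputs.
vol = torch.randn(24, 256, 256)
x = make_25d_stacks(vol)  # shape: (24, 3, 256, 256)
# x can be fed to a U-Net whose first convolution accepts 3 input
# channels, letting the model weigh each slice with its neighbors.
```

This keeps the computational cost of a 2D network while giving the model the through-plane context that pure per-slice segmentation lacks.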