KEYWORDS: Tumors, Education and training, Deep learning, Voxels, Magnetic resonance imaging, Data modeling, Tissues, Resection, Brain, Cross validation
Purpose: Glioblastoma (GBM) is the most common and aggressive primary adult brain tumor. The standard treatment approach is surgical resection of the enhancing tumor mass, followed by adjuvant chemoradiotherapy. However, malignant cells often extend beyond the enhancing tumor boundaries and infiltrate the peritumoral edema. Traditional supervised machine learning techniques hold potential for predicting the extent of tumor infiltration but are hindered by the extensive resources needed to generate expertly delineated regions of interest (ROIs) for training models on tissue most and least likely to be infiltrated.

Approach: We developed a method combining expert knowledge and training-based data augmentation to automatically generate numerous training examples, enhancing the accuracy of our model for predicting tumor infiltration through predictive maps. Such maps can be used for targeted supra-total surgical resection and other therapies that might benefit from intensive yet well-targeted treatment of infiltrated tissue. We applied our method to preoperative multi-parametric magnetic resonance imaging (mpMRI) scans from a subset of 229 patients of a multi-institutional consortium (Radiomics Signatures for Precision Diagnostics) and tested the model on subsequent scans with pathology-proven recurrence.

Results: Leave-one-site-out cross-validation was used to train and evaluate the tumor infiltration prediction model on initial pre-surgical scans, comparing the generated prediction maps with follow-up mpMRI scans in which recurrence was confirmed through post-resection tissue analysis. Performance was measured by voxel-wise odds ratios (ORs) across six institutions: University of Pennsylvania (OR: 9.97), Ohio State University (OR: 14.03), Case Western Reserve University (OR: 8.13), New York University (OR: 16.43), Thomas Jefferson University (OR: 8.22), and Rio Hortega (OR: 19.48).

Conclusions: The proposed model demonstrates that mpMRI analysis using deep learning can predict infiltration in the peritumoral brain region for GBM patients without needing to train a model on expert ROI drawings. Results for each institution demonstrate the model's generalizability and reproducibility.
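A voxel-wise odds ratio of this kind can be derived from a 2x2 contingency table relating predicted-infiltrated voxels to voxels where recurrence was later confirmed. The abstract does not give the exact computation, so the following is a minimal sketch under that assumption; the array names (pred_map, recurrence_mask, brain_mask) and the 0.5 threshold are illustrative, not taken from the paper.

```python
import numpy as np

def voxelwise_odds_ratio(pred_map, recurrence_mask, brain_mask, threshold=0.5):
    """Odds ratio relating predicted infiltration to observed recurrence.

    pred_map        : float array of predicted infiltration probabilities.
    recurrence_mask : binary array of pathology-confirmed recurrence voxels.
    brain_mask      : binary array restricting the analysis to brain tissue.
    """
    brain = brain_mask > 0
    pred = (pred_map >= threshold) & brain
    rec = (recurrence_mask > 0) & brain

    a = np.sum(pred & rec)            # predicted infiltrated, recurred
    b = np.sum(pred & ~rec & brain)   # predicted infiltrated, no recurrence
    c = np.sum(~pred & rec & brain)   # predicted spared, recurred
    d = np.sum(~pred & ~rec & brain)  # predicted spared, no recurrence

    # Haldane-Anscombe correction guards against zero cells.
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    return (a * d) / (b * c)
```

In a leave-one-site-out setting, such per-voxel counts would be pooled over the held-out institution's patients before the ratio is formed, giving one OR per site as reported above.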
Data scarcity and data imbalance are two major challenges in training deep learning models on medical images, such as brain tumor MRI data. Recent advances in generative artificial intelligence have opened new possibilities for synthetically generating MRI data, including brain tumor MRI scans, offering a potential way to mitigate data scarcity and enlarge training sets. This work focused on adapting 2D latent diffusion models to generate 3D multi-contrast brain tumor MRI data conditioned on a tumor mask. The framework comprises two components: a 3D autoencoder for perceptual compression and a conditional 3D Diffusion Probabilistic Model (DPM) that generates high-quality, diverse multi-contrast brain tumor MRI samples guided by the conditional tumor mask. Unlike existing works that have focused on generating either 2D multi-contrast or 3D single-contrast MRI samples, our models generate multi-contrast 3D MRI samples. We also integrated a conditional module within the UNet backbone of the DPM to capture the semantic, class-dependent data distribution driven by the provided tumor mask, so that samples are generated according to a specific brain tumor mask. We trained our models on two brain tumor datasets: The Cancer Genome Atlas (TCGA) public dataset and an internal dataset from the University of Texas Southwestern Medical Center (UTSW). The models generated high-quality 3D multi-contrast brain tumor MRI samples with the tumor location aligned to the input condition mask. The quality of the generated images was evaluated using the Fréchet Inception Distance (FID) score. This work has the potential to mitigate the scarcity of brain tumor data and improve the performance of deep learning models involving brain tumor MRI data.
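The abstract describes the conditional module only at a high level. A common way to condition a DPM's denoiser on a segmentation mask is to concatenate the mask (resampled to the latent grid) with the noisy latent along the channel axis at every denoising step; the sketch below illustrates that assumed design in PyTorch, with a small convolutional stand-in for the full UNet backbone.

```python
import torch
import torch.nn as nn

class MaskConditionedDenoiser(nn.Module):
    """Illustrative 3D denoiser conditioned on a tumor mask by
    channel concatenation (an assumed, common conditioning scheme)."""

    def __init__(self, latent_channels=4, mask_channels=1, base_channels=32):
        super().__init__()
        in_ch = latent_channels + mask_channels
        # Stand-in for a full UNet backbone with timestep embeddings.
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, base_channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv3d(base_channels, latent_channels, 3, padding=1),
        )

    def forward(self, noisy_latent, tumor_mask):
        # The mask must be resampled to the latent's spatial grid beforehand.
        x = torch.cat([noisy_latent, tumor_mask], dim=1)
        return self.net(x)  # predicted noise, same shape as noisy_latent

# Usage: latent from the 3D autoencoder, mask downsampled to match.
z_t = torch.randn(1, 4, 24, 24, 24)   # noisy multi-contrast latent
mask = torch.zeros(1, 1, 24, 24, 24)  # conditional tumor mask
eps_hat = MaskConditionedDenoiser()(z_t, mask)
```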
Brain metastases are the most common malignant brain tumors and occur in 10%-30% of adult patients with systemic cancer. With recent advances in treatment options, there is increasing evidence that automated detection and segmentation from MRI can assist clinicians in diagnosis and therapy planning. In this study, we investigate the impact of data domain on self-supervised learning (SSL) for pretraining a deep learning network to detect and segment brain metastases on 3D post-contrast T1-weighted images. We pretrained a 3D patch-based U-Net using the Model Genesis framework on three subject cohorts with different data domains. The pretrained networks were then finetuned on brain MR scans from patients with metastases as the downstream task dataset. We analyzed the impact of data domain on SSL by examining the evolution of validation metrics, FROC analyses, and the test performance of early-trained and best-validated models. Our results suggest that, in the early stage of finetuning for the target task, SSL is crucial for faster training convergence, and a similar data domain for SSL can help attain improved detection and segmentation performance earlier. However, we observed that the importance of data-domain similarity for SSL progressively diminished as training continued for a sufficient number of iterations in our relatively large data regime. After training convergence, the best-validated models pretrained with SSL provided enhanced detection performance over the model without pretraining, regardless of data domain.
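Model Genesis pretrains an encoder-decoder by distorting a sub-volume and training the network to restore the original with an L2 loss. The sketch below shows two of its published transformations, local pixel shuffling and a Bézier-style nonlinear intensity remap, as a minimal example; block counts, sizes, and other hyperparameters here are illustrative assumptions.

```python
import numpy as np

def local_pixel_shuffle(vol, n_blocks=100, max_block=8):
    """Permute voxels inside small random blocks: global anatomy is
    preserved while local texture is destroyed (Model Genesis-style)."""
    out = vol.copy()
    d, h, w = vol.shape
    rng = np.random.default_rng()
    for _ in range(n_blocks):
        bd, bh, bw = rng.integers(2, max_block + 1, size=3)
        z = rng.integers(0, d - bd + 1)
        y = rng.integers(0, h - bh + 1)
        x = rng.integers(0, w - bw + 1)
        block = out[z:z+bd, y:y+bh, x:x+bw].ravel()
        rng.shuffle(block)
        out[z:z+bd, y:y+bh, x:x+bw] = block.reshape(bd, bh, bw)
    return out

def nonlinear_intensity(vol):
    """Monotonic (occasionally inverted) Bezier-style remap on [0, 1]."""
    rng = np.random.default_rng()
    pts = np.sort(rng.uniform(0, 1, size=4))
    if rng.random() < 0.5:
        pts = pts[::-1]  # sometimes invert contrast
    t = np.clip(vol, 0, 1)
    # Cubic Bezier with control values pts[0..3], evaluated at t.
    return ((1 - t)**3 * pts[0] + 3 * (1 - t)**2 * t * pts[1]
            + 3 * (1 - t) * t**2 * pts[2] + t**3 * pts[3])

# Pretext task: the U-Net is trained to map the distorted sub-volume
# back to the original, then finetuned on the metastasis dataset.
```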
Recent technological advances in deep learning (DL) have led to more accurate brain metastasis (BM) detection. As a data-driven approach, DL's performance relies heavily on the size and quality of the training data. However, collecting a large amount of medical data is costly, and it is difficult to cover BMs with diverse locations, sizes, and structures. Thus, we propose a 3D-2D GAN for fully 3D BM synthesis with configurable parameters. First, two 3D networks synthesize the mask and quantized intensity map of a lesion from three concentric spheres, which control the lesion's location, size, and structure. Then, a 2D network synthesizes the final lesion with a realistic appearance from the quantized intensity map and the background MR image. With this 3D-2D design, the 3D networks enable the synthetic metastasis to be spatially continuous in all three dimensions through the guidance of the 3D intermediate representation of the lesion, while the 2D network enables the use of a 2D perceptual loss to make the final synthesized lesion look realistic. In addition, different network up-sampling strategies and postprocessing are used to control the heterogeneity and contrast of the synthetic lesion. All synthesized images were reviewed by a radiologist: the indistinguishability rate of the synthesized lesions was above 70%, and the configurable parameters for the lesion's location, size, structure, heterogeneity, and contrast were found to be effective. Our work demonstrates the feasibility of synthesizing configurable 3D BM lesions for fully 3D data augmentation.
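The three concentric spheres that parameterize lesion location, size, and structure can be rasterized into a condition volume for the 3D networks roughly as follows. The channel layout, radii semantics, and volume size in this sketch are assumptions for illustration, not the paper's exact encoding.

```python
import numpy as np

def concentric_sphere_condition(shape, center, radii):
    """Rasterize three concentric spheres into a 3-channel condition volume.

    shape  : (D, H, W) of the output volume.
    center : (z, y, x) lesion location in voxels.
    radii  : three increasing radii, e.g. core / boundary / outer margin.
    """
    assert len(radii) == 3 and list(radii) == sorted(radii)
    zz, yy, xx = np.ogrid[:shape[0], :shape[1], :shape[2]]
    dist = np.sqrt((zz - center[0])**2
                   + (yy - center[1])**2
                   + (xx - center[2])**2)
    # One binary channel per sphere; the 3D networks consume this stack.
    return np.stack([(dist <= r).astype(np.float32) for r in radii])

cond = concentric_sphere_condition((64, 64, 64),
                                   center=(32, 32, 32),
                                   radii=(4, 8, 12))
print(cond.shape)  # (3, 64, 64, 64)
```

Moving the center shifts where the lesion is synthesized, scaling the radii changes its size, and the spacing between the spheres offers a handle on its internal structure, which is how the configurable parameters described above would enter the pipeline.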