KEYWORDS: Magnetic resonance imaging, Image quality, Brain, Neuroimaging, Motion models, Denoising, Signal to noise ratio, Medical imaging, Convolutional neural networks, Medical research
Motion artifacts in MRI typically degrade image quality with blurring or ghosting along the phase-encoding direction, and severe motion artifacts may render an exam non-diagnostic. We developed a deep learning model based on a densely connected residual network (DRN) with a K-space blending method (DRN-KB) to reduce motion artifacts in brain MRI. Our DRN model took advantage of residual learning and dense connections to achieve higher performance in motion reduction than a denoising convolutional neural network (DnCNN). In addition, to overcome the over-smoothing and reduced tissue contrast in the motion-reduced images produced by the DRN, K-space blending was performed: the central part of the K-space of the original motion-corrupted image was retained, while the remaining K-space was replaced by that of the motion-reduced image from the DRN. The final MRI images were reconstructed from the blended K-space. The optimal blending ratio of the K-space was determined during the training process. Our DRN model was trained and tested on axial T1-weighted (T1W) images with simulated motion. Two clinical cases (50 images) were used for training and validation, and 16 cases (417 images) were used for testing. The Structural SIMilarity (SSIM) index and the improvement in signal-to-noise ratio (ISNR) were calculated to evaluate image quality. Our DRN-KB method reduced motion artifacts, raising the SSIM from 0.83 for the original motion-corrupted images to 0.95 for the motion-reduced images, and improved the SNR (ISNR: 2.89 dB). The performance of the DRN-KB method was significantly better than that of the conventional DnCNN (P < 0.05).
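The abstract describes the K-space blending step concretely enough to sketch. The snippet below is our illustration in Python/NumPy, not the authors' code; the keep_frac parameter and the function name are placeholders for the blending ratio that the authors tuned during training.

    import numpy as np

    def kspace_blend(motion_img, drn_img, keep_frac=0.1):
        """Keep the central part of the motion image's K-space and take
        the periphery from the DRN motion-reduced output (hypothetical
        helper; keep_frac is the assumed blending ratio)."""
        # Centered 2D FFTs so low frequencies sit in the middle.
        k_motion = np.fft.fftshift(np.fft.fft2(motion_img))
        k_drn = np.fft.fftshift(np.fft.fft2(drn_img))
        h, w = motion_img.shape
        ch = max(1, int(h * keep_frac) // 2)
        cw = max(1, int(w * keep_frac) // 2)
        # Start from the DRN output's K-space, then paste back the
        # central block of the original motion image's K-space.
        blended = k_drn.copy()
        blended[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw] = \
            k_motion[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw]
        # Reconstruct the final image from the blended K-space.
        return np.abs(np.fft.ifft2(np.fft.ifftshift(blended)))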
To reduce cumulative radiation exposure and lifetime risks of radiation-induced cancer from breast cancer screening, we developed neural network convolution (NNC) deep learning for radiation dose reduction in digital breast tomosynthesis (DBT). Our NNC deep learning employed patch-based neural network regression in a convolutional manner to convert lower-dose (LD) to higher-dose (HD) tomosynthesis images. We trained our NNC with quarter-dose (25% of the standard dose: 12 mAs at 32 kVp) raw-projection images and corresponding “teaching” HD images (200% of the standard dose: 99 mAs at 32 kVp) of a breast cadaver phantom acquired with a DBT system (Selenia Dimensions, Hologic, Inc., Bedford, MA). Once trained, NNC no longer requires HD images. It converts new LD images to images that look like HD images; thus the term “virtual” HD (VHD) images. We reconstructed tomosynthesis slices on a research DBT system. To determine the dose reduction rate, we acquired 4 studies of another test phantom at 4 different radiation doses (1.35, 2.7, 4.04, and 5.39 mGy entrance dose). The Structural SIMilarity (SSIM) index was used to evaluate the image quality. Our cadaver phantom experiment demonstrated up to 79% dose reduction. For further testing, we collected half-dose (50% of the standard dose: 32±14 mAs at 33±5 kVp) and full-dose (100% of the standard dose: 68±23 mAs at 33±5 kVp) images of 10 clinical cases with the DBT system at the University of Iowa Hospitals and Clinics. Our NNC converted the half-dose DBT images of the 10 clinical cases to VHD DBT images that were equivalent to full-dose DBT images, according to our observer rating study with 10 breast radiologists. Thus, we achieved 50% dose reduction without sacrificing image quality.
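As a rough sketch of "patch-based neural network regression in a convolutional manner": a trained regressor is slid over the lower-dose image and emits one virtual-HD pixel per patch. The model callable and patch size below are our assumptions for illustration, not the paper's implementation; a practical version would vectorize the double loop.

    import numpy as np

    def nnc_convert(ld_img, model, patch_size=9):
        """Apply a trained patch-to-pixel regressor (`model`, assumed to
        map a flattened LD patch to one scalar HD value) at every pixel."""
        r = patch_size // 2
        # Reflect-pad so border pixels also get a full patch.
        padded = np.pad(ld_img, r, mode="reflect")
        vhd = np.empty(ld_img.shape, dtype=float)
        for i in range(ld_img.shape[0]):
            for j in range(ld_img.shape[1]):
                patch = padded[i:i + patch_size, j:j + patch_size]
                vhd[i, j] = model(patch.ravel())
        return vhd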
KEYWORDS: Digital breast tomosynthesis, Breast, Image quality, Mammography, Breast cancer, Image quality standards, Image processing, Neural networks, Denoising, Cancer
To reduce cumulative radiation exposure and lifetime risks of radiation-induced cancer from breast cancer screening, we developed a deep-learning-based supervised image-processing technique called neural network convolution (NNC) for radiation dose reduction in DBT. NNC employed patch-based neural network regression in a convolutional manner to convert lower-dose (LD) to higher-dose (HD) tomosynthesis images. We trained our NNC with quarter-dose (25% of the standard dose: 12 mAs at 32 kVp) raw projection images and corresponding “teaching” HD images (200% of the standard dose: 99 mAs at 32 kVp) of a breast cadaver phantom acquired with a DBT system (Selenia Dimensions, Hologic, Inc., Bedford, MA). Once trained, NNC no longer requires HD images. It converts new LD images to images that look like HD images; thus the term “virtual” HD (VHD) images. We reconstructed tomosynthesis slices on a research DBT system. To determine the dose reduction rate, we acquired 4 studies of another test phantom at 4 different radiation doses (1.35, 2.7, 4.04, and 5.39 mGy entrance dose). The Structural SIMilarity (SSIM) index was used to evaluate the image quality. For testing, we collected half-dose (50% of the standard dose: 32±14 mAs at 33±5 kVp) and full-dose (standard dose: 68±23 mAs at 33±5 kVp) images of 10 clinical cases with the DBT system at the University of Iowa Hospitals and Clinics. NNC converted the half-dose DBT images of the 10 clinical cases to VHD DBT images that were equivalent to full-dose DBT images. Our cadaver phantom experiment demonstrated 79% dose reduction.
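Both DBT abstracts above, and the MRI abstract earlier, report image quality with the SSIM index, and the MRI study also reports ISNR. The snippet below shows one way such metrics might be computed with scikit-image and NumPy; the ISNR formula is a common definition from the image-restoration literature and is our assumption, not code from these papers.

    import numpy as np
    from skimage.metrics import structural_similarity

    def isnr_db(reference, degraded, restored):
        # One common definition of improvement in SNR: error energy
        # before restoration over error energy after restoration, in dB.
        before = np.sum((reference - degraded) ** 2)
        after = np.sum((reference - restored) ** 2)
        return 10.0 * np.log10(before / after)

    # SSIM between a full-dose slice and its virtual-HD counterpart,
    # with pixel values assumed scaled to [0, 1]:
    # score = structural_similarity(full_dose, vhd, data_range=1.0)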
Consolidation and ground-glass opacity (GGO) are two major types of opacity associated with diffuse lung diseases. Accurate detection and classification of such opacities are crucial in the diagnosis of lung diseases, but the process is subjective and suffers from interobserver variability. The purpose of our study was to develop a deep neural network convolution (NNC) system for distinguishing among consolidation, GGO, and normal lung tissue in high-resolution CT (HRCT). We developed an ensemble of two deep NNC models, each composed of neural network regression (NNR) with an input layer, a convolution layer, a fully connected hidden layer, and a fully connected output layer followed by a thresholding layer. The output layer of each NNC provided a likelihood map for the corresponding lung opacity of interest. The two NNC models in the ensemble were combined in a class-selection layer. We trained our NNC ensemble with pairs of input 2D axial slices and “teaching” probability maps for the corresponding lung opacity, obtained by combining three radiologists’ annotations. We randomly selected 10 and 40 slices per class from HRCT scans of 172 patients as the training and test sets, respectively. Our NNC ensemble achieved areas under the receiver-operating-characteristic (ROC) curve (AUC) of 0.981 and 0.958 in distinguishing consolidation and GGO, respectively, from normal lung tissue, yielding a classification accuracy of 93.3% among the 3 classes. Thus, our deep-NNC-based system for classifying diffuse lung diseases achieved high accuracy in the classification of consolidation, GGO, and normal lung tissue.
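The abstract does not spell out the thresholding and class-selection layers, so the following is only one plausible reading, sketched in NumPy: each pixel takes whichever class has the higher NNC likelihood, provided that likelihood clears a threshold; otherwise it is labeled normal lung. The threshold value and function name are our placeholders.

    import numpy as np

    def select_class(consolidation_map, ggo_map, thresh=0.5):
        """Hypothetical class-selection rule combining the two NNC
        likelihood maps: 0 = normal, 1 = consolidation, 2 = GGO."""
        labels = np.zeros(consolidation_map.shape, dtype=int)
        best = np.maximum(consolidation_map, ggo_map)
        fg = best >= thresh  # pixels where some abnormality is likely
        labels[fg & (consolidation_map >= ggo_map)] = 1
        labels[fg & (ggo_map > consolidation_map)] = 2
        return labels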