Purpose: Interpreting echocardiographic exams requires substantial manual interaction, as videos lack scan-plane information and have inconsistent image quality, ranging from clinically relevant to unrecognizable. Thus, a manual prerequisite step for analysis is to select the appropriate views that showcase both the target anatomy and optimal image quality. To automate this selection process, we present a method for automatic classification of routine views, recognition of unknown views, and quality assessment of detected views.
Approach: We train a neural network for view classification and employ the logit activations from the neural network for unknown view recognition. Subsequently, we train a linear regression algorithm that uses feature embeddings from the neural network to predict view quality scores. We evaluate the method on a clinical test set of 2466 echocardiography videos with expert-annotated view labels and a subset of 438 videos with expert-rated view quality scores. A second observer annotated a subset of 894 videos, including all quality-rated videos.
Results: The proposed method achieved an accuracy of 84.9% ± 0.67 for the joint objective of routine view classification and unknown view recognition, whereas a second observer reached an accuracy of 87.6%. For view quality assessment, the method achieved a Spearman’s rank correlation coefficient of 0.71, whereas a second observer reached a correlation coefficient of 0.62.
Conclusions: The proposed method approaches expert-level performance, enabling fully automatic selection of the most appropriate views for manual or automatic downstream analysis.
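A minimal sketch of the two components described in this abstract, assuming a trained classifier that yields per-view logits and a feature embedding per video; the rejection threshold `tau` and all names are illustrative, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def classify_with_rejection(logits, tau):
    """Return a routine-view index, or -1 when the maximum logit is below tau,
    flagging the video as an unknown view."""
    return int(np.argmax(logits)) if float(np.max(logits)) >= tau else -1

def fit_quality_regressor(embeddings, expert_scores):
    """Linear regression from network feature embeddings (n_videos, n_features)
    to expert-rated quality scores."""
    return LinearRegression().fit(embeddings, expert_scores)
```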
Purpose: Coronary artery calcium (CAC) score, i.e., the amount of CAC quantified in computed tomography (CT), is a strong and independent predictor of coronary heart disease (CHD) events. However, CAC scoring suffers from limited interscan reproducibility, which is mainly due to the clinical definition requiring application of a fixed intensity level threshold for segmentation of calcifications. This limitation is especially pronounced in non-electrocardiogram-synchronized CT, where lesions are more impacted by cardiac motion and partial volume effects. Therefore, we propose a CAC quantification method that does not require a threshold for segmentation of CAC.
Approach: Our method utilizes a generative adversarial network (GAN) in which a CT with CAC is decomposed into an image without CAC and an image showing only CAC. The method, using a cycle-consistent GAN, was trained with 626 low-dose chest CTs and 514 radiotherapy treatment planning (RTP) CTs. Interscan reproducibility was compared to clinical calcium scoring in RTP CTs of 1662 patients, each having two scans.
Results: A lower relative interscan difference in CAC mass was achieved by the proposed method: 47%, compared with 89% for manual clinical calcium scoring. The intraclass correlation coefficient of Agatston scores was 0.96 for the proposed method compared to 0.91 for automatic clinical calcium scoring.
Conclusions: The increased interscan reproducibility achieved by our method may lead to increased reliability of CHD risk categorization and improved accuracy of CHD event prediction.
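A minimal sketch of the cycle-consistency term underlying the approach above, assuming trained generator networks `g_remove` (CT with CAC to CT without CAC) and `g_add` for the reverse mapping; the adversarial losses of a full CycleGAN are omitted for brevity:

```python
import torch

def cycle_consistency_loss(g_remove, g_add, ct_with_cac, ct_without_cac):
    """L1 reconstruction loss after a full removal/re-insertion cycle."""
    l1 = torch.nn.L1Loss()
    rec_with = g_add(g_remove(ct_with_cac))        # CAC removed, then re-inserted
    rec_without = g_remove(g_add(ct_without_cac))  # CAC inserted, then removed
    return l1(rec_with, ct_with_cac) + l1(rec_without, ct_without_cac)
```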
Detection and quantification of atherosclerotic plaque in the coronary arteries is important in cardiovascular risk analysis. Atherosclerotic plaque can be visualized using coronary computed tomography angiography (CCTA). Manual identification and segmentation of plaque is a complex task that requires a high level of expertise. Hence, several automatic approaches have been designed. Automatic methods employing deep learning have been shown to outperform conventional approaches, but they are hampered by the need for large and diverse training data. To address this, we designed a method that synthesizes calcified and non-calcified atherosclerotic plaque in the coronary arteries. First, we generate plaque geometry using a conventional image analysis approach that varies the radius, length, and angle of a plaque, and we use this to generate a crude inpainting of a plaque on a target artery. Thereafter, we employ a conditional generative adversarial network (GAN) to synthesize the plaque texture in CCTA. The generator is trained to generate fake images with realistic appearance, while the discriminator is trained to distinguish the synthesized fake images from real ones. The data set for training and evaluation of the plaque synthesis contained CCTA scans of 102 patients (50 training, 52 testing) with manually annotated calcified and non-calcified plaque. To evaluate the performance of the synthesis method, we compared CCTA patches with real and synthesized plaque. The evaluation resulted in a mean (standard deviation) structural similarity index of 0.99 (0.01), peak signal-to-noise ratio of 73.99 (5.52), and mean absolute error of 5.56 (3.23) HU. To evaluate whether synthesized data enables plaque segmentation, an additional set of CCTA scans of 92 patients without visible plaque was collected. In these scans, plaque was synthesized using the developed approach, containing in total 615 calcified and 544 non-calcified plaque lesions. The synthesized data was used to train a 3D U-Net for segmenting calcified and non-calcified plaque lesions. Automatic segmentation trained with real data only resulted in Dice coefficients of 0.68 and 0.35 for calcified and non-calcified plaque, respectively. This was significantly improved by pretraining the network with synthetic data and refining it with real data, which resulted in Dice coefficients of 0.70 (p=0.03) and 0.36 (p=0.02) for calcified and non-calcified plaque, respectively. The results demonstrate that training with CCTA scans with automatically synthesized calcified and non-calcified plaque improves the performance of plaque segmentation.
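The patch-level evaluation described above can be reproduced with standard metrics; a sketch using scikit-image, where `real` and `fake` are CCTA patches in Hounsfield units and the `data_range` value is an illustrative choice:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_patch(real, fake, data_range=2000.0):
    """SSIM, PSNR, and mean absolute error between a real and synthesized patch."""
    ssim = structural_similarity(real, fake, data_range=data_range)
    psnr = peak_signal_noise_ratio(real, fake, data_range=data_range)
    mae = float(np.mean(np.abs(real - fake)))  # in HU
    return ssim, psnr, mae
```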
A decrease in the volume of the olfactory bulbs is an early marker of neurodegenerative diseases, such as Parkinson’s and Alzheimer’s disease. Recently, asymmetric volumes of the olfactory bulbs observed in postmortem MRIs of COVID-19 patients have indicated that the olfactory bulbs might play an important role in the entrance of the disease into the central nervous system. Hence, volumetric assessment of the olfactory bulbs can be valuable for various conditions. Given that manual annotation of the olfactory bulbs in MRI to determine their volume is tedious, we propose a method for their automatic segmentation. To mitigate the class imbalance caused by the small volume of the olfactory bulbs, we first localize the center of each olfactory bulb in a scan using convolutional neural networks (CNNs). We use these center locations to extract a bounding box containing both olfactory bulbs. Subsequently, the slices present in the bounding box are analyzed by a segmentation CNN that classifies each voxel as left olfactory bulb, right olfactory bulb, or background. The method achieved median (IQR) Dice coefficients of 0.84 (0.08) and 0.83 (0.08), and Average Symmetrical Surface Distances of 0.12 (0.08) and 0.13 (0.08) mm for the left and the right olfactory bulb, respectively. Wilcoxon signed rank tests showed no significant difference between the volumes computed from the reference annotation and the automatic segmentations. Analysis took only 0.20 seconds per scan, and the results indicate that the proposed method could be a first step towards large-scale studies analyzing the pathology and morphology of the olfactory bulbs.
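A minimal sketch of the cropping step that mitigates the class imbalance, assuming the two centers predicted by the localization CNNs are given as integer (z, y, x) arrays; the margin is an illustrative choice:

```python
import numpy as np

def crop_around_bulbs(volume, center_left, center_right, margin=8):
    """Crop a box containing both predicted olfactory-bulb centers plus a margin."""
    lo = np.maximum(np.minimum(center_left, center_right) - margin, 0)
    hi = np.minimum(np.maximum(center_left, center_right) + margin + 1,
                    volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```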
Although high-resolution isotropic 3D medical images are desired in clinical practice, their acquisition is not always feasible. Instead, lower-resolution images are upsampled to higher resolution using conventional interpolation methods. Sophisticated learning-based super-resolution approaches are frequently unavailable in the clinical setting, because such methods require training with high-resolution isotropic examples. To address this issue, we propose a learning-based super-resolution approach that can be trained using solely anisotropic images, i.e., without high-resolution ground truth data. The method exploits the latent space, generated by autoencoders trained on anisotropic images, to increase spatial resolution in low-resolution images. The method was trained and evaluated using 100 publicly available cardiac cine MR scans from the Automated Cardiac Diagnosis Challenge (ACDC). The quantitative results show that the proposed method performs better than conventional interpolation methods. Furthermore, the qualitative results indicate that especially finer cardiac structures are synthesized with high quality. The method has the potential to be applied to other anatomies and modalities and can be easily applied to any 3D anisotropic medical image dataset.
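One plausible reading of the latent-space upsampling idea, sketched below under the assumption of a trained `encoder`/`decoder` pair: two adjacent low-resolution slices are encoded, their latent codes interpolated, and an intermediate slice decoded. This is an illustration only, not the paper's exact scheme:

```python
import torch

@torch.no_grad()
def synthesize_intermediate_slice(encoder, decoder, slice_a, slice_b, alpha=0.5):
    """Decode a new slice from a linear interpolation of two latent codes."""
    z_a, z_b = encoder(slice_a), encoder(slice_b)
    z_mid = (1 - alpha) * z_a + alpha * z_b  # interpolate in latent space
    return decoder(z_mid)
```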
Current unsupervised deep learning-based image registration methods are trained with mean squares or normalized cross correlation as a similarity metric. These metrics are suitable for registration of images where a linear relation between image intensities exists. When such a relation is absent, knowledge from the conventional image registration literature suggests the use of mutual information. In this work, we investigate whether mutual information can be used as a loss for unsupervised deep learning image registration by evaluating it on two datasets: breast dynamic contrast-enhanced MR and cardiac MR images. The results show that training with mutual information as a loss performs on par with conventional image registration in contrast-enhanced images, and that the loss is generally applicable, performing on par with normalized cross correlation in single-modality registration.
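A sketch of a differentiable mutual information loss based on Gaussian Parzen windowing, a common way to make MI trainable; the bin count and kernel width are illustrative choices, not the paper's settings:

```python
import torch

def mutual_information_loss(x, y, n_bins=32, sigma=0.05, eps=1e-10):
    """Negative mutual information between two images scaled to [0, 1]."""
    bins = torch.linspace(0.0, 1.0, n_bins, device=x.device)
    x = x.reshape(-1, 1)  # (N, 1)
    y = y.reshape(-1, 1)
    # Soft assignment of every voxel to every histogram bin (Parzen windowing).
    wx = torch.exp(-0.5 * ((x - bins) / sigma) ** 2)  # (N, n_bins)
    wy = torch.exp(-0.5 * ((y - bins) / sigma) ** 2)
    wx = wx / (wx.sum(dim=1, keepdim=True) + eps)
    wy = wy / (wy.sum(dim=1, keepdim=True) + eps)
    p_xy = wx.t() @ wy / x.shape[0]  # joint histogram, (n_bins, n_bins)
    p_x = p_xy.sum(dim=1, keepdim=True)
    p_y = p_xy.sum(dim=0, keepdim=True)
    mi = (p_xy * torch.log(p_xy / (p_x @ p_y + eps) + eps)).sum()
    return -mi  # minimizing this maximizes mutual information
```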
Coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) that can be quantified in CT scans showing the heart. CAC lesions are defined as lesions in the coronary arteries with an image intensity above 130 HU. The use of a threshold may lead to under- or over-estimation of the amount of CAC and, hence, to incorrect cardiovascular categorization of patients. This is especially pronounced in CT scans without ECG synchronization, where lesions are more subject to cardiac motion and partial volume effects. To address this, we propose a method for quantification of CAC without a threshold. A set of 373 cardiac and 1181 chest CT scans was included to develop the method, and a set of 21 scan-rescan pairs (42 scans) was included for final evaluation. Assuming that the attenuation of CAC is superimposed on the attenuation of the artery, we aimed to separate the CAC from the coronary arteries by employing a CycleGAN to generate a synthetic image without CAC from an image containing CAC, and vice versa. By subtracting the synthetic image without CAC from the image with CAC, a CAC map is created. The CAC map can subsequently be used to identify and quantify CAC. The ground truth, i.e., the true amount of CAC, cannot be established; therefore, in this work the results generated by the method are compared with clinical calcium scoring in terms of reproducibility. The average relative difference between the calcium scores in scan-rescan pairs was 50% with the proposed method and 86% with the conventional method. Moreover, the correlation between CAC pseudo masses in scan-rescan pairs was 0.92 with the proposed method and 0.89 with conventional calcium scoring. Our proposed method is able to identify and quantify CAC lesions in CT scans without using intensity level thresholding. This might allow for more reproducible quantification of CAC in CT scans made without ECG synchronization and, therefore, more accurate CVD risk prediction.
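A minimal sketch of the threshold-free quantification step, assuming a trained generator `g_remove_cac` that produces the synthetic CAC-free image; the mass calibration factor is a placeholder, not the study's value:

```python
import numpy as np

def cac_pseudo_mass(ct_hu, g_remove_cac, voxel_volume_mm3, calibration=1.0):
    """Quantify CAC from the subtraction map, without a 130 HU threshold."""
    no_cac = g_remove_cac(ct_hu)                 # synthetic image without CAC
    cac_map = np.clip(ct_hu - no_cac, 0, None)   # attenuation attributed to CAC
    # Pseudo mass: integrate the CAC attenuation over the scanned volume.
    return calibration * cac_map.sum() * voxel_volume_mm3
```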
Current state-of-the-art deep learning segmentation methods have not yet made a broad entrance into the clinical setting, in spite of the high demand for such automatic methods. One important reason is the lack of reliability caused by models that fail unnoticed and often locally produce anatomically implausible results containing errors that medical experts would not make. This paper presents an automatic image segmentation method based on (Bayesian) dilated convolutional networks (DCNNs) that generate segmentation masks and spatial uncertainty maps for the input image at hand. The method was trained and evaluated using segmentation of the left ventricle (LV) cavity, right ventricle (RV) endocardium, and myocardium (Myo) at end-diastole (ED) and end-systole (ES) in 100 cardiac 2D MR scans from the MICCAI 2017 Automated Cardiac Diagnosis Challenge (ACDC). Combining segmentations and uncertainty maps and employing a human-in-the-loop setting, we provide evidence that image areas indicated as highly uncertain, regarding the obtained segmentation, almost entirely cover regions of incorrect segmentations. The fused information can be harnessed to increase segmentation performance. Our results reveal that we can obtain valuable spatial uncertainty maps with low computational effort using DCNNs.
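A sketch of how a spatial uncertainty map can be produced alongside the segmentation; Monte Carlo dropout is used here as the Bayesian approximation, and `model` (with dropout layers) and the sample count are assumptions:

```python
import torch

@torch.no_grad()
def segment_with_uncertainty(model, image, n_samples=20):
    """Segmentation plus per-voxel entropy from Monte Carlo dropout samples."""
    model.train()  # keep dropout active at inference time
    probs = torch.stack([torch.softmax(model(image), dim=1)
                         for _ in range(n_samples)]).mean(dim=0)
    # High entropy of the mean class probabilities marks uncertain voxels.
    entropy = -(probs * torch.log(probs + 1e-10)).sum(dim=1)
    return probs.argmax(dim=1), entropy
```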
Response of breast cancer to neoadjuvant chemotherapy (NAC) can be monitored using the change in visible tumor on magnetic resonance imaging (MRI). In our current workflow, seed points are manually placed in areas of enhancement likely to contain cancer. A constrained volume growing method uses these manually placed seed points as input and generates a tumor segmentation. This method is rigorously validated using complete pathological embedding. In this study, we propose to exploit deep learning for fast and automatic seed point detection, replacing manual seed point placement in our existing and well-validated workflow. The seed point generator was developed in early breast cancer patients with pathology-proven segmentations (N=100), operated shortly after MRI. It consisted of an ensemble of three independently trained fully convolutional dilated neural networks that classified breast voxels as tumor or non-tumor. Subsequently, local maxima were used as seed points for volume growing in patients receiving NAC (N=10). The percentage of tumor volume change was evaluated against semi-automatic segmentations. The primary cancer was localized in 95% of the tumors at the cost of 0.9 false positives per patient. False positives included focally enhancing regions of unknown origin and parts of the intramammary blood vessels. Volume growing from the seed points showed a median tumor volume decrease of 70% (interquartile range: 50%-77%), comparable to the semi-automatic segmentations (median: 70%, interquartile range: 23%-76%). To conclude, a fast and automatic seed point generator was developed, fully automating a well-validated semi-automatic workflow for response monitoring of breast cancer to neoadjuvant chemotherapy.
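A sketch of the seed-point step, assuming the per-network tumor probability maps from the ensemble are available; the minimum peak distance and probability floor are illustrative choices:

```python
import numpy as np
from skimage.feature import peak_local_max

def seed_points(prob_maps, min_distance=5, threshold=0.5):
    """Average the ensemble's probability maps and take local maxima as seeds."""
    mean_prob = np.mean(prob_maps, axis=0)  # average over the three networks
    return peak_local_max(mean_prob, min_distance=min_distance,
                          threshold_abs=threshold)  # (n_seeds, ndim) coordinates
```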
The amount of coronary artery calcification (CAC) quantified in computed tomography (CT) scans enables prediction of cardiovascular disease (CVD) risk. However, interscan variability of CAC quantification is high, especially in scans made without ECG synchronization. We propose a method for automatic detection of CACs that are severely affected by cardiac motion, and subsequently evaluate the impact of such CACs on CAC quantification and CVD risk determination. This study includes 1000 baseline and 585 one-year follow-up low-dose chest CTs from the National Lung Screening Trial. Of these, 415 baseline scans were used to train and evaluate a convolutional neural network that identifies observer-determined CACs affected by severe motion artifacts. Thereafter, the 585 paired scans acquired at baseline and follow-up were used to evaluate the impact of severe motion artifacts on CAC quantification and risk categorization. Based on the CAC amount, the scans were categorized into four standard CVD risk categories. The method identified CACs affected by severe motion artifacts with 85.2% accuracy. Moreover, reproducibility of CAC scores in scan pairs was higher in scans containing mostly CACs not affected by severe cardiac motion. Hence, the proposed method enables identification of scans affected by severe cardiac motion, in which CAC quantification may not be reproducible.
Morphological analysis and identification of pathologies in the aorta are important for cardiovascular diagnosis and risk assessment in patients. Manual annotation is time-consuming and cumbersome in CT scans acquired without contrast enhancement and with low radiation dose. Hence, we propose an automatic method to segment the ascending aorta, the aortic arch and the thoracic descending aorta in low-dose chest CT without contrast enhancement. Segmentation was performed using a dilated convolutional neural network (CNN), with a receptive field of 131 × 131 voxels, that classified voxels in axial, coronal and sagittal image slices. To obtain a final segmentation, the obtained probabilities of the three planes were averaged per class, and voxels were subsequently assigned to the class with the highest class probability. Two-fold cross-validation experiments were performed where ten scans were used to train the network and another ten to evaluate the performance. Dice coefficients of 0.83 ± 0.07, 0.86 ± 0.06 and 0.88 ± 0.05, and Average Symmetrical Surface Distances (ASSDs) of 2.44 ± 1.28, 1.56 ± 0.68 and 1.87 ± 1.30 mm were obtained for the ascending aorta, the aortic arch and the descending aorta, respectively. The results indicate that the proposed method could be used in large-scale studies analyzing the anatomical location of pathology and morphology of the thoracic aorta.
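A minimal sketch of the multi-planar fusion described in the abstract above, given per-class probability volumes from the three plane-specific networks:

```python
import numpy as np

def fuse_planes(p_axial, p_coronal, p_sagittal):
    """Average per-class probabilities over planes; each input is (n_classes, z, y, x)."""
    p = (p_axial + p_coronal + p_sagittal) / 3.0
    return p.argmax(axis=0)  # final label per voxel
```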
Coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular events (CVEs). CAC can be quantified in chest CT scans acquired in lung screening. However, in these images the reproducibility of CAC quantification is compromised by cardiac motion that occurs during scanning, thereby limiting the reproducibility of CVE risk assessment. We present a system for the identification of CACs strongly affected by cardiac motion artifacts by using a convolutional neural network (CNN).
This study included 125 chest CT scans from the National Lung Screening Trial (NLST). Images were acquired with CT scanners from four different vendors (GE, Siemens, Philips, Toshiba) with varying tube voltage, image resolution settings, and without ECG synchronization. To define the reference standard, an observer manually identified CAC lesions and labeled each according to the presence of cardiac motion: strongly affected (positive), mildly affected/not affected (negative). A CNN was designed to automatically label the identified CAC lesions according to the presence of cardiac motion by analyzing a patch from the axial CT slice around each lesion.
From the 125 CT scans, 9201 CAC lesions were analyzed. Of these, 8001 lesions were used for training (19% positive) and the remaining 1200 (50% positive) for testing. The proposed CNN achieved a classification accuracy of 85% (86% sensitivity, 84% specificity).
The obtained results demonstrate that the proposed algorithm can identify CAC lesions that are strongly affected by cardiac motion. This could facilitate further investigation into the relation between CAC scoring reproducibility and the presence of cardiac motion artifacts.
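A minimal sketch of the per-lesion input construction described above, ignoring image-border handling for brevity; the patch size is an illustrative choice:

```python
import numpy as np

def extract_axial_patch(volume_hu, center_zyx, size=64):
    """Fixed-size 2D patch from the axial slice around a lesion center,
    to be fed to the CNN that labels cardiac-motion severity."""
    z, y, x = center_zyx
    h = size // 2
    return volume_hu[z, y - h:y + h, x - h:x + h]
```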
The amount of calcifications in the coronary arteries is a powerful and independent predictor of cardiovascular events and is used to identify subjects at high risk who might benefit from preventive treatment. Routine quantification of coronary calcium scores can complement screening programs using low-dose chest CT, such as lung cancer screening. We present a system for automatic coronary calcium scoring based on deep convolutional neural networks (CNNs). The system uses three independently trained CNNs to estimate a bounding box around the heart. In this region of interest, connected components above 130 HU are considered candidates for coronary artery calcifications. To separate them from other high-intensity lesions, classification of all extracted voxels is performed by feeding two-dimensional 50 mm × 50 mm patches from three orthogonal planes into three concurrent CNNs. The networks consist of three convolutional layers and one fully connected layer with 256 neurons. In the experiments, 1028 non-contrast-enhanced and non-ECG-triggered low-dose chest CT scans were used. The network was trained on 797 scans. In the remaining 231 test scans, the method detected on average 194.3 mm³ of 199.8 mm³ coronary calcifications per scan (sensitivity 97.2%) with an average false-positive volume of 10.3 mm³. Subjects were assigned to one of five standard cardiovascular risk categories based on the Agatston score. Accuracy of risk category assignment was 84.4% with a linearly weighted κ of 0.89. The proposed system can perform automatic coronary artery calcium scoring to identify subjects undergoing low-dose chest CT screening who are at risk of cardiovascular events with high accuracy.
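For reference, a sketch of the Agatston scoring step applied to detected lesions, using the standard density weighting; this illustrates only the scoring, not the CNN-based candidate classification:

```python
def agatston_weight(peak_hu):
    """Standard Agatston density weight from the lesion's peak intensity."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    return 1  # lesions are already thresholded at 130 HU upstream

def agatston_score(lesion_slices):
    """lesion_slices: iterable of (peak_hu, area_mm2), one per lesion per axial slice."""
    return sum(area * agatston_weight(peak) for peak, area in lesion_slices)
```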
Localization of anatomical regions of interest (ROIs) is a preprocessing step in many medical image analysis tasks. While trivial for humans, it is complex for automatic methods. Classic machine learning approaches require hand-crafted features to describe differences between ROIs and background, a challenging design task. Deep convolutional neural networks (CNNs) alleviate this by automatically finding hierarchical feature representations from raw images. We employ this trait to detect anatomical ROIs in 2D image slices in order to localize them in 3D.
In 100 low-dose non-contrast enhanced non-ECG synchronized screening chest CT scans, a reference standard was defined by manually delineating rectangular bounding boxes around three anatomical ROIs — heart, aortic arch, and descending aorta. Every anatomical ROI was automatically identified using a combination of three CNNs, each analyzing one orthogonal image plane. While single CNNs predicted presence or absence of a specific ROI in the given plane, the combination of their results provided a 3D bounding box around it.
Classification performance of each CNN, expressed as the area under the receiver operating characteristic curve, was ≥0.988. Additionally, the performance of ROI localization was evaluated. Median Dice scores for automatically determined bounding boxes around the heart, aortic arch, and descending aorta were 0.89, 0.70, and 0.85, respectively. The results demonstrate that accurate automatic 3D localization of anatomical structures by CNN-based 2D image classification is feasible.
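A minimal sketch of one way the per-slice presence predictions could be combined into a 3D bounding box; the binary per-slice arrays and the min/max logic are an illustrative reading of the combination step:

```python
import numpy as np

def slices_to_range(slice_predictions):
    """First and last slice index predicted to contain the ROI, or None."""
    idx = np.flatnonzero(slice_predictions)
    return (int(idx.min()), int(idx.max())) if idx.size else None

def bounding_box_3d(pred_axial, pred_coronal, pred_sagittal):
    """Each input: one binary presence prediction per slice in that plane."""
    z = slices_to_range(pred_axial)     # axial slices bound the z-extent
    y = slices_to_range(pred_coronal)   # coronal slices bound the y-extent
    x = slices_to_range(pred_sagittal)  # sagittal slices bound the x-extent
    return x, y, z
```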
Arterial calcification has been related to cardiovascular disease (CVD) and osteoporosis. However, little is known about the role of genetics and the exact pathways leading to arterial calcification, or its relation to the bone density changes that indicate osteoporosis. In this study, we conducted a genome-wide association study of arterial calcification burden, followed by a look-up of known single nucleotide polymorphisms (SNPs) for coronary artery disease (CAD) and myocardial infarction (MI), and for bone mineral density (BMD), to test for a shared genetic basis between the traits. The study included a subcohort of the Dutch-Belgian lung cancer screening trial comprising 2,561 participants. Participants underwent baseline CT screening in one of two hospitals participating in the trial. Low-dose chest CT images were acquired without contrast enhancement and without ECG synchronization. In these images, coronary and aortic calcifications were identified automatically. Subsequently, the detected calcifications were quantified using coronary artery calcium Agatston and volume scores. Genotype data was available for these participants. A genome-wide association study was conducted on 10,220,814 SNPs using a linear regression model. To reduce the multiple testing burden, known CAD/MI and BMD SNPs were specifically tested (45 SNPs from the CARDIoGRAMplusC4D consortium and 60 SNPs from the GEFOS consortium). No novel significant SNPs were found. Significant enrichment for CAD/MI SNPs was observed when testing Agatston and coronary artery calcium volume scores. Moreover, a significant enrichment of BMD SNPs was shown in aortic calcium volume scores. This may indicate a genetic relation between BMD and arterial calcification burden.
CT attenuation correction (CTAC) images acquired with PET/CT visualize coronary artery calcium (CAC) and enable CAC quantification. CAC scores acquired with CTAC have been suggested as a marker of cardiovascular disease (CVD). In this work, an algorithm previously developed for automatic CAC scoring in dedicated cardiac CT was applied to automatic CAC detection in CTAC. The study included 134 consecutive patients undergoing 82-Rb PET/CT. Low-dose rest CTAC scans were acquired (100 kV, 11 mAs, 1.4 mm × 1.4 mm × 3 mm voxel size). An experienced observer defined the reference standard with the clinically used intensity level threshold for calcium identification (130 HU). Five scans were removed from the analysis due to artifacts. The algorithm extracted potential CAC by intensity-based thresholding and 3D connected component labeling. Each candidate was described by location, size, shape, and intensity features. An ensemble of extremely randomized decision trees was used to identify CAC. The data set was randomly divided into training and test sets. Automatically identified CAC was quantified using volume and Agatston scores. In 33 test scans, the system detected on average 469 mm³ of 730 mm³ (64%) of CAC with 36 mm³ false-positive volume per scan. The intraclass correlation coefficient for volume scores was 0.84. Each patient was assigned to one of four CVD risk categories based on the Agatston score (0-10, 11-100, 101-400, >400). The correct CVD category was assigned to 85% of patients (Cohen's linearly weighted κ of 0.82). Automatic detection of CVD risk based on CAC scoring in rest CTAC images is feasible. This may enable large-scale studies evaluating the clinical value of CAC scoring in CTAC data.
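A sketch of the candidate classification step using scikit-learn's extremely randomized trees; the feature arrays and hyperparameters are illustrative:

```python
from sklearn.ensemble import ExtraTreesClassifier

def classify_candidates(features_train, labels_train, features_test):
    """Label candidate lesions as CAC / not CAC from location, size,
    shape, and intensity features."""
    clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
    clf.fit(features_train, labels_train)
    return clf.predict(features_test)
```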
Novelty detection is concerned with identifying test data that differs from the training data of a classifier. In the case of brain MR images, pathology or imaging artefacts are examples of untrained data. In this proof-of-principle study, we measure the behaviour of a classifier during the classification of trained labels (i.e., normal brain tissue). Next, we devise a measure that distinguishes normal classifier behaviour from the abnormal behaviour that occurs in the case of a novelty. We evaluate this by training a kNN classifier on normal brain tissue, applying it to images with an untrained pathology (white matter hyperintensities, WMH), and determining whether our measure is able to identify abnormal classifier behaviour at WMH locations. For our kNN classifier, behaviour is modelled as the mean, median, or q1 distance to the k nearest points. Healthy tissue was trained on 15 images; classifier behaviour was trained/tested on 5 images with leave-one-out cross-validation. For each trained class, we measure the distribution of mean/median/q1 distances to the k nearest points. Next, for each test voxel, we compute its Z-score with respect to the measured distribution of its predicted label. We consider a Z-score ≥4 to indicate abnormal classifier behaviour, with a probability due to chance of 0.000032. Our measure identified >90% of WMH volume and also highlighted other untrained findings, predominantly vessels, the cerebral falx, brain mask errors, and the choroid plexus. This measure is generalizable to other classifiers and might help in detecting unexpected findings or novelties by measuring classifier behaviour.
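A minimal sketch of the behaviour measure, simplified to a single trained class; in the study the distance distribution is modelled per predicted label, and k and all names here are illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_behaviour(train_features, k=10):
    """Model 'normal' behaviour as the distribution of mean kNN distances."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_features)
    d, _ = nn.kneighbors(train_features)
    mean_d = d.mean(axis=1)
    return nn, mean_d.mean(), mean_d.std()

def novelty_z_scores(nn, mu, sigma, test_features):
    """Z-score of each test sample's mean kNN distance; Z >= 4 flags a novelty."""
    d, _ = nn.kneighbors(test_features)
    return (d.mean(axis=1) - mu) / sigma
```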
Calcium burden determined in CT images acquired in lung cancer screening is a strong predictor of cardiovascular events (CVEs). This study investigated whether subjects undergoing such screening who are at risk of a CVE can be identified using automatic image analysis and subject characteristics. Moreover, the study examined whether these individuals can be identified using solely image information, or whether a combination of image and subject data is needed. A set of 3559 male subjects participating in the Dutch-Belgian lung cancer screening trial was included. Low-dose non-ECG-synchronized chest CT images acquired at baseline were analyzed (1834 scanned in the University Medical Center Groningen, 1725 in the University Medical Center Utrecht). Aortic and coronary calcifications were identified using previously developed automatic algorithms. A set of features describing the number, volume, and size distribution of the detected calcifications was computed. The age of the participants was extracted from image headers. Features describing participants' smoking status, smoking history, and past CVEs were obtained. CVEs that occurred within three years after the imaging were used as the outcome. Support vector machine classification was performed employing different feature sets: either only image features, or a combination of image and subject-related characteristics. Classification based solely on the image features resulted in an area under the ROC curve (Az) of 0.69. A combination of image and subject features resulted in an Az of 0.71. The results demonstrate that subjects undergoing lung cancer screening who are at risk of CVE can be identified using automatic image analysis. Adding subject information slightly improved the performance.
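A sketch of the classification and evaluation step, using a support vector machine and ROC AUC as reported in the abstract; the hyperparameters are illustrative:

```python
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def train_and_evaluate(x_train, y_train, x_test, y_test):
    """Fit an SVM on calcification (and optionally subject) features and
    report the area under the ROC curve (Az) on held-out subjects."""
    clf = SVC(probability=True).fit(x_train, y_train)
    scores = clf.predict_proba(x_test)[:, 1]
    return roc_auc_score(y_test, scores)
```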