Purpose: The prevalence of type 2 diabetes mellitus (T2DM) has been steadily increasing over the years. We aim to predict the occurrence of T2DM within 5 years from mammography images using two different methods and compare their performance. Approach: We examined 312 samples, including 110 positive cases (developed T2DM after 5 years) and 202 negative cases (did not develop T2DM), using two different methods. In the first, radiomics-based approach, we utilized radiomics features and machine learning (ML) algorithms. The entire breast region was chosen as the region of interest for extracting radiomics features. A binary breast image was then created, from which we extracted 668 features and analyzed them using various ML algorithms. In the second method, a convolutional neural network (CNN) with a modified ResNet architecture and various kernel sizes was applied to the raw mammography images for the prediction task. Nested, stratified five-fold cross-validation was performed for both methods to compute accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC). Hyperparameter tuning was also performed to enhance the models' performance and reliability. Results: The radiomics approach's light gradient boosting model gave 68.9% accuracy, 30.7% sensitivity, 89.5% specificity, and 0.63 AUROC. The CNN method achieved an AUROC of 0.58 over 20 epochs. Conclusion: Radiomics outperformed the CNN by 0.05 in terms of AUROC. This may be due to the more straightforward interpretability and clinical relevance of predefined radiomics features compared with the complex, abstract features learned by CNNs.
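As a point of reference for the CNN arm described above, the sketch below shows one way a ResNet backbone could be adapted to single-channel mammograms for a binary outcome. Using torchvision's resnet18 with a replaced first convolution and output head is an illustrative assumption, not the paper's exact modified-ResNet architecture or kernel-size scheme.

```python
# Hedged sketch: ResNet adapted for 1-channel mammograms and a binary label.
import torch
import torch.nn as nn
from torchvision import models

def build_model():
    net = models.resnet18(weights=None)
    # Accept 1-channel mammograms instead of 3-channel RGB images.
    net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    # Single logit for the binary outcome (T2DM within 5 years or not).
    net.fc = nn.Linear(net.fc.in_features, 1)
    return net

model = build_model()
x = torch.randn(4, 1, 512, 512)              # dummy batch of mammogram crops
logits = model(x).squeeze(1)
loss = nn.BCEWithLogitsLoss()(logits, torch.tensor([0., 1., 0., 1.]))
```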
The incidence rate of type 2 diabetes mellitus (T2DM) has been increasing over the years. T2DM is a common lifestyle-related disease, and predicting its occurrence five years in advance could help patients alter their lifestyle early enough to prevent it. We intend to investigate the feasibility of radiomics features for predicting the occurrence of T2DM from screening mammography images, which could benefit prevention of the disease. This study examined the occurrence of T2DM using 110 positive samples (developed T2DM after 5 years) and 202 negative samples (did not develop T2DM after five years). The whole breast region was selected as the region of interest (ROI) from which radiomics features were extracted. A mask was created from every image using a modified threshold value (obtained by Otsu's binarization method) to produce a binary image of the breast. A total of 668 radiomics features were then extracted and analyzed using machine learning algorithms implemented in the Python programming language, such as random forest (RF), gradient boosting classifier (GBC), and light gradient boosting machine (LGBM), as they can give excellent classification and prediction results. Five-fold cross-validation was carried out; the accuracy, sensitivity, specificity, and AUC were calculated for each algorithm, and hyperparameter tuning was performed to improve model performance. RF and GBC produced good accuracy (⪆ 70%) but low sensitivity. LGBM's accuracy was almost 70%, but it had the highest sensitivity (43.9%) and decent specificity (74.4%).
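A minimal sketch of the radiomics pipeline described above is given below: an Otsu breast mask, pyradiomics feature extraction, and an LGBM classifier evaluated with stratified five-fold cross-validation. File handling, the feature filter, and the LGBM hyperparameters are illustrative assumptions rather than the study's exact configuration (2D mammograms may additionally require pyradiomics' force2D settings).

```python
# Hedged sketch: Otsu breast mask + radiomics features + LGBM with 5-fold CV.
import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor            # pyradiomics
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from lightgbm import LGBMClassifier

extractor = featureextractor.RadiomicsFeatureExtractor()

def breast_mask(image_path):
    """Create a binary breast mask with Otsu's thresholding (breast = 1)."""
    img = sitk.ReadImage(image_path)
    mask = sitk.OtsuThreshold(img, 0, 1)           # background -> 0, breast -> 1
    return img, mask

def extract_features(image_path):
    """Return a numeric radiomics feature vector for one mammogram."""
    img, mask = breast_mask(image_path)
    result = extractor.execute(img, mask)
    # Keep only feature values, dropping the diagnostic metadata entries.
    return np.array([v for k, v in result.items()
                     if k.startswith("original_")], dtype=float)

def cross_validate(X, y, n_splits=5):
    """X: (n_samples, n_features) radiomics matrix, y: 0/1 T2DM labels."""
    aucs = []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X, y):
        clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
        clf.fit(X[train_idx], y[train_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], prob))
    return np.mean(aucs)
```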
Purpose: The target disorders of emergency head CT are wide-ranging. Therefore, people working in an emergency department desire a computer-aided detection system for general disorders. In this study, we proposed an unsupervised anomaly detection method for emergency head CT using an autoencoder and evaluated its anomaly detection performance. Methods: We used a 3D convolutional autoencoder (3D-CAE), which contains 11 layers in the convolution block and 6 layers in the deconvolution block. In the training phase, we trained the 3D-CAE using 10,000 3D patches extracted from 50 normal cases. In the test phase, we calculated an abnormality score for each voxel in 38 emergency head CT volumes (22 abnormal cases and 16 normal cases) and evaluated the likelihood of lesion existence. Results: Our method achieved a sensitivity of 68% and a specificity of 88%, with an area under the receiver operating characteristic curve of 0.87. This shows that the method has moderate accuracy in distinguishing normal CT cases from abnormal ones. Conclusion: Our method shows potential for anomaly detection in emergency head CT.
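The following is a reduced PyTorch sketch of patch-wise anomaly scoring with a 3D convolutional autoencoder trained only on normal patches; the layer counts, channel widths, and patch size are simplified assumptions, not the 11-layer/6-layer architecture used in the study.

```python
# Hedged sketch: small 3D-CAE trained on normal patches; anomaly score =
# per-voxel reconstruction error on test patches.
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def voxel_abnormality(model, patch):
    """Per-voxel abnormality as the squared reconstruction error."""
    model.eval()
    with torch.no_grad():
        recon = model(patch)
    return (patch - recon) ** 2

# One training step on a dummy batch of 32^3 normal patches.
model = CAE3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
patch = torch.randn(8, 1, 32, 32, 32)
optimizer.zero_grad()
loss = criterion(model(patch), patch)
loss.backward()
optimizer.step()
```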
In this study, we propose a novel method of lung lesion detection in FDG-PET/CT volumes without labeling lesions. In our method, the probability distribution over normal standardized uptake values (SUVs) is estimated from the features extracted from the corresponding volume of interest (VOI) in the CT volume, which include gradient-based and texture-based features. To estimate the distribution, we use Gaussian process regression with an automatic relevance determination kernel, which provides the relevance of feature values to estimation. Our model was trained using FDG-PET/CT volumes of 121 normal cases. In the lesion detection phase, the actual SUV is judged as normal or abnormal by comparison with the estimated SUV distribution. According to the validation using 28 FDG-PET/CT volumes with 34 lung lesions, the sensitivity of the proposed method at 5.0 false positives per case was 81.9%.
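A hedged sketch of the core idea follows: a Gaussian process with an anisotropic (ARD) kernel regresses the normal SUV from CT-derived VOI features, and a VOI is flagged when its actual SUV deviates from the estimated distribution. The scikit-learn implementation, the z-score test, and the threshold are illustrative stand-ins for the study's exact formulation.

```python
# Hedged sketch: GP regression of normal SUV with an ARD kernel, then
# anomaly judgment by deviation from the estimated SUV distribution.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_normal_suv_model(X_normal, suv_normal):
    """X_normal: CT-derived VOI features of normal cases; suv_normal: SUVs."""
    n_features = X_normal.shape[1]
    # One length scale per feature -> automatic relevance determination.
    kernel = RBF(length_scale=np.ones(n_features)) + WhiteKernel()
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_normal, suv_normal)
    return gp

def is_abnormal(gp, x_voi, actual_suv, z_threshold=3.0):
    """Flag a VOI whose actual SUV deviates upward from the estimate."""
    mean, std = gp.predict(x_voi.reshape(1, -1), return_std=True)
    z = (actual_suv - mean[0]) / std[0]
    return z > z_threshold
```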
The purpose of this study is to evaluate the feasibility of a novel feature generation method, based on multiple deep neural networks (DNNs) with boosting, for computer-assisted detection (CADe). It is hard and time-consuming to optimize the hyperparameters of DNNs such as the stacked denoising autoencoder (SdA). The proposed method allows SdA-based features to be used without the burden of hyperparameter setting. The method was evaluated in an application for detecting cerebral aneurysms on magnetic resonance angiograms (MRA). The baseline CADe process included four components: scaling, candidate area limitation, candidate detection, and candidate classification. The proposed feature generation method was applied to extract the optimal features for candidate classification, and it only required setting the range of the hyperparameters for the SdA. The optimal feature set was selected from a large quantity of SdA-based features generated by multiple SdAs, each trained with a different hyperparameter set. The feature selection was performed with the AdaBoost ensemble learning method. Training of the baseline CADe process and the proposed feature generation was performed with 200 MRA cases, and the evaluation was performed with 100 MRA cases. The proposed method successfully provided SdA-based features by setting only the range of some SdA hyperparameters. The CADe process using both the previous voxel features and the SdA-based features had the best performance, with an area under the ROC curve of 0.838 and an ANODE score of 0.312. These results show that the proposed method is effective for detecting cerebral aneurysms on MRA.
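A simplified sketch of this idea is shown below: several denoising autoencoders are trained with different hyperparameter settings, their hidden codes are pooled as candidate features, and AdaBoost with decision stumps selects the useful ones (each boosting round uses a single feature). The single-layer autoencoder, the specific hyperparameter grid, and the use of scikit-learn's AdaBoostClassifier are assumptions standing in for the study's SdA and boosting setup.

```python
# Hedged sketch: pool hidden codes from multiple denoising autoencoders,
# then let AdaBoost with depth-1 trees act as a feature selector.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

class DenoisingAE(nn.Module):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x, noise_std):
        noisy = x + noise_std * torch.randn_like(x)   # corrupt the input
        code = torch.sigmoid(self.enc(noisy))
        return self.dec(code), code

def train_dae(X, n_hidden, noise_std, epochs=50, lr=1e-3):
    model = DenoisingAE(X.shape[1], n_hidden)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon, _ = model(X, noise_std)
        loss = nn.functional.mse_loss(recon, X)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

def pooled_features(X, hyperparams):
    """Concatenate hidden codes from DAEs trained with different settings."""
    codes = []
    for n_hidden, noise_std in hyperparams:
        dae = train_dae(X, n_hidden, noise_std)
        with torch.no_grad():
            _, code = dae(X, noise_std=0.0)           # clean encoding
        codes.append(code.numpy())
    return np.hstack(codes)

# Dummy candidate features and labels for illustration only.
X = torch.randn(200, 64)
y = np.random.randint(0, 2, 200)
F = pooled_features(X, [(32, 0.1), (64, 0.3), (128, 0.5)])
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=100)
clf.fit(F, y)
```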
The detection of anatomical landmarks (LMs) often plays a key role in medical image analysis. In our previous study, we reported an automatic LM detection method for CT images. Despite its high detection sensitivity, the distance errors of the detection results for some LMs were relatively large, sometimes exceeding 10 mm. Naturally, it is desirable to minimize LM detection error, especially when the LM detection results are used in image analysis tasks such as image segmentation. In this study, we introduce a novel coarse-to-fine localization method to increase accuracy, which refines the LM positions detected by our previous method. The proposed LM localization is performed by both multiscale local image pattern recognition and likelihood estimation from prior knowledge of the spatial distribution of multiple LMs. Classifier ensembles for recognizing local image patterns are trained by cost-sensitive MadaBoost, where the cost of each sample is altered depending on its distance from the ground-truth LM position. The spatial LM distribution likelihood, calculated from a statistical model of inter-landmark distances between all LM pairs, is also used in the localization. The evaluation experiment was performed with 15 LMs in 39 CT images. The average distance error of the pre-detected LM positions was improved by 2.05 mm by the proposed localization method, showing that it is effective for reducing LM detection error.
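The sketch below illustrates the refinement idea in a schematic way: candidate voxels around a coarsely detected landmark are scored by combining a local-appearance classifier output with a spatial likelihood derived from an inter-landmark distance model. The Gaussian distance model and the geometric weighting are illustrative assumptions, not the paper's exact likelihood formulation.

```python
# Hedged sketch: combine appearance score with an inter-landmark distance
# likelihood to pick the refined landmark position.
import numpy as np
from scipy.stats import norm

def spatial_likelihood(candidate, other_lms, dist_mean, dist_std):
    """Likelihood of a candidate position given distances to other landmarks.

    dist_mean / dist_std: per-landmark statistics of inter-landmark distances
    learned from training data (arrays of length len(other_lms)).
    """
    d = np.linalg.norm(other_lms - candidate, axis=1)
    return np.prod(norm.pdf(d, loc=dist_mean, scale=dist_std))

def refine_landmark(candidates, appearance_scores, other_lms,
                    dist_mean, dist_std, alpha=0.5):
    """Pick the candidate maximizing appearance score x spatial likelihood."""
    scores = []
    for c, a in zip(candidates, appearance_scores):
        s = spatial_likelihood(c, other_lms, dist_mean, dist_std)
        scores.append(a ** alpha * s ** (1 - alpha))
    return candidates[int(np.argmax(scores))]
```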
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable-model-based initial segmentation method and a statistical shape-intensity-model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA of the shape-intensity representations obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image through maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for the cervical, thoracic, and lumbar vertebrae.
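A minimal sketch of the statistical shape-intensity model follows: training shape-intensity vectors are reduced by PCA, and a vertebra instance is represented as the mean plus a linear combination of principal components. The least-squares projection shown here is a simplified stand-in for the MAP estimation combined with geodesic active contours described above.

```python
# Hedged sketch: PCA-based statistical shape-intensity model of vertebrae.
import numpy as np
from sklearn.decomposition import PCA

def build_model(training_vectors, n_components=10):
    """training_vectors: (n_cases, n_dims) stacked shape-intensity vectors."""
    pca = PCA(n_components=n_components)
    pca.fit(training_vectors)
    return pca

def reconstruct(pca, coeffs):
    """Instance = mean + sum_k coeffs[k] * principal_component_k."""
    return pca.mean_ + coeffs @ pca.components_

def fit_to_target(pca, target_vector):
    """Project a target vector onto the model subspace (least-squares fit)."""
    coeffs = pca.transform(target_vector.reshape(1, -1))[0]
    return reconstruct(pca, coeffs), coeffs
```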
KEYWORDS: Medical imaging, Principal component analysis, Sensors, Range imaging, Computed tomography, Tissues, Statistical analysis, 3D image processing, Lung, Image processing
Anatomical landmarks are useful as primitive anatomical knowledge for medical image understanding. In this study, we construct a unified framework for automated detection of anatomical landmarks distributed within the human body. Our framework includes the following three elements: (1) initial candidate detection based on a local appearance matching technique using appearance models built by PCA and generative learning, (2) false positive elimination using classifier ensembles trained by MadaBoost, and (3) final landmark set determination based on a combination optimization method using Gibbs sampling with a priori knowledge of inter-landmark distances. In an evaluation of our methods with 50 body trunk CT data sets, the average sensitivity in detecting candidates of 165 landmarks was 0.948 ± 0.084, while 55 landmarks were detected with 100% sensitivity. Initially, there were 462.2 ± 865.1 false positives per landmark per case on average; these were reduced to 152.8 ± 363.9 per case by the MadaBoost classifier ensembles without eliminating true landmarks. Finally, 89.1% of landmarks were correctly selected by the final combination optimization. These results show that our framework is promising as an initial step for subsequent anatomical structure recognition.
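The sketch below illustrates the third element in a schematic way: Gibbs sampling over candidate sets, where each landmark is resampled in turn from a conditional that combines its detector score with an inter-landmark distance prior. The Gaussian distance prior, the candidate data structures, and the initialization are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: Gibbs sampling for final landmark set determination.
import numpy as np
from scipy.stats import norm

def conditional_probs(lm, cand_pos, cand_score, current, dist_mean, dist_std):
    """Unnormalized probability of each candidate of landmark `lm`,
    given the current positions of all other landmarks."""
    probs = cand_score[lm].copy()
    for other, pos in current.items():
        if other == lm:
            continue
        d = np.linalg.norm(cand_pos[lm] - pos, axis=1)
        probs *= norm.pdf(d, dist_mean[lm, other], dist_std[lm, other])
    probs += 1e-12                       # avoid an all-zero distribution
    return probs / probs.sum()

def gibbs_select(cand_pos, cand_score, dist_mean, dist_std, n_iter=100, rng=None):
    """cand_pos[lm]: (n_cand, 3) candidate positions; cand_score[lm]: scores."""
    rng = rng or np.random.default_rng(0)
    landmarks = list(cand_pos)
    # Initialize each landmark with its highest-scoring candidate.
    current = {lm: cand_pos[lm][np.argmax(cand_score[lm])] for lm in landmarks}
    for _ in range(n_iter):
        for lm in landmarks:
            p = conditional_probs(lm, cand_pos, cand_score, current,
                                  dist_mean, dist_std)
            current[lm] = cand_pos[lm][rng.choice(len(p), p=p)]
    return current
```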