Deep learning has revolutionized medical image analysis, promising to significantly improve the precision of diagnoses and therapies through advanced segmentation methods. However, the efficacy of deep neural networks is often compromised by the prevalence of imperfect medical labels, while acquiring large-scale, accurately labeled data remains prohibitively expensive. To address the imperfect-label issue, we introduce a novel learning framework that iteratively optimizes both a neural network and its label set to enhance segmentation accuracy. The framework operates in two steps: first, it trains robustly on a dataset with label noise, distinguishing clean from noisy labels; subsequently, it refines the noisy labels based on high-confidence predictions from the robust network. With this method, not only is the network trained more effectively on imperfect data, but the dataset itself is also progressively cleaned and expanded. Evaluations on retinal optical coherence tomography (OCT) datasets using U-Net and SegNet architectures demonstrate substantial improvements in segmentation accuracy and data quality, advancing the capabilities of weakly supervised segmentation in medical imaging.
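The iterative loop above can be pictured as a short training sketch. The code below is an assumed implementation, not the authors' released method: the loss-quantile rule for flagging noisy pixels and the confidence threshold for relabeling are illustrative choices.

```python
# Minimal sketch of the two-step "robust training -> label refinement" loop.
# Thresholds and the noise-flagging rule are illustrative assumptions.
import torch
import torch.nn.functional as F

def iterative_label_refinement(model, optimizer, images, labels,
                               rounds=3, noise_quantile=0.8, conf_thresh=0.95):
    # images: (N, C, H, W) float tensor; labels: (N, H, W) long tensor kept in
    # memory so that refined labels persist across rounds.
    for _ in range(rounds):
        # Step 1: robust training -- pixels with unusually high loss are
        # treated as likely noisy and excluded from the gradient.
        model.train()
        loss_map = F.cross_entropy(model(images), labels, reduction="none")
        clean_mask = (loss_map <= torch.quantile(loss_map, noise_quantile)).float()
        loss = (loss_map * clean_mask).sum() / clean_mask.sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Step 2: label refinement -- overwrite labels where the network is
        # highly confident, progressively cleaning the label set.
        model.eval()
        with torch.no_grad():
            conf, pred = F.softmax(model(images), dim=1).max(dim=1)
            labels[conf > conf_thresh] = pred[conf > conf_thresh]
    return model, labels
```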
Significance: There is a significant need to generate virtual histological information from coronary optical coherence tomography (OCT) images to better guide the treatment of coronary artery disease (CAD). However, existing methods either require a large pixel-wise paired training dataset or have limited capability to map pathological regions. Aim: We aim to generate virtual histological information from coronary OCT images without a pixel-wise paired training dataset, while still capturing pathological patterns. Approach: We design a structurally constrained, pathology-aware, convolutional transformer generative adversarial network (SCPAT-GAN) to generate virtual H&E-stained histology from OCT images. We quantitatively evaluate the quality of the virtually stained histology images by measuring the Fréchet inception distance (FID) and perceptual hash value (PHV), invite experienced pathologists to evaluate the virtually stained images, and visually inspect the images generated by SCPAT-GAN. We also perform an ablation study to validate the design of the proposed SCPAT-GAN and demonstrate 3D virtually stained histology images. Results: Compared with previous research, the proposed SCPAT-GAN achieves better FID and PHV scores. Visual inspection suggests that the virtual histology images generated by SCPAT-GAN resemble both normal and pathological features without artifacts. As confirmed by the pathologists, the virtually stained images are of good quality compared with real histology images. The ablation study confirms the effectiveness of combining the proposed pathology-awareness and structural-constraining modules. Conclusions: The proposed SCPAT-GAN is the first to demonstrate the feasibility of generating both normal and pathological patterns without pixel-wise supervised training. We expect SCPAT-GAN to assist in the clinical evaluation of CAD treatment by providing 2D and 3D histopathological visualizations.
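As a concrete illustration of the two quantitative metrics named above, the sketch below computes FID with torchmetrics and a perceptual-hash distance with the imagehash package; the paper's exact PHV formulation may differ, so treat this as an assumed, off-the-shelf approximation with our own function names.

```python
# Illustrative evaluation of virtual stains against real H&E images.
# Requires torchmetrics[image] and imagehash.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from PIL import Image
import imagehash

def compute_fid(real_imgs: torch.Tensor, fake_imgs: torch.Tensor) -> float:
    # Both inputs are uint8 tensors of shape (N, 3, H, W).
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(real_imgs, real=True)
    fid.update(fake_imgs, real=False)
    return float(fid.compute())

def perceptual_hash_distance(real_path: str, virtual_path: str) -> int:
    # Hamming distance between perceptual hashes; smaller means the virtual
    # stain is perceptually closer to the real histology slide.
    return imagehash.phash(Image.open(real_path)) - imagehash.phash(Image.open(virtual_path))
```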
Significance: Optical coherence tomography (OCT) has become increasingly essential in assisting the treatment of coronary artery disease (CAD). However, unidentified calcified regions within a narrowed artery can impair the outcome of treatment. Fast and objective identification is paramount for automatically obtaining accurate readings of calcification within the artery. Aim: We aim to rapidly identify calcification in coronary OCT images using a bounding box and to reduce the prediction bias of automated prediction models. Approach: We first adopt a deep learning-based object detection model to rapidly delineate the calcified region in coronary OCT images with a bounding box. We measure the uncertainty of predictions based on the expected calibration error, thus assessing the certainty level of the detection results. To calibrate the confidence scores of predictions, we implement dependent logistic calibration using each detection's confidence and center coordinates. Results: We implemented an object detection module that draws the boundary of the calcified region at a rate of 140 frames per second. With the calibrated confidence score of each prediction, we lower the uncertainty of predictions in calcification detection and eliminate the estimation bias from various object detection methods. The calibrated confidence of predictions yields a confidence error of ∼0.13, suggesting that confidence calibration for calcification detection provides a more trustworthy result. Conclusions: Given the rapid detection and effective calibration of the proposed work, we expect it to assist in the clinical evaluation of CAD treatment during imaging-guided procedures.
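A minimal sketch of how such a calibration and its evaluation could look in code is given below, assuming features of raw confidence plus normalized box-center coordinates and binary labels indicating whether each detection matched a ground-truth box; the feature set and binning follow the description above, not a released implementation.

```python
# Sketch of dependent logistic calibration over detection outputs, plus a
# simple expected calibration error (ECE). Inputs are assumed 1-D arrays.
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_detections(conf, cx, cy, is_true_positive):
    # conf: raw confidence scores; cx, cy: normalized box centers;
    # is_true_positive: 0/1 labels for whether each detection is correct.
    features = np.column_stack([conf, cx, cy])
    calibrator = LogisticRegression().fit(features, is_true_positive)
    return calibrator.predict_proba(features)[:, 1]  # calibrated confidences

def expected_calibration_error(confidences, correct, n_bins=10):
    # Bin detections by confidence and compare mean confidence to accuracy.
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(confidences[mask].mean() - correct[mask].mean())
    return ece
```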
Inpainting shadowed regions cast by superficial blood vessels in retinal optical coherence tomography (OCT) images is critical for accurate and robust machine analysis and clinical diagnosis. Traditional sequence-based approaches, which propagate neighboring information to gradually fill in the missing regions, are cost-effective, but they generate less satisfactory outcomes for larger missing regions and texture-rich structures. Emerging deep learning-based methods such as encoder-decoder networks have shown promising results in natural-image inpainting tasks. However, they typically require long training times and large datasets, which makes them difficult to apply to the often small medical datasets. To address these challenges, we propose a novel multi-scale shadow inpainting framework for OCT images that synergistically applies sparse representation and deep learning: sparse representation is used to extract features from a small number of training images for further inpainting and to regularize the image after multi-scale image fusion, while a convolutional neural network (CNN) is employed to enhance image quality. During inpainting, we divide preprocessed input images into different branches based on the shadow width to harvest complementary information from different scales. Finally, a sparse representation-based regularization module refines the generated contents after multi-scale feature aggregation. Experiments compare our proposed method with both traditional and deep learning-based techniques on synthetic and real-world shadows. Results demonstrate that our method achieves favorable image inpainting in terms of visual quality and quantitative metrics, especially when wide shadows are present.
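To make the width-based branching concrete, the sketch below groups vessel-shadow column runs by width and routes them to a narrow or wide branch; the width threshold and the assumption of purely vertical shadow bands are simplifications for illustration, not details from the framework itself.

```python
# Hypothetical routing of shadow regions to scale-specific branches by width.
import numpy as np

def split_shadows_by_width(shadow_mask: np.ndarray, narrow_max: int = 8):
    # shadow_mask: binary (H, W) array; vessel shadows appear as roughly
    # vertical bands, so width is measured along the horizontal axis.
    columns = shadow_mask.any(axis=0).astype(int)
    narrow, wide = [], []
    start = None
    for x, v in enumerate(np.append(columns, 0)):
        if v and start is None:
            start = x                      # run of shadow columns begins
        elif not v and start is not None:
            run = (start, x)               # half-open column range [start, x)
            (narrow if x - start <= narrow_max else wide).append(run)
            start = None
    return narrow, wide  # column ranges handled by the narrow/wide branches
```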
In this project, we propose a deep learning-based method to detect calcified tissue in coronary optical coherence tomography (OCT) images. Conventionally, diseased tissue is checked manually frame by frame (B-scan by B-scan). Built on the faster region-based convolutional neural network (Faster R-CNN), our method automatically detects calcified regions in diseased coronary arteries. It achieves promising results, with a mean average precision of 0.74 and a recall of 0.79 for detecting calcified regions. The proposed method could provide valuable information for locating calcified tissue within a large volume of OCT images and has great potential to aid the treatment of coronary artery disease.
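For illustration, a detection step of this kind can be sketched with torchvision's Faster R-CNN; the two-class head (background and calcification), checkpoint path, and score threshold below are assumptions for the sketch, not details reported in the work.

```python
# Sketch of Faster R-CNN inference on a single OCT B-scan.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def load_calcification_detector(checkpoint_path: str):
    # Replace the default box head with a two-class predictor
    # (background + calcification) and load fine-tuned weights.
    model = fasterrcnn_resnet50_fpn(weights=None)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    return model.eval()

@torch.no_grad()
def detect_calcification(model, bscan: torch.Tensor, score_thresh: float = 0.5):
    # bscan: float tensor (3, H, W) scaled to [0, 1].
    output = model([bscan])[0]
    keep = output["scores"] > score_thresh
    return output["boxes"][keep], output["scores"][keep]
```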