Lung cancer is the leading cause of cancer-related death among both men and women and the second most commonly diagnosed cancer, accounting for 18% of total cancer deaths worldwide [4]. Screening high-risk patients with low-dose computed tomography (CT) can lead to earlier treatment and increased survival rates [5]. However, cancer diagnosis remains a challenging problem due to the subtle visual differences between benign and malignant nodules in CT images. Hence, computer-aided diagnosis (CADx) systems may prove useful in assisting radiologists with the malignancy prediction task. Previously, we developed a convolutional attention-based network that allows the use of pre-trained 2-D convolutional feature extractors and is extendable to multi-time-point classification in a Siamese structure [6]. Using a set of parallel 2-D CNNs in place of a 3-D CNN significantly reduces the number of network parameters. In this paper, we keep the overall structure of [6], including the attention mechanism; however, we report on the use of EfficientNet [1] as the 2-D feature extractor, owing to its success on the ImageNet classification challenge. EfficientNet variants B0 through B7, pretrained on ImageNet, were fine-tuned and applied to NLSTx [6], a subset of the data acquired in the National Lung Screening Trial (NLST) [2]. NLSTx includes biopsy-confirmed scans of 650 benign and 207 malignant nodules at up to three time points. In our study, the best-performing EfficientNet reached an area under the ROC curve of 0.7896 for benign/malignant classification.
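The attention mechanism over per-slice features described above can be illustrated with a minimal NumPy sketch. This is not the published network; the scoring vector `w`, the feature dimensions, and the function names are illustrative stand-ins for the learned components of the model in [6].

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_pool(slice_features, w):
    """Collapse per-slice 2-D CNN features into one nodule descriptor.

    slice_features : (num_slices, feat_dim) array, one row per CT slice,
                     e.g. the output of a shared EfficientNet backbone.
    w              : (feat_dim,) scoring vector (learned in practice;
                     random here for illustration).
    Returns the attention-weighted sum of the slice features.
    """
    scores = slice_features @ w      # one scalar score per slice
    alpha = softmax(scores)          # attention weights, sum to 1
    return alpha @ slice_features    # (feat_dim,) pooled descriptor

# Toy example: 5 slices, 8-dimensional features
rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))
w = rng.standard_normal(8)
pooled = attention_pool(feats, w)
print(pooled.shape)  # (8,)
```

Because the pooled descriptor has the same dimensionality regardless of the number of slices, the same classifier head can serve nodules of varying extent, which is one reason a stack of parallel 2-D extractors can replace a 3-D CNN.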
The use of low-dose computed tomography (CT) has been effective in reducing the mortality rate due to lung cancer. With the rapid increase in the number of studies, computer-aided diagnosis (CAD) systems need to be developed to further assist radiologists in detecting lung nodules and determining their malignancy in CT scans. An important factor in determining malignancy across a sequence of scans is the rate of change and growth. In the past, deformable registration techniques have been widely applied to assess longitudinal change in imaging data. In this paper, we propose a new deep learning system, based on convolutional neural networks and the U-Net architecture, to assess longitudinal change in a nodule across CT scans over time. The VoxelMorph network [1] has been effective in deformable registration of brain MRIs. This paper extends the application of VoxelMorph to the registration of lung nodules in multiple-time-point scans, using a subset of data from the National Lung Screening Trial (NLST) referred to as NLSTx. VoxelMorph has been modified to include both a sum-of-squared-differences (SSD) image similarity loss and spatial-domain regularization to reduce the number of fold-overs and non-invertible transformations, indicated by the presence of negative Jacobian determinants. The network effectively performs 3-D deformable registration of longitudinal nodule CT scans, providing the means to quantify nodule growth over time.
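The two quantities named above, the SSD similarity term and the negative-Jacobian-determinant fold-over check, can be sketched outside any network as plain array operations. This is a simplified stand-in, not the modified VoxelMorph loss: the finite-difference gradient, voxel spacing of 1, and function names are assumptions for illustration.

```python
import numpy as np

def ssd_loss(fixed, warped):
    """Sum-of-squared-differences image similarity (lower is better)."""
    return float(np.mean((fixed - warped) ** 2))

def negative_jacobian_fraction(disp):
    """Fraction of voxels where the deformation folds over.

    disp : (3, D, H, W) displacement field u(x); the mapping is
    phi(x) = x + u(x), so the Jacobian of phi is I + grad(u).
    A non-positive determinant marks a non-invertible (folded) voxel.
    """
    # grads[i, j] = d u_i / d x_j via central finite differences
    grads = np.stack(
        [np.stack(np.gradient(disp[i]), axis=0) for i in range(3)], axis=0
    )
    J = grads + np.eye(3)[:, :, None, None, None]
    det = (J[0, 0] * (J[1, 1] * J[2, 2] - J[1, 2] * J[2, 1])
         - J[0, 1] * (J[1, 0] * J[2, 2] - J[1, 2] * J[2, 0])
         + J[0, 2] * (J[1, 0] * J[2, 1] - J[1, 1] * J[2, 0]))
    return float((det <= 0).mean())

# The identity (zero) displacement has no fold-overs
disp = np.zeros((3, 8, 8, 8))
print(negative_jacobian_fraction(disp))  # 0.0
```

In training, a regularizer that penalizes voxels with non-positive determinants discourages exactly the folded configurations this function counts.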
Low-dose CT screening has been shown to significantly reduce mortality rates due to lung cancer. To assist radiologists, CAD systems continue to be developed for automatically detecting, segmenting, and categorizing potentially malignant lung nodules. Deep learning with the U-Net architecture has been shown to be effective for automatic segmentation of 2-D images. The network consists of a down-sampling and an up-sampling path, similar to an auto-encoder. However, the concept driving its success is the skip connections between the down-sampling and up-sampling paths, which allow the network to preserve detail and ease the backpropagation of error to deep layers. This concept has previously been extended to 3-D and successfully applied to image volumes such as MRI and CT scans. This paper applies concepts from these works (skip connections, batch normalization, Dice similarity coefficient loss, and strided convolution for down-sampling) to a 3-D convolutional neural network for segmenting nodule image patches in thoracic CT scans from the LIDC-IDRI database. This database contains scans from 1018 cases with annotations from up to four human experts; within each scan, nodules are delineated by each of these experts. It is well known that manual delineation can vary subjectively between annotators. This paper proposes a model trained on a ground-truth estimate derived from the four expert annotations using the STAPLE algorithm. Experiments in this paper show that, when trained on STAPLE consensus, automatic segmentation with a 3-D U-Net can achieve higher similarity scores to the human annotators than the similarity scores between the human annotators themselves.
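The Dice similarity coefficient serves here both as the training loss and as the inter-annotator agreement measure. A minimal NumPy sketch of the two roles (the toy masks and the smoothing constant `eps` are illustrative, not the paper's configuration):

```python
import numpy as np

def dice(a, b, eps=1e-7):
    """Dice similarity coefficient between two binary masks (1 = perfect overlap)."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return float((2.0 * inter + eps) / (a.sum() + b.sum() + eps))

def soft_dice_loss(pred, target, eps=1e-7):
    """Differentiable Dice loss on soft predictions in [0, 1], minimized in training."""
    inter = (pred * target).sum()
    return float(1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

# Two annotators who agree on most of a toy 2-D nodule mask
m1 = np.zeros((8, 8), dtype=int); m1[2:6, 2:6] = 1   # 16 voxels
m2 = np.zeros((8, 8), dtype=int); m2[3:6, 2:6] = 1   # 12 voxels, all inside m1
print(round(dice(m1, m2), 3))  # 2*12 / (16+12) ≈ 0.857
```

Comparing the model's Dice score against each expert to the pairwise Dice scores among the experts is what supports the claim that the STAPLE-trained network falls within, or above, the inter-annotator agreement range.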
Accurate and timely segmentation of coronary vessels in quantitative coronary angiography (QCA) may be important to ensure accurate patient diagnosis. This paper compares three variations of graph search algorithms for segmenting coronary arteries in X-ray angiographic images. For comparing these algorithms, we propose a semi-automatic vessel segmentation technique that combines Hessian-based filtering, Gabor filtering, and graph-based search routines for tracing the vessel boundaries [1, 2]. This allows for a more automated procedure by incorporating automatic centerline detection, while the Gabor filtering promotes a more natural and geometrically continuous border segmentation [1]. The method requires minimal effort by the user; the only manual input required is a start and an end point along the vessel of interest. Three graph search methods were compared by analyzing the accuracy and computational speed of the resulting segmentations: Dijkstra's algorithm, a restricted Dijkstra's algorithm, and the A* search algorithm. The restricted Dijkstra's and A* approaches reduced the computational time but resulted in low accuracies or outright segmentation failures. As outlined in the paper, Dijkstra's algorithm yields a superior segmentation with only a marginal increase in computational time.
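The relationship between the compared search methods can be seen in a small sketch: Dijkstra's algorithm and A* differ only in whether a heuristic biases the priority queue. This grid, its cost values, and the 4-connected neighborhood are toy assumptions, not the paper's filtered cost image.

```python
import heapq

def grid_search(cost, start, goal, heuristic=None):
    """Minimum-cost path on a 2-D cost grid (4-connected neighbors).

    With heuristic=None this is Dijkstra's algorithm; passing an
    admissible heuristic (one that never overestimates the remaining
    cost) turns it into A*. Returns the total path cost.
    """
    h = heuristic or (lambda node: 0)
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0}
    pq = [(h(start), 0, start)]          # (priority, cost-so-far, node)
    while pq:
        _, d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                     # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd + h((nr, nc)), nd, (nr, nc)))
    return float("inf")

# A cheap "vessel" column (cost 1) through an expensive background (cost 9)
cost = [[9, 1, 9],
        [9, 1, 9],
        [9, 1, 9]]
dijkstra = grid_search(cost, (0, 1), (2, 1))
astar = grid_search(cost, (0, 1), (2, 1),
                    heuristic=lambda n: abs(n[0] - 2) + abs(n[1] - 1))
print(dijkstra, astar)  # both find the path of cost 2
```

With an admissible heuristic, A* returns the same optimal cost as Dijkstra while expanding fewer nodes; an inadmissible heuristic or an over-restricted search region trades that guarantee away, which is consistent with the accuracy failures reported above.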