In this work, we present a memory-efficient fully convolutional network (FCN) that incorporates several memory-optimization techniques to reduce run-time GPU memory demand during training. In medical image segmentation tasks, subvolume cropping has become a common preprocessing step: subvolumes (small patch volumes) are cropped to reduce GPU memory demand. However, small patch volumes capture less spatial context, which leads to lower accuracy. As a pilot study, the purpose of this work is to propose a memory-efficient FCN that can be trained on full-size CT images directly, without subvolume cropping, while maintaining segmentation accuracy. We optimize our network at both the architecture and implementation levels. With the development of computing hardware such as graphics processing units (GPUs) and tensor processing units (TPUs), deep learning applications can now train networks on large datasets within acceptable time. Among these applications, semantic segmentation using FCNs has also gained significant improvements over traditional image processing approaches in both computer vision and medical image processing. However, unlike the general color images used in computer vision tasks, medical images such as 3D computed tomography (CT) images, micro-CT images, and histopathological images are much larger in scale. For training on such images, the large demand on computing resources becomes a severe problem. In this paper, we present a memory-efficient FCN to tackle the high GPU memory demand in organ segmentation from clinical CT images. Experimental results demonstrate that our GPU memory demand is about 40% of the baseline architecture and our parameter count is about 30% of the baseline.
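The abstract does not detail the specific memory optimizations used. As a hedged illustration only, the sketch below shows one widely used technique for cutting training-time activation memory, gradient checkpointing in PyTorch; the module names, channel sizes, and block structure are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch only: gradient checkpointing trades activation storage for
# recomputation during backward. This is an assumed example, not the paper's method.
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


def conv_block(in_ch, out_ch):
    # Plain 3D conv block; channel sizes are illustrative.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),  # in-place ReLU avoids an extra activation copy
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class CheckpointedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList(
            [conv_block(1, 16), conv_block(16, 32), conv_block(32, 64)]
        )
        self.pool = nn.MaxPool3d(2)

    def forward(self, x):
        for block in self.blocks:
            # Recompute this block's activations in backward instead of storing them.
            x = checkpoint(block, x, use_reentrant=False)
            x = self.pool(x)
        return x
```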
The purpose of this paper is to present a multi-organ segmentation method using spatial-information-embedded fully convolutional networks (FCNs). Semantic segmentation of major anatomical structures from CT volumes is promising for application in clinical workflows. A multitude of deep-learning-based approaches have been proposed for 3D image processing. With the rapid development of FCNs, the encoder-decoder network architecture has proved to achieve acceptable performance on segmentation tasks. However, it is hard to obtain spatial information from sub-volumes during training. In this paper, we extend the spatial-position-information-embedded FCN, originally designed for binary segmentation, to multi-class organ segmentation. We introduce gamma correction in data augmentation to improve the FCN's robustness. We compare the FCN's performance with different normalization methods, including batch normalization and instance normalization. Experimental results show that our modifications positively influence the segmentation performance on an abdominal CT dataset. Our highest average Dice score reaches 87.2%, while the previous method achieved 86.2%.
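A minimal sketch of the gamma-correction augmentation mentioned above, assuming CT intensities have already been normalized to [0, 1]; the gamma range and function name are illustrative assumptions, not values reported in the paper.

```python
# Assumed example of gamma-correction data augmentation for normalized CT volumes.
import numpy as np


def random_gamma(volume, gamma_range=(0.7, 1.5), rng=None):
    """Apply a random gamma correction to a 3D volume normalized to [0, 1]."""
    rng = rng or np.random.default_rng()
    gamma = rng.uniform(*gamma_range)
    # Gamma correction v' = v ** gamma leaves 0 and 1 fixed but reshapes mid-range contrast.
    return np.clip(volume, 0.0, 1.0) ** gamma
```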
Accurate classification and precise quantification of interstitial lung disease (ILD) types on CT images remain important challenges in clinical diagnosis. Multi-modality image information is typically required to assist in diagnosing these diseases. To build scalable deep-learning solutions for this problem, taking full advantage of the existing large-scale datasets in modern hospitals has become a critical task. In this paper, we present DeepILD, a novel computer-aided diagnostic framework that addresses the ILD classification task from a single modality (CT images) using a deep neural network. More specifically, we propose integrating spherical semi-supervised K-means clustering and convolutional neural networks for ILD classification and disease quantification. We first use semi-supervised spherical K-means to divide the CT lung area into normal and abnormal sub-regions. A convolutional neural network (CNN) is subsequently trained on image patches extracted from the abnormal regions. Here, we focus on the classification of three chronic fibrosing ILD types: idiopathic pulmonary fibrosis (IPF), idiopathic non-specific interstitial pneumonia (iNSIP), and chronic hypersensitivity pneumonia (CHP). Excellent classification accuracy was achieved on a dataset of 188 CT scans; in particular, our IPF classification reached about 88% accuracy.
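The following is a minimal sketch of the spherical K-means step (clustering on the unit hypersphere by cosine similarity). The feature extraction, semi-supervised seeding, and the normal/abnormal labeling of clusters are omitted, and all names are illustrative assumptions.

```python
# Assumed sketch of spherical K-means: rows of X are feature vectors for lung sub-regions.
import numpy as np


def spherical_kmeans(X, k, n_iters=50, rng=None):
    """Cluster rows of X on the unit hypersphere, maximizing cosine similarity."""
    rng = rng or np.random.default_rng(0)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iters):
        labels = np.argmax(X @ centers.T, axis=1)  # assign each point to the most similar centroid
        for j in range(k):
            members = X[labels == j]
            if len(members):
                c = members.sum(axis=0)
                centers[j] = c / (np.linalg.norm(c) + 1e-12)  # re-project centroid onto the sphere
    return labels, centers
```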
KEYWORDS: Blood vessels, Image segmentation, 3D modeling, Data modeling, Medical imaging, Selenium, 3D image processing, Statistical analysis, Arteries, Image processing
In this paper, we present an efficiently trainable conditional random field (CRF) model that uses a newly proposed scale-targeted loss function to improve segmentation accuracy on tiny blood vessels in 3D medical images. Blood vessel segmentation remains a major challenge in the medical image processing field because of the vessels' elongated structure and low contrast. The conventional local-neighborhood CRF model performs poorly on tiny elongated structures because of its limited capability to capture pairwise potentials. To overcome this drawback, we use a fully-connected CRF model to capture the pairwise potentials. This paper also introduces a new scale-targeted loss function aimed at improving segmentation accuracy on tiny blood vessels. Experimental results on both phantom data and clinical CT data show that the proposed approach improves segmentation accuracy on tiny blood vessels. Compared to the previous loss function, our proposed loss function improves sensitivity by about 10% on phantom data and 14% on clinical CT data.
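The paper's trainable CRF and scale-targeted loss are not reproduced here. As a rough, hedged illustration of fully-connected CRF inference on a 3D softmax volume, the sketch below uses the third-party pydensecrf package; the choice of tooling, the Gaussian pairwise features, and all parameter values are assumptions rather than the authors' implementation.

```python
# Assumed sketch: post-hoc fully-connected CRF refinement of a 3D probability map.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax, create_pairwise_gaussian


def densecrf_refine(probs, n_iters=5):
    """Refine a (n_labels, D, H, W) softmax volume with a fully-connected Gaussian CRF."""
    n_labels, *shape = probs.shape
    d = dcrf.DenseCRF(int(np.prod(shape)), n_labels)
    d.setUnaryEnergy(unary_from_softmax(probs))  # unary potentials are -log(p)
    # Fully-connected Gaussian pairwise potential over voxel coordinates.
    feats = create_pairwise_gaussian(sdims=(3, 3, 3), shape=shape)
    d.addPairwiseEnergy(feats, compat=3)
    q = np.array(d.inference(n_iters))
    return q.argmax(axis=0).reshape(shape)
```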
This paper presents a novel renal artery segmentation method that combines graph-cut and template-based tracking, together with its application to the estimation of renal vascular dominant regions. For computer-assisted diagnosis in kidney surgery planning, it is important to obtain the correct topological structure of the renal artery in order to estimate the renal vascular dominant regions. The renal artery has low contrast, and its precise extraction is a difficult task. Previous methods utilizing a vesselness measure based on Hessian analysis still cannot extract tiny blood vessels in low-contrast areas. Although model-based methods, such as the superellipsoid model and the cylindrical intensity model, are sensitive to low-contrast tiny blood vessels, problems such as over-segmentation and poor bifurcation detection remain. In this paper, we propose a novel blood vessel segmentation method that combines a new Hessian-based graph-cut with a template-model tracking method. First, the graph-cut algorithm is used to obtain a rough segmentation result. Then, the template-model tracking method is used to improve the accuracy of tiny blood vessel segmentation. The rough segmentation by graph-cut effectively solves the bifurcation detection problem, while the precise segmentation by template-model tracking focuses on tiny blood vessels. By combining these two approaches, our proposed method segmented 70% of renal arteries of 1 mm in diameter or larger. In addition, we demonstrate that such precise segmentation can be used to divide the renal region into a set of blood-vessel dominant regions using a Voronoi diagram method, as sketched below.
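A minimal sketch of the Voronoi-style dominant-region step, in which each kidney voxel is assigned the label of its nearest artery-branch voxel; the array names, the KD-tree approach, and the function signature are illustrative assumptions rather than the paper's exact procedure.

```python
# Assumed sketch: nearest-branch (Voronoi) partition of the kidney region.
import numpy as np
from scipy.spatial import cKDTree


def vascular_dominant_regions(kidney_mask, branch_labels):
    """kidney_mask: bool (D, H, W); branch_labels: int (D, H, W), 0 = background."""
    artery_voxels = np.argwhere(branch_labels > 0)
    tree = cKDTree(artery_voxels)
    kidney_voxels = np.argwhere(kidney_mask)
    _, nearest = tree.query(kidney_voxels)  # nearest labeled artery voxel per kidney voxel
    regions = np.zeros_like(branch_labels)
    regions[tuple(kidney_voxels.T)] = branch_labels[tuple(artery_voxels[nearest].T)]
    return regions
```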