Given the prevalence of cardiovascular diseases (CVDs), segmentation of the heart on cardiac computed tomography (CT) remains of great importance. Manual segmentation is time-consuming, and intra- and inter-observer variability yields inconsistent and inaccurate results. Computer-assisted, and in particular deep learning, approaches to segmentation offer a potentially accurate and efficient alternative to manual segmentation. However, fully automated methods for cardiac segmentation have yet to achieve results accurate enough to compete with expert segmentation. We therefore focus on a semi-automated deep learning approach to cardiac segmentation that bridges the divide between the higher accuracy of manual segmentation and the higher efficiency of fully automated methods. In this approach, we selected a fixed number of points along the surface of the cardiac region to mimic user interaction. Points-distance maps were then generated from these point selections, and a three-dimensional (3D) fully convolutional neural network (FCNN) was trained on the points-distance maps to predict a segmentation. Testing our method with different numbers of selected points, we achieved Dice scores from 0.742 to 0.917 across the four chambers. Specifically, Dice scores averaged 0.846 ± 0.059, 0.857 ± 0.052, 0.826 ± 0.062, and 0.824 ± 0.062 for the left atrium, left ventricle, right atrium, and right ventricle, respectively, across all point selections. This point-guided, image-independent deep learning segmentation approach shows promising performance for chamber-by-chamber delineation of the heart in CT images.
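The abstract does not specify how the points-distance maps are computed, but a common construction is a Euclidean distance transform assigning each voxel its distance to the nearest user-selected surface point. The sketch below illustrates that idea; the function name `points_distance_map` and the use of `scipy.ndimage.distance_transform_edt` are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt


def points_distance_map(shape, points):
    """Euclidean distance from every voxel to the nearest selected point.

    `shape` is the 3D volume shape; `points` is a list of (z, y, x) voxel
    coordinates mimicking user clicks on the chamber surface.
    """
    mask = np.ones(shape, dtype=bool)
    for p in points:
        mask[tuple(p)] = False  # selected points become zero-distance seeds
    # distance_transform_edt measures distance to the nearest False voxel
    return distance_transform_edt(mask)


# Example: three simulated surface points in a small volume
vol_shape = (32, 32, 32)
pts = [(4, 4, 4), (16, 16, 16), (28, 8, 20)]
dmap = points_distance_map(vol_shape, pts)
```

A map like `dmap` would then be stacked with (or, in an image-independent setting, used in place of) the CT volume as input to the 3D FCNN.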
Surgery is a major treatment method for squamous cell carcinoma (SCC). During surgery, an insufficient tumor margin may lead to local recurrence of cancer. Hyperspectral imaging (HSI) is a promising optical imaging technique for in vivo cancer detection and tumor margin assessment. In this study, a fully convolutional network (FCN) was implemented for tumor detection and margin assessment in hyperspectral images of SCC. The FCN was trained and tested with hyperspectral images of 25 ex vivo SCC surgical specimens from 20 different patients. The network was evaluated per patient and achieved pixel-level tissue classification with an average AUC of 0.88, accuracy of 0.83, sensitivity of 0.84, and specificity of 0.70. The 95% Hausdorff distance of the assessed tumor margin was less than 2 mm in 17 patients, and classification of each tissue specimen took less than 10 seconds. The proposed method potentially facilitates intraoperative tumor margin assessment and improves surgical outcomes.
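The 95% Hausdorff distance used above to evaluate the assessed margin is the 95th percentile of the symmetric nearest-neighbor distances between two boundary point sets, which makes it robust to outlier boundary pixels. A minimal sketch, assuming boundary points are given as coordinate arrays (the helper name `hd95` is illustrative, not from the paper):

```python
import numpy as np
from scipy.spatial.distance import cdist


def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two point sets.

    pts_a, pts_b: (N, d) and (M, d) arrays of boundary coordinates,
    e.g. predicted vs. ground-truth tumor margin pixels.
    """
    d = cdist(pts_a, pts_b)      # pairwise Euclidean distances
    a_to_b = d.min(axis=1)       # each A point to its nearest B point
    b_to_a = d.min(axis=0)       # each B point to its nearest A point
    return np.percentile(np.concatenate([a_to_b, b_to_a]), 95)


# Toy example: two single-point "boundaries" 5 units apart
dist = hd95(np.array([[0.0, 0.0]]), np.array([[3.0, 4.0]]))
```

In practice the boundary coordinates would be extracted from the predicted and reference segmentation masks, and the result converted from pixels to millimeters using the image resolution.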