Mandibular meshes segmented from computed tomography (CT) images contain rich information about dentition conditions. This information can benefit virtual planning for oral reconstructive surgeries, but it impairs the performance of shape completion algorithms that rely on such data. To locate the alveolar process and remove the dentition area, we propose a tooth segmentation method comprising a preprocessing step using non-rigid registration, an active contour model, and constructive solid geometry (CSG) operations. An easy-to-use interactive tool was developed that allows users to adjust the position of the tooth crown contour. A validation study and a comparison study were conducted to evaluate the method. In the validation study, we removed the teeth from 28 models acquired from Vancouver General Hospital (VGH) and ran a shape completion test. In terms of the 95th percentile Hausdorff distance (HD95), using edentulous models produced significantly better predictions of the premorbid shapes of diseased mandibles than using models with inconsistent dentition conditions (Z = -2.484, p = 0.01); the volumetric Dice score (DSC) showed no significant difference. In the second study, we compared the proposed method to manual removal in terms of manual processing time, symmetric HD95, and symmetric root mean square deviation (RMSD). The results indicate that our method reduces manual processing time by 40% on average and approaches the accuracy of manual tooth segmentation. These results are promising and warrant further efforts toward clinical use. This work forms the basis of a useful tool for coupling jaw reconstruction and restorative dentition for patient treatment planning.
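To make the evaluation metrics concrete, the following is a minimal sketch of how HD95 and volumetric Dice can be computed with NumPy and SciPy. It is illustrative only, not the authors' evaluation code; the function names and the convention of taking the maximum of the two directed 95th-percentile distances for the symmetric HD95 are assumptions.

# Illustrative sketch (not the authors' code): HD95 between two point sets
# sampled from mesh surfaces, and volumetric Dice between binary voxel masks.
import numpy as np
from scipy.spatial.distance import cdist

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance, (N, 3) point sets."""
    d = cdist(points_a, points_b)          # pairwise Euclidean distances
    a_to_b = d.min(axis=1)                 # each point in A to nearest in B
    b_to_a = d.min(axis=0)                 # each point in B to nearest in A
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

def dice(mask_a, mask_b):
    """Volumetric Dice score between two boolean voxel masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())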
Transthoracic echocardiography (echo) is the most common imaging modality for the diagnosis of cardiac conditions. Echo is acquired from a multitude of views, each of which distinctly highlights specific regions of the heart anatomy. In this paper, we present an approach based on knowledge distillation to obtain a highly accurate, lightweight deep learning model for the classification of 12 standard echocardiography views. The knowledge of several deep networks built on three common state-of-the-art architectures, VGG-16, DenseNet, and ResNet, is distilled to train a set of lightweight models. Networks were developed and evaluated using a dataset of 16,612 echo cines obtained from 3,151 unique patients across several ultrasound imaging machines. The best accuracy of 89.0% is achieved by an ensemble of the three very deep models, while an ensemble of lightweight models achieves a comparable accuracy of 88.1%. The lightweight models have approximately 1% of the very deep models' parameters and are six times faster at run time. Such lightweight view classification models could be used to build fast mobile applications for real-time point-of-care ultrasound diagnosis.
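The core mechanism here, knowledge distillation, combines a soft-target loss against the teacher's temperature-scaled outputs with the usual hard-label loss. Below is a minimal PyTorch sketch of that loss; the temperature T, weight alpha, and function name are assumptions for illustration, not the paper's values.

# A minimal knowledge-distillation loss sketch in PyTorch (illustrative only;
# hyperparameters are assumptions, not the paper's settings).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL loss (teacher) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                     # rescale gradients after temperature scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

For an ensemble of teachers such as the VGG-16, DenseNet, and ResNet variants described above, one plausible choice (an assumption, not stated in the abstract) is to average the teachers' logits before passing them in as teacher_logits.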
Echocardiography (echo) is the most common test for the diagnosis and management of patients with cardiac conditions. While most medical imaging modalities benefit from a relatively automated procedure, this is not the case for echo: the quality of the final echo view depends on the competency and experience of the sonographer. It is not uncommon for the sonographer to lack the experience needed to adjust the transducer and acquire a high-quality echo, which may in turn affect the clinical diagnosis. In this work, we aim to aid the operator during image acquisition by automatically assessing the quality of the echo and generating an Automatic Echo Score (AES). The quality assessment method is based on a deep convolutional neural network trained end-to-end on a large dataset of apical four-chamber (A4C) echo images. For this project, an expert cardiologist reviewed 2,904 A4C images obtained from independent studies and graded their quality on a 6-level scale, assigning scores from 0 to 5; the distribution of scores across the 6 levels was almost uniform. The network was then trained on 80% of the data (2,345 samples). The average absolute error of the trained model in calculating the AES was 0.8 ± 0.72. The computation time of the GPU implementation of the network was estimated at 5 ms per frame, which is sufficient for real-time deployment.
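As a concrete illustration of a CNN that regresses a bounded 0-5 quality score from a single echo frame, here is a hedged PyTorch sketch. The architecture, layer sizes, and class name are assumptions for illustration and do not reproduce the network described above.

# Hedged sketch of a quality-regression CNN for A4C frames (architecture and
# sizes are assumptions, not the network described in the abstract).
import torch
import torch.nn as nn

class EchoQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):                  # x: (B, 1, H, W) grayscale frames
        z = self.features(x).flatten(1)
        return 5.0 * torch.sigmoid(self.head(z)).squeeze(1)  # score in [0, 5]

# Training could minimize, e.g., the L1 error against the expert's 0-5 grades:
# loss = torch.nn.functional.l1_loss(model(frames), expert_scores)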
As the panoramic x-ray is the most common extraoral radiograph in dentistry, segmentation of its anatomical structures facilitates diagnosis and the registration of dental records. This study presents a fast and accurate method for automatic segmentation of the mandible in panoramic x-rays. In the proposed four-step algorithm, the superior border is extracted through horizontal integral projections. A modified Canny edge detector, accompanied by morphological operators, extracts the inferior border of the mandible body. The exterior borders of the ramuses are extracted through a contour tracing method based on an average mandible model. Finally, the best-matched template is fetched from an atlas of mandibles to complete the contours of the left and right processes. The algorithm was tested on a set of 95 panoramic x-rays. Evaluation against the manual segmentations of three expert dentists showed that the method is robust, achieving average performance above 93% in Dice similarity, specificity, and sensitivity.
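A horizontal integral projection simply sums image intensities along each row, producing a 1-D profile in which strong horizontal structures appear as peaks or sharp transitions. The NumPy sketch below illustrates the idea for the first step; the peak-picking rule is a crude assumed proxy, not the paper's exact procedure.

# Illustrative sketch of step 1 (horizontal integral projection); the
# border-selection rule is an assumption, not the paper's method.
import numpy as np

def horizontal_projection(image):
    """Sum intensities along each row of a grayscale panoramic x-ray."""
    return image.astype(np.float64).sum(axis=1)

def superior_border_row(image):
    """Pick the row where the projection profile changes most sharply,
    a crude proxy for locating the bright superior border region."""
    profile = horizontal_projection(image)
    return int(np.argmax(np.abs(np.diff(profile))))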