A tumor bud is a single tumor cell or a cluster of up to four tumor cells at the invasive front of a tumor. While tumor budding is a prognostic factor for colorectal cancer, counting and grading tumor buds is time consuming and poorly reproducible, with high inter- and intra-reader disagreement on H&E evaluation. This disagreement yields noisy training labels (imperfect ground truth) for deep learning algorithms, leading to high variability and poor generalization to unseen datasets. Pan-cytokeratin staining can improve agreement, but it is not routinely used to identify tumor buds and can produce false positives. We therefore develop a weakly supervised deep learning method for tumor bud detection from routine H&E-stained images that does not require exhaustive tissue-level annotations. We also propose Bayesian Multiple Instance Learning (BMIL), which combines multiple annotated regions during training to further improve the generalizability and stability of tumor bud detection. Our dataset consists of 29 colorectal cancer H&E-stained images containing, on average, 115 tumor buds per slide. In six-fold cross-validation, our method achieved an average precision of 0.94 and an average recall of 0.86. These results provide preliminary evidence that our approach can improve the generalizability of tumor budding detection on H&E images while avoiding non-routine immunohistochemical staining.
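The abstract does not specify the exact BMIL formulation, so the following is only a minimal sketch of the general idea: attention-based multiple-instance pooling over region embeddings, with Monte Carlo dropout as one common way to obtain a Bayesian-style uncertainty estimate over bag predictions. All class and function names here (`MILBag`, `mc_predict`) are hypothetical, as are the feature dimensions.

```python
# Minimal sketch of attention-based MIL with Monte Carlo dropout.
# This is an illustration under stated assumptions, not the paper's method.
import torch
import torch.nn as nn

class MILBag(nn.Module):
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.instance_net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Dropout(0.5))
        self.attn = nn.Linear(hidden, 1)        # attention weights over instances
        self.head = nn.Linear(hidden, 1)        # bag-level (region-level) logit

    def forward(self, instances):               # instances: (n_instances, feat_dim)
        h = self.instance_net(instances)
        w = torch.softmax(self.attn(h), dim=0)  # normalize attention across instances
        bag = (w * h).sum(dim=0)                # attention-weighted bag embedding
        return self.head(bag)

@torch.no_grad()
def mc_predict(model, instances, n_samples=20):
    """Keep dropout active at inference to sample a predictive distribution."""
    model.train()                                # enables the dropout layers
    logits = torch.stack([model(instances) for _ in range(n_samples)])
    probs = torch.sigmoid(logits)
    return probs.mean(), probs.std()             # mean prediction and uncertainty
```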
Colorectal cancer (CRC) is the third most common cancer in the United States. Tumor budding (TB) detection and quantification are crucial yet labor-intensive steps in determining CRC stage from histopathology images. To assist this process, we adapt the Segment Anything Model (SAM) to CRC histopathology images for TB segmentation using SAM-Adapter. In this approach, we automatically derive task-specific prompts from CRC images and train the SAM model in a parameter-efficient way. We compare our model's predictions against those of a model trained from scratch, using a pathologist's annotations as the reference. Our model achieves an intersection over union (IoU) of 0.65 and an instance-level Dice score of 0.75, showing promise in matching the pathologist's TB annotations. Our study offers a novel solution for identifying TBs on H&E-stained histopathology images and demonstrates the value of adapting a foundation model for pathology image segmentation tasks.
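As an illustration of the parameter-efficient idea, the sketch below freezes a pretrained SAM and trains only small bottleneck adapters appended to each image-encoder block. The adapter placement, bottleneck width, and checkpoint filename are assumptions; the paper's prompt-generation module is omitted entirely.

```python
# Hedged sketch of parameter-efficient SAM fine-tuning with adapters.
import torch.nn as nn
from segment_anything import sam_model_registry  # facebookresearch/segment-anything

class Adapter(nn.Module):
    """Bottleneck MLP with a residual connection; only these weights train."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

# Checkpoint path is a placeholder for a downloaded SAM ViT-B checkpoint.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
for p in sam.parameters():
    p.requires_grad = False          # freeze the entire foundation model

# Append a trainable adapter after each frozen transformer block.
for i, blk in enumerate(sam.image_encoder.blocks):
    adapter = Adapter(dim=768)       # 768 = ViT-B embedding dimension
    sam.image_encoder.blocks[i] = nn.Sequential(blk, adapter)
```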
Previous studies suggest that respiratory events are the leading cause of airway management-related deaths. Accurately predicting difficult-to-intubate (DI) patients can help clinicians allocate respiratory resources for safe airway management. However, current clinical bedside tests are highly dependent on the anesthesiologist's experience and yield only moderate accuracy with low sensitivity, so a diagnostic tool with high accuracy, and especially high sensitivity, is critically needed. We therefore propose an AI-based method to predict DI patients from frontal and profile facial images. Our ensemble model combining the two views achieves an AUC of 0.713 and a sensitivity of 68.5%. In addition, it raises the sensitivity of the thyromental distance test from 26.9% to 79.2% while maintaining an acceptable trade-off in specificity. Overall, our model can meaningfully augment the accuracy of current clinical tests for DI. We envision that it could be embedded in smartphones to serve as a bedside test for DI, and that our study provides a basis for future AI-based methods that predict DI from facial images.
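The abstract does not name the backbone or fusion rule, so the sketch below assumes one common configuration: a separate ImageNet-pretrained CNN per view, with the two predicted probabilities averaged. Thresholding the averaged score is what trades sensitivity against specificity.

```python
# Hedged sketch of a two-view (frontal + profile) ensemble classifier.
import torch
import torchvision.models as models

def make_branch():
    # ResNet-18 is an assumed backbone, not necessarily the paper's choice.
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = torch.nn.Linear(net.fc.in_features, 1)  # single binary DI logit
    return net

frontal_net, profile_net = make_branch(), make_branch()

def ensemble_prob(frontal_img, profile_img):
    """Average the two views' probabilities; a decision threshold on this
    score sets the sensitivity/specificity operating point."""
    p_frontal = torch.sigmoid(frontal_net(frontal_img))
    p_profile = torch.sigmoid(profile_net(profile_img))
    return (p_frontal + p_profile) / 2
```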
Current deep learning methods in histopathology are limited by the scarcity of labeled data and the time required to annotate it. Colorectal cancer (CRC) tumor budding quantification on H&E-stained slides is crucial for cancer staging and prognosis but is labor intensive and subject to human bias, making it difficult to acquire a large, fully annotated dataset for training a tumor budding (TB) segmentation/detection system. Here, we present a DatasetGAN-based approach that can generate an essentially unlimited number of images with TB masks from a moderate number of unlabeled images and a few annotated ones. The images generated by our model closely resemble real colon tissue on H&E-stained slides. We test this model by training a downstream segmentation network, UNet++, on the generated images and masks. Our results show that the trained UNet++ achieves reasonable TB segmentation performance, especially at the instance level. This study demonstrates the potential of an annotation-efficient segmentation model for automatic TB detection and quantification.
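A minimal sketch of the downstream step only, assuming the `segmentation_models_pytorch` implementation of UNet++; the DatasetGAN synthesis stage itself is omitted, and `GeneratedPairs` is a hypothetical placeholder standing in for the GAN-generated tile/mask pairs.

```python
# Training UNet++ on synthetic image/mask pairs (downstream step only).
import torch
import segmentation_models_pytorch as smp
from torch.utils.data import DataLoader, Dataset

class GeneratedPairs(Dataset):
    """Placeholder: in practice, H&E tiles and TB masks sampled from DatasetGAN."""
    def __init__(self, n=64):
        self.x = torch.rand(n, 3, 256, 256)
        self.y = (torch.rand(n, 1, 256, 256) > 0.5).float()
    def __len__(self): return len(self.x)
    def __getitem__(self, i): return self.x[i], self.y[i]

model = smp.UnetPlusPlus(encoder_name="resnet34", in_channels=3, classes=1)
loss_fn = smp.losses.DiceLoss(mode="binary")
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

loader = DataLoader(GeneratedPairs(), batch_size=8, shuffle=True)
for images, masks in loader:           # one pass over the synthetic set
    opt.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    opt.step()
```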
Breast cancer is the most common cancer diagnosed in women and causes over 40,000 deaths annually in the United States. In early-stage, HR+, HER2- invasive breast cancer, the Oncotype DX (ODX) Breast Cancer Recurrence Score Test predicts the risk of recurrence and the benefit of chemotherapy. However, this gene assay is costly and time-consuming, making it inaccessible to many patients. This study proposes a novel deep learning approach, Deep-ODX, which predicts ODX recurrence risk from routine H&E histopathology images. Deep-ODX is a multiple-instance learning model that uses a cross-attention neural network for instance aggregation. We train and evaluate Deep-ODX on a whole slide image dataset collected from 151 breast cancer patients, where it achieves an AUC of 0.862, outperforming existing deep learning models. This study indicates that deep learning can predict ODX results from histopathology images, offering a potentially cost-effective prognostic solution with broader accessibility.
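The abstract does not detail the cross-attention design, so the sketch below shows one plausible reading: a single learnable query token cross-attends over patch embeddings to pool a slide-level representation. The single-query design, embedding size, and head count are all assumptions.

```python
# Hedged sketch of cross-attention instance aggregation for MIL.
import torch
import torch.nn as nn

class CrossAttnPool(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))  # learnable slide token
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 1)                      # recurrence-risk logit

    def forward(self, patches):         # patches: (batch, n_patches, dim)
        q = self.query.expand(patches.size(0), -1, -1)
        slide, _ = self.attn(q, patches, patches)          # query attends to patches
        return self.head(slide.squeeze(1))
```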
KEYWORDS: Machine learning, Data modeling, Polysomnography, Education and training, Sleep apnea, Deep learning, Performance modeling, Pulmonary disorders, Neurological disorders, Medicine
Obstructive sleep apnea (OSA) is a prevalent disease affecting 10-15% of Americans and nearly one billion people worldwide. It leads to multiple symptoms, including daytime sleepiness; snoring, choking, or gasping during sleep; fatigue; headaches; non-restorative sleep; and insomnia due to frequent arousals. Although polysomnography (PSG) is the gold standard for OSA diagnosis, it is expensive, not universally available, and time-consuming, so many patients go undiagnosed for lack of access to the test. Given this incomplete access and high cost, many studies are seeking alternative diagnostic approaches based on other data modalities. Here, we propose a machine learning model to predict OSA severity from 2D frontal-view craniofacial images. In a cross-validation study of 280 patients, our method achieves an average AUC of 0.780, whereas the craniofacial analysis model proposed by a recent study achieves an AUC of only 0.638 on our dataset. The proposed model also outperforms the widely used STOP-BANG OSA screening questionnaire, which achieves an AUC of 0.52 on our dataset. Our findings indicate that deep learning has the potential to significantly reduce the cost of OSA diagnosis.
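To make the evaluation concrete, the snippet below illustrates the kind of AUC comparison the abstract reports: scoring the same patients with a learned model and with STOP-BANG totals, then comparing ROC AUCs. The scores and labels here are toy values for illustration only, not the study's data.

```python
# Illustrative AUC comparison between an image model and a questionnaire baseline.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0, 1, 1, 0, 1])             # toy OSA-severity labels
model_scores = np.array([0.2, 0.8, 0.7, 0.4, 0.9])   # hypothetical model outputs
stopbang_scores = np.array([2, 5, 3, 4, 6])    # hypothetical STOP-BANG totals (0-8)

print("image model AUC:", roc_auc_score(labels, model_scores))
print("STOP-BANG AUC:  ", roc_auc_score(labels, stopbang_scores))
```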