Few-shot tumor bud segmentation using generative model in colorectal carcinoma
3 April 2024
Abstract
Current deep learning methods in histopathology are limited by the scarcity of available data and the time required to label it. Colorectal cancer (CRC) tumor budding quantification, performed on H&E-stained slides, is crucial for cancer staging and prognosis but is subject to labor-intensive annotation and human bias. Acquiring a large-scale, fully annotated dataset for training a tumor budding (TB) segmentation/detection system is therefore difficult. Here, we present a DatasetGAN-based approach that can generate an essentially unlimited number of images with TB masks from a moderate number of unlabeled images and a few annotated images. The images generated by our model closely resemble real colon tissue on H&E-stained slides. We test the performance of this approach by training a downstream segmentation model, UNet++, on the generated images and masks. Our results show that the trained UNet++ model achieves reasonable TB segmentation performance, especially at the instance level. This study demonstrates the potential of an annotation-efficient segmentation model for automatic TB detection and quantification.
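The abstract describes a three-stage, DatasetGAN-style pipeline: a generative model trained on unlabeled tissue images exposes per-pixel features, a small label head fitted on a few annotated examples maps those features to TB masks, and the resulting synthetic (image, mask) pairs train a downstream UNet++. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation: ToyGenerator stands in for a trained StyleGAN-style generator, LabelHead and all hyperparameters are illustrative, the few-shot annotations are random placeholders, and the segmentation_models_pytorch UNet++ is an assumed choice of downstream model.

```python
# Hypothetical sketch of a DatasetGAN-style few-shot labeling pipeline.
# All class names, shapes, and hyperparameters are illustrative.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a trained StyleGAN-like generator that also exposes
    intermediate per-pixel features, as the DatasetGAN idea requires."""
    def __init__(self, z_dim=64, feat_ch=32):
        super().__init__()
        self.feat_ch = feat_ch
        self.fc = nn.Linear(z_dim, feat_ch * 16 * 16)
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.to_rgb = nn.Conv2d(feat_ch, 3, 1)

    def forward(self, z):
        feats = self.fc(z).view(-1, self.feat_ch, 16, 16)
        feats = self.up(feats)                  # per-pixel features, B x C x 64 x 64
        image = torch.tanh(self.to_rgb(feats))  # synthetic H&E-like image
        return image, feats

class LabelHead(nn.Module):
    """Tiny per-pixel classifier fitted on only a few annotated synthetic images."""
    def __init__(self, feat_ch=32, n_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv2d(feat_ch, 64, 1), nn.ReLU(),
                                 nn.Conv2d(64, n_classes, 1))

    def forward(self, feats):
        return self.mlp(feats)                  # per-pixel class logits

G, head = ToyGenerator().eval(), LabelHead()
ce = nn.CrossEntropyLoss()

# Step 1: fit the label head on a handful of annotated synthetic images.
few_z = torch.randn(5, 64)                      # latents of the annotated images
with torch.no_grad():
    _, few_feats = G(few_z)
few_masks = torch.randint(0, 2, (5, 64, 64))    # placeholder few-shot TB annotations
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    ce(head(few_feats), few_masks).backward()
    opt.step()

# Step 2: synthesize an effectively unlimited labeled dataset.
with torch.no_grad():
    images, feats = G(torch.randn(16, 64))
    masks = head(feats).argmax(dim=1)           # generated TB masks

# Step 3: train a downstream UNet++ on the synthetic pairs.
# Assumes segmentation_models_pytorch; any UNet++ implementation would do.
import segmentation_models_pytorch as smp
unetpp = smp.UnetPlusPlus(encoder_name="resnet18", encoder_weights=None,
                          in_channels=3, classes=2)
seg_opt = torch.optim.Adam(unetpp.parameters(), lr=1e-4)
for _ in range(10):
    seg_opt.zero_grad()
    ce(unetpp(images), masks).backward()
    seg_opt.step()
```

In a realistic setting the toy generator would be replaced by a StyleGAN-family model trained on the moderate pool of unlabeled H&E patches, and the downstream UNet++ would be evaluated on held-out, manually annotated tissue.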
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ziyu Su, Wei Chen, Preston J. Leigh, Usama Sajjad, Shuo Niu, Mostafa Rezapour, Wendy L. Frankel, Metin N. Gurcan, and M. Khalid Khan Niazi "Few-shot tumor bud segmentation using generative model in colorectal carcinoma", Proc. SPIE 12933, Medical Imaging 2024: Digital and Computational Pathology, 129330A (3 April 2024); https://doi.org/10.1117/12.3006418
KEYWORDS
Image segmentation, Tumors, Data modeling, Cancer detection, Deep learning