Presentation + Paper
SkinSAM: adapting the segmentation anything model for skin cancer segmentation
29 March 2024
Abstract
Skin cancer is a prevalent and potentially fatal disease that requires accurate and efficient diagnosis and treatment. Although manual tracing is the current standard in clinics, automated tools are desired to reduce human labor and improve accuracy. However, developing such tools is challenging due to the highly variable appearance of skin cancers and complex objects in the background. In this paper, we present SkinSAM, a fine-tuned model based on the Segment Anything Model (SAM) that achieves outstanding segmentation performance. The models were validated on the HAM10000 dataset, which includes 10,015 dermatoscopic images. While the larger models (ViT_L, ViT_H) performed better than the smaller one (ViT_b), the fine-tuned model (ViT_b_finetuned) exhibited the greatest improvement, with a mean pixel accuracy of 0.945, a mean Dice score of 0.8879, and a mean IoU score of 0.7843. Among the lesion types, vascular lesions showed the best segmentation results. Our research demonstrates the great potential of adapting SAM to medical image segmentation tasks.
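For reference, the three metrics reported in the abstract (pixel accuracy, Dice, IoU) can be computed from binary segmentation masks as in the minimal sketch below. This is an illustrative NumPy implementation under our own assumptions, not the authors' evaluation code; the function names and the random placeholder masks are hypothetical.

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of pixels where the binary prediction matches the ground truth."""
    return float((pred == gt).mean())

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient: 2|P ∩ G| / (|P| + |G|)."""
    inter = np.logical_and(pred, gt).sum()
    return float((2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))

def iou_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union: |P ∩ G| / |P ∪ G|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float((inter + eps) / (union + eps))

# Example: average the metrics over a (hypothetical) set of mask pairs,
# as one would do over the HAM10000 test images.
preds = [np.random.rand(256, 256) > 0.5 for _ in range(4)]  # placeholder predictions
gts   = [np.random.rand(256, 256) > 0.5 for _ in range(4)]  # placeholder ground truths
print("mean pixel accuracy:", np.mean([pixel_accuracy(p, g) for p, g in zip(preds, gts)]))
print("mean Dice:",           np.mean([dice_score(p, g)     for p, g in zip(preds, gts)]))
print("mean IoU:",            np.mean([iou_score(p, g)      for p, g in zip(preds, gts)]))
```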
Conference Presentation
© 2024 Published by SPIE. Downloading of the abstract is permitted for personal use only.
Mingzhe Hu, Yuheng Li, and Xiaofeng Yang "SkinSAM: adapting the segmentation anything model for skin cancer segmentation", Proc. SPIE 12929, Medical Imaging 2024: Image Perception, Observer Performance, and Technology Assessment, 129290U (29 March 2024); https://doi.org/10.1117/12.3006837
KEYWORDS: Image segmentation, Skin cancer, Tumor growth modeling, Performance modeling, Visual process modeling, Skin, Education and training