Paper
13 March 2019
Transfer learning for automatic cancer tissue detection using multispectral photoacoustic imaging
Abstract
Pathology diagnosis is usually performed by a human pathologist examining a stained tissue section on a glass slide under a microscope. In multi-specimen studies aimed at locating cancerous regions, such as after thyroidectomy, this requires significant labor-intensive processing at high cost. Multispectral photoacoustic (MPA) specimen imaging has proven successful in differentiating photoacoustic (PA) signal characteristics between histopathology-defined cancer regions and normal tissue. A more pragmatic research question is whether MPA imaging data can predict if a sectioned tissue slice contains cancer region(s). We propose to apply an Inception-ResNet-v2 convolutional neural network (CNN) to thyroid MPA data and evaluate this potential through transfer learning. The proposed algorithm first extracts features from the thyroid MPA image data using the CNN and then detects cancer using the softmax function in the last layer of the network. With a limited MPA dataset, the areas under the curve (AUCs) of the receiver operating characteristic (ROC) curves for cancer, benign nodule, and normal tissue are 0.73, 0.81, and 0.88, respectively.
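As a rough illustration of the transfer-learning pipeline the abstract describes, the sketch below assembles an ImageNet-pretrained Inception-ResNet-v2 backbone as a fixed feature extractor, adds a three-way softmax head (cancer, benign nodule, normal), and computes per-class ROC AUCs. The input size, the handling of the multispectral PA channels, the pretraining weights, the optimizer, and the training schedule are all assumptions for illustration, not details taken from the paper.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

NUM_CLASSES = 3  # cancer, benign nodule, normal (as reported in the abstract)

# Assumed: ImageNet-pretrained Inception-ResNet-v2 used as a frozen feature
# extractor; MPA patches assumed resampled to a 3-channel 299x299 input.
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet",
    input_shape=(299, 299, 3), pooling="avg")
backbone.trainable = False

# Softmax classification head on top of the extracted CNN features.
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training would use MPA image patches and one-hot labels (not shown here):
# model.fit(x_train, y_train, epochs=20, validation_data=(x_val, y_val))

def per_class_auc(y_true_onehot, y_prob):
    """One-vs-rest ROC AUC for each class, matching how results are reported."""
    return [roc_auc_score(y_true_onehot[:, k], y_prob[:, k])
            for k in range(y_true_onehot.shape[1])]

# y_prob = model.predict(x_test)
# print(per_class_auc(y_test_onehot, y_prob))
```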
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kamal Jnawali, Bhargava Chinni, Vikram Dogra, and Navalgund Rao "Transfer learning for automatic cancer tissue detection using multispectral photoacoustic imaging", Proc. SPIE 10950, Medical Imaging 2019: Computer-Aided Diagnosis, 109503W (13 March 2019); https://doi.org/10.1117/12.2506950
KEYWORDS: Cancer, Tissues, Tissue optics, Photoacoustic imaging, Convolutional neural networks, Tumor growth modeling, Multispectral imaging