Janek Gröhl (https://orcid.org/0000-0002-5332-4856), Melanie Schellenberg, Kris K. Dreher, Niklas Holzwarth, Minu D. Tizabi, Alexander Seitel, Lena Maier-Hein
Photoacoustic imaging (PAI) has the potential to revolutionize healthcare due to the valuable information on tissue physiology that is contained in multispectral signals. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral PA images to facilitate interpretability of recorded images. Based on a validation study with experimentally acquired data of healthy human volunteers, we show that a combination of tissue segmentation, sO2 estimation, and uncertainty quantification can create powerful analyses and visualizations of multispectral photoacoustic images.
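The abstract mentions sO2 (blood oxygenation) estimation from multispectral signals. A common baseline for this task, which the paper's deep learning approach improves upon, is linear spectral unmixing: the measured spectrum at each pixel is modeled as a weighted sum of oxy- and deoxyhemoglobin absorption spectra, and sO2 is the fitted oxyhemoglobin fraction. The sketch below illustrates this idea only; the absorption coefficients are illustrative placeholders, not literature values, and this is not the method used in the paper.

```python
# Toy linear-unmixing baseline for sO2 estimation.
# ABSORPTION holds illustrative (NOT literature-accurate) molar absorption
# values for [HbO2, Hb] at three hypothetical wavelengths.
ABSORPTION = [
    [2.0, 8.0],  # e.g. ~750 nm: Hb absorbs more strongly
    [4.0, 4.0],  # e.g. ~800 nm: near-isosbestic point
    [6.0, 2.0],  # e.g. ~850 nm: HbO2 absorbs more strongly
]

def estimate_so2(spectrum, absorption=ABSORPTION):
    """Estimate sO2 = c_HbO2 / (c_HbO2 + c_Hb) by least-squares unmixing.

    Solves the 2-chromophore normal equations (A^T A) c = A^T s
    with Cramer's rule, so no external linear-algebra library is needed.
    """
    ata = [[sum(a[i] * a[j] for a in absorption) for j in range(2)]
           for i in range(2)]
    atb = [sum(a[i] * s for a, s in zip(absorption, spectrum))
           for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    c_hbo2 = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    c_hb = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    # Concentrations cannot be negative; clip before forming the ratio.
    c_hbo2, c_hb = max(c_hbo2, 0.0), max(c_hb, 0.0)
    return c_hbo2 / (c_hbo2 + c_hb)

# Simulate a noiseless spectrum for sO2 = 0.7 and recover it.
true_so2 = 0.7
signal = [a[0] * true_so2 + a[1] * (1 - true_so2) for a in ABSORPTION]
print(round(estimate_so2(signal), 2))  # 0.7
```

In real tissue, unknown light fluence distorts the measured spectra, which is a key reason learning-based approaches combined with uncertainty quantification, as in this work, are of interest.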
Janek Gröhl, Melanie Schellenberg, Kris K. Dreher, Niklas Holzwarth, Minu D. Tizabi, Alexander Seitel, Lena Maier-Hein, "Semantic segmentation of multispectral photoacoustic images using deep learning," Proc. SPIE 11642, Photons Plus Ultrasound: Imaging and Sensing 2021, 116423F (5 March 2021); https://doi.org/10.1117/12.2578135