Automated lesion segmentation is essential to provide fast, reproducible tumor load estimates. Though deep learning methods have achieved unprecedented results in this field, they are often difficult to interpret, hampering their potential integration in the clinic. An interpretable deep learning approach is proposed for segmenting melanoma lesions on whole-body fluorine-18 fluorodeoxyglucose ([18F]FDG) positron emission tomography (PET) / computed tomography (CT). The method consists of an automated PET thresholding step to identify FDG-avid regions, followed by a three-channel nnU-Net that considers the binary mask in addition to the PET and CT images. This segmentation step differentiates healthy from malignant tissue and removes the restriction on lesion boundaries imposed by the thresholding. The proposed method, trained on 267 images and evaluated on two sets acquired at the same institute, achieved mean Dice similarity coefficients (DSC) of 0.779 and 0.638 with mean absolute volume differences of 15.2 mL and 22.0 mL. The DSC proved significantly higher compared to a direct, two-channel nnU-Net considering only the PET and CT. The same was observed when retraining and testing on subsets of the public data of the autoPET challenge, containing melanoma, lung cancer and lymphoma patients. In addition, overall results proved superior to a previously proposed two-step approach, in which a classification network categorized each component of increased tracer uptake as healthy or malignant. The proposed lesion segmentation method for whole-body [18F]FDG PET/CT incorporates prior thresholding information while allowing more flexibility in the lesion delineation than a pure thresholding approach and increased interpretability over a direct segmentation network.
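The input construction described above can be sketched in a few lines: threshold the PET image to obtain a binary mask of FDG-avid regions, then stack PET, CT and the mask as the three channels fed to the nnU-Net. This is a minimal illustration, not the authors' implementation; the fixed SUV cutoff of 2.5 is a placeholder (the abstract describes an automated thresholding step), and the function name is hypothetical.

```python
import numpy as np

def build_threechannel_input(pet_suv, ct_hu, suv_threshold=2.5):
    """Sketch of the proposed preprocessing: threshold the PET volume to
    obtain a binary mask of FDG-avid regions, then stack PET, CT and the
    mask as a three-channel input for the segmentation network.

    suv_threshold=2.5 is an illustrative placeholder; the proposed method
    determines the threshold automatically rather than using a fixed value.
    """
    avid_mask = (pet_suv >= suv_threshold).astype(np.float32)
    # channel axis first: (3, D, H, W), as is common for 3D CNN inputs
    return np.stack([pet_suv, ct_hu, avid_mask], axis=0)
```

The network then refines this mask rather than being bound by it, which is what removes the thresholding restriction on lesion boundaries while keeping the mask as an interpretable prior.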
PET/CT is widely used in oncology. Yet the identification of lesions, as described by the PET response criteria in solid tumors (PERCIST), still relies on manual identification of a volume of interest (VOI), typically in the liver, for determining the optimal threshold. The process requires expert knowledge and is prone to errors and inter-observer variability. A fully automated procedure for the application of the PERCIST criteria to whole-body images is proposed. The method relies on automated localization of the liver on whole-body CT using a dense V-net trained on large field-of-view images. Inside the liver, a spherical VOI is determined which exhibits the lowest product of the coefficients of variation (defined as the standard deviation over the mean) in PET and CT. The liver segmentation achieved a median Dice score of 0.87 ± 0.12 in 10-fold cross-validation, which proved sufficient for reliable identification of a VOI. The full pipeline was evaluated on an external PET/CT dataset of 18 patients. To assess reproducibility, geometric and intensity variations were applied, simulating potential image differences when scanning the same person under different conditions. The variability of the resulting threshold was evaluated and compared to the manual approach performed by three observers. The proposed method exhibited superior reproducibility, with a mean threshold of 4.01 ± 0.02 SUVbw, compared to 4.11 ± 0.16 SUVbw for the manual method. The automated procedure renders the analysis of large amounts of PET/CT data feasible and could be used to detect anomalies in the manual approach.
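The VOI selection criterion above admits a direct brute-force sketch: for every candidate sphere centre inside the liver mask, compute the coefficient of variation (standard deviation over mean) of the PET and CT intensities within the sphere, and keep the centre minimising their product. This is a hedged illustration under assumed inputs (co-registered PET/CT volumes and a binary liver mask on the same voxel grid); the function name, radius parameter and exhaustive search are assumptions, not the authors' implementation.

```python
import numpy as np

def cov(values):
    # coefficient of variation: standard deviation over the mean
    return np.std(values) / np.mean(values)

def select_voi(pet, ct, liver_mask, radius=3):
    """Return the sphere centre (and its score) inside the liver mask that
    minimises CoV(PET) * CoV(CT), as in the proposed VOI selection step.
    Brute-force sketch; the radius in voxels is an illustrative choice."""
    r = radius
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    offsets = np.argwhere(zz**2 + yy**2 + xx**2 <= r**2) - r

    best_center, best_score = None, np.inf
    for center in np.argwhere(liver_mask):
        idx = center + offsets
        # skip spheres that extend beyond the volume or outside the liver
        if (idx < 0).any() or (idx >= np.array(pet.shape)).any():
            continue
        z, y, x = idx.T
        if not liver_mask[z, y, x].all():
            continue
        score = cov(pet[z, y, x]) * cov(ct[z, y, x])
        if score < best_score:
            best_center, best_score = tuple(center), score
    return best_center, best_score
```

Multiplying the two coefficients of variation favours a region that is homogeneous in both modalities simultaneously, which is why a sphere in noisy or heterogeneous liver tissue is rejected even if one of the two images happens to be locally uniform.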