Significance: Traditional diffuse optical tomography (DOT) reconstructions are hampered by image artifacts arising from factors such as DOT sources being closer to shallow lesions, poor optode-tissue coupling, tissue heterogeneity, and large high-contrast lesions lacking information in deeper regions (known as the shadowing effect). Addressing these challenges is crucial for improving the quality of DOT images and obtaining a robust lesion diagnosis. Aim: We address the limitations of current DOT image reconstruction by introducing an attention-based U-Net (APU-Net) model to enhance the image quality of DOT reconstruction, ultimately improving lesion diagnostic accuracy. Approach: We designed an APU-Net model incorporating a contextual transformer attention module to enhance DOT reconstruction. The model was trained on simulation and phantom data, focusing on challenges such as artifact-induced distortions and lesion-shadowing effects, and was then evaluated on clinical data. Results: Transitioning from simulation and phantom data to clinical patient data, our APU-Net model effectively reduced artifacts, with an average artifact contrast decrease of 26.83%, and improved image quality. In addition, statistical analyses revealed significant contrast improvements in the depth profile, with average contrast increases of 20.28% and 45.31% for the second and third target layers, respectively. These results highlight the efficacy of our approach for breast cancer diagnosis. Conclusions: The APU-Net model improves the image quality of DOT reconstruction by reducing image artifacts and improving the target depth profile.
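The abstract does not detail the attention module's internals. As a rough illustration only, a contextual-transformer-style attention block that could sit on a U-Net skip connection might look like the following PyTorch sketch; the layer structure, channel counts, and grouping are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch of a contextual-transformer-style attention block that could
# enhance a U-Net feature map; all sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class ContextualAttention(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        # Static context: a k x k grouped convolution over the keys.
        self.key_embed = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size,
                      padding=kernel_size // 2, groups=4, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.value_embed = nn.Conv2d(channels, channels, 1, bias=False)
        # Dynamic context: attention weights from the query + static key.
        self.attention = nn.Sequential(
            nn.Conv2d(2 * channels, channels // 2, 1, bias=False),
            nn.BatchNorm2d(channels // 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, channels, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k_static = self.key_embed(x)                # local static context
        v = self.value_embed(x)
        attn = self.attention(torch.cat([x, k_static], dim=1))
        k_dynamic = attn.sigmoid() * v              # input-dependent context
        return k_static + k_dynamic

# Example: enhance a 64-channel feature map from a U-Net encoder stage.
feat = torch.randn(1, 64, 32, 32)
out = ContextualAttention(64)(feat)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```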
Significance: We evaluate the efficiency of integrating ultrasound (US) and diffuse optical tomography (DOT) images for predicting pathological complete response (pCR) to neoadjuvant chemotherapy (NAC) in breast cancer patients. The ultrasound-diffuse optical tomography (USDOT)-Transformer model represents a significant step toward accurate prediction of pCR, which is critical for personalized treatment planning. Aim: We aim to develop and assess the performance of the USDOT-Transformer model, which combines US and DOT images with tumor receptor biomarkers to predict pCR in breast cancer patients undergoing NAC. Approach: We developed the USDOT-Transformer model using a dual-input transformer to process co-registered US and DOT images along with tumor receptor biomarkers. Our dataset comprised imaging data from 60 patients at multiple time points during their chemotherapy treatment. We used fivefold cross-validation to assess the model's performance, comparing its results against single-modality models using US or DOT alone. Results: The USDOT-Transformer model demonstrated excellent predictive performance, with a mean area under the receiver operating characteristic curve of 0.96 (95% CI: 0.93 to 0.99) across the fivefold cross-validation. The integration of US and DOT images significantly enhanced the model's ability to predict pCR, outperforming models that relied on a single imaging modality (0.87 for US and 0.82 for DOT). This performance indicates the potential of advanced deep learning techniques and multimodal imaging data to improve the accuracy of pCR prediction. Conclusion: The USDOT-Transformer model offers a promising non-invasive approach for predicting pCR to NAC in breast cancer patients. By leveraging the structural and functional information from US and DOT images, the model offers a faster and more reliable tool for personalized treatment planning. Future work will focus on expanding the dataset and refining the model to further improve its accuracy and generalizability.
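For readers unfamiliar with dual-input transformers, the following is a minimal, hypothetical PyTorch sketch of how US patches, DOT patches, and receptor biomarkers could be fused as tokens in a single encoder. Every dimension, the patch size, and the three-biomarker input are assumptions rather than the USDOT-Transformer's actual design.

```python
# Hypothetical dual-input transformer: US and DOT images are embedded as
# separate token streams and fused with a biomarker token; sizes are assumed.
import torch
import torch.nn as nn

class DualInputTransformer(nn.Module):
    def __init__(self, dim: int = 128, n_biomarkers: int = 3):
        super().__init__()
        # Separate 16x16 patch embeddings for each imaging modality.
        self.us_embed = nn.Conv2d(1, dim, kernel_size=16, stride=16)
        self.dot_embed = nn.Conv2d(1, dim, kernel_size=16, stride=16)
        self.bio_embed = nn.Linear(n_biomarkers, dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, 1)  # pCR probability (as a logit)

    def forward(self, us, dot, biomarkers):
        us_tok = self.us_embed(us).flatten(2).transpose(1, 2)
        dot_tok = self.dot_embed(dot).flatten(2).transpose(1, 2)
        bio_tok = self.bio_embed(biomarkers).unsqueeze(1)
        cls = self.cls_token.expand(us.size(0), -1, -1)
        tokens = torch.cat([cls, us_tok, dot_tok, bio_tok], dim=1)
        return self.head(self.encoder(tokens)[:, 0])

model = DualInputTransformer()
logit = model(torch.randn(2, 1, 64, 64),   # US image
              torch.randn(2, 1, 64, 64),   # co-registered DOT image
              torch.randn(2, 3))           # e.g., ER/PR/HER2 biomarkers
print(logit.shape)  # torch.Size([2, 1])
```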
KEYWORDS: Histograms, Tumor growth modeling, Image classification, Breast cancer, Feature extraction, Breast, Data modeling, Image restoration, Deep learning, Education and training
Significance: Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated great potential for breast cancer diagnosis, in which real-time or near real-time diagnosis with high accuracy is desired. Aim: We aim to use US-guided DOT to achieve automated, fast, and accurate classification of breast lesions. Approach: We propose a two-stage classification strategy with deep learning. In the first stage, US images and histograms created from DOT perturbation measurements are combined to predict benign lesions. The remaining suspicious lesions are then passed to the second stage, which combines US image features, DOT histogram features, and 3D DOT reconstructed images for final diagnosis. Results: The first stage alone identified 73.0% of benign cases without image reconstruction. In distinguishing between benign and malignant breast lesions in patient data, the two-stage classification approach achieved an area under the receiver operating characteristic curve of 0.946, outperforming the diagnoses of all single-modality models and of a single-stage classification model combining all US image, DOT histogram, and DOT imaging features. Conclusions: The proposed two-stage classification strategy achieves better classification accuracy than single-modality models and a single-stage classification model that combines all features. It can potentially distinguish breast cancers from benign lesions in near real-time.
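The gating logic of such a two-stage scheme can be sketched compactly. The snippet below is an illustrative Python example with scikit-learn-style classifiers; the threshold value, feature dimensions, and classifier choice are placeholders, not the paper's trained models.

```python
# Illustrative two-stage gating: stage 1 screens out benign lesions using
# US + DOT-histogram features (no reconstruction needed); only suspicious
# lesions reach stage 2, which adds reconstructed-image features.
import numpy as np

def two_stage_predict(stage1_model, stage2_model,
                      us_hist_features, recon_features,
                      benign_threshold=0.2):
    p1 = stage1_model.predict_proba(us_hist_features)[:, 1]
    preds = np.zeros_like(p1)
    suspicious = p1 >= benign_threshold      # confident benign cases stop here
    if suspicious.any():
        combined = np.hstack([us_hist_features[suspicious],
                              recon_features[suspicious]])
        preds[suspicious] = stage2_model.predict_proba(combined)[:, 1]
    return preds

# Usage with stand-in classifiers trained on random placeholder data:
from sklearn.linear_model import LogisticRegression
X_hist = np.random.rand(20, 10)
X_recon = np.random.rand(20, 5)
y = np.random.randint(0, 2, 20)
s1 = LogisticRegression().fit(X_hist, y)
s2 = LogisticRegression().fit(np.hstack([X_hist, X_recon]), y)
scores = two_stage_predict(s1, s2, X_hist, X_recon)
```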
Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated success in breast cancer diagnosis. However, DOT data pre-processing and reconstruction still require some manual operation, for example, contralateral reference selection and data cleaning. In this study, we introduce an automated data pre-processing and reconstruction pipeline to accelerate DOT clinical translation. The pipeline integrates several data pre-processing modules and reconstruction methods that adapt to the data, and it is implemented with a graphical user interface. Initial testing has shown that it can automate the DOT workflow immediately after data acquisition and provide an accurate diagnostic score for the probability of cancer versus a benign lesion.
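As a sketch of how such a pipeline might be organized, the following Python skeleton chains data cleaning, reference selection, and perturbation computation. Every function body here is a trivial stand-in for the actual adaptive modules, which are not specified in the abstract.

```python
# Skeleton of an automated pre-processing pipeline; each step is a
# placeholder implementation, not the actual algorithm from the study.
import numpy as np

def clean_measurements(data: np.ndarray) -> np.ndarray:
    # Stand-in data cleaning: mask channels far from the per-channel median.
    dev = np.abs(data - np.median(data, axis=0))
    return np.where(dev < 3 * data.std(axis=0), data, np.nan)

def select_reference(reference_scans: list) -> np.ndarray:
    # Stand-in automated reference selection: pick the lowest-variance scan.
    return min(reference_scans, key=lambda s: s.var())

def run_pipeline(target: np.ndarray, references: list) -> float:
    cleaned = clean_measurements(target)
    reference = select_reference(references)
    perturbation = np.nan_to_num((cleaned - reference) / reference)
    # Reconstruction and diagnosis would follow here; return a placeholder
    # score derived from the perturbation magnitude.
    return float(np.tanh(np.abs(perturbation).mean()))

score = run_pipeline(np.random.rand(9, 14),
                     [np.random.rand(9, 14) for _ in range(3)])
```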
Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated potential for breast cancer diagnosis. However, previous diagnostic strategies all require image reconstruction, which hinders real-time diagnosis. In this study, we propose a deep learning approach that combines DOT frequency-domain measurement data and co-registered US images to classify breast lesions. The combined deep learning model achieved an AUC of 0.886 in distinguishing between benign and malignant breast lesions in patient data without reconstructing images.
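One plausible form of such a reconstruction-free fusion model is a two-branch network: a small CNN encodes the US image, an MLP encodes the frequency-domain measurement vector, and the features are concatenated for classification. The PyTorch sketch below illustrates this; the branch sizes and the 252-channel measurement length are assumptions, not the paper's architecture.

```python
# Hypothetical two-branch fusion classifier: CNN for the US image, MLP for
# the frequency-domain DOT measurements; all layer sizes are assumed.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, n_measurements: int = 252):
        super().__init__()
        self.us_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.dot_branch = nn.Sequential(
            nn.Linear(n_measurements, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 32, 1)  # benign-vs-malignant logit

    def forward(self, us_img, dot_meas):
        feats = torch.cat([self.us_branch(us_img),
                           self.dot_branch(dot_meas)], dim=1)
        return self.head(feats)

logit = FusionClassifier()(torch.randn(4, 1, 128, 128),  # US images
                           torch.randn(4, 252))          # amplitude + phase
print(logit.shape)  # torch.Size([4, 1])
```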
Significance: “Difference imaging,” which reconstructs target optical properties using measurements with and without target information, is often used in diffuse optical tomography (DOT) in vivo imaging. However, taking additional reference measurements is time-consuming, and mismatches between the target medium and the reference medium can cause inaccurate reconstruction.Aim: We aim to streamline the data acquisition and mitigate the mismatch problems in DOT difference imaging using a deep learning-based approach to generate data from target measurements only.Approach: We train an artificial neural network to output data for difference imaging from target measurements only. The model is trained and validated on simulation data and tested with simulations, phantom experiments, and clinical data from 56 patients with breast lesions.Results: The proposed method performs comparably to the traditional approach when there is no mismatch between the target side and the reference side, and it outperforms the traditional approach when there is a mismatch. It also improves the target-to-artifact ratio and lesion localization in patient data.Conclusions: The proposed method can simplify the data acquisition procedure, mitigate mismatch problems, and improve reconstructed image quality in DOT difference imaging.
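Conceptually, the data-generation step can be as simple as a regression network mapping target-side measurements to the reference-side data needed for difference imaging. The sketch below shows that idea in PyTorch; the layer widths and the 252-channel measurement size are illustrative assumptions, not the trained network from the paper.

```python
# Minimal sketch: an MLP predicts reference measurements from target-side
# measurements, so no separate reference scan is needed; sizes are assumed.
import torch
import torch.nn as nn

ref_generator = nn.Sequential(
    nn.Linear(252, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 252),          # predicted reference measurements
)

target = torch.randn(8, 252)      # measurements from the lesion side only
predicted_reference = ref_generator(target)
# Difference data for reconstruction, without a reference scan:
perturbation = target - predicted_reference
```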