KEYWORDS: Ultrasonography, Signal to noise ratio, Signal attenuation, Image enhancement, Medical imaging, Interference (communication), Network architectures, Image segmentation, Image processing, Data acquisition
Ultrasound image quality is strongly dependent on penetration depth and attenuation. The transmit voltage in ultrasonic systems can be raised to increase output power and improve the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in deeper regions. However, the utility of high transmit voltages, and thus high output power, is limited by the associated thermal and mechanical bioeffects. Additionally, the ability to increase output power is limited in portable and low-cost ultrasound devices, which operate at lower power. We propose a software-based approach, using a conditional generative adversarial network (cGAN), to amplify signals in deeper regions and enhance image quality without increasing the transmit voltage. The cGAN was customized and trained with beamformed radio-frequency phantom data pairs (n=288) acquired with a Verasonics Vantage system, with input data taken at a low output voltage (20 V) and corresponding output data taken at a high output voltage (70 V). We trained and tested the performance of different loss functions and cGAN architectures. Our proposed model, tested on a hold-out phantom data set (n=73), improved the average penetration depth by roughly 16% (a 1 cm gain in penetration depth) compared to the low-voltage images. We found an average increase in CNR of 160.45%±117.64%, an increase in peak SNR of 56.75%±1.89 dB, and an increase in SNR of 32.68%±4.84 dB, relative to the low-voltage images, for selected hyper- and hypoechoic regions of interest. This work has potential applications in fetal imaging, where safety guidelines significantly limit transmit voltage, and in portable devices.
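The abstract does not specify the loss functions it compared, but a common choice for paired image-to-image cGANs of this kind is a pix2pix-style composite generator loss: an adversarial term plus a weighted L1 reconstruction term. The following is a minimal NumPy sketch of that composite loss under those assumptions; all names, shapes, and the weight `lam` are illustrative, not taken from the paper.

```python
import numpy as np

def cgan_generator_loss(fake_scores, generated, target, lam=100.0):
    """Illustrative pix2pix-style generator loss: a binary cross-entropy
    adversarial term (the discriminator should score generated frames as
    "real") plus an L1 reconstruction term weighted by `lam`.
    `fake_scores` are discriminator outputs in (0, 1) for generated frames;
    `generated` and `target` are the low- and high-voltage frame pair."""
    eps = 1e-12                                   # avoid log(0)
    adv = -np.mean(np.log(fake_scores + eps))     # BCE against label "real" (1)
    l1 = np.mean(np.abs(generated - target))      # pixel-wise reconstruction
    return adv + lam * l1

# Toy stand-ins for a beamformed low/high-voltage frame pair.
rng = np.random.default_rng(0)
gen = rng.normal(size=(64, 64))
tgt = gen + 0.01 * rng.normal(size=(64, 64))
scores = np.full(16, 0.9)                         # discriminator patch scores
loss = cgan_generator_loss(scores, gen, tgt)
```

The large `lam` reflects the usual pix2pix design choice of letting the L1 term dominate so the generator stays faithful to the paired target while the adversarial term sharpens texture.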
For cervical cancer screening in low-HDI countries, the WHO recommends that a diagnosis be made immediately upon cervical visualization. To address variability in providers' visual interpretations, we use CNNs to classify images from a low-cost, FDA-certified, portable Pocket colposcope as positive for high-grade precancer in a triaged population. We show that combining white-light acetic acid and green-light image stacks improves the AUC to 0.9. Pocket CARE can be used at the community level without specialized physicians or inaccessible equipment, broadening access to early detection and treatment of precursor lesions before they advance to cancer.
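The abstract describes feeding the CNN a stack of white-light (acetic acid) and green-light images. One plausible way to realize such a stack is channel-wise concatenation, sketched below; the shapes, function name, and 4-channel layout are assumptions for illustration, not the paper's stated pipeline.

```python
import numpy as np

def stack_modalities(via_rgb, green):
    """Concatenate a white-light acetic-acid RGB cervigram with a
    single-channel green-light image along the channel axis, producing
    a 4-channel array a CNN could consume. Illustrative only."""
    if via_rgb.shape[:2] != green.shape[:2]:
        raise ValueError("modalities must share spatial dimensions")
    return np.concatenate([via_rgb, green[..., None]], axis=-1)

via = np.zeros((256, 256, 3))   # toy white-light RGB image
grn = np.zeros((256, 256))      # toy green-light image
stack = stack_modalities(via, grn)
```

Stacking modalities as channels lets a single CNN learn cross-modality features (e.g., acetowhitening plus vascular contrast) without architectural changes.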
The World Health Organization recommends visual inspection with acetic acid (VIA) and/or Lugol's iodine (VILI) for cervical cancer screening in low-resource settings. Human interpretation of diagnostic indicators for visual inspection is qualitative, subjective, and has high inter-observer discordance, which can lead both to adverse outcomes for the patient and to unnecessary follow-ups. In this work, we present a simple method for automatic feature extraction and classification of Lugol's iodine cervigrams acquired with a low-cost, miniature, digital colposcope. Algorithms to preprocess expert-physician-labelled cervigrams and to extract simple but powerful color-based features are introduced. The features are used to train a support vector machine model to classify cervigrams based on expert physician labels. The selected framework achieved a sensitivity, specificity, and accuracy of 89.2%, 66.7%, and 80.6%, respectively, against the majority diagnosis of the expert physicians in discriminating cervical intraepithelial neoplasia (CIN+) from normal tissue. The proposed classifier also achieved an area under the curve of 0.84 when trained with the majority diagnosis of the expert physicians. The results suggest that simple color-based features may enable unbiased automation of VILI cervigrams, opening the door to a full system of low-cost data acquisition complemented with automatic interpretation.
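The color-feature-plus-SVM pipeline described above can be sketched in a few lines. The specific features here (per-channel mean and standard deviation) and the toy color distributions are illustrative stand-ins for the paper's color-based features, not its actual implementation; scikit-learn's `SVC` is assumed as the SVM.

```python
import numpy as np
from sklearn.svm import SVC

def color_features(cervigram_rgb):
    """Per-channel color statistics (mean and std of R, G, B) as a
    6-element feature vector; a simple illustrative stand-in for the
    paper's color-based features."""
    chans = cervigram_rgb.reshape(-1, 3).astype(float)
    return np.concatenate([chans.mean(axis=0), chans.std(axis=0)])

# Toy dataset: CIN+ regions tend not to take up Lugol's iodine and appear
# more yellow, while normal tissue stains dark brown (means are made up).
rng = np.random.default_rng(1)
X, y = [], []
for label, base in [(1, (200, 180, 60)), (0, (120, 80, 50))]:
    for _ in range(20):
        img = np.clip(rng.normal(base, 15, size=(32, 32, 3)), 0, 255)
        X.append(color_features(img))
        y.append(label)
clf = SVC(kernel="rbf").fit(np.array(X), y)
pred = clf.predict(np.array(X))
```

With well-separated color distributions, even these six statistics are enough for the SVM to separate the two toy classes, which is the intuition behind using simple color features for VILI interpretation.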