Auscultation is an established technique in the clinical assessment of respiratory disorders. It is safe and inexpensive, but diagnosing disease with a stethoscope requires expertise and is typically confined to hospital or office visits. Some clinical scenarios instead require continuous monitoring and automated analysis of respiratory sounds to pre-screen and track diseases, such as the rapidly spreading COVID-19. Recent studies suggest that audio recordings of bodily sounds captured by mobile devices may carry features that help distinguish patients with COVID-19 from healthy controls. Here, we propose a novel deep learning technique to automatically detect COVID-19 patients from brief audio recordings of their cough and breathing sounds. The proposed technique first extracts spectrogram features from the respiratory recordings, and then classifies disease state via a hierarchical vision transformer architecture. Demonstrations are provided on a crowdsourced database of respiratory sounds from COVID-19 patients and healthy controls. The proposed transformer model is compared against alternative methods based on state-of-the-art convolutional and transformer architectures, as well as traditional machine-learning classifiers. Our results indicate that the proposed model performs on par with or better than competing methods. In particular, the proposed technique distinguishes COVID-19 patients from healthy subjects with over 94% AUC.
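As a rough illustration of the first stage of such a pipeline (this sketch is not from the paper; the function name, sampling rate, and STFT parameters are assumptions), a log-magnitude spectrogram can be computed from a short respiratory recording with SciPy:

```python
import numpy as np
from scipy.signal import spectrogram

def audio_to_log_spectrogram(x, fs=16000, nperseg=512, noverlap=256):
    """Compute a log-magnitude spectrogram image from a 1-D audio signal.

    The resulting 2-D array (frequency x time) can be fed to an
    image-based classifier such as a vision transformer.
    """
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    # Small offset avoids log(0) in silent frames.
    return np.log(Sxx + 1e-10)

# Toy 1-second "cough-like" signal: windowed noise burst.
fs = 16000
x = np.random.randn(fs) * np.hanning(fs)
S = audio_to_log_spectrogram(x, fs)
# S has shape (n_freq_bins, n_time_frames) = (257, 61) for these parameters.
```

In practice, batches of such spectrograms would be resized and normalized before being passed to the classification network.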
In many settings, multiple Magnetic Resonance Imaging (MRI) scans with different contrast
characteristics are performed at a single patient visit. Unfortunately, MRI data acquisition is inherently slow, creating
a persistent need to accelerate scans. Multi-contrast reconstruction recovers the different contrasts
jointly. Previous approaches solve a regularized optimization problem using group sparsity and/or
color total variation, via composite-splitting denoising and FISTA. Yet existing methods leave significant room for
improvement in computation time, ease of parameter selection, and robustness of reconstructed image
quality. The choice of sparsifying transformation is critical in applications of compressed sensing. Here we propose
using non-convex p-norm group sparsity (with p < 1), and apply color total variation (CTV). Our method is readily
applicable to magnitude images rather than to the real and imaginary parts separately. We use the constrained
form of the problem, which allows an easier choice of the data-fidelity error bound (based on the noise power determined
from a noise-only scan without any RF excitation). We solve the problem using an adaptation of the Alternating
Direction Method of Multipliers (ADMM), which provides faster convergence in terms of CPU time. We
demonstrate the effectiveness of the method on two MR image sets (numerical brain phantom images and SRI24
atlas data) in terms of CPU time and image quality. We show that a non-convex group-sparsity function that uses the
p-norm instead of its convex counterpart accelerates convergence and improves the peak signal-to-noise ratio
(pSNR), especially for highly undersampled data.
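The non-convex group-sparsity term can be illustrated with a reweighted group-shrinkage step, a standard surrogate for the proximal operator of the p-norm when p < 1 (which has no general closed form). This is a minimal sketch under that assumption, not the solver from the paper; `pnorm_group_shrink` and its threshold rule are illustrative:

```python
import numpy as np

def pnorm_group_shrink(groups, lam, p=0.5, eps=1e-8):
    """One reweighted group-shrinkage step approximating the prox of
    lam * sum_g ||x_g||_2^p for p < 1.

    Each group is shrunk by a data-dependent threshold
    lam * p * ||x_g||^(p-1): large-norm groups are barely attenuated,
    while small-norm groups are driven exactly to zero.
    """
    out = []
    for g in groups:
        n = np.linalg.norm(g)
        w = lam * p * (n + eps) ** (p - 1)      # reweighted threshold
        scale = max(0.0, 1.0 - w / (n + eps))   # group soft-threshold
        out.append(scale * g)
    return out

# A strong group survives nearly unchanged; a weak group is zeroed.
res = pnorm_group_shrink(
    [np.array([100.0, 0.0]), np.array([0.006, 0.008])], lam=1.0, p=0.5
)
```

Within an ADMM iteration, a step of this kind would alternate with a data-fidelity projection and the CTV update.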
Conference Committee Involvement (6)
Image Processing
17 February 2025 | San Diego, California, United States
Image Processing
19 February 2024 | San Diego, California, United States
Image Processing
20 February 2023 | San Diego, California, United States
Image Processing
20 February 2022 | San Diego, California, United States
Image Processing
15 February 2021 | Online Only, California, United States