Scalable and generalizable deep learning reconstructions for computational imaging by local conditional neural fields
13 March 2024
Abstract
Deep learning has revolutionized computational imaging, offering powerful solutions for performance enhancement and addressing diverse challenges. However, traditional discrete pixel-based representations limit the ability of these methods to capture continuous, multiscale details of objects. Here, we introduce a novel Local Conditional Neural Fields (LCNF) framework that leverages a continuous implicit neural representation. We demonstrate the capabilities of LCNF in solving the highly ill-posed inverse problem in Fourier ptychographic microscopy (FPM) with multiplexed measurements. LCNF achieves versatile and generalizable continuous-domain super-resolution image reconstruction by combining a CNN-based encoder with an MLP-based decoder conditioned on a learned local latent vector. We show that LCNF can accurately reconstruct wide field-of-view, high-resolution phase images, robustly capture continuous object priors, and eliminate various phase artifacts, even when trained on imperfect datasets. We further demonstrate that LCNF exhibits strong generalization, reconstructing diverse biological samples with limited training data or a dataset simulated using natural images.
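To make the encoder-decoder design described in the abstract concrete, the following PyTorch-style sketch shows one plausible reading of an LCNF-like model: a CNN encoder maps multiplexed measurements to a latent feature map, and an MLP decoder predicts the phase at arbitrary continuous coordinates, conditioned on a locally interpolated latent vector. All layer widths, the positional encoding, and every name below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical LCNF-style sketch: CNN encoder -> local latent map,
# coordinate MLP decoder conditioned on the interpolated local latent vector.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalEncoder(nn.Module):
    def __init__(self, in_channels=4, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_dim, 3, padding=1),
        )

    def forward(self, measurements):           # (B, C, H, W) multiplexed images
        return self.net(measurements)          # (B, latent_dim, H, W) latent map

class CoordinateDecoder(nn.Module):
    def __init__(self, latent_dim=64, hidden=256, n_freqs=8):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = latent_dim + 2 * 2 * n_freqs   # latent + Fourier-encoded (x, y)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                # predicted phase value
        )

    def positional_encoding(self, coords):      # coords in [-1, 1], shape (B, N, 2)
        freqs = 2.0 ** torch.arange(self.n_freqs, dtype=torch.float32,
                                    device=coords.device) * math.pi
        angles = coords.unsqueeze(-1) * freqs    # (B, N, 2, n_freqs)
        return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(2)

    def forward(self, latent_map, coords):
        # Bilinearly interpolate the latent map at the query coordinates so the
        # decoder is conditioned on a *local* latent vector for each point.
        grid = coords.unsqueeze(2)                                 # (B, N, 1, 2)
        local_latent = F.grid_sample(latent_map, grid, align_corners=True)
        local_latent = local_latent.squeeze(-1).permute(0, 2, 1)   # (B, N, latent_dim)
        features = torch.cat([local_latent, self.positional_encoding(coords)], dim=-1)
        return self.mlp(features)                                   # (B, N, 1) phase

# Usage: encode once, then query the continuous field at any resolution.
encoder, decoder = LocalEncoder(), CoordinateDecoder()
measurements = torch.randn(1, 4, 64, 64)          # dummy multiplexed FPM data
coords = torch.rand(1, 1024, 2) * 2 - 1           # random query points in [-1, 1]
phase = decoder(encoder(measurements), coords)    # (1, 1024, 1)
```

Because the decoder takes continuous coordinates rather than a fixed pixel grid, the same trained network can in principle be queried at arbitrary output resolution, which is the property the abstract refers to as continuous-domain super-resolution.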
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Hao Wang and Lei Tian "Scalable and generalizable deep learning reconstructions for computational imaging by local conditional neural fields", Proc. SPIE PC12857, Computational Optical Imaging and Artificial Intelligence in Biomedical Sciences, PC128570R (13 March 2024); https://doi.org/10.1117/12.3000111
KEYWORDS
Image restoration, Computational imaging, Deep learning, Education and training, Data modeling, Inverse problems, Multiplexing