Abhijit Mahalanobis,1 Amit Ashok,2 Lei Tian,3 Jonathan C. Petruccelli,4 Kenneth S. Kubala5
1Lockheed Martin Missiles and Fire Control (United States); 2College of Optical Sciences, The Univ. of Arizona (United States); 3Boston Univ. (United States); 4Univ. at Albany (United States); 5FiveFocal LLC (United States)
This PDF file contains the front matter associated with SPIE Proceedings Volume 10222, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Structured illumination has been used to super-resolve microscopic objects and to provide topographic information in computer vision applications. Motivated by achievements in these fields, and leveraging techniques from astronomical sparse-aperture systems, an approach is developed to super-resolve macroscopic objects in typical real-world scenarios, addressing the challenges of super-resolving uncontrolled 3D environments. The approach enables the collection of 3D topographic information while super-resolving, using incoherent illumination to resolve spatial detail in an intensity image. For indirect imaging scenarios, the approach is adapted with structured coherent illumination to super-resolve phase at a distance.
Phase space refers to simultaneous space-frequency information (e.g., Wigner functions, light fields), which is directly related to spatial coherence properties (e.g., the mutual intensity). We introduce a binary pupil-masking technique that allows us to computationally reconstruct the phase-space distribution of optical beams from a series of images. Previous work has shown phase space to be useful for 3D imaging and localization in multiple-scattering environments. Binary masks are easy to implement compared with gray-level or phase masks, and the proposed scheme requires no interferometry. After designing the masks with nonredundant arrays, we measure an intensity image for each aperture mask and reconstruct the phase space through an auxiliary coherence function. We experimentally demonstrate the reconstruction of the phase space of a collection of 3D incoherent sources.
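The Wigner distribution mentioned above can be computed directly for a known 1D field. The sketch below uses the textbook definition on a discrete grid, not the paper's mask-based measurement scheme; the plane-wave test field is illustrative:

```python
import numpy as np

def wigner_1d(field):
    """Discrete Wigner distribution W(x, u) of a 1D complex field.

    W(x, u) = sum_s E(x + s) E*(x - s) exp(-2*pi*i*u*s), a standard
    joint space-frequency representation of the field."""
    n = field.size
    corr = np.empty((n, n), dtype=complex)
    # Correlation product E(x + s) E*(x - s) for every shift s
    for i, s in enumerate(np.arange(-n // 2, n // 2)):
        corr[:, i] = np.roll(field, -s) * np.conj(np.roll(field, s))
    # Fourier transform over the shift coordinate gives local frequency
    return np.fft.fftshift(np.fft.fft(corr, axis=1), axes=1).real

# A plane wave exp(2*pi*i*f0*x/n) concentrates at a single frequency
n, f0 = 64, 8
field = np.exp(2j * np.pi * f0 * np.arange(n) / n)
W = wigner_1d(field)
```

For a plane wave the distribution is constant in x and peaks at twice the field frequency on this shift convention's axis.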
Super-resolution for infrared imaging is motivated by the high cost and practical limitations of obtaining large focal-plane arrays. Methods in the literature require the optical system to be modified. Here, we propose a compressive-sensing-based method for super-resolution that uses the inherent point spread function of the camera. The proposed method produces high-resolution images and is robust against missing pixels. We compare our method to other super-resolution methods in the literature and show that it performs well for practical usage without any modification to the optical system.
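The recovery idea can be illustrated with a generic sketch: a forward operator that blurs with the known camera PSF and decimates, inverted by ISTA with a soft-threshold sparsity prior. The box-blur PSF, grid sizes, and regularizer here are stand-ins, not the paper's actual algorithm:

```python
import numpy as np

def forward(x, psf, d):
    """Blur with the camera PSF (circular convolution), then downsample by d."""
    y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf)))
    return y[::d, ::d]

def adjoint(y, psf, d):
    """Adjoint of `forward`: zero-fill upsample, then correlate with the PSF."""
    x = np.zeros((y.shape[0] * d, y.shape[1] * d))
    x[::d, ::d] = y
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(psf))))

def ista_superres(y, psf, d, lam=1e-3, step=0.5, iters=100):
    """ISTA with a soft-threshold sparsity prior -- a generic stand-in for
    the compressive-sensing recovery described in the abstract."""
    x = adjoint(y, psf, d)
    for _ in range(iters):
        grad = adjoint(forward(x, psf, d) - y, psf, d)   # gradient of 0.5||Ax-y||^2
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # soft threshold
    return x

# Toy demo: recover a sparse scene from a blurred, 2x-downsampled image
n, d = 32, 2
psf = np.zeros((n, n)); psf[:2, :2] = 0.25          # small box blur (unit sum)
scene = np.zeros((n, n)); scene[8, 8] = 1.0; scene[20, 25] = 1.0
y = forward(scene, psf, d)
x_hat = ista_superres(y, psf, d)
```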
This Conference Presentation, “Computational imaging: beyond the limits imposed by lenses,” was recorded at SPIE Commercial + Scientific Sensing and Imaging 2017 held in Anaheim, California, United States.
We show experimental results from a prototype multiplexed imaging spectrometer. Spectral information is encoded via a dual-dispersive architecture using a digital micromirror spatial light modulator (SLM) and decoded on-chip at the focal plane with a computational imaging array. Light from the scene is dispersed through a first prism and imaged onto the SLM, which applies a unique time-varying binary (1,0) encoding to each spectral bin. The encoded light is then recombined through a second prism and imaged onto a computational imaging array, where the multiplexed image is decoded on-chip. The computational imaging array comprises a 32×32 array of pixels capable of acquiring eight concurrent measurements, each of which can be modulated with a time-varying duo-binary signal (+1,-1,0) at MHz rates. This yields eight decoded images per frame at a maximum frame rate of 1600 frames per second. The frame rate of the system depends on the number of encoded spectral bins: at the high end it is limited by the switching speed of the DMD SLM, and at the low end by the readout rate of the imaging array. We explore these trades and discuss areas for future improvement.
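Binary (1,0) spectral encoding with subsequent decoding can be sketched with a Hadamard-derived code matrix. The 8-bin spectrum, Sylvester construction, and direct matrix inversion below are illustrative assumptions, not the hardware's actual codes:

```python
import numpy as np

# Sylvester construction of an 8x8 Hadamard matrix (entries +1/-1)
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])

S = (H + 1) // 2                      # binary (1,0) codes a DMD can apply
spectrum = np.array([0.0, 3.0, 1.0, 0.0, 5.0, 0.0, 0.0, 2.0])

# Each time slot records the sum of the spectral bins whose code bit is 1
m = S @ spectrum

# Decoding inverts the code matrix; in hardware the equivalent correlation
# is carried out with duo-binary (+1, -1, 0) weights at the pixel.
decoded = np.linalg.solve(S.astype(float), m)
```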
Lensless imaging systems have the potential to provide new capabilities in smaller, lighter configurations than traditional imaging systems. Lensless imagers frequently employ computational imaging techniques, which move the complexity of the system away from optical subcomponents and into a calibration process in which the measurement matrix is estimated.
We report on the design, simulation, and prototyping of a lensless imaging system that uses a 3D-printed, optically transparent random scattering element. We present end-to-end system simulations, including simulation of the calibration process and of the data-processing algorithm used to generate an image from the raw data. These simulations use GPU-based ray-tracing software and parallelized minimization algorithms to bring complete system-simulation times down to the order of seconds.
Hardware prototype results are presented, and practical lessons such as the effect of sensor noise on reconstructed image quality are discussed. System performance metrics are proposed and evaluated to discuss image quality in a manner that is relatable to traditional image quality metrics. Various hardware instantiations are discussed.
In this study, a method for reducing atmospheric effects on SAR interferometric products is proposed. The method exploits MODIS data, together with the Saastamoinen model, to estimate the atmospheric component and to generate spatially continuous data for it. It then recovers the interferometric signal from the delays caused by the atmospheric component through appropriate modelling of the interferometric phase.
Performance of the method depends on the resolution of the MODIS data; however, it consistently improves results. Experiments showed that the accuracy of DEMs produced by interferometry improves when the proposed method is applied.
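For reference, a common textbook form of the Saastamoinen zenith tropospheric delay is sketched below. Exact coefficients vary slightly between references, and the surface conditions shown are illustrative:

```python
import math

def saastamoinen_zenith_delay(P_hpa, T_kelvin, e_hpa, lat_rad, h_m):
    """Saastamoinen zenith tropospheric delay in meters.

    P_hpa: total surface pressure [hPa]; T_kelvin: temperature [K];
    e_hpa: water-vapor partial pressure [hPa]; lat_rad: latitude [rad];
    h_m: height above the geoid [m]."""
    # Gravity/height correction factor
    f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00028 * (h_m / 1000.0)
    # Hydrostatic plus wet terms
    return 0.002277 * (P_hpa + (1255.0 / T_kelvin + 0.05) * e_hpa) / f

# Typical mid-latitude surface conditions give roughly 2.3-2.5 m of delay
d = saastamoinen_zenith_delay(1013.25, 288.15, 11.0, math.radians(40.0), 100.0)
```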
We propose a method for partially blind deconvolution with prior information on the lens characteristics. There is a permanent demand for higher resolution in applications such as tracking, recognition, and identification. Limitations of available methods for practical systems generally stem from computational cost and power, so a computationally efficient method for blind deconvolution is desirable. The total-variation (TV) minimization method proposed by Vogel and Oman recovers the image from noisy data and eliminates some of the blur. Another approach, the split augmented Lagrangian shrinkage algorithm, uses the alternating direction method of multipliers (ADMM) to solve an unconstrained optimization problem with an ℓ1 data-fidelity term and a non-smooth regularization term. Although successful, the excessive computational requirements of these methods present a challenge for practical usage. Here, we propose a parametric blind-deconvolution method with prior knowledge of the point spread function (PSF) of the camera lens. We model the PSF of circular optics as a Jinc-squared function and determine the best PSF by solving an optimization problem containing TV-norm and wavelet-sparsity objectives with an ADMM-based algorithm. We use a convolutional model and work in the Fourier domain for efficient implementation, avoiding circular-convolution effects by extending the unknown image region. First, we show on experimental data that the PSF of the lenses can be modeled with a Jinc-squared function. Next, we show that our algorithm improves image resolution compared to classical blind-deconvolution methods while remaining feasible in terms of computation time.
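The Jinc-squared PSF model for a circular pupil can be sketched directly. The width parameter `rho0` and grid size are hypothetical; in the paper the PSF scale is fit to the measured lens:

```python
import numpy as np
from scipy.special import j1   # first-order Bessel function of the first kind

def jinc_squared_psf(n, rho0):
    """Airy-pattern PSF of an ideal circular pupil: psf(r) = [2 J1(x)/x]^2
    with x = pi * r / rho0, where rho0 sets the width (a hypothetical
    parameterization for this sketch)."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    r = np.hypot(x, y).astype(float)
    arg = np.pi * r / rho0
    psf = np.ones_like(r)                    # limit 2 J1(x)/x -> 1 as x -> 0
    nz = arg > 0
    psf[nz] = 2.0 * j1(arg[nz]) / arg[nz]
    psf = psf ** 2
    return psf / psf.sum()                   # normalize to unit energy

psf = jinc_squared_psf(64, rho0=4.0)
```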
The values of the unwrapped phase produced by interferometric pairs can be parameterized into phase components, such as height, atmospheric path delay, and a deformation term, and estimated through DInSAR techniques. In this study, a method is proposed that estimates the atmospheric path delay using a single interferometric pair and an atmospheric path delay estimator. The estimator relies on the minimization of the outage probability, i.e., the probability that the mean square error (MSE) of the estimated atmospheric component exceeds a desired MSE value. Outage minimization is equivalent to minimization of the MSE of the atmospheric component for a fixed outage probability. The minimization of the MSE of the atmospheric component is determined by the second-order statistics of the topography and atmospheric components. For a specific SAR image geometry, second-order statistics of the topography component are satisfactorily approximated by the mean squared height errors of a high-quality InSAR DEM for various height and slope classes, whereas second-order statistics of the atmospheric component are approximated by the inverse coherence value of the dataset that provides the high-quality InSAR DEM. The proposed approach is validated against real satellite images and meteorological measurements.
Direct image formation in synthetic aperture radar (SAR) involves processing of data modeled as Fourier coefficients along a polar grid. Often in such data acquisition processes, imperfections in the data cannot simply be modeled as additive or even multiplicative noise. In the case of SAR, errors can arise from imprecise estimation of the round-trip wave propagation time, which manifests as phase errors in the Fourier domain. To correct for these errors, we propose a phase correction scheme that relies both on the smoothness characteristics of the image and on the phase corrections associated with neighboring pulses, which are possibly highly correlated due to the nature of the data acquisition. Our model exploits these correlations and smoothness characteristics simultaneously in a new autofocusing approach, and our algorithm alternates between approximate minimizers of the model's image-feature and phase-correction terms.
Imaging systems can become measurement limited under various conditions resulting in strict limits to the amount of acquired image information. For example, a high-speed imager is limited in the number of measurements that can be acquired per second. Similarly, an ultra-small imager such as a micro-endoscope is limited by the number of measurements that can be acquired in a given cross-sectional area. In this talk, we will discuss our recent research in applying optical signal processing and computational imaging to enhance imaging performance in measurement-limited applications. Specifically, we will discuss our research into high-throughput flow microscopes for imaging flow cytometry and ultra-small fiber imagers for minimally-invasive micro-endoscopy.
Digital in-line holography serves as a useful encoder of spatial information, allowing three-dimensional reconstruction from a two-dimensional image. This is applicable to tasks such as fast motion capture and particle tracking. Sampling high-resolution holograms imposes a spatiotemporal tradeoff; we spatially subsample holograms to increase temporal resolution. We demonstrate this idea with two subsampling techniques: periodic and uniformly random sampling. The implementation includes an on-chip setup for periodic subsampling and a DMD (digital micromirror device)-based setup for pixel-wise random subsampling. The on-chip setup enables a direct increase of up to 20× in camera frame rate. Alternatively, the DMD-based setup encodes temporal information as high-speed mask patterns and projects these masks within a single exposure (coded exposure); in this way, the frame rate is improved to the level of the DMD, with a temporal gain of 10×. We examine and compare two iterative methods for reconstructing the subsampled data: an error-reduction phase-retrieval algorithm and a sparsity-based compressed-sensing algorithm. Both methods show strong capability for reconstructing complex object fields. We present both simulations and real experiments: in the lab, we image and reconstruct the structure and movement of static polystyrene microspheres, microscopic moving peranema, and macroscopic fast-moving fur and glitter.
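An error-reduction phase-retrieval loop for in-line holography can be sketched with angular-spectrum propagation between the sensor and object planes. The object constraint (absorbing, amplitude ≤ 1), the grid, and the wavelength below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def angular_spectrum(field, wavelength, dist, dx):
    """Propagate a complex field by `dist` with the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(2j * np.pi * dist / wavelength
                    * np.sqrt(np.maximum(arg, 0.0)))   # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def error_reduction(hologram_amp, wavelength, dist, dx, iters=50):
    """Gerchberg-Saxton-style loop: enforce the measured amplitude at the
    sensor and an absorbing-object constraint at the sample plane."""
    field = hologram_amp.astype(complex)
    for _ in range(iters):
        obj = angular_spectrum(field, wavelength, -dist, dx)   # back-propagate
        obj = np.minimum(np.abs(obj), 1.0) * np.exp(1j * np.angle(obj))
        field = angular_spectrum(obj, wavelength, dist, dx)    # forward
        field = hologram_amp * np.exp(1j * np.angle(field))    # data constraint
    return obj

# Toy test: a partially absorbing disc under unit plane-wave illumination
n, dx, wl, z = 128, 2e-6, 0.5e-6, 1e-3
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
obj_true = np.where(x * x + y * y < 50, 0.2, 1.0).astype(complex)
holo = np.abs(angular_spectrum(obj_true, wl, z, dx))
rec = error_reduction(holo, wl, z, dx)
```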
Snapshot compressive imaging aims to capture high-resolution images using low-resolution detectors. The challenge is the generation of simultaneous optical projections that fulfill the compressed-sensing reconstruction requirements. We propose the use of controlled aberrations, through wavefront coding, to produce point spread functions that can simultaneously code and multiplex the scene in a variety of ways. In addition to light efficiency, this approach allows the system matrix response to be characterized analytically. We explore combinations of Zernike modes and analyze the corresponding coherence parameter. Simulation results using natively sparse and natural scenes demonstrate the feasibility of using controlled aberrations for compressive imaging.
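Generating a coded PSF from a pupil carrying a Zernike aberration can be sketched with a single defocus (Z4) term. The mode choice, pupil sampling, and coefficient are hypothetical stand-ins for the mode combinations explored in the paper:

```python
import numpy as np

def zernike_defocus(n):
    """Defocus mode Z4 ~ 2*rho^2 - 1 sampled on a unit-disc pupil grid."""
    y, x = (np.mgrid[-n // 2:n // 2, -n // 2:n // 2] + 0.5) / (n / 2)
    rho2 = x * x + y * y
    pupil = rho2 <= 1.0
    return pupil, np.where(pupil, 2.0 * rho2 - 1.0, 0.0)

def coded_psf(n, coeff):
    """Incoherent PSF of a pupil carrying `coeff` waves of defocus,
    computed as |FFT of the pupil function|^2."""
    pupil, z4 = zernike_defocus(n)
    p = pupil * np.exp(2j * np.pi * coeff * z4)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(p)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

psf_sharp = coded_psf(128, 0.0)   # diffraction-limited Airy-like spot
psf_coded = coded_psf(128, 2.0)   # aberrated PSF that spreads/multiplexes light
```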
A video compressive imaging system reconstructs a high-speed image sequence from a single coded snapshot. In this paper, we report a compressive video sensing system that captures side information in addition to the main measurement to aid the reconstruction of high-speed scenes. Integrating the side information not only improves the quality of the reconstruction but also reduces its dependence on regularization. We have implemented a system prototype that splits the field of view of a single camera into two channels: one captures the coded, low-frame-rate measurement for high-speed video reconstruction; the other captures a direct, uncoded measurement as the side information. A joint reconstruction model is developed to recover the high-speed videos from the two channels.
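A minimal sketch of the two-channel forward model and a joint linear reconstruction follows, assuming simple alternating per-pixel codes and a temporal-smoothness regularizer; the prototype's actual codes and reconstruction model are more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 8, 16, 16                       # 8 high-speed frames per snapshot
video = rng.random((T, H, W))             # ground-truth high-speed scene

# Simple alternating per-pixel masks (a real system uses pseudo-random
# DMD patterns); each pixel's code flips every frame.
base = np.add.outer(np.arange(H), np.arange(W))
codes = np.stack([(base + t) % 2 for t in range(T)]).astype(float)

y_coded = (codes * video).sum(axis=0)     # channel 1: coded snapshot
y_side = video.mean(axis=0)               # channel 2: uncoded side information

# Joint reconstruction, pixel by pixel: two linear measurements of T unknowns
# plus a temporal-smoothness (second-difference) regularizer.
D = np.diff(np.eye(T), n=2, axis=0)       # (T-2) x T second-difference operator
lam, eps = 0.1, 1e-9
rec = np.empty_like(video)
for i in range(H):
    for j in range(W):
        A = np.vstack([codes[:, i, j], np.full(T, 1.0 / T)])
        y = np.array([y_coded[i, j], y_side[i, j]])
        M = A.T @ A + lam * D.T @ D + eps * np.eye(T)
        rec[:, i, j] = np.linalg.solve(M, A.T @ y)
```

The recovered frames reproduce both channels' measurements; the smoothness prior resolves the remaining temporal ambiguity.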
In this paper, we propose a unified optimization framework for L2, L1, and/or L0 constrained image reconstruction. First, we generalize cost functions for image reconstruction, which consist of a fidelity term with L2 norm and constraint terms with L2, L1, and/or L0 norms. This generalized cost function covers many types of existing cost functions for image reconstruction. Then, we show that this generalized cost function can be optimized by the alternating direction method of multipliers (ADMM). The ADMM is a well-known iterative optimization approach for convex problems. Experimental results demonstrate that the proposed unified optimization framework is applicable to a wide range of applications.
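The L1 instance of such a generalized cost can be sketched with the standard ADMM splitting, where the z-update is the soft-threshold proximal operator (an L0 constraint would swap in hard thresholding). This is a generic textbook instance, not the paper's full framework:

```python
import numpy as np

def soft(v, t):
    """Soft-threshold (shrinkage) operator, the proximal map of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, y, lam=0.1, rho=1.0, iters=200):
    """ADMM for min_x 0.5 ||A x - y||^2 + lam ||x||_1."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Aty = A.T @ y
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cache the x-update solve
    for _ in range(iters):
        x = M @ (Aty + rho * (z - u))              # quadratic x-update
        z = soft(x + u, lam / rho)                 # proximal z-update
        u = u + x - z                              # dual ascent
    return z

# Sparse-recovery demo: a 3-sparse signal from 40 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = admm_l1(A, y, lam=0.01)
```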
A stick-shaped TOMBO (thin observation module by bound optics) was developed for intra-oral diagnosis. The module consists of 3×3 imaging units designed to capture different optical signals. Embedded functions include stereo 3D monitoring, depth estimation, and tissue assessment. Illumination equipment and a pattern projector are integrated into the module. The teeth and gingiva of several subjects were observed. The 3D shape of the gingiva was retrieved from a pair of unit images. The boundary between the attached gingiva and the alveolar mucosa, as well as the spatial distribution of melanin, were estimated using multiple linear regression analysis. The observed signals were confirmed to be useful for odontotherapy.
A one-shot, multi-directional, ultra-small-angle X-ray scattering imaging method successfully resolves the fiber orientation of a wood sample. The 2D structured illumination enables the retrieval of scattering signals in multiple directions simultaneously.
A current focus of art conservation research is the accurate identification of materials, such as oil paints or pigments, used in a work of art. Since many of these materials are fluorescent, measuring the fluorescence lifetime following an excitation pulse is a useful non-contact, quantitative method for identifying pigments. In this project, we propose a simple method that uses a dynamic vision sensor to efficiently characterize the fluorescence lifetime of the common pigment Egyptian blue, obtaining results consistent with x-ray techniques. We believe our fast, compact, and cost-effective method for fluorescence lifetime analysis will be useful in art conservation research and potentially in a broader range of applications in chemistry and materials science.
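Lifetime estimation from a measured decay curve can be sketched with a simple log-linear fit. The decay data, binning, and lifetime value below are synthetic and purely illustrative (real event-camera data would also need background subtraction):

```python
import numpy as np

def fit_lifetime(t, counts):
    """Estimate a mono-exponential lifetime tau from decay data
    counts(t) ~ A * exp(-t / tau) via a log-linear least-squares fit."""
    mask = counts > 0
    slope, _ = np.polyfit(t[mask], np.log(counts[mask]), 1)
    return -1.0 / slope

# Synthetic, noiseless decay with a hypothetical ~100-microsecond lifetime
tau_true = 107e-6
t = np.linspace(0.0, 500e-6, 200)
counts = 1000.0 * np.exp(-t / tau_true)
tau_hat = fit_lifetime(t, counts)
```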
We address the mathematical foundations of a special case of the general problem of partitioning an end-to-end sensing algorithm for implementation by optics and by a digital processor for minimal electrical power dissipation. Specifically, we present a non-iterative algorithm for factoring a general k × k real matrix A (describing the end-to-end linear pre-processing) into the product BC, where C has no negative entries (for implementation in linear optics) and B is maximally sparse, i.e., has the fewest possible non-zero entries (for minimal dissipation of electrical power). Our algorithm achieves a sparsification of B of s ≤ 2k, where s is the number of non-zero entries in B, which we prove is optimal for our class of problems.
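One simple construction that meets the s ≤ 2k bound splits A into its positive and negative parts; the paper's non-iterative algorithm may differ in detail, but this illustrates the shape of the factorization:

```python
import numpy as np

def factor_bc(A):
    """Factor A = B @ C with C entrywise nonnegative (optics-implementable)
    and B having exactly 2k non-zero entries, via A = A_plus - A_minus."""
    k = A.shape[0]
    C = np.vstack([np.maximum(A, 0.0), np.maximum(-A, 0.0)])  # 2k x k, >= 0
    B = np.hstack([np.eye(k), -np.eye(k)])                    # k x 2k, 2k nonzeros
    return B, C

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B, C = factor_bc(A)
```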
Propagation-based phase retrieval using the contrast transfer function (CTF) allows images at any propagation distance to be used when recovering the phase of slowly-varying objects. The CTF suffers from artifacts due to nulls in the transfer function at low spatial frequency and at higher, propagation-distance-dependent frequencies, though the latter can be alleviated by combining measurements at multiple distances. We demonstrate that the use of extended sources can improve low frequency performance. In addition, this method offers source shape as a parameter that can be used when optimizing combinations of measurements to produce robust phase reconstructions.
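The multi-distance CTF inversion can be sketched as a regularized least-squares combination in the Fourier domain, where each distance's measurements help fill the others' transfer-function nulls. The toy data below are generated from the same linearized weak-object model, with hypothetical parameters:

```python
import numpy as np

def ctf_phase(intensities, dists, wavelength, dx, alpha=1e-3):
    """Weak-phase CTF inversion combining measurements at several distances.

    Weak-object model: FT{I_z - 1}(u) ~ 2 sin(pi*lambda*z*|u|^2) FT{phi}(u)."""
    n = intensities[0].shape[0]
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f)
    u2 = FX ** 2 + FY ** 2
    num = np.zeros((n, n), dtype=complex)
    den = np.zeros((n, n))
    for I, z in zip(intensities, dists):
        s = np.sin(np.pi * wavelength * z * u2)      # CTF for this distance
        num += 2.0 * s * np.fft.fft2(I - 1.0)        # contrast about unity
        den += 4.0 * s ** 2
    return np.real(np.fft.ifft2(num / (den + alpha)))  # Tikhonov-regularized

# Toy data generated from the same weak-object model (hypothetical parameters)
n, dx, wl = 64, 1e-6, 1e-10
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
phi = 0.1 * np.exp(-(xx * xx + yy * yy) / 50.0)
f = np.fft.fftfreq(n, d=dx)
FX, FY = np.meshgrid(f, f)
u2 = FX ** 2 + FY ** 2
dists = [0.1, 0.23]
Is = [1.0 + np.real(np.fft.ifft2(2.0 * np.sin(np.pi * wl * z * u2)
                                 * np.fft.fft2(phi))) for z in dists]
phi_hat = ctf_phase(Is, dists, wl, dx, alpha=1e-4)
```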