KEYWORDS: Light sources and illumination, Dark current, Pulse signals, Sensors, Interference (communication), Photocurrent, Logic, Linear filtering, Information visualization, Contour extraction
Biologically inspired event-based vision sensors (EVS) are growing in popularity due to performance benefits including ultra-low power consumption, high dynamic range, data sparsity, and fast temporal response. They efficiently encode dynamic information from a visual scene through pixels that respond autonomously and asynchronously when the per-pixel illumination level changes by a user-selectable contrast threshold ratio, θ. Due to their unique sensing paradigm and complex analog pixel circuitry, characterizing an EVS is non-trivial. The step-response probability curve (S-curve) has emerged as the standard measurement technique for θ. Though the general concept is straightforward, a thorough understanding of the pixel circuitry and its non-idealities is required to correctly obtain and interpret results. Furthermore, the precise measurement procedure has not been standardized across the field, and the resulting parameter estimates depend strongly on methodology, measurement conditions, and biasing, which are not generally discussed. In this work, we detail a method for generating accurate S-curves by applying an appropriate stimulus and sensor configuration to decouple second-order effects from the parameter being studied. We use an EVS pixel simulation to demonstrate how noise and other physical constraints can lead to measurement error, and we develop two techniques robust enough to obtain accurate estimates. We then apply the best practices derived from our simulation to generate S-curves for the latest-generation Sony IMX636 and interpret the resulting family of curves, correcting the apparently anomalous result of previous reports suggesting that θ changes with illumination.
Further, we demonstrate that with correct interpretation, fundamental physical parameters such as dark current and RMS noise can be accurately inferred from a collection of S-curves, leading to more accurate parameterization for high-fidelity EVS simulations.
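The S-curve fit described above can be illustrated with a short sketch. This is not the authors' measurement code; it simply assumes the common model in which pixel noise smears the ideal threshold step into a Gaussian CDF in log-intensity, so that fitting the CDF to per-step event probabilities recovers both θ and the RMS noise. All numbers (a 25% contrast threshold, σ = 0.05) are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def s_curve(log_contrast, log_theta, sigma):
    # Probability that a pixel fires for a given log-intensity step:
    # Gaussian noise of RMS sigma smears the ideal step at theta.
    return norm.cdf((log_contrast - log_theta) / sigma)

# Synthetic measurement: true threshold ratio 1.25 (25% contrast),
# RMS noise sigma = 0.05 in log-intensity units, 500 trials per step.
rng = np.random.default_rng(0)
steps = np.linspace(0.05, 0.6, 25)       # contrast step sizes (ratio - 1)
log_steps = np.log(1.0 + steps)          # log-intensity step
p_true = s_curve(log_steps, np.log(1.25), 0.05)
n_trials = 500
p_meas = rng.binomial(n_trials, p_true) / n_trials

popt, _ = curve_fit(s_curve, log_steps, p_meas, p0=[0.2, 0.1])
theta_est = np.exp(popt[0]) - 1.0        # back to contrast (~0.25)
sigma_est = popt[1]                      # RMS noise estimate (~0.05)
```

Note that the same fit yields the RMS noise as a by-product, which is the basis for the parameter-inference claim above.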
Shadow imaging has been used for decades in astronomical observation of distant space objects. Synthetic Aperture Silhouette Imaging applies this technology to space domain awareness to enable fine resolution silhouette images of satellites in the Geosynchronous (GEO) belt to be collected with a linear array of hobby telescopes. As a satellite passes between a star and the observer on the ground, a North-South telescope array can detect the reduced stellar intensity as the shadow of the satellite passes over from West to East. This paper discusses the resolution advantages of collecting and stacking shadow images at multiple wavelengths to arrive at a multispectral improvement factor. A laboratory model is scaled to GEO according to the Fresnel diffraction integral before the silhouette is recovered through a phase retrieval algorithm. The recovered silhouettes are stacked and evaluated against the image of the original laboratory target to determine how closely the images match. The best Percent Difference (PD) between the reconstructed silhouette and the target silhouette is found by scaling the intensity of the diffraction pattern using a look up table to the fourth power. The best PD from a stacked image is using five layers between 475 nm and 675 nm. The five layers produce a resolution of approximately 50 cm. Each additional layer improves resolution from the expected value by approximately 4.23 cm from two layers to six layers.
Event-based camera (EBC) technology provides high-dynamic-range operation and shows promise for efficient capture of spatio-temporal information, producing a sparse data stream and enabling consideration of nontraditional data processing solutions (e.g., new algorithms, neuromorphic processors, etc.). Given the fundamental difference in camera architecture, the EBC response and noise behavior differ considerably from those of standard CCD/CMOS framing sensors. These differences necessitate the development of new characterization techniques and sensor models to evaluate hardware performance and elucidate the trade-space between the two camera architectures. Laboratory characterization techniques reported previously include noise level as a function of static scene light level (background activity) and contrast responses referred to as S-curves. Here we present further progress on the development of basic characterization methods and test capabilities for commercial-off-the-shelf (COTS) visible EBCs, with a focus on measurement of pixel deadtime (refractory period), including results for the 4th-generation sensor from Prophesee and Sony. Refractory period is empirically determined from analysis of the interspike intervals (ISIs), and results are visualized using log-histograms of the minimum per-pixel ISI values for a subset of pixels activated by a controlled dynamic scene. Our tests of the Prophesee gen4 EVKv2 yield refractory period estimates ranging from 6.1 msec to 6.8 μsec going from the slowest (20) to fastest (100) settings of the relevant bias parameter, bias_refr. We also introduce and demonstrate the concept of pixel bandwidth measurement from data captured while viewing a static scene, based on recording data at a range of refractory period settings and then analyzing noise-event statistics.
Finally, we present initial results for estimating and correcting EBC clock drift using a GPS PPS signal to generate special timing events in the event-list data streams generated by the DAVIS346 and DVXplorer EBCs from iniVation.
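The minimum per-pixel ISI analysis can be sketched as follows. This is a minimal illustration on a handful of synthetic events, not the test code used above; in practice the resulting per-pixel minima would be binned into the log-histograms described.

```python
import numpy as np

def min_isi_per_pixel(pixel_ids, timestamps):
    """Minimum interspike interval (ISI) per pixel from an event stream.
    pixel_ids are flat pixel indices; timestamps in microseconds."""
    order = np.lexsort((timestamps, pixel_ids))  # sort by pixel, then time
    pid = pixel_ids[order]
    t = timestamps[order]
    isi = np.diff(t)
    same_pixel = np.diff(pid) == 0               # keep gaps within one pixel
    min_isi = {}
    for p, dt in zip(pid[1:][same_pixel], isi[same_pixel]):
        min_isi[p] = min(min_isi.get(p, np.inf), dt)
    return min_isi

# Synthetic stream from two pixels: (pixel id, timestamp in us).
events = [(0, 0), (1, 40), (0, 100), (0, 250), (1, 180), (1, 300)]
pix = np.array([e[0] for e in events])
ts = np.array([e[1] for e in events], dtype=float)
m = min_isi_per_pixel(pix, ts)
# A crude refractory-period estimate: smallest ISI seen across pixels.
refr_est = min(m.values())
```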
Event-based camera (EBC) technology shows promise for efficient capture of spatio-temporal information, producing a sparse data stream and enabling consideration of nontraditional data processing solutions (e.g., new algorithms, neuromorphic processors). Given the fundamental difference in camera architecture, the EBC response and noise behavior differ considerably from those of standard CCD/CMOS framing sensors. These differences necessitate the development of new characterization techniques to quantify performance and assess whether the EBC technology produces benefits relative to traditional imaging sensors. Here we present progress on development of basic sensor performance modeling and test capabilities for commercial-off-the-shelf visible EBCs. Laboratory characterization techniques include noise level as a function of static scene light level (termed background activity) and EBC temporal contrast response to dynamic signals. Initial environmental tests of the Prophesee PPS3MVCD event-based sensor found several addressable areas of concern but identified no showstoppers that would prevent use of this device in a high-reliability aerospace application. Two independent radiation tolerance test efforts, one for the PPS3MVCD and another for the iniVation DAVIS346 EBC (both based on 180 nm CMOS technology), indicate functional issues for total ionizing dose (TID) greater than 30 krad(Si), and show background activity increasing with TID. However, no significant change in contrast response was observed. One DAVIS346 exhibited functional failure following a final gamma radiation dose taking it from 20 krad(Si) to 50 krad(Si), and its readout saturated during dosing, with output dominated by negative-polarity events (by a factor of 10 or greater). A second DAVIS346 locked up during proton dosing but recovered normal operation following a brief rest period and power cycling.
DAVIS346 pixels include both change detection (DVS) and standard grayscale frame (APS) functionalities, both driven by a single photodiode; results show a 70% increase in dark current and a 23% increase in dark event noise after proton exposure to 20 krad(Si). As new versions of EBC technology are developed for infrared wavelengths, we anticipate these characterization techniques will be largely translatable to IR EBCs.
Various techniques and algorithms have been developed to improve the resolution of sensor-aliased imagery captured with an under-sampled pixelated image plane. In the literature these de-aliasing algorithms are sometimes included under the broad umbrella of super-resolution. One basic approach to multiframe de-aliasing is the well-known noniterative algorithm termed variable pixel linear reconstruction (VPLR), or "drizzling." Many modern techniques are based on iterative optimization of a forward model (objective function). Regardless, both iterative and noniterative techniques rely on estimation of frame-to-frame displacements and rotations to subpixel accuracy. Weights are then solved for and used to distribute low-resolution (LR) pixel values to a high-resolution (HR) grid. One approach used in both VPLR and iterative methods to determine weights is to calculate pixel overlap areas. Well-known spatial-domain approaches based on computational geometry exist to perform such calculations. Here we present a novel approach based on exactly calculating overlap areas in the spectral domain, which we call the spectral-overlap (SO) method, and include a comparison with the geometric approach of O'Rourke. All spatial spectra in the SO method are calculated analytically once and for all, resulting in expressions devoid of quadratures. Initial studies indicate that this new algorithm executes about 20 times faster than the O'Rourke algorithm. The speedup is partly explained by the ability to precompute many quantities involved in the SO approach and apply them to the computation of many distinct spatial overlaps. Application of the algorithm to multiframe de-aliasing is demonstrated using simulated imagery.
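For reference, the spatial-domain geometric baseline that the spectral-overlap method is compared against can be sketched with standard computational geometry: clip each LR pixel footprint against an HR cell (Sutherland-Hodgman) and take the shoelace area. This is a generic illustration, not O'Rourke's implementation.

```python
import numpy as np

def clip_polygon(subject, clip_rect):
    """Sutherland-Hodgman clipping of a convex polygon against an
    axis-aligned rectangle (x0, y0, x1, y1); returns the clipped vertices."""
    x0, y0, x1, y1 = clip_rect
    edges = [  # (inside test, intersection with that edge)
        (lambda p: p[0] >= x0, lambda a, b: (x0, a[1] + (b[1]-a[1])*(x0-a[0])/(b[0]-a[0]))),
        (lambda p: p[0] <= x1, lambda a, b: (x1, a[1] + (b[1]-a[1])*(x1-a[0])/(b[0]-a[0]))),
        (lambda p: p[1] >= y0, lambda a, b: (a[0] + (b[0]-a[0])*(y0-a[1])/(b[1]-a[1]), y0)),
        (lambda p: p[1] <= y1, lambda a, b: (a[0] + (b[0]-a[0])*(y1-a[1])/(b[1]-a[1]), y1)),
    ]
    poly = list(subject)
    for inside, intersect in edges:
        out = []
        prev = poly[-1] if poly else None
        for cur in poly:
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))  # entering the edge
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))      # leaving the edge
            prev = cur
        poly = out
        if not poly:
            break
    return poly

def area(poly):
    """Shoelace formula for polygon area."""
    x = np.array([p[0] for p in poly])
    y = np.array([p[1] for p in poly])
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# A unit LR pixel shifted by (0.25, 0.25) overlapping the HR cell [0,1]x[0,1]:
lr_pixel = [(0.25, 0.25), (1.25, 0.25), (1.25, 1.25), (0.25, 1.25)]
overlap = area(clip_polygon(lr_pixel, (0.0, 0.0, 1.0, 1.0)))  # 0.75 x 0.75
```

Repeating this per LR-pixel/HR-cell pair is exactly the per-overlap work the SO method amortizes by precomputing analytic spectra.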
Applications of stellar occultation by solar system objects have a long history for determining universal time, detecting binary stars, and providing estimates of the sizes of asteroids and minor planets. More recently, an extension of this last application has been proposed as a technique to provide information (if not complete shadow images) about geosynchronous satellites. Diffraction has long been recognized as a source of distortion for such occultation measurements, and models have subsequently been developed to compensate for this degradation. Typically these models employ a knife-edge assumption for the obscuring body. In this preliminary study, we report on the fundamental limitations of knife-edge position estimates due to shot noise in an otherwise idealized measurement. In particular, we address the statistical bounds, both Cramér-Rao and Hammersley-Chapman-Robbins, on the uncertainty in the knife-edge position measurement, as well as the performance of the maximum-likelihood estimator. Results are presented as a function of both stellar magnitude and sensor passband; the limiting case of infinite resolving power is also explored.
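The knife-edge diffraction model underlying such occultation measurements has a closed form in terms of Fresnel integrals: the normalized intensity behind an edge at x = 0 is I/I0 = 0.5[(C(w) + 1/2)^2 + (S(w) + 1/2)^2] with w = x sqrt(2/(λz)). A minimal sketch (the GEO-like wavelength and range are illustrative, not values from this study):

```python
import numpy as np
from scipy.special import fresnel

def knife_edge_intensity(x, wavelength, z):
    """Fresnel diffraction intensity (normalized to the unobstructed level)
    a distance z behind a knife edge at x = 0; x > 0 is illuminated."""
    w = x * np.sqrt(2.0 / (wavelength * z))
    S, C = fresnel(w)  # scipy returns (S, C) in that order
    return 0.5 * ((C + 0.5)**2 + (S + 0.5)**2)

# GEO-like geometry: 550 nm light, ~36,000 km range; Fresnel scale ~3 m.
lam, z = 550e-9, 3.6e7
x = np.linspace(-30.0, 30.0, 2001)          # meters across the shadow edge
I = knife_edge_intensity(x, lam, z)
I_edge = knife_edge_intensity(0.0, lam, z)  # exactly 1/4 at the geometric edge
```

The characteristic value I/I0 = 1/4 at the geometric edge, and the oscillatory fringes on the bright side, are the features a position estimator must localize against shot noise.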
This paper addresses the fundamental performance limits of object reconstruction methods using intensity interferometry measurements. It shows examples of reconstructed objects obtained with the FIIRE (Forward-model Interferometry Image Reconstruction Estimator) code developed by Boeing for AFRL. It considers various issues in calculating the multidimensional Cramér-Rao lower bound (CRLB) when the Fisher information matrix (FIM) is singular. In particular, when comparing FIIRE performance, characterized as the root-mean-square difference between the estimated and pristine objects, with the CRLB, we found that FIIRE performance improved as the singularity became worse, an unexpected result. We found that for an invertible FIM, FIIRE yielded lower root-mean-square error than the square root of the CRLB (by a factor as large as 100). This may be due to the various regularization constraints (positivity, support, sharpness, and smoothness) included in FIIRE, rendering it a biased estimator, as opposed to the unbiased-estimator framework assumed by the CRLB. Using the sieve technique to mitigate the false high-frequency content inherent in point-by-point object reconstruction methods, we also show further improved FIIRE performance on some generic objects. It is worth noting that since FIIRE is an iterative algorithm that searches for an object estimate consistent with the collected data and various constraints, an initial object estimate is required. In our case, we used a completely random initial object guess consisting of a 2-D array of uniformly distributed random numbers, sometimes multiplied by a 2-D Gaussian function.
Many imaging techniques provide measurements proportional to Fourier magnitudes of an object, from which one attempts to form an image. One such technique is intensity interferometry, which measures the squared Fourier modulus. Intensity interferometry is a synthetic aperture approach known to obtain high spatial resolution information, and is effectively insensitive to degradations from atmospheric turbulence. These benefits are offset by an intrinsically low signal-to-noise ratio (SNR). Forward models have been theoretically shown to achieve the best performance for many imaging approaches. On the other hand, phase retrieval is designed to reconstruct an image from Fourier-plane magnitudes and object-plane constraints. So it is natural to ask, "How well does phase retrieval perform compared to forward models in cases of interest?" Image reconstructions are presented for both techniques in the presence of significant noise. Preliminary conclusions are presented for attainable resolution vs. DC SNR.
Many imaging modalities measure magnitudes of Fourier components of an object. Given such data, reconstruction of an image from data that is also noisy and sparse is especially challenging, as may occur in some forms of intensity interferometry, Fourier telescopy, and speckle imaging. In such measurements, the Fourier magnitudes must be positive, and moreover must be less than 1 under the usual normalization, in which the magnitudes are scaled so that the value at zero spatial frequency in the (u,v) plane is one. The Cramér-Rao formalism is applied to single Fourier magnitude measurements to ascertain whether a reduction in variance is possible given these constraints. An extension of the Cramér-Rao formalism is used to address the value of relatively general prior information. The impact of this knowledge is also shown for simulated image formation for a simple disk, with varying measurement SNR and sampling in the (u,v) plane.
A new remote sensing approach based on polarimetric wavelet fractal detection principles is introduced and the Mueller matrix formalism is defined, aimed at enhancing the detection, identification, characterization, and discrimination of unresolved space objects at different aspect angles. The design principles of a multifunctional liquid crystal monostatic polarimetric ladar are introduced and related to operating conditions and system performance metrics. Backscattered polarimetric signal contributions from different space materials were detected using a laboratory ladar testbed, and then analyzed using techniques based on wavelets and fractals. The depolarization, diattenuation, and retardance of the materials were estimated using Mueller matrix decomposition for different aspect angles. The outcome of this study indicates that polarimetric fractal wavelet principles may enhance the capabilities of the ladar to provide characterization and discrimination of unresolved space objects.
Intensity interferometry (II) holds tremendous potential for remote sensing of space objects. We investigate the properties of a hybrid intensity interferometer concept in which information from an II is fused with information from a traditional imaging telescope. Although not an imager, hybrid intensity interferometry measurements can be used to reconstruct an image. In previous work we investigated the effects of poor SNR on this image formation process. In this work, we go beyond the obviously deleterious effects of SNR to investigate reconstructed image quality as a function of the chosen support constraint, and the resultant image quality issues. The benefits of fusing assumed perfect-yet-partial a priori information with traditional intensity interferometry measurements are explored and shown to result in increased sensitivity and improved reconstructed-image quality.
Various image de-aliasing techniques and algorithms have been developed to improve the resolution of pixel-limited imagery acquired by an optical system having an undersampled point spread function. These techniques are sometimes referred to as multi-frame or geometric super-resolution, and are valuable tools because they maximize the imaging utility of current and legacy focal plane array (FPA) technology. This is especially true for infrared FPAs, which tend to have larger pixels than visible sensors. Geometric super-resolution relies on knowledge of subpixel frame-to-frame motion, which is used to assemble a set of low-resolution (LR) frames into one or more high-resolution (HR) frames. Log-polar FFT image registration provides a straightforward and relatively fast approach to estimate global affine motion, including translation, rotation, and uniform scale changes. This technique is also readily extended to provide subpixel translation estimates, and is explored for its potential combination with variable pixel linear reconstruction (VPLR) to apportion a sequence of LR frames onto an HR grid. The VPLR algorithm created for this work is described, and HR image reconstruction is demonstrated using calibrated 1/4-pixel microscan data. The HR image resulting from VPLR is also enhanced using Lucy-Richardson deconvolution to mitigate blurring effects due to the pixel spread function. To address non-stationary scenes, image warping, and variable lighting conditions, optical flow is also investigated for its potential to provide subpixel motion information. Initial results demonstrate that the particular optical flow technique studied is able to estimate shifts down to nearly 1/10th of a pixel, and possibly smaller. Algorithm performance is demonstrated and explored using laboratory data from visible cameras.
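The translation-only piece of FFT-based registration can be sketched with phase correlation (the log-polar extension that also recovers rotation and scale is omitted here); this is generic illustrative code, not the registration pipeline described above, and it returns integer shifts rather than the subpixel estimates discussed.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation of image b relative to a
    via the normalized cross-power spectrum (phase correlation)."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    F /= np.abs(F) + 1e-12                 # keep only the phase
    corr = np.real(np.fft.ifft2(F))        # impulse at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame back to negative values.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Test pattern shifted by a known amount (circular shift keeps it exact).
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
dy, dx = phase_correlation_shift(img, shifted)   # recovers (3, -5)
```

Subpixel refinement is commonly done by interpolating around the correlation peak, which is one route to the fractional shifts VPLR needs.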
Phase retrieval is explored for image reconstruction using outputs from both a simulated intensity interferometer (II) and a hybrid system that combines the II outputs with partially resolved imagery from a traditional imaging telescope. Partially resolved imagery provides an additional constraint for the iterative phase retrieval process, as well as an improved starting point. The benefits of this additional a priori information are explored and include lower residual phase error for SNR values above 0.01, increased sensitivity, and improved image quality. Results are also presented for image reconstruction from II measurements alone, via current state-of-the-art phase retrieval techniques. These results are based on the standard hybrid input-output (HIO) algorithm, as well as a recent enhancement to HIO that optimizes step lengths in addition to step directions. The additional step-length optimization yields a reduction in residual phase error, but only for SNR values greater than about 10. Image quality for all algorithms studied is quite good for SNR ≥ 10, but the studied phase-recovery techniques yield useful information even at much lower SNRs.
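The standard HIO algorithm referenced above can be sketched as follows. This toy version assumes noise-free Fourier magnitudes and a known tight support, uses a fixed feedback parameter β, and omits both the partially resolved imagery constraint and the step-length optimization discussed in the abstract.

```python
import numpy as np

def hio_iterate(mag, support, n_iter=200, beta=0.9, seed=2):
    """Hybrid input-output (HIO) phase retrieval from Fourier magnitudes
    plus an object-domain support and positivity constraint. Returns the
    object estimate and the history of Fourier-magnitude residuals."""
    rng = np.random.default_rng(seed)
    g = rng.random(mag.shape) * support     # random start inside support
    errs = []
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        errs.append(np.linalg.norm(np.abs(G) - mag) / np.linalg.norm(mag))
        G = mag * np.exp(1j * np.angle(G))  # impose measured magnitudes
        gp = np.real(np.fft.ifft2(G))
        violate = (~support) | (gp < 0)     # object-domain violations
        g = np.where(violate, g - beta * gp, gp)  # HIO feedback update
    return g, errs

# Toy object with known tight support (an assumption for this sketch).
obj = np.zeros((32, 32))
obj[12:20, 12:20] = np.random.default_rng(0).random((8, 8))
support = obj > 0
g_hat, errs = hio_iterate(np.abs(np.fft.fft2(obj)), support)
```

Recovered objects are determined only up to translation and 180-degree rotation, which is why residual Fourier-magnitude error, rather than a direct pixel difference, is tracked here.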
Phase retrieval is explored for image reconstruction using outputs from both a simulated intensity interferometer (II) and a hybrid system that combines the II outputs with partially resolved imagery from a traditional imaging telescope. Partially resolved imagery provides an additional constraint for the phase retrieval process, as well as an improved starting point for the algorithm. The benefits of this additional a priori information are explored, and when combined with standard constraints such as positivity and compact support include faster convergence, increased sensitivity, and improved image quality.
The purpose of this study is to explore novel monostatic ladar detection principles utilizing the polarimetric Bidirectional Reflectance Distribution Function (BRDF) and single-pixel detection parameters. The depolarization of backscattered elliptically polarized light beams from extended-area space materials was studied at different sample orientations. Specifically, the depolarization ratio for both linearly and circularly polarized light waves was estimated under a quasi-monostatic transceiver geometry. The experimental results indicate that space object materials exhibit distinct depolarization signatures, which provide enhanced discrimination capabilities. The outcome of this study would enhance monostatic-ladar detection and discrimination capabilities.
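The depolarization quantities above can be illustrated with a small sketch. The relation δ = (1 − DOP)/(1 + DOP) used here holds for linearly polarized illumination and a purely depolarizing (non-birefringent) return; real material signatures, as the study notes, are more complex. All numbers are synthetic.

```python
import numpy as np

def degree_of_polarization(S):
    """Degree of polarization of a Stokes vector [S0, S1, S2, S3]."""
    return float(np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0])

def depolarization_ratio(p_co, p_cross):
    """Ratio of cross-polarized to co-polarized received power."""
    return p_cross / p_co

# A backscattered beam that has lost 40% of its polarization:
S_back = np.array([1.0, 0.6, 0.0, 0.0])
dop = degree_of_polarization(S_back)          # 0.6
# For a purely depolarizing return from horizontal illumination,
# the co/cross powers follow directly from the Stokes parameters:
p_co, p_cross = (1 + dop) / 2, (1 - dop) / 2
delta = depolarization_ratio(p_co, p_cross)   # (1 - DOP)/(1 + DOP) = 0.25
```

Comparing δ across materials and orientations is the discrimination signature the abstract describes.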
This paper investigates binary wavefront control in the focal plane to compensate for atmospheric turbulence
in fiber-coupled free-space laser communication (LaserCom) systems. Traditional approaches to turbulence
compensation (i.e., adaptive optics) modify optical phase in the pupil plane to improve the focal plane image
or increase energy on target in the far field. For high-energy laser applications, focal plane phase modulation is
problematic due to high power densities and device damage thresholds. However, LaserCom systems aim to use
minimal power for reasons such as eye safety and covert communication. Thus, focal plane wavefront control is
a reasonable approach for this application. Numerical results show that in an air-to-air scenario, binary phase
modulation provides mean fiber coupling efficiency nearly identical to that resulting from ideal least-squares
adaptive optics, but without the requirement for direct wavefront sensing. The binary phase commands are
derived from a single imaging camera and an assumption about the nature of spot breakup. The use of binary
wavefront control suggests that existing ferro-electric spatial light modulator technology may support real-time
correction. Coupling efficiency results are also compared to those for the Strehl ratio, highlighting the importance
of metric-driven design.
Correlation tracking is investigated as a method for reducing fade probability in free-space laser communication (LaserCom) systems. Challenging operating scenarios can lead to spot breakup in the focal plane image. During moments of spot breakup, a traditional centroid tracker can force an intensity valley to the on-axis position and cause unnecessary drops in received power. We investigate alternate tracking schemes to specifically prevent the occurrence of this detrimental mode of operation. Correlation tracking is proposed and evaluated as one approach for improving performance in terms of fade probability. From a broader perspective, we also begin to investigate the phenomenology of deep fades. This approach may lead to additional methods to improve performance in situations where fade probability is the metric of interest.
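The failure mode described, where a centroid tracker settles on the intensity valley between two lobes while a correlation tracker stays on a lobe, can be sketched with synthetic data. The Gaussian-spot scene and template here are illustrative assumptions, not the paper's simulation.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of a frame."""
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total

def correlation_peak(frame, template):
    """Track via the peak of an FFT-based circular cross-correlation
    against a reference template centered at the array origin."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(template))))
    return np.unravel_index(np.argmax(corr), corr.shape)

def gaussian_spot(shape, center, sigma=2.0):
    r, c = np.indices(shape)
    return np.exp(-((r - center[0])**2 + (c - center[1])**2) / (2.0 * sigma**2))

# Spot breakup: two equal lobes separated by an intensity valley.
shape = (64, 64)
frame = gaussian_spot(shape, (32, 26)) + gaussian_spot(shape, (32, 38))
cy, cx = centroid(frame)                         # lands midway, in the valley
valley = frame[int(round(cy)), int(round(cx))]
template = np.fft.ifftshift(gaussian_spot(shape, (32, 32)))
py, px = correlation_peak(frame, template)       # stays on one of the lobes
```

Driving the tracker toward the correlation peak rather than the centroid keeps received power on a bright lobe during breakup, which is the fade-reduction mechanism being evaluated.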
KEYWORDS: Target recognition, Super resolution, Visualization, Integration, Signal processing, Data processing, Eye, Condition numbers, 3D acquisition, Organisms
Regions of interest that contain small targets often cover a small number of pixels, e.g., 100 or fewer. For such regions, vision-based super-resolution techniques are feasible that would be infeasible for regions covering a large number of pixels. One such technique centers basis functions (such as Gaussians) of the same width on all pixels and adjusts their amplitudes so that the sum of the basis functions, integrated over each pixel, equals its gray value. This technique implements super-resolution in that the sum of basis functions determines the gray values of sub-pixels of any size. The resulting super-resolved visualizations, each characterized by a different basis function width, may enable the recognition of small targets that would otherwise remain unrecognized.
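A minimal 1-D sketch of the basis-function technique: build the matrix of Gaussian integrals over each pixel, solve for amplitudes that reproduce the measured gray values exactly, then evaluate the same expansion over sub-pixel intervals. The object, basis width, and ROI size below are arbitrary illustrative choices; note that the linear system becomes increasingly ill-conditioned as the basis width grows.

```python
import numpy as np
from scipy.special import erf

def gaussian_pixel_integral(centers, edges_lo, edges_hi, sigma):
    """Integral of a unit-area Gaussian centered at each basis location
    over each interval [lo, hi], via the error function (1-D)."""
    s = sigma * np.sqrt(2.0)
    return 0.5 * (erf((edges_hi[:, None] - centers[None, :]) / s)
                  - erf((edges_lo[:, None] - centers[None, :]) / s))

# 1-D illustration on an N-pixel region of interest.
N, sigma = 16, 0.7
centers = np.arange(N) + 0.5                 # one basis function per pixel
lo, hi = np.arange(N, dtype=float), np.arange(N) + 1.0
A = gaussian_pixel_integral(centers, lo, hi, sigma)   # N x N system matrix
gray = np.exp(-0.5 * ((centers - 8.0) / 2.5)**2)      # toy pixel gray values
amps = np.linalg.solve(A, gray)              # amplitudes matching the data

# Super-resolved evaluation: integrate over half-pixel sub-intervals.
sub_lo = np.arange(0, N, 0.5)
sub_hi = sub_lo + 0.5
sub = gaussian_pixel_integral(centers, sub_lo, sub_hi, sigma) @ amps
```

By additivity of the integrals, each pair of half-pixel values sums back to the original gray value, so the sub-pixel rendering is consistent with the measurement by construction; the 2-D case factors into a product of such 1-D integrals for separable Gaussians.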