Toward snapshot correction of 3D flash LiDAR imagers
Abstract

We present methods enabling rapid non-uniformity and range walk error correction of 3D flash LiDAR imagers that exhibit electronic crosstalk caused by simultaneously triggering too many detectors. This additional electronic crosstalk is referred to as simultaneous ranging crosstalk noise (SRCN). Using a method in which the 3D flash LiDAR imager views a checkerboard target downrange, the SRCN is largely mitigated. Additionally, processing techniques for computing the non-uniformity correction (NUC) and range walk error correction are described; these include an in-situ thermally compensated dark-frame non-uniformity correction, image processing and filtering techniques for the creation of a photo-response non-uniformity correction, and characterization and correction of the range walk error using data collected across the full focal plane array without the need for sampling or windowing. These methods result in the ability to correct noisy test validation data to a range precision of 8.04 cm and a range accuracy of 1.73 cm and to improve the signal-to-noise ratio of the intensity return by nearly 16 dB, to 49 dB. Visualization of a 3D scene corrected by this process is additionally presented.

1. Introduction

This paper presents methods to correct both non-uniformity and range walk error for 3D flash LiDAR imagers that exhibit electronic crosstalk caused by simultaneously triggering too many detectors. The methods presented provide the groundwork for a snapshot correction process for such imagers, in which a single frame or set of frames can be captured and processed to create a calibrated correction table. The experimental methods focused on reducing the time and effort required to capture the data used to correct the 3D flash LiDAR imager. Presently, 3D flash LiDAR imagers using time-of-flight ranging often cannot image a single range across the whole array at once without introducing significant, possibly damaging, simultaneous ranging crosstalk noise (SRCN); that is, these imagers are fundamentally limited from imaging a large, on-face, flat object in the scene. Because of the substantial range walk error associated with this sensor, which shifts trigger timing with return intensity, the possibility of using a flat checkerboard target for data collection was tested with the goal of circumventing this issue. The checkerboard target was composed of a repeating pattern of four different grayscale values, so that neighboring regions trigger at staggered times rather than simultaneously. This target was found to enable snapshot correction. This work presents the methods and results from using such a target to correct a 3D flash LiDAR imager exhibiting this noise source.

This work presents new methods, including an in-situ thermally compensated dark-frame non-uniformity correction (TC-DFNUC), image processing and filtering methods for photo-response non-uniformity correction (PRNUC), and methods for global characterization of range walk error. The corrections are shown to improve range precision from 65.5 to 8.04 cm and signal-to-noise ratio (SNR) to 49 dB, while minimally affecting image integrity. Range accuracy was improved from 84.2 to 1.73 cm. The entire data collection process can be completed as four sequential frame grabs, with the target rotated by a right angle between grabs, and takes less than 10 min for a PIN camera if the target is fully illuminated. Results from camera characterization show promise for future efforts.

Correction of non-uniformity and range walk error for 3D flash LiDAR imagers is an essential first step for enabling their usage. Unlike a scanning 3D LiDAR, a 3D flash LiDAR imager illuminates the entire scene. Such systems use a wide field of view (FOV) optic to illuminate a scene and capture the return photons on a focal plane array (FPA). A 3D image is created by timestamping the return pulse at an intensity threshold. The range is estimated from the time of flight of this pulse as $r=\frac{c}{2}\Delta t$. The trigger timing between the camera and the laser is typically synchronized. Some system delays typically need to be factored in, such as any delay caused by a function generator and any asymmetry in the length of the optical path from the laser to the target versus the target to the camera.1,2 As with all imaging arrays, 3D flash LiDAR imagers display image non-uniformities, such as dark-frame non-uniformity, otherwise known as fixed pattern noise, or photo-response non-uniformity, otherwise known as gain errors.3,4

Unlike 2D imagers, 3D flash LiDAR imagers cannot use an integrating sphere to produce the uniform illumination required for PRNUC because an integrating sphere will randomize pulse time-of-flight information.4 Additionally, range walk error acts to correlate the 2D intensity return of the imager with the 3D range return, thus complicating the correction process.1,5–7 In this sense, while a 2D imager can be corrected through calibration methods by obtaining a set of dark frames and a set of photo-response frames,3 the same is generally assumed to not be true of a 3D flash LiDAR imager. Another method is required to correct non-uniformity and range walk error for a 3D flash LiDAR imager. Previous work for correcting imagery from 3D flash LiDAR imagers has mostly focused on image processing, filtering, or machine learning methods.7–13 Some work has focused on sampling the FPA to produce a correction, though this process is tedious and requires significant effort.

This team has previously published work focused on the reduction of SRCN for a time-of-flight ranging imager by reducing the footprint of the active FPA through windowed region of interest framing, with the goal of reducing time and effort for capturing data to correct non-uniformity and range walk error in 3D flash LiDAR imagers.4,14,15 This effort focuses on using the full frame of the return, with the goal of illuminating as many detectors simultaneously as possible. This method is a significant improvement over previous work by this team and any previously published methods. Methods capable of producing a non-uniformity correction (NUC) in intensity and range returns are presented; these enable range walk error correction, imaging in a full frame mode of operation, and illuminating an average of 28% of the FPA at a time. A thermal compensation for dark-frame NUC is incorporated into the correction process. The data to develop the thermal compensation algorithm were collected from the work published by Hecht et al.16,17

2. Methods

Experimental and computational post-processing methods for creating non-uniformity and range walk error corrections are described here. Three separate experiments were conducted: one for correction, one for validation, and one for 3D imaging. The correction and validation experiments used a flat checkerboard target at an average range of 11.78 m. The third experiment imaged a 3D scene constructed from boxes in the same FOV as the target. The data for correcting the imager were collected over nine illuminated patches of the FOV and viewed under a full frame mode of operation. Image processing and filtering methods were required to obtain usable information from the raw frames in intensity and range; these methods included image filtering, masking, and Gaussian blurring. A method leading to thermally compensated dark-frame non-uniformity correction (TC-DFNUC) is described, as is a method for global characterization and correction of range walk error (Fig. 1).

Fig. 1

Flowchart showing the experimental process for frames collected for range walk error correction and NUC. Frames were collected for nine illuminated regions across the FPA (numbered 1 to 9 in this flowchart). The target was rotated to four positions separated by 90 deg, while independently being attenuated at each rotation so as to minimize errors from misalignment. After each set of frames was collected for a position on the FPA, the process was conducted again at a new position, in the order described in this flowchart, until all data were collected.


2.1. Model

An estimate of the single-shot range precision is obtained using the model of Reinhardt et al.,4 which is derived from the timing jitter and timing resolution of the detector. The timing resolution, σres2, is given by

Eq. (1)

\sigma_{\mathrm{res}}^{2}=\left(\frac{V_{\mathrm{noise}}}{V_{\mathrm{DR}}}\,\Delta t_{\mathrm{gate}}\right)^{2},
where Vnoise is the noise due to dark current for the range return and VDR is the detector dynamic range, which is determined by the properties of the detector time to digital conversion, in particular, the range gate, Δtgate. Equation (1) is provided in time units; thus the range gate in this equation must be in units of time as well. For the experiments performed for this research, a 1-μs range gate was used, equivalent to 150 m. This is the minimum operational range gate of the imager and was chosen as the experiments were performed in a closed lab-space.

The timing jitter, σjitter2, is provided by

Eq. (2)

\sigma_{\mathrm{jitter}}^{2}=\sigma_{t,\mathrm{ref}}^{2}\,\frac{n_{\mathrm{ref}}}{n_{\mathrm{sig}}},
where σt,ref2 is the reference jitter, nref is the reference signal, and nsig is the input signal on the detector.4 The range precision, σr, is therefore provided as a Pythagorean summation of these two previously mentioned noise terms:

Eq. (3)

\sigma_{r}=\sqrt{\sigma_{\mathrm{jitter}}^{2}+\sigma_{\mathrm{res}}^{2}},

Eq. (4)

\sigma_{r}=\sqrt{\sigma_{t,\mathrm{ref}}^{2}\,\frac{n_{\mathrm{ref}}}{n_{\mathrm{sig}}}+\left(\frac{V_{\mathrm{noise}}}{V_{\mathrm{DR}}}\,\Delta t_{\mathrm{gate}}\right)^{2}},
where σjitter2, σres2, and their respective terms are defined previously in Eqs. (1) and (2).4 Through this model, an estimation of the mean expected value for range precision is provided, enabling a point of comparison later when viewing results. This estimated range precision value is 7.36 cm.
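As a point of reference, the model of Eqs. (1)-(4) can be evaluated directly. The MATLAB sketch below is a minimal evaluation only; the 1-μs range gate is taken from the text, but the remaining parameter values are illustrative placeholders rather than the measured quantities that produced the 7.36-cm estimate.

```matlab
% Minimal evaluation of the single-shot range precision model, Eqs. (1)-(4).
% Only the 1-us range gate is taken from the text; the other parameter
% values below are placeholders, not the authors' measured values.
c        = 2.998e8;   % speed of light (m/s)
dt_gate  = 1e-6;      % range gate (s), equivalent to c*dt_gate/2 = 150 m
V_noise  = 1;         % dark-current noise on the range return (placeholder)
V_DR     = 4096;      % detector dynamic range (counts, placeholder)
sig_tref = 0.5e-9;    % reference timing jitter (s, placeholder)
n_ref    = 1000;      % reference signal (photons, placeholder)
n_sig    = 3000;      % input signal on the detector (photons, placeholder)

sigma_res2    = (V_noise/V_DR * dt_gate)^2;        % Eq. (1), timing resolution (s^2)
sigma_jitter2 = sig_tref^2 * n_ref/n_sig;          % Eq. (2), timing jitter (s^2)
sigma_t       = sqrt(sigma_jitter2 + sigma_res2);  % Eq. (3)/(4), in seconds
sigma_r       = (c/2) * sigma_t;                   % convert timing precision to range (m)
fprintf('Estimated single-shot range precision: %.2f cm\n', 100*sigma_r);
```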

2.2. Experimental Methods

The experimental methods are discussed in greater detail here. Three experiments were conducted. One experiment involved collecting data with the goal of correcting the camera. The other two experiments were designed to collect data for statistical and visual validation of the correction process. Both the statistical-validation and correction experiments imaged a flat checkerboard target at the same range through four right-angle rotations. However, for the correction process, significantly more data were collected: an attenuator was used to control the beam intensity through seven different illumination levels, adding another layer of data. The seven attenuation levels are displayed, along with the mean signal and noise in units of photons, in Table 1.

Table 1

Attenuation, mean signal, and noise for seven illumination levels generated from an attenuator used for the collection of data in this experiment.

Attenuation (%)    Signal (photons)    Noise (photons)
98.17              3488.67             50.45
94.73              3465.87             49.36
89.65              3437.77             49.29
83.10              3410.86             48.96
75.35              3388.65             48.65
66.66              3374.40             48.23
57.36              3368.09             48.18

The intention was to enable range walk error correction; however, analysis shows that these additional data were unnecessary for accomplishing this task. In fact, the results imply that range walk error and non-uniformity can be corrected with a single orientation of the target and a single, carefully calibrated and chosen set of frames.

Table 2

Equipment list for this research.

Equipment                     Manufacturer      Part number
Detector, test system
  Camera                      Voxtel            VOX3D
Illumination source
  DPSS laser                  Voxtel            LANN-F0BC
Receive optics
  Telephoto lens              Edmund Optics     #83-165
Variable attenuator
  Half-wave plate             Thorlabs          WPH10M-1550
  Motorized rotation stage    Zaber             T-RSW60C-KT04U
  Rotation stage controller   Zaber             X-MCB1
  Polarization beam splitter  Thorlabs          PBS124
  Beam trap                   Thorlabs
Beam expander
  Engineered diffuser         RPC Photonics     EDC-4-09042-A

The experimental setup, shown in Fig. 2, uses a Voxtel (a subsidiary of Allegro Microsystems) DPSS laser with a 1535-nm central wavelength, 5-ns pulse width, 842-μJ pulse energy, and variable pulse repetition frequency (PRF) of up to 10 Hz; this setup used a PRF of 5 Hz in consideration of long-term pulse stability. This laser was chosen partially because it is designed to ship with the LiDAR imager, and part of this effort was aimed at reducing or removing the need for specialized equipment in calibration and characterization setups for this class of imager. The laser, after exiting a 17× collimating beam expander, has a 400-μrad divergence angle and 6.8-mm beam diameter. The beam travels 20 cm downrange to an RPC Photonics engineered diffuser with a 4.33-deg divergence angle, which converts the beam to a top-hat profile and expands it to fill part of the target. A critical limitation of the system derives from the timing system of the 3D flash LiDAR imager, in which the internal system timing produces a delay large enough to inhibit ranging within 5 m of the time-of-flight imager. Due to non-uniformity, this 5-m dead range was found to affect returns out to a range of 9.5 m. An engineered diffuser with a 16.3-deg divergence angle was tested and found to attenuate the signal too much through beam expansion at the relevant ranges. Smaller divergence angle diffusers were investigated as possible options, but calculations suggest that fully filling the FOV would require greater signal. The target is a 4 ft × 4 ft (1.22 m × 1.22 m) checkerboard pattern that was printed on a large-format poster printer and attached to two poster boards with additional structural support backing. Analysis of the data shows that, at 1535 nm, the checkerboard squares have reflectivities of 1, 0.953, 0.929, and 0.656 relative to the brightest possible return. A possible uncontrolled, external source of non-uniformity is the physical printing of the pattern onto a paper material, which is imperfect and may introduce streaking and other print artifacts into the target itself. The scene was imaged using an Edmund Optics 50-mm focal length telephoto lens with the camera operating in a full frame mode of operation, in contrast to prior work by this team in which a windowed region-of-interest approach was used (Table 2).4,14

Fig. 2

The experimental setup (a), in which a 1535-nm laser illuminates a target downrange after passing through an attenuator (b). This attenuator consists of a half-wave plate, motorized rotation stage, beam dump, 50/50 polarization beam splitter; the beam, upon exiting, passes through an engineered diffuser that top-hats the beam while expanding the beam at a uniform 4.33-deg divergence angle. This fills approximately 20% of the FOV at a range of 11.78 m. The 3D flash LiDAR images the checkerboard target using a telephoto lens with a 50-mm focal length.


The flat checkerboard target (Fig. 3) was illuminated 11.82 m downrange from the camera, or 11.73 m downrange from the laser; the mean range was 11.78 m (Fig. 4). The distance between the camera and the laser was 23 cm, and thus the bistatic angle was given by $\theta_{\mathrm{bistatic}}=90\ \mathrm{deg}-\tan^{-1}(z_{\mathrm{camera}}/z_{\mathrm{bistatic}})$, such that $\theta_{\mathrm{bistatic}}=1.12\ \mathrm{deg}$. The camera FOV, $\theta_{\mathrm{FOV}}$, was given by $\theta_{\mathrm{FOV}}=2\tan^{-1}\left(\frac{h}{2f}\right)$, such that $\theta_{\mathrm{FOV}}=6.15\ \mathrm{deg}$. At 11.82 m, the FOV filled a square defined by $h_{\mathrm{FOV}}=2z_{\mathrm{tgt}}\tan(\theta_{\mathrm{FOV}}/2)$, or 1.27 m to a side. The laser passed through an RPC Photonics engineered diffuser, with a divergence angle of 4.33 deg, located 11.53 m from the target, resulting in a beam that was 87.2 cm in diameter; thus the beam should, at most, encompass 36.96% of the target area in the FOV.
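For completeness, the geometry quoted above can be reproduced from the stated quantities; the short MATLAB check below uses only values given in the text.

```matlab
% Worked check of the scene geometry quoted in the text.
z_cam     = 11.82;    % camera-to-target range (m)
z_bi      = 0.23;     % camera-to-laser separation (m)
theta_FOV = 6.15;     % full camera field of view (deg)
theta_div = 4.33;     % engineered-diffuser divergence angle (deg)
z_diff    = 11.53;    % diffuser-to-target range (m)

theta_bi = 90 - atand(z_cam/z_bi);        % bistatic angle, ~1.12 deg
h_FOV    = 2*z_cam*tand(theta_FOV/2);     % side of FOV square at the target, ~1.27 m
d_beam   = 2*z_diff*tand(theta_div/2);    % beam diameter at the target, ~0.872 m
fillFrac = pi*(d_beam/2)^2 / h_FOV^2;     % beam area / FOV area, ~0.37 (36.96%)
```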

Fig. 3

The target used was a checkerboard target with 8×8 squares across the array; each square was originally designed to match a 16×16 region of detectors in the FOV at a range of 8 m; however, upon optimizing the return, the range was pushed back to 11.82 m from the camera, with each square matching 11×11 detectors. Note that any discrepancy may be caused by adjustments in the position of the engineered diffuser, which acts to top-hat the beam and diverges the top-hatted beam at an angle of 4.33 deg, thus requiring realignment from the original intended position at 8 m. This target was physically rotated by 90 deg after the required samples were collected at each position.


Fig. 4

The flat checkerboard target was illuminated 11.82 m downrange from the camera or 11.73 m downrange from the laser; the mean range was 11.78 m. The distance between the camera and the laser was 23 cm, and thus the bistatic angle was θbistatic=1.12 deg. The laser passes through an RPC Photonics engineered diffuser, with a divergence angle of 4.33 deg and located 11.53 m from the target, resulting in a beam that is 87.2 cm in diameter; thus the beam should, at most, encompass 36.96% of the target area in the FOV.


Thus, data were collected for the target at nine positions to ensure full, overlapping coverage of the entire FOV; each square on the checkerboard pattern covered approximately 11×11 detectors, with the illumination filling an average of 31.5% of the FOV. This procedure was performed for both the data collected to be processed into a correction and the data collected for statistical validation purposes. For correction purposes, the illumination was varied using an attenuator; this attenuator, pictured in Fig. 2(b), was constructed out of a Zaber motorized rotation stage housing a half-wave plate, a 50/50 polarization beam splitter, and a beam dump. As the stage rotated from the fast axis to the slow axis, the linear polarization of the beam also rotated, incrementally changing the amount of light transmitted versus redirected to the beam dump; thus, by rotating the half-wave plate, incremental control of beam power was achieved. This incremental attenuation was used to capture the data at seven levels of illumination (Table 3).

Table 3

The area of the FPA that is illuminated is 28.1% on average; typical reasons for variations in the area include illuminating the edge or a corner of the array versus its center. A built-in MATLAB function for detecting circles was used to determine the radius and centroid of the illuminated area at each position; subsequent compensation for overfilling the FPA allowed the illuminated area to be estimated relative to the full FPA.

Position    Area of FPA illuminated (%)    Signal at minimum attenuation (photons)
a           30.15                          3256.0
b           30.26                          3142.9
c           29.16                          3256.1
d           32.54                          3323.0
e           37.70                          3590.0
f           33.66                          3616.4
g           27.39                          3035.3
h           31.72                          2798.3
i           31.02                          3010.5
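For reference, the attenuation mechanism described above can be sketched as follows: for an ideal half-wave plate and polarizing beam splitter, rotating the plate by an angle θ rotates the linear polarization by 2θ, so the transmitted fraction follows cos²(2θ). The specific stage angles used to produce the seven levels in Table 1 are not reported, so the MATLAB sweep below is only illustrative of the principle.

```matlab
% Illustrative model of the half-wave-plate / polarization-beam-splitter
% attenuator of Fig. 2(b). Rotating the plate by theta rotates the beam's
% linear polarization by 2*theta; an ideal PBS then transmits cos^2(2*theta)
% toward the target and redirects the remainder to the beam dump.
theta_hwp = 0:5:45;                   % half-wave-plate rotation (deg)
T_pbs     = cosd(2*theta_hwp).^2;     % fraction transmitted through the PBS
T_dump    = 1 - T_pbs;                % fraction redirected to the beam dump
disp([theta_hwp(:), 100*T_pbs(:)])    % angle (deg) versus transmission (%)
```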

The data for validation purposes were collected using a near-identical setup and via methods identical to those used for collecting the correction data. The primary differences were that the intensity of the incident illumination was not varied and that there were statistical uncertainties associated with realigning the target and repositioning the beam. While the correction data required significant variations in the windowing used to crop the frame before stitching together a final, full set of frames to be used for processing the corrections (Fig. 5), the validation data were able to use generally more uniform windowing, as seen in Fig. 6.

Fig. 5

The nine separate data captures are displayed (a)–(i), along with the cropped region of interest for the correction frames. The region window was chosen to minimize noise in the return; critical to this selection process were the frames collected in (e) and (f), which were closer to saturation in the intensity return compared with the other frames, possibly due to the larger area of the FOV being illuminated, 35.68% versus 30.32% for all other frames exclusive of these two, which thus required careful selection of a minimized window for both.


Fig. 6

The nine separate data captures are displayed (a)–(i), along with the cropped region of interest for the validation frames. Unlike with the data collected for corrections, the region window was chosen without regard to noise in the return.


Previously collected 3D imagery was also included for visual validation.17 This imagery displays a scene with four boxes in a similar location and range as the target; the boxes were at a range between 11.5 and 12.5 m and in the same line of sight and platform as the target (Fig. 7).

Fig. 7

Photograph of scene used for validation purposes: four boxes between 11.5 and 12.5 m range (a). The leftmost box in the foreground has a different reflectivity than the others. Photograph of scene used for calibration purposes (b). The target is at 11.78 m.


2.3. 3D Image Correction

A series of image filters and masking operations is required to create the PRNUC from the raw data. The raw frame is first corrected for thermally compensated dark-frame non-uniformities. The data collected for this thermal compensation were used in two prior works by this team, and the experimental process for thermal compensation of dark-frame NUC has been partially described.16,17 However, the thermal compensation algorithm used for this paper takes a unique approach to the problem, using an interpolated lookup table to compensate for thermal drift without any active knowledge of the in-situ sensor temperature. The algorithm is somewhat limited in that it requires the scene to be approximately structurally flat; in practice, the frame must contain enough samples at the dark level that the correction reliably trends toward its optimum at the correct temperature. By comparing the trend of the spatial standard deviation as the raw frames are corrected using the TC-DFNUC, an estimate of the optimum compensation is obtained, as is an estimate of the sensor temperature. The spatial standard deviation, sxy, is defined as

Eq. (5)

s_{xy}^{2}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(A_{ij}-\mu_{xy}\right)^{2},
where Aij is the (i,j)'th element of an array Axy; μxy is the mean of the array Axy; and M and N are the number of columns and rows, respectively, of the array Axy. This allows for the description of the variations across the frame in the x-y plane; as shown in the results and discussion, this definition of the standard deviation is also useful for computing the range precision. The minimum of the spatial standard deviation, sxy, of the TC-DFNUC corrected frame, as described in Eq. (6), defines which temperature, T, the algorithm registers from the lookup table:

Eq. (6)

T=\begin{cases}T, & s_{xy}(T)=\min\left[s_{xy}\right],\\ \mathrm{NaN}, & \text{otherwise},\end{cases}
such that the dark-frame NUC is performed for every lookup table value to seek the minimum spatial standard deviation value and indicate the optimized table value (Fig. 8).

Fig. 8

Trend line of global range precision with DFNUC applied at different interpolated temperatures.


By interpolating the returns in range and intensity, per detector, using a smoothing spline across the six temperature datasets collected, an estimation of the dark-frame non-uniformity was achieved. To correct a set of frames, dark-frame subtraction was performed for the mean frame using all interpolated thermal compensation frames. The frame with the lowest value of the spatial standard deviation, sxy, was indexed and provided an estimation of the temperature and the value for the TC-DFNUC.
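A minimal MATLAB sketch of this lookup-table search is given below; it assumes the interpolated dark frames are stored as a stack indexed by candidate temperature, with masked detectors stored as NaN. The variable names (darkLUT, T_lut, rawFrame) are hypothetical.

```matlab
% Sketch of the TC-DFNUC temperature search described above.
% darkLUT : M-by-N-by-K stack of interpolated dark frames, one per
%           candidate temperature in T_lut (hypothetical names).
% rawFrame: mean raw frame to be corrected.
K   = numel(T_lut);
sxy = zeros(1, K);
for k = 1:K
    corrected = rawFrame - darkLUT(:, :, k);      % dark-frame subtraction
    v         = corrected(~isnan(corrected));     % ignore masked detectors
    sxy(k)    = std(v(:));                        % spatial standard deviation, Eq. (5)
end
[~, kBest]   = min(sxy);                          % Eq. (6): minimum s_xy selects the entry
T_estimate   = T_lut(kBest);                      % estimated sensor temperature
tcdfnucFrame = rawFrame - darkLUT(:, :, kBest);   % TC-DFNUC corrected frame
```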

Two masks were generated: M1 to compensate for dark returns,

Eq. (7)

M_{1}=\begin{cases}1, & 112.5\ \mathrm{m}>\mu_{xy}(r)>0\ \mathrm{m},\\ \mathrm{NaN}, & \text{otherwise},\end{cases}
and M2 to compensate for jitter induced by misalignment between rotations (Fig. 9),

Eq. (8)

M_{2}=\begin{cases}1, & 300\ \mathrm{photons}>\sigma_{xy}(n)>0\ \mathrm{photons},\\ \mathrm{NaN}, & \text{otherwise},\end{cases}
where σxy(n) is the noise in the intensity return and μxy(r) is the mean frame in the range return. The upper threshold value for M1, 112.5 m, was chosen from manual inspection of the histogram of the range return to remove outliers. The upper threshold value for M2, 300 photons, was chosen to reduce the contribution of misalignment induced jitter when averaging the four rotated sets of frames. In both cases, the lower threshold values, set to 0 (meters for M1 or photons for M2), were chosen to prevent non-physical statistical artifacts from being introduced into the computation of the correction frames.
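A minimal MATLAB sketch of the mask construction and application, using the thresholds quoted above, is given below; mu_r, sigma_n, and P are hypothetical variable names for the mean range frame, the intensity-noise frame, and a photo-response frame.

```matlab
% Sketch of the mask construction of Eqs. (7) and (8) and their application.
M1 = nan(size(mu_r));                  % dark-return mask, Eq. (7)
M1(mu_r > 0 & mu_r < 112.5) = 1;       % keep physically plausible ranges
M2 = nan(size(sigma_n));               % misalignment-jitter mask, Eq. (8)
M2(sigma_n > 0 & sigma_n < 300) = 1;   % keep low-jitter detectors
M  = M1 .* M2;                         % total mask; NaN propagates through the product
P  = M .* P;                           % masked photo-response frame
```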

Fig. 9

Misalignment between rotations displays as additional noise on the FPA. In this case, the intensity return is displayed from 0 to 300 photons.


Values that met the filtering criteria for creating the masks were assigned a value of 1, whereas all other values were assigned a non-numerical value (NaN) in MATLAB. These two masks were subsequently used to create a total mask, M=M1·M2 (an element-wise product), such that any given photo-response frame, P, was transformed by M as P=M·P. Further analysis required filtering out non-numerical values when calculating statistical quantities such as the standard deviation and mean.

Once the photo-response frame was multiplied by the mask, the frames were prepared for image filtering; this process required the non-numerical values for the frame to be transformed to the mean value for the respective frame. Also, to reduce noise, the mean value across the frame was substituted for any return below 2300 photons for the intensity return; for the range return, any return above 45 m was substituted with the mean value for the respective frame. This frame, Yxy, was then processed with a Gaussian image filter with σxy=0.65 to produce a smoothed image following variations in intensity or range, without underlying structure, such as that which would be found in gain errors. The ratio of the Gaussian blurred image and the filtered image provided an estimation of the gain errors for the FPA:

Eq. (9)

P_{xy}=Y_{xy}/G_{\sigma=0.65}\{Y_{xy}\},
where Yxy is either the range or intensity photo-response frame that was corrected of dark-frame non-uniformities and Gσxy{Yxy} is the Gaussian blur operation for the photo-response frame with σxy determining the strength of the blurring operation.
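A minimal MATLAB sketch of this filtering step for the intensity return is given below; Yxy is a hypothetical variable name for the dark-frame-corrected, masked photo-response frame, and imgaussfilt is the Image Processing Toolbox Gaussian filter.

```matlab
% Sketch of the photo-response gain estimate of Eq. (9), intensity case.
Yfilt = Yxy;
Yfilt(isnan(Yfilt)) = mean(Yxy(~isnan(Yxy)));   % replace masked values with the frame mean
Yfilt(Yfilt < 2300) = mean(Yxy(~isnan(Yxy)));   % suppress low-signal returns (intensity case)
G   = imgaussfilt(Yfilt, 0.65);                 % smoothed frame, sigma = 0.65
Pxy = Yfilt ./ G;                               % estimated per-detector gain (PRNUC frame)
```

For the range return, the same sketch would instead substitute the frame mean for returns above 45 m, as described above.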

Range walk error characterization was accomplished by vectorizing and sorting the data across the thermally compensated dark-frame non-uniformity corrected frame. This provided a global characterization for range walk error encompassing both range walk error and photo-response non-uniformities. A characterization curve was generated by using linear least squares to fit the vectorized and sorted DFNUC-corrected intensity frame to the corresponding vectorized and sorted frame of absolute range error relative to the measured true range of 11.775 m.
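A minimal MATLAB sketch of this global characterization is given below; intensityFrame and rangeFrame are hypothetical names for the DFNUC-corrected intensity and range frames.

```matlab
% Sketch of the global range walk characterization: vectorize the corrected
% intensity and the corresponding absolute range error, sort both, and fit
% with linear least squares.
trueRange = 11.775;                          % measured true range (m), per the text
n   = intensityFrame(:);                     % DFNUC-corrected intensity, vectorized
dr  = abs(rangeFrame(:) - trueRange);        % absolute range error, vectorized
ok  = ~isnan(n) & ~isnan(dr);                % drop masked detectors
nS  = sort(n(ok));                           % sorted intensity (photons)
drS = sort(dr(ok));                          % sorted range error (m)
coeff = polyfit(nS, drS, 1);                 % linear least-squares characterization
walk  = @(photons) polyval(coeff, photons);  % range walk error (m) versus intensity
```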

3. Results and Discussion

Here, the results are displayed and a detailed discussion is given. Raw and corrected frames in the intensity and range returns are displayed for the validation target to provide statistical validation of the correction process. Raw and corrected frames in the intensity and range returns are displayed for a set of 3D imagery as well.

3.1. Results

The average of 500 frames for the range return of a single orientation of the validation target is displayed. The range return is displayed as uncorrected [Fig. 10(a)] and corrected [Figs. 10(b) and 10(c)]. Both the uncorrected and corrected averaged frames are displayed from 0 to 40 m, while the corrected averaged frame is also displayed from 5 to 15 m [Fig. 10(c)].

Fig. 10

Range return of a single rotation of the checkerboard target before (a) and after (b) non-uniformity and range walk error corrections. Both the uncorrected and corrected averaged frames are displayed from 0 to 40 m, while the corrected averaged frame is also displayed from 5 to 15 m (c).


The average of 500 frames for the median return of all four rotations of the validation target data is displayed in Figs. 11 and 12. The intensity return is displayed as uncorrected [Fig. 11(a)] and corrected [Fig. 11(b)], from 2048 to 4096 photons.

Fig. 11

Intensity return, averaged over the four right-angle rotations of the checkerboard target, before (a) and after (b) NUC.


Fig. 12

Range return, averaged over the four right-angle rotations of the checkerboard target, before (a) and after (b) non-uniformity and range walk error corrections. Both the uncorrected and corrected averaged frames are displayed from 0 to 40 m, while the corrected averaged frame is also displayed from 5 to 15 m (c).


The SNR for the intensity return is given in Table 4. The SNR, in dB, is calculated by

Eq. (10)

\mathrm{SNR}=20\log_{10}\left(\mu(n)/\sigma(n)\right),
where μ(n) is the mean of the image and σ(n) is the frame-wise standard deviation of the image, both in units of photons. SNR improves from 33.19 dB without correction to 49.02 dB with correction. Notably, signal noise reduces from 70.70 to 11.42 photons after application of NUC.

Table 4

SNR, intensity return.

Method         μ (photons)    σ (photons)    SNR (dB)
Uncorrected    3254           70.70          33.19
NUC            3293           11.42          49.02
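As a quick check, Eq. (10) can be applied directly to the tabulated values; small differences from Table 4 are expected because the tabulated μ and σ are themselves rounded.

```matlab
% Eq. (10) evaluated with the rounded values from Table 4.
snr_dB = @(mu, sigma) 20*log10(mu./sigma);
snr_uncorrected = snr_dB(3254, 70.70);   % about 33.3 dB (Table 4: 33.19 dB)
snr_nuc         = snr_dB(3293, 11.42);   % about 49.2 dB (Table 4: 49.02 dB)
```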

The range return for the median of the set of orientations is displayed in Fig. 12. The range return is displayed as uncorrected [Fig. 12(a)] and corrected [Figs. 12(b) and 12(c)]. Both the uncorrected and corrected averaged frames are displayed from 0 to 40 m, while the corrected averaged frame is also displayed from 5 to 15 m [Fig. 12(c)].

The range accuracy for the data shown in Figs. 12 and 13 is displayed in Table 5. The range accuracy, sA,xy(Rxy), is calculated as

Eq. (11)

s_{A,xy}^{2}\left(R_{xy}\right)=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(R_{ij}-\mu_{\mathrm{true}}\right)^{2},
where Rxy is the range return and μtrue is the measured true value of the range, such that the spatial standard deviation is measured relative to the measured true value of the range rather than the mean value of the range itself.

Fig. 13

Histograms of validation target, displaying marked improvement in range accuracy and precision between uncorrected scene and when NUC and range walk error correction are applied.


Table 5

Region of interest method, range accuracy.

Method        s_A,xy (cm)
Uncorrected   84.18
NUC           56.29
Range walk    1.73

The range precision is calculated as the spatial standard deviation of the range return, $s_{xy}(R_{xy})$, provided by Eq. (5), $s_{xy}^{2}(R_{xy})=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(R_{ij}-\mu_{xy}(R_{xy})\right)^{2}$ (Table 6).

Table 6

Region of interest method, single shot range precision.

Method        s_xy (cm)
Uncorrected   65.51
Range walk    8.04
Model         7.36
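A minimal MATLAB sketch of these two metrics, evaluated on a range-return frame with masked detectors stored as NaN, is given below (R is a hypothetical variable name).

```matlab
% Range precision, Eq. (5), and range accuracy, Eq. (11), for a range frame R.
mu_true = 11.775;                          % measured true range (m), per the text
v       = R(~isnan(R));                    % valid range samples (m)
s_xy    = sqrt(mean((v - mean(v)).^2));    % range precision: deviation about the frame mean
s_A_xy  = sqrt(mean((v - mu_true).^2));    % range accuracy: deviation about the true range
fprintf('precision = %.2f cm, accuracy = %.2f cm\n', 100*s_xy, 100*s_A_xy);
```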

A set of 20 frames was averaged from a 3D scene of boxes, with the results provided here. The average of these 20 frames of the uncorrected intensity return is shown in Fig. 14(a), while the averaged frame corrected of dark-frame and photo-response non-uniformity is shown in Fig. 14(b). Both sets of frames are displayed from 2048 to 4096 photons.

Fig. 14

Intensity return for the PIN 3D flash LiDAR camera, viewing box scene at a median range of 11.75 m; the uncorrected intensity return is displayed in photons in (a), and the intensity return fully corrected of non-uniformities is displayed in (b).


For the same set of data collected from the 3D scene of boxes, the average of the 20 frames of the range return is displayed as both uncorrected of non-uniformity and range walk error [Fig. 15(a)] and corrected of these errors [Figs. 15(b) and 15(c)]. Both the uncorrected and corrected averaged frames are displayed from 0 to 40 m, while the corrected averaged frame is also displayed from 5 to 15 m [Figs. 15(c) and 16].

Fig. 15

Range return for the PIN 3D flash LiDAR camera, viewing box scene at a median range of 11.75 m; the uncorrected range is displayed in (a), as noted with significant range walk error and non-uniformity, and the corrected range is displayed in (b). Both the uncorrected and corrected averaged frames are displayed from 0 to 40 m, while the corrected averaged frame is also displayed from 5 to 15 m (c).


Fig. 16

Histogram of 3D scene with boxes, displaying marked improvement in range accuracy and precision between uncorrected scene and when NUC and range walk error correction are applied.


The range walk was characterized by plotting the range error relative to the true value of 11.78 m as a function of intensity (Fig. 17), from the minimum detected return of 688 photons up to the maximum return of 4096 photons.

Fig. 17

A global characterization of range walk error is shown. The range walk error is nonlinear insofar as the error follows a linear trend until about 2500 photons in the intensity return, when the trend begins to increase in an exponential manner; this is only curtailed by saturation, approaching 4096 photons. Thus, for example, a return with an intensity of 2048 photons has an associated range walk error of 39 cm, whereas an intensity of 3072 photons has an associated range walk error of 471 cm.

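Once a characterization curve of this form is available, a correction can be applied by interpolating the curve at each detector's intensity and subtracting the result from the measured range. The MATLAB sketch below assumes a two-column curve of intensity versus range walk error; walkCurve, intensityFrame, and rangeFrame are hypothetical names.

```matlab
% Sketch of applying the range walk characterization to a corrected frame.
% walkCurve(:,1): intensity (photons); walkCurve(:,2): range walk error (m).
walkError      = interp1(walkCurve(:,1), walkCurve(:,2), intensityFrame, ...
                         'linear', 'extrap');   % per-detector walk error (m)
rangeCorrected = rangeFrame - walkError;        % range-walk-corrected range return
```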

3.2. Discussion

Range walk error and non-uniformity were corrected for an InGaAs PIN diode 3D flash LiDAR imager; the methods used can be extended to linear-mode avalanche photodiode (LMAPD) 3D flash LiDAR imagers. The threshold for detection was found to be 688 photons. SNR was found to improve by nearly 16 dB, to 49.02 dB, upon implementing corrections on the intensity return (Table 4). The range precision was found to be 8.04 cm; this matched the single-shot range precision of 7.36 cm computed from the model to within 9.24% relative error (Table 6). Range accuracy improved from 84 cm to less than 2 cm when range walk error correction was applied (Table 5). The results for range precision and, in particular, range accuracy suggest that the range walk error correction was successful in significantly improving 3D image quality. This was achieved by sampling the full array and fitting the sorted data using a linear least squares algorithm.

Limitations of this method included inconsistent patches of dark returns in the data, noisy data, and positional inconsistencies between target orientations, which led to higher-than-anticipated frame-to-frame jitter. Mitigating all of these issues required extensive image processing and filtering, but additional design considerations have the potential to mitigate them as well. Whether multiple orientations of the scene need to be captured is of interest for further work, as the method used to correct the range walk and non-uniformity indicates that neither multiple orientations nor multiple intensity levels are of particular value for this method. This suggests that, with some modifications of the experimental setup, a single set of frames could be used to fully correct a PIN 3D flash LiDAR imager.

4. Conclusions

Methods were tested and results were presented for optimizing the process of evaluation and correction of 3D flash LiDAR imagers. These methods were focused on NUC and range walk error correction of a PIN 3D flash LiDAR imager. These methods included an algorithm for the compensation of thermal drift in dark-frame NUC using a lookup table without the need for an active sensor temperature readout and methods for experimentally minimizing electronic crosstalk to enable capture of data for the correction of photo-response non-uniformity and range walk error.

Dark-frame NUC was a critical first step in processing throughout this research. It was found that DFNUC was not sufficient for correcting the returns from the imager; significant photo-response non-uniformity existed, and range walk error presented a particular problem. PRNUC is the multiplicative, gain-error part of NUC. This is typically found in 2D cameras by illuminating a scene with an integrating sphere. By doing so, the scene is uniformly illuminated, and the imager produces a uniform response, thus enabling a simple ratio of the dark-frame non-uniformity corrected frame and the mean value of that frame to produce a set of gain values for the imager. However, in the case of a 3D imager that uses time of flight principles for ranging, an integrating sphere will randomize the pulse time of flight information, thus rendering the 3D PRNUC from an integrating sphere useless for such experiments. This prompted the development of new methods, including directly illuminating a physical target downrange; by doing this, however, it became clear that SRCN would be a limiting factor relative to the area of the FPA that is illuminated.

By mitigating SRCN through methods such as using a checkerboard pattern, the photo-response non-uniformity could be characterized and then corrected. Range walk error was characterized and corrected throughout this effort. The approach used in this work was to capture a set of frames from a target or scene with significant variation in intensity across the cross range; thus there will also be a significant variation in range walk error across the cross range. By vectorizing and sorting all detector responses in range and intensity, a global mean estimate of the range walk error can be found.

Acknowledgements

This work was performed in a collaboration between Voxtel, Inc. and LOCI under National Aeronautics and Space Administration (NASA) Small Business Technology Transfer (STTR) Contract No. 80NSSC19C0073, “Highly sensitive flash LADAR camera,” under the direction of Dr. Farzin Amzajerdian.

References

1. P. F. McManamon, LiDAR Technologies and Systems, SPIE Press (2019).

2. P. McManamon, Field Guide to Lidar, SPIE Press (2015).

3. European Machine Vision Association, "EMVA Standard 1288, Standard for Characterization of Image Sensors and Cameras" (2016).

4. A. Reinhardt et al., "Windowed region-of-interest non-uniformity correction and range walk error correction of a 3D flash LiDAR camera," Opt. Eng. 60(2), 023103 (2021). https://doi.org/10.1117/1.OE.60.2.023103

5. G. M. Williams, "Range-walk correction using time over threshold" (2018).

6. W. He et al., "Range walk error correction using prior modeling in photon counting 3D imaging lidar," 89051D (2013). https://doi.org/10.1117/12.2034059

7. W. He et al., "A correction method for range walk error in photon counting 3D imaging LIDAR," Opt. Commun. 308, 211–217 (2013). https://doi.org/10.1016/j.optcom.2013.05.040

8. V. E. Roback et al., "3D flash lidar performance in flight testing on the Morpheus autonomous, rocket-propelled lander to a lunar-like hazard field," 983209 (2016). https://doi.org/10.1117/12.2223916

9. I. Poberezhskiy et al., "Flash lidar performance testing: configuration and results," Proc. SPIE 8379, 837905 (2012). https://doi.org/10.1117/12.920326

10. M. Georgiev, R. Bregovic, and A. Gotchev, "Fixed-pattern noise modeling and removal in time-of-flight sensing," IEEE Trans. Instrum. Meas. 65(4), 808–820 (2016). https://doi.org/10.1109/TIM.2015.2494622

11. V. Roback et al., "Helicopter flight test of 3D imaging flash LIDAR technology for safe, autonomous, and precise planetary landing," 87310H (2013). https://doi.org/10.1117/12.2015961

12. A. Bulyshev et al., "Three-dimensional super-resolution: theory, modeling, and field test results," Appl. Opt. 53(12), 2583 (2014). https://doi.org/10.1364/AO.53.002583

13. J. R. McMahon, "Three-dimensional FLASH laser radar range estimation via blind deconvolution," J. Appl. Remote Sens. 4(1), 043517 (2010). https://doi.org/10.1117/1.3386044

14. C. Bradley et al., "3D imaging with 128 × 128 eye safe InGaAs p-i-n lidar camera," Proc. SPIE 11005, 1100510 (2019). https://doi.org/10.1117/12.2521981

15. A. Reinhardt et al., "Dark non-uniformity correction and characterization of a 3D flash lidar camera," Proc. SPIE 10636, 1063608 (2018). https://doi.org/10.1117/12.2302818

16. A. Hecht, Thermal Drift Compensation in Non-Uniformity Correction for an InGaAs PIN Photodetector 3D Flash LiDAR Camera, University of Dayton (2020).

17. A. Hecht, P. McManamon, and A. Reinhardt, "Thermal drift compensation in dark non-uniformity correction for an InGaAs PIN 3D flash lidar camera," Proc. SPIE 11744, 117440B (2021). https://doi.org/10.1117/12.2585794

Biography

Andrew Reinhardt completed his PhD in Electro-Optics and Photonics from the University of Dayton; he defended and published his dissertation titled Evaluating and Correcting 3D Flash LiDAR Imagers in August 2021. His interests include direct and coherent detection LiDAR systems, passive sensors, 3D image processing techniques, and optical systems design, with a strong focus on simplifying and automating otherwise complicated systems. He is a member of SPIE.

Cullen Bradley is the research operations manager for Exciting Technology and an Electro-Optical Researcher for the University of Dayton Research Institute in Dayton, Ohio. His research interests include lasercom, 3D LiDAR imaging, continuous beam steering, crystal growth, and beam steering efficiency modeling. He earned his MS degree in electro-optics from the University of Dayton in 2013 and his BS degree in physics from St. John Fisher College in 2010.

Anna Hecht is a research engineer for Exciting Technology in Dayton, Ohio. She graduated from the University of Dayton with a BS degree (2019) and MS degree (2020) in electrical engineering with a focus in LiDAR image processing and 3D flash LiDAR.

Paul McManamon was a chief scientist of the AFRL Sensors Directorate until he retired in 2008. He is the president of Exciting Technology, Technical Director of LOCI, and chief scientist for Lyteloop. He chaired the NAS “Laser Radar” (2014), was co-chair of “Optics and Photonics” (2012), and vice chair of the 2010 Seeing Photons. He is a fellow of SPIE, IEEE, OSA, AFRL, DEPs MSS, and AIAA and was president of SPIE in 2006.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Andrew Reinhardt, Cullen P. Bradley, Anna Hecht, and Paul F. McManamon "Toward snapshot correction of 3D flash LiDAR imagers," Optical Engineering 60(8), 083101 (5 August 2021). https://doi.org/10.1117/1.OE.60.8.083101
Received: 19 April 2021; Accepted: 21 July 2021; Published: 5 August 2021
Keywords: 3D image processing; imaging systems; LIDAR; nonuniformity corrections; 3D acquisition; image filtering; photons