Open Access | 29 October 2019
High spatial resolution hyperspectral camera based on exponentially variable filter
Abstract

The architecture and calibration of a hyperspectral imaging sensor based on an exponential, continuously variable narrow-band transmission filter are described. The system design allows great flexibility in the choice of sensors and lenses. Spectral and radiometric calibration using lenses of different focal lengths and vignetting characteristics is described. The point spread function at different wavelengths depends on the lens design and the f-number. The advantage of using a tilt/shift lens is demonstrated. Low f-number lenses show vignetting, which influences both the spectral and the radiometric calibration. Retroreflection effects in the microlenses of the focal plane array are observed but will to a large extent be remedied by future improvements in the optical filter. Noise properties of the sensor system are discussed, and signal-to-noise ratios are estimated. From the model, it is possible to obtain parametric performance variations based on the properties of key components. Finally, the sensor performance is indicated by demonstrating a spectral image.

1.

Introduction

Hyperspectral imaging systems have much to offer in areas such as precision farming, ecosystems and biodiversity, evaluation of sustainable development, and disaster assessment. The demand for reliable databases for trend monitoring is likely to increase. It is therefore not surprising that technologies to support these ambitions are continuously being developed. Hyperspectral imaging spectrometers can be very useful for the classification of targets and substances.

The sheer size of the areas that need monitoring will also require an improved coverage rate with high spatial resolution at lower cost. In order to overcome some of the present limitations with respect to three-dimensional (3-D) mapping and coverage rate, new hyperspectral camera technologies optimized for large-area coverage at high spatial and adequate spectral resolution have been developed.1 The technology described in this paper has the extra benefit of allowing determination of the 3-D structure of the scene from image sequences.2

The different methods used to obtain a hyperspectral data cube depend on the type of imaging spectrometer. The challenge is to record the spectrum of a 3-D terrain using a two-dimensional (2-D) focal plane array. In general, hyperspectral algorithms operate using spectral information only; only rarely is 3-D terrain information available from, e.g., LIDAR data or passive 3-D imaging. In common hyperspectral pushbroom sensor systems, a slit-shaped field of view is scanned across the scene. In step-stare systems, the line of sight must be stabilized during the integration time. In neither case is the 3-D spatial information obtained simultaneously. With the new sensor technology, the opportunity exists to collect 3-D spatial information coincident with the spectral information at each pixel. This information can be used to improve material and target classification.

Besides the added 3-D information, the design also allows for forward motion compensation. This enables both longer integration times at lower light levels and a substantially increased area coverage rate. Forward motion compensation is not treated here, but it can be noted that such compensation is not possible with a pushbroom system.

When retrieving the spectral reflectance, a Lambertian assumption is often made, ignoring the geometric properties of the scene.3 Including the geometric information, the observed signal can be analyzed using the bidirectional reflectance with respect to the direction of the solar irradiance and the direction of the observer.4 Since the geometric features are not generally available, the algorithms for accomplishing this task are still in development.5

A hyperspectral sensor system based on a linear variable filter, allowing for both spectral and geometric measurements, was presented by Renhorn et al.1 The sampling relative to the spectral resolution was, however, denser in the NIR spectral region than in the visible spectral region. In order to obtain uniform sampling with respect to spectral resolution, i.e., a constant number of samples per spectral resolution element, an exponentially variable filter (EVF) has been developed. This also improves the spectral resolution in the short-wavelength region, since the spectral broadening over the blur spot on the filter will be low. This is further explained below.

In the next section, the camera design is presented, followed by lens and filter characterization. The performance of a tilt/shift lens, where tilting is used for chromaticity compensation, is presented. The vignetting effect observed at low f-numbers is measured and modeled. The EVF is characterized, and the spectral influence of f-number and vignetting is discussed. The spectral and radiometric calibration of the sensor system is described in Secs. 4 and 5, respectively. Finally, the sensor system is applied to a complex scene, demonstrating high spatial resolution.

2.

Hyperspectral Camera Design

The key component in the sensor system is the continuously variable filter, in this case a filter with exponential variation of wavelength as a function of position. The filter replaces the cover glass of the focal plane array without changing any other functionality except the spectral transmission. The only constraint is that the sensor must be full format, i.e., 24×36 mm², or larger. The filter can be scaled up to fit very large focal plane arrays if needed. This solution enables the use of many different types of sensor systems, selected to fit the specific purpose, and is easily upgradable when new sensor technology becomes available. The realization described here is based on a machine vision camera. The wide availability of such cameras makes this solution very competitive with respect to size, weight, cost, and ease of use.

The following important performance properties of the hyperspectral instrument are addressed: sensor efficiency and radiometric responsivity, optics and spatial resolution, spectral transmission, and spectral bandwidth. Coregistration and the relationship between the spatial and spectral characteristics are not addressed in any detail here.

The sensor is a full-format sensor; in the present realization, a 16-MP CCD sensor (OnSemi KAI-16070) was selected, due to the availability of support for mounting the hyperspectral filter onto the focal plane array. The image size is 4864×3232 pixels with a pixel size of 7.4×7.4 μm². The pixel with row number nr=1616 and column number nc=2432 is selected as the center of the focal plane array. The geometric distance from the center is given by the number of pixels from the center times the pixel size. Consequently, there is a one-to-one correspondence between pixel number (nr, nc) and geometric position (x, y); both scales will be used in the modeling and presentation of results. The modified sensor was installed in a Lumenera Lt16059H machine vision camera. The spectral resolution is chosen to be of the order of 10 nm, which is adequate for most solid materials in remote sensing applications and also allows recordings at moderately low light levels. The field of view can be changed by simply changing lenses, as in any ordinary camera.
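As an aside for implementers, the pixel/position correspondence described above is a simple affine mapping. The following minimal Python sketch (our illustration, not code from the original publication; names are ours) encodes the stated pixel pitch and center pixel:

```python
# Minimal sketch (values from the text): mapping between pixel indices
# (nr, nc) and geometric position (x, y) on the focal plane array.
PIXEL_PITCH_MM = 0.0074               # 7.4-um pixel pitch
CENTER_ROW, CENTER_COL = 1616, 2432   # pixel chosen as the array center

def pixel_to_position(nr, nc):
    """Return (x, y) in mm relative to the array center."""
    return (nc - CENTER_COL) * PIXEL_PITCH_MM, (nr - CENTER_ROW) * PIXEL_PITCH_MM

def position_to_pixel(x, y):
    """Return (nr, nc), possibly fractional, for a position given in mm."""
    return CENTER_ROW + y / PIXEL_PITCH_MM, CENTER_COL + x / PIXEL_PITCH_MM

# Example: the last column sits about 18 mm from the center column.
print(pixel_to_position(1616, 4864))  # -> (17.9968, 0.0)
```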

The hyperspectral data are obtained either by rotating the sensor around the lens aperture, in order to avoid parallax changes, or by translating the sensor relative to the scene. In both cases, a step-and-stare method is used, with the minimum number of frames corresponding to the number of independent spectral channels within one field of view. With standard oversampling, the number of frames will be twice this number. When scanning around the aperture, the image sequence does not contain any 3-D information, and the image registration is simplified. With a moving sensor platform, signal processing similar to passive 3-D imaging is required in order to obtain simultaneous acquisition of 3-D and hyperspectral data. Both methods are further described by Renhorn et al.1 and Ahlberg et al.2

The optical hyperspectral filter is placed close to the focal plane array, replacing the cover glass, in order to make the distance between the filter surface and the sensor surface very short. This makes the blur spot on the filter very small, and therefore the spectral resolution of the filter is not compromised. The filter geometry is shown in Figs. 1 and 2.

Fig. 1

Hyperspectral camera design.


Fig. 2

The spectral filter replaces the cover glass and is placed at a distance of Δx=0.57 mm from the detector plane.


3.

Modeling Camera Parameters

3.1.

Filter Design and Spectral Sampling

For a linear variable filter, the peak transmission wavelength varies linearly with position, which makes the position/wavelength relation easy to model: when the wavelength is altered, the peak transmission position moves proportionally. For an EVF, this is no longer the case. When changing the wavelength by, say, 1 nm, the position will change more in the blue region of the filter than in the near-infrared region. The reason for the exponential design is that the same number of detector columns is obtained per spectral resolution element across the spectrum. This allows a constant spatial sampling rate irrespective of the wavelength. In Fig. 3, it can be seen that the spectral bandwidth, proportional to Δλ, is smaller at shorter wavelengths than at longer wavelengths.

Fig. 3

The relation between spectral resolution and spatial extent is illustrated for an EVF. The parameter x is the position along the continuously variable transmission filter. The parameter λ is the transmitted wavelength as a function of position. The spectral resolution, proportional to Δλ, varies with wavelength while the spatial extent, Δx, is kept constant with respect to spatial distribution. Measured values are shown in this example.


The spectral transmission bandwidth varies linearly with wavelength and can be modeled by

Eq. (1)

ΔλFWHM(x)=k0+k1λ(x),
where k0 is the small offset due to the nature of the coating process. The corresponding distance, Δx, is given as

Eq. (2)

Δx(x)=k0+k1λ(x)λ(x).
We are looking for a solution where Δx(x) is independent of x, i.e.,

Eq. (3)

Δx(x)=D,
where D is the geometric width related to a basic spectral resolution element. With a sampling distance of D, one sample per bandwidth ΔλFWHM is obtained. The differential equation is of the general type

Eq. (4)

λ′(x)·b − λ(x) + a = 0,
with the solution

Eq. (5)

λ(x) = a + c·e^(x/b).
With original parameters the solution becomes

Eq. (6)

λ(x) = −k0/k1 + c·e^(k1·x/D).
Using λ(0) = λ0 gives c = λ0 + k0/k1, and using λ(L) = λL gives D = k1·L/ln[(λL + k0/k1)/(λ0 + k0/k1)], where L is the length of the filter. Introducing λoff = k0/k1,

Eq. (7)

D = k1·L / ln[(λL + λoff)/(λ0 + λoff)],
where λoff is expected to be relatively small. The number of spectral bands, N, is consequently given by the length of the filter, L, and the distance parameter, D, according to

Eq. (8)

N = L/D.
Finally, the transmission wavelength as a function of position can be written

Eq. (9)

λ(x) = −λoff + (λ0 + λoff)·e^{ln[(λL + λoff)/(λ0 + λoff)]·x/L},
or

Eq. (10)

λ(x) = −λoff + (λ0 + λoff)·[(λL + λoff)/(λ0 + λoff)]^(x/L).

In practice, the coating will deviate somewhat from the ideal exponential function, and a series expansion can therefore be used for higher accuracy. Hence, series expansions are used in the final calibration process.

The filter was designed for transmission between 450 and 950 nm over a 36-mm range with a spectral bandwidth [full-width at half-maximum (FWHM)] of 2% of the center wavelength, as shown in Fig. 4. Fitting to observed spectral transmission data results in an estimate of λoff. Using λ0 = 450 nm, λL = 950 nm, λoff = −154 nm, k1 = 0.02, and L = 36 mm, the number of independent spectral bands is estimated to be N ≈ 49. A change in transmission is observed around 800 nm. The transmission is also somewhat low in the blue spectral region. Both effects are subject to improvement. The filter was manufactured by Delta Optical Thin Film A/S and is now commercially available.
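The design equations above are easy to verify numerically. The following Python sketch (ours, using the fitted values quoted in the text) reproduces the resolution-element width D and the band count N ≈ 49:

```python
import numpy as np

# Eqs. (7), (8), and (10) with the fitted parameters quoted in the text:
# lambda0 = 450 nm, lambdaL = 950 nm, lambda_off = -154 nm, k1 = 0.02, L = 36 mm.
lam0, lamL, lam_off, k1, L = 450.0, 950.0, -154.0, 0.02, 36.0

ratio = (lamL + lam_off) / (lam0 + lam_off)
D = k1 * L / np.log(ratio)   # Eq. (7): width of one resolution element (mm)
N = L / D                    # Eq. (8): number of independent spectral bands

def wavelength(x):
    """Eq. (10): peak transmission wavelength (nm) at filter position x (mm)."""
    return -lam_off + (lam0 + lam_off) * ratio ** (x / L)

print(round(D, 3), round(N, 1))            # -> 0.728 (mm) and 49.5 (~49 bands)
print(wavelength(0.0), wavelength(36.0))   # -> 450.0 and 950.0 (nm)
```

At the 7.4-μm pixel pitch, D ≈ 0.73 mm corresponds to roughly 98 detector columns per spectral resolution element.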

Fig. 4

Filter spectral transmission between 450 and 950 nm at several wavelengths showing the change in shape and bandwidth with wavelength. (Measurements by courtesy of Delta Optical Thin Film A/S).


The angle of incidence varies over the filter, and this introduces some wavelength shifts. It is therefore more convenient to fit the observations using a 2-D polynomial in the sensor columns and rows. The following polynomial was used for the spectral calibration:

Eq. (11)

λ(nr, nc) = a00 + a01·nc + a02·nc² + a03·nc³ + a10·nr + a11·nc·nr + a20·nr²,
where the coefficients aij are determined in the calibration process; the indices i and j indicate the powers of the row, nr, and column, nc, variables. The fit to the reference spectrum was within 10% of the filter linewidth.
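A least-squares fit of Eq. (11) is straightforward; the sketch below (our illustration, with the inputs left to the user) shows one way to obtain the coefficients from laser-line centroids:

```python
import numpy as np

# Fit of the calibration polynomial in Eq. (11).  The inputs nr, nc, lam would
# be the centroid pixel coordinates of the laser lines over the focal plane
# array and their known wavelengths (nm).
def fit_spectral_calibration(nr, nc, lam):
    """Return (a00, a01, a02, a03, a10, a11, a20) of Eq. (11)."""
    nr, nc, lam = (np.asarray(v, dtype=float) for v in (nr, nc, lam))
    A = np.column_stack([np.ones_like(nc), nc, nc**2, nc**3, nr, nc * nr, nr**2])
    coeffs, *_ = np.linalg.lstsq(A, lam, rcond=None)
    return coeffs

def wavelength_at(coeffs, nr, nc):
    """Evaluate Eq. (11) at pixel (nr, nc)."""
    a00, a01, a02, a03, a10, a11, a20 = coeffs
    return a00 + a01*nc + a02*nc**2 + a03*nc**3 + a10*nr + a11*nc*nr + a20*nr**2
```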

Small variations in the parameters are observed as a function of type of lens and the f-number. Each lens and f-number setting will therefore have a unique set of constants, although the deviations mostly correspond to a fraction of the spectral bandwidth.

The change in bandwidth with respect to wavelength is shown in Fig. 5. The number of measured independent spectral bands is 48, corresponding to 96 spectral channels at the common sampling density of two samples per resolution element.6 For slowly varying target spectra, lower sampling densities might still work, allowing for fewer samples per field of view. In practice, this must be evaluated for each specific situation.

Fig. 5

Spectral bandwidth shown as a function of wavelength. (Measurements by courtesy of Delta Optical Thin Film A/S).


3.2.

Lens Design and Spatial Sampling

The sensor performance is affected by the lens in several ways. The focal length determines both the instantaneous field of view (IFOV) and the angular field of view (FOV). The antireflection (AR) coating determines the overall spectral transmission of the optics and can cause losses in certain spectral regions. The point spread function (PSF), and hence the spatial resolution, varies with lens design and f-number. For high spatial resolution, the spatial contrast is important. A useful measure of spatial resolution is either the PSF FWHM or the value of the modulation transfer function at the Nyquist frequency.

A commercial off-the-shelf lens is used in the experiments shown here. Since the filter transmission is narrow band at each position, chromatic aberrations could in principle be disregarded. Lenses designed with this in mind could have certain advantages with respect to both image sharpness and weight. Here, the change of focus with wavelength is partly taken into account by using a tilt/shift lens, where the tilt is used to compensate for the change in focal length with wavelength.

The spatial resolution is studied using the edge method. The sensor is approximated as a unit box function, and the PSF of the optics is described by a normalized Gaussian function. The resulting line spread function (LSF) is modeled as a convolution of the unit box function and the Gaussian function. The scale is here in units of pixels. The normalized convolved function is given as

Eq. (12)

LSF(nc) = {erf[(d/2 − nc)/(2σ)] + erf[(d/2 + nc)/(2σ)]} / (2d),
where d is the detector width (here, d=1), σ is the blur spot size (here in units of pixels), and erf is the error function. The edge response function is obtained by integrating the LSF. The normalized edge spread function (ESF) is given as

Eq. (13)

ESF(nc) = 1/2 + {(2σ/√π)·[e^(−(d/2 + nc)²/(4σ²)) − e^(−(d/2 − nc)²/(4σ²))] − (d/2 − nc)·erf[(d/2 − nc)/(2σ)] + (d/2 + nc)·erf[(d/2 + nc)/(2σ)]} / (2d).
The edge response is given for an edge placed at the fictitious position nc = 0 (left edge of the filter). For displaced edges, the origin must be shifted accordingly, i.e., the response function is given by ESF(nc − nedge) for an edge positioned at the possibly fractional position nedge.
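For reference, Eqs. (12) and (13) can be implemented directly; the short Python sketch below (ours) also verifies numerically that the derivative of the ESF reproduces the LSF:

```python
import numpy as np
from scipy.special import erf

# Eqs. (12) and (13) with d = 1 pixel; sigma is the Gaussian blur parameter
# in pixels.  A dual-component fit, as used for Fig. 7, would sum two such
# functions with different sigma and amplitude.
def lsf(nc, sigma, d=1.0):
    return (erf((d/2 - nc) / (2*sigma)) + erf((d/2 + nc) / (2*sigma))) / (2*d)

def esf(nc, sigma, d=1.0):
    g = 2*sigma/np.sqrt(np.pi) * (np.exp(-(d/2 + nc)**2 / (4*sigma**2))
                                  - np.exp(-(d/2 - nc)**2 / (4*sigma**2)))
    e = (-(d/2 - nc) * erf((d/2 - nc) / (2*sigma))
         + (d/2 + nc) * erf((d/2 + nc) / (2*sigma)))
    return 0.5 + (g + e) / (2*d)

# For an edge at the (possibly fractional) column n_edge: esf(nc - n_edge, sigma).
x = np.linspace(-5.0, 5.0, 201)
assert np.allclose(np.gradient(esf(x, 0.8), x), lsf(x, 0.8), atol=1e-3)
```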

To measure the LSF, the edge target is placed in a collimator, and the target is imaged at various positions on the focal plane array. The collimator setup is shown in Fig. 6.

Fig. 6

Collimator setup with a target wheel and an integrating sphere source.


In Fig. 7, the signal as a function of distance to the edge is shown. For a single symmetric LSF, a symmetric signal is expected. The asymmetry of the signal indicates that several LSFs contribute to the observed signal. The model is therefore extended to a dual line response function. In Fig. 7, the edge is placed at approximately pixel column 172, which is in the blue spectral region.

Fig. 7

Signal as a function of distance to edge. Red curve is a fitted edge-function with two components. The edge is approximately at column 172.


The derivative of the ESF gives the LSF, which is close to the PSF. This is shown in Fig. 8, where the two components are also shown separately. The focal plane array is fitted with microlenses in order to improve the fill factor and the efficiency. This also influences the reflective properties of the focal plane array through the corresponding retro- or cat's-eye effect. Due to further reflections from the filter back onto the focal plane array, some blurring effects might appear. This explains the second peak, the green curve in Fig. 8. The relative magnitude of this signal can be reduced by improving the filter transmission in this spectral region.

Fig. 8

Line spread profile at column 172. Red curve is the main peak and the green curve is a retroreflected secondary peak. The secondary peak is strong at short wavelengths as shown here but becomes much weaker at longer wavelengths where the filter transmission is higher.


The separation between the primary focused spot and the secondary, somewhat blurred, spot depends on the angle of incidence. The two spots are therefore expected to be centered on the same column at the optical axis, i.e., at the center of the image. The shift is expected to increase with increasing off-axis angle, with the retroreflected peak shifting toward the center. This is also observed, as shown in Fig. 9.

Fig. 9

Position of secondary peak relative to the main peak. Red, dashed line is a linear fit.


The tilted lens focuses quite well over the full spectral region. The secondary peak is substantially broader and constitutes a blurring effect whose importance depends on its relative power. This is shown for the Canon TS-E 90-mm lens in Fig. 10.

Fig. 10

PSF for Canon 90-mm TS lens at F/2.8 and 1.3 deg tilt. Blue curve is the main peak. Red curve is a retroscattered secondary peak. The intensity of the secondary peak is lower than the main peak, especially at longer wavelengths or larger column numbers.


3.3.

Vignetting at Low f-Number

Large-aperture lenses exhibit substantial vignetting. This causes not only an intensity variation over the focal plane array but also a position-dependent variation of the spectral sampling.

The vignetting effect is modeled7 by assuming a moving disk with center at kr·(xs, ys) and radius f/(2Fvn):

Eq. (14)

Fvign = { 1, if (xa − kr·xs)² + (ya − kr·ys)² ≤ [f/(2Fvn)]²; 0, else. }
The vignetted transmission is given as

Eq. (15)

Tvign(xs, ys) = ∫∫_(xa,ya)∈Aperture Fvign(xs, ys, xa, ya) dxa dya.
The model is shown in Fig. 11, where it is compared to measurements using a Zeiss lens with the following parameters:
f = 85 mm; Fn = 1.4; Fvn = 1.4; kr = 1.2; ys = 0,
where f is the focal length, Fn is the f-number, Fvn is a vignetting parameter of similar size as the f-number of the lens, kr is a vignetting scaling factor, and ys is the y-coordinate on the sensor. The parameter ys = 0 corresponds to the center row nr = 1616 of the image.
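The integral in Eq. (15) has no convenient closed form for all geometries, but it is easy to evaluate on a grid. The sketch below (ours, with the Zeiss parameter values quoted above) computes the vignetted fraction of the aperture:

```python
import numpy as np

# Eqs. (14)-(15): fraction of the aperture disk (radius f/2Fn) falling inside
# the moving vignetting disk (radius f/2Fvn, centered at kr*(xs, ys)).
f, Fn, Fvn, kr = 85.0, 1.4, 1.4, 1.2

def vignetted_fraction(xs, ys, n=400):
    ra, rv = f / (2*Fn), f / (2*Fvn)
    xa, ya = np.meshgrid(np.linspace(-ra, ra, n), np.linspace(-ra, ra, n))
    in_aperture = xa**2 + ya**2 <= ra**2
    in_vignette = (xa - kr*xs)**2 + (ya - kr*ys)**2 <= rv**2   # Eq. (14)
    return np.count_nonzero(in_aperture & in_vignette) / np.count_nonzero(in_aperture)

print(vignetted_fraction(0.0, 0.0))    # 1.0 at the image center
print(vignetted_fraction(-18.0, 0.0))  # ~0.55 at the edge column (cf. the 45% drop in Fig. 11)
```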

Fig. 11

Center row vignetting of Zeiss 85-mm lens at F/1.4. Blue dots represent measurements and red line is from vignette model.


As shown in Fig. 11, the signal at the edge is 45% lower than at the center of the image. There might also be other angle-dependent filter effects that cause intensity variations similar to vignetting. All these effects are corrected in the calibration process.

4.

Spectral Calibration

Spectral calibration is an important step, and it is useful to perform it ahead of the radiometric calibration. It was performed using an integrating sphere as a Lambertian source and a set of laser diodes in combination with a spectrometer (Avantes AvaSpec-2048 XL, with a spectral resolution of 2 nm). In this way, several lines could be strategically selected across the spectral range of the sensor system. Most laser diodes were rather stable, but a few in the near-infrared spectral region showed some drift. An example of a set of wavelengths used in this work is 452.6, 530.4, 636.1, 788.7, and 848.6 nm. Figure 12 shows this set of spectral lines using the Canon TS 90-mm lens at F/2.8. The spectral position shifts slightly as a function of row number. After determining the wavelength at the centroid of each laser peak over the 2-D focal plane array, Eq. (11) is used to fit the wavelength versus pixel location.

Fig. 12

Illustration of five laser lines at 452.6, 530.4, 636.1, 788.7, and 848.6 nm. Color code as a function of wavelength is added for clarity. A small curvature over the rows of the image can be observed.


With lenses intended for focal plane arrays with microlenses, the microlens vignetting must be considered. This means that the angles of incidence are limited to a region with high microlens efficiency. This complicates the optical design somewhat, but it also results in similar angular behavior irrespective of focal length; therefore, the spectral calibration functions become similar for many lenses.

As discussed above, the Zeiss 85 mm lens shows substantial vignetting at F/1.4. This influences not only the intensity but also the spectral line shape. The magnitude of the effect varies with sensor position.

The filter transmission properties are now studied as a function of filter coordinates and angles of incidence. In the camera model, the filter position is derived from aperture position (xa,ya), sensor position (xs,ys), the focal length (f), and the short distance between filter and sensor surface (δ). The resulting equation is given as

Eq. (16)

λfilter(xs, ys, xa, ya) = {a + b·[(xs − xa)(f − δ)/f + xa] + c·[(xs − xa)(f − δ)/f + xa]² + d·[(xs − xa)(f − δ)/f + xa]³} × √(1 − [(xs − xa)² + (ys − ya)²] / {n²·[f² + (xs − xa)² + (ys − ya)²]}),
where the parameters a to d are given by the filter properties. The square root expression estimates the shift in transmission wavelength due to nonorthogonal angle of incidence. The parameter n is the effective index of refraction of the filter.

The filter transmission is modeled as

Eq. (17)

Tfilter(λfilter, λi) = exp(−2·|(λfilter − λi)/w|^k),
where the transmission function is a generalized Gaussian with width parameter w and shape parameter k, and the transmission value is calculated at the shifted wavelength.

The vignette signal is proportional to

Eq. (18)

Svign(xs, ys) ∝ (1/Ap) ∫∫_(xa,ya)∈Aperture Fvign(xs, ys, xa, ya)·Tfilter[λfilter(xs, ys, xa, ya), λi] dxa dya,
where Ap is the aperture area.
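Equation (18) can likewise be evaluated by sampling the aperture. The sketch below (ours) combines Eqs. (16)-(18); the filter parameters a-d, w, k, and n must come from the calibration, so the values used here are rough placeholders (a linear approximation of the EVF around the sensor center):

```python
import numpy as np

# Vignetted spectral signal, Eq. (18), built from the filter wavelength map,
# Eq. (16), and the generalized-Gaussian transmission, Eq. (17).
a, b, c, d = 639.0, 13.4, 0.0, 0.0   # placeholder filter polynomial (nm, nm/mm)
w, k, n_idx = 12.0, 2.0, 2.0         # placeholder width (nm), shape, eff. index
f, delta = 85.0, 0.57                # focal length and filter-sensor gap (mm)

def lambda_filter(xs, ys, xa, ya):   # Eq. (16)
    u = (xs - xa) * (f - delta) / f + xa
    peak = a + b*u + c*u**2 + d*u**3
    sin2 = ((xs - xa)**2 + (ys - ya)**2) / (f**2 + (xs - xa)**2 + (ys - ya)**2)
    return peak * np.sqrt(1.0 - sin2 / n_idx**2)

def t_filter(lam, lam_i):            # Eq. (17)
    return np.exp(-2.0 * np.abs((lam - lam_i) / w)**k)

def s_vign(xs, ys, lam_i, Fn=1.4, Fvn=1.4, kr=1.2, n=200):   # Eq. (18)
    ra, rv = f / (2*Fn), f / (2*Fvn)
    xa, ya = np.meshgrid(np.linspace(-ra, ra, n), np.linspace(-ra, ra, n))
    ap = xa**2 + ya**2 <= ra**2
    vig = (xa - kr*xs)**2 + (ya - kr*ys)**2 <= rv**2
    t = t_filter(lambda_filter(xs, ys, xa, ya), lam_i)
    return np.sum(t * ap * vig) / np.count_nonzero(ap)
```

Sweeping lam_i at a fixed sensor position (or xs at a fixed laser wavelength) reproduces line profiles of the kind shown in Figs. 13-18.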

In Fig. 13, the observed spectral change close to the center of the image is shown when switching the f-number between 1.4 and 2.8. Both profiles are recorded at the same exposure, i.e., the change in aperture is compensated for by changing the exposure time. The peak signal at F/1.4 decreases due to line broadening. The integrated signal is decreased by 11%, which can only partly be attributed to vignetting.

Fig. 13

Observed line profile at 635 nm and F/1.4 (red) and F/2.8 (blue) curves at the same exposure. The F/1.4 integrated signal decreased by 11% compared to the F/2.8 signal.


Using Eq. (18), the corresponding calculated change in spectral response is shown in Fig. 14. Relative position 0 corresponds to column 2432, defined as the center of the sensor. The signal decrease due to vignetting is underestimated, which points to the possibility of other filter effects contributing to the observed decrease in signal.

Fig. 14

Calculated line profile at 635 nm and F/1.4 (red) and F/2.8 (blue) curves. Integrated signal decrease, estimated from vignetted and unvignetted model, is negligible (1%) at this position.


The spectral profile looks different at the blue and infrared edges of the filter. In Fig. 15, the result is shown for the blue spectral region. The integrated signal at F/1.4 is lowered by 37% compared with what is expected without vignetting, as estimated from the F/2.8 signal. This corresponds to an effective f-number of 1.8 instead of 1.4. The spectral shape is also influenced by a wavelength dependence of the vignetting effect.

Fig. 15

Line profile at 450 nm and F/1.4 (red) and F/2.8 (blue) curves. Observed integrated signal is decreased by 37% and mainly attributed to vignetting at this position.


In Fig. 16, the spectral line profile is calculated including vignetting. Using the vignetting results from above, the signal is expected to be lowered by 31%. This compares quite well with the observed result.

Fig. 16

Calculated line profile at 450 nm as a function of sensor position and F/1.4 (red) and F/2.8 (blue) curves. Estimated signal decrease is 31% due to vignetting.


It is of interest to see what the predicted result would be if vignetting was not present. This situation is simulated in Fig. 17. Not only does the signal increase, the spectral profile also changes.

Fig. 17

Calculated line profile at 450 nm and F/1.4 (red) and F/2.8 (blue) curves assuming no vignetting.


Finally, it is also interesting to study the spectral responsivity at a specific position. Selecting position −16.6 mm, which is close to the transmission maximum at the wavelength 450 nm, the corresponding spectral signal is shown in Fig. 18.

Fig. 18

Signal as a function of wavelength at the fixed position −16.6 mm and F/1.4 (red) and F/2.8 (blue), including vignetting.


5.

Radiometric Calibration and Noise

A simple functional test can be performed by illuminating a target uniformly at some distance. This test is mostly used in order to determine if a sensor is still working as expected. For more accurate tests, a uniform and radiometrically accurate source must be used, e.g., a calibrated quartz tungsten halogen (QTH) lamp and a Lambertian target. Calibration also serves the purpose of sensor performance verification.

The most common source in hyperspectral remote sensing is the Sun. The Sun has substantially more radiation in the blue spectral region than the QTH lamp, which has more radiation in the red and near-infrared spectral regions. This can cause some concern due to stray light when calibrating in the blue spectral region.

Another concern can be the addition of spectral features inherent in the solar irradiance after filtering through the atmosphere. This can be important if these features fall within bands that are critical to the hyperspectral instrument performance or at a critical feature in the target spectral signature.

The signal in a radiance measurement depends on many components in the sensor system chain. The purpose of the calibration is to relate the signal to the radiance spectrum, L(λ), at the camera entrance aperture. With an entrance pupil area, Ap, and an IFOV, ΩIFOV, the nominal optical throughput, or étendue, is given by Ap·ΩIFOV. The main transmission loss is in the continuously variable filter, TF(λ), with some further losses in the lens. The losses in the detection process are represented by the quantum efficiency, η(λ).

When converting charge to voltage, additional "read noise" will be added. Often, an offset is also added by design in order to avoid accidental negative signals. Digitization to integer values leads to some quantization errors; by design, these errors should be small. The number of electrons that can be collected, i.e., the "well capacity," is limited, resulting in a maximum saturation signal. After correcting for the offset, the digitized signal can be scaled by a calibration factor according to the radiometrically known radiance.

The signal in digital numbers (DN) in the absence of stray light and dark current can be written as

Eq. (19)

SDN(λ) = Rd·η(λ)·Ad·TF(λ)·[π·Δt/(2F#)²]·[λ/(hc)]·Ls(λ)·ΔλFWHM(λ),
where Rd is the responsivity (DN/electron), Δt is the exposure time (s), η is the quantum efficiency (electrons/photon), Ad is the detector area (m²), TF is the filter transmission, F# is the f-number, hc/λ is the photon energy (J), Ls is the spectral radiance of the source (W·m⁻³·sr⁻¹), and ΔλFWHM is the spectral resolution (FWHM) (m). Often the radiance is given in different units, and the equation must be changed accordingly. It is assumed that the spectral radiance Ls is approximately constant over the spectral bandwidth ΔλFWHM. The étendue, or optical throughput, is given by Ad·Ω, and the solid angle is approximated by Ω = π/(2F#)². The signal is sometimes shifted to, e.g., 16 bit, requiring a multiplicative factor; when shifting from 14 to 16 bit, the signal is multiplied by a factor of 4. This is the case for the present unit under test.
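As a numerical illustration of Eq. (19), the sketch below (ours, with illustrative parameter values that are not from the paper) estimates the signal for a mid-visible band:

```python
import numpy as np

# Eq. (19) with illustrative values.  Ls is the spectral radiance in
# W m^-3 sr^-1; 5e7 W m^-3 sr^-1 equals 50 W m^-2 sr^-1 um^-1.
h, c = 6.626e-34, 2.998e8   # Planck constant (J s), speed of light (m/s)

def signal_dn(Ls, lam, dlam_fwhm, Rd=0.5, eta=0.35, Ad=(7.4e-6)**2,
              TF=0.8, F_num=2.8, dt=0.01):
    photon_energy = h * c / lam   # J per photon
    return (Rd * eta * Ad * TF * np.pi * dt / (2 * F_num)**2
            * Ls * dlam_fwhm / photon_energy)

# Example: 635 nm, 12-nm FWHM, 10-ms exposure at F/2.8 -> ~1.5e4 DN.
print(signal_dn(5e7, 635e-9, 12e-9))
```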

The calibration equation is given as

Eq. (20)

SDN(λ) = Exp·m·Ls(λ),
where SDN is the dark current corrected signal and the exposure

Eq. (21)

Exp = Δt/(F#)²,
is related to the exposure value (EV)

Eq. (22)

EV = log₂(1/Exp).

The parameter m varies with the pixel indices. The calibration has to be performed for each individual lens at different f-numbers since they might differ in, e.g., vignetting and AR coating. With vignetting as for the Zeiss lens at F/1.4, calibration measurements at different aperture stops are particularly important.
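In practice, the per-pixel factor m is obtained from a flat-field recording of a known radiance. A minimal sketch (ours) of Eqs. (20)-(22):

```python
import numpy as np

# Eq. (20) solved for the per-pixel calibration factor m, using the exposure
# of Eq. (21).  dark_corrected_dn is a flat-field frame (dark subtracted)
# recorded while viewing a known radiance Ls_ref.
def calibration_factor(dark_corrected_dn, Ls_ref, dt, F_num):
    exposure = dt / F_num**2                         # Eq. (21)
    return dark_corrected_dn / (exposure * Ls_ref)   # m in Eq. (20)

def exposure_value(dt, F_num):
    return np.log2(F_num**2 / dt)                    # Eq. (22): EV = log2(1/Exp)

def calibrated_radiance(dark_corrected_dn, m, dt, F_num):
    return dark_corrected_dn / (dt / F_num**2 * m)   # invert Eq. (20)
```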

The baseline noise, usable integration time, linearity, and dynamic range are parameters determined by the basic sensor properties. They can therefore be estimated separately from the characteristics of the spectral imager. Noise is often not specified, making performance under various illumination conditions difficult to judge. Stray light is a system property that is also influenced by the filter close to the sensor. The main cause is radiation retroreflected from the microlens/detector combination being trapped between the detectors and the filter surface. Although this influences the spectral resolution, it is best characterized in the spatial domain.

When measuring the dark current, a bias signal and a read noise signal are added. This fixed noise is larger than the noise due to the random distribution of the electrons. The read noise is expected to be of the order of 12 electrons or 23 DN. A read noise of the order of 50 DN is consistent with the histogram distribution in Fig. 19, where the offset is also presented.

Fig. 19

Discrete plot of the probability density function of the dark signal at short exposure time (2.46 ms) for two image quadrants representing two outputs with slightly different gain. Mean values are 229.40 and 234.48, respectively.


At short exposures, i.e., exposure times shorter than 100 ms, the deviations are dominated by the varying offsets of the different gain modules and the corresponding gain noise. For Poisson-distributed electrons with a mean signal Ne, the standard deviation is given by √Ne. This observed noise with added read noise is shown in Fig. 20. The variance has been fitted to the function

Eq. (23)

varDN = readDN² + SDN·Rd,
where readDN is the readout noise in DN, SDN is the corresponding signal, and Rd is the responsivity. The values obtained from the fit are Rd = 0.523 and readDN = 39.1; estimates from camera data are Rd = 0.486 and readDN = 31.9.
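This is the standard photon-transfer analysis; a compact sketch (ours) of the fit and the resulting SNR estimate:

```python
import numpy as np

# Photon-transfer estimate of Eq. (23): a straight-line fit of the signal
# variance against the mean signal gives the responsivity Rd (slope) and the
# read noise (square root of the intercept).  mean_dn/var_dn would come from
# flat-field frames recorded at a series of exposure levels.
def fit_photon_transfer(mean_dn, var_dn):
    slope, intercept = np.polyfit(mean_dn, var_dn, 1)
    return slope, np.sqrt(intercept)   # Rd (DN/electron), read noise (DN)

def snr(mean_dn, Rd, read_dn):
    return mean_dn / np.sqrt(read_dn**2 + mean_dn * Rd)

# With the fitted values from the text (Rd = 0.523, read noise = 39.1 DN):
print(round(snr(32768, 0.523, 39.1)))   # ~240 at half the 16-bit range
```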

Fig. 20

Signal variance as a function of signal level.


The signal-to-noise ratio (SNR) is given by the signal divided by the standard deviation (RMS noise). The result is shown in Fig. 21, where the fit is also extrapolated over the full 2¹⁶ range.

Fig. 21

SNR as a function of signal level. The red line is extrapolated over the full 16-bit range.


There is a tradeoff between SNR and pixel size or number of pixels. The emphasis here is on large focal plane arrays with a correspondingly small pixel size, which makes the SNR somewhat low. Focal plane arrays with a larger number of pixels but the same SNR are already available.

6.

Hyperspectral Imaging

The sensor system was tested on a rather cluttered scene using camera rotation in a record-while-scanning procedure. Approximately 100 frames are obtained over a single field of view. From the set of images, the spectral data cube is obtained. An example is shown in Figs. 22 and 23 for two sets of wavelength bands. The recorded images were registered to each other using the random sample consensus (RANSAC) method and an affine transformation (linear transformation and translation). Each point in the scene is observed by different sensor elements with the corresponding spectral transmission: if a point in the scene is recorded in 100 different frames while continuously rotating the sensor, the point is sampled at 100 different wavelengths. The spectrum is subsequently resampled to a set of standard wavelengths. The spectral properties of the scene can thus be reconstructed at each pixel. More details on the processing are described by Renhorn et al.1 and Ahlberg et al.2
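For illustration, the registration step can be sketched with standard tools; the code below (ours, using OpenCV rather than the authors' processing chain, which may differ in detail) matches ORB features between frames and estimates an affine transform with RANSAC:

```python
import cv2
import numpy as np

def register(reference, frame):
    """Warp `frame` into the geometry of `reference` (8-bit grayscale images)."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(reference, None)
    k2, d2 = orb.detectAndCompute(frame, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches])
    dst = np.float32([k1[m.trainIdx].pt for m in matches])
    # Affine model (linear transformation and translation), RANSAC outlier rejection.
    A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                ransacReprojThreshold=1.0)
    h, w = reference.shape[:2]
    return cv2.warpAffine(frame, A, (w, h))
```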

Fig. 22

Hyperspectral scene presented in false color at RGB={560,530,450}nm. The arrow shows the region represented by the spectrum in Fig. 24.


Fig. 23

Hyperspectral scene presented in false color at RGB = {830, 630, 530} nm.


The appearance of the scene when including the increased vegetation reflectance in the NIR is shown in Fig. 23. Detailed analyses of scene elements can be performed but will not be pursued further here.

The spectrum of the orange building at the arrow in Fig. 22 is shown in Fig. 24 together with a reference spectrum obtained from ground truth measurement using the Avantes spectrometer.

Fig. 24

Spectrum of the area on the orange building indicated by an arrow in Fig. 22. Dashed line is from ground truth measurements using a spectrometer.


The difference between the hyperspectral imaging sensor and the spectrometer results indicates that the calibration of this type of hyperspectral sensor is not trivial. Even with vicarious calibration, deviations such as those observed here remain. Stray light might be a cause of this problem, but this has to be studied further.

7.

Discussion and Conclusion

The architecture and calibration of a new type of hyperspectral imager based on an exponentially variable narrow-band transmission filter have been described. This type of sensor has interesting advantages over conventional pushbroom systems but also exhibits some delicate calibration problems. The system allows a broad range of lenses to be used at selected f-numbers, which makes it easy to change the field of view. However, this flexibility comes at the price of requiring calibration for each lens, depending on focal length and vignetting characteristics.

The PSF depends on the lens design and the f-number. Most lenses are chromatically corrected. This is not needed in this hyperspectral camera design, but since this is a new requirement, a lens specifically designed for this camera has yet to be developed. As a compromise, a tilt/shift lens is used that can compensate for the change in focus as a function of wavelength to a degree that exceeds the performance of broadband achromatic lenses. Low f-number lenses, e.g., lenses at F/1.4, will inevitably show vignetting, which must be considered in the calibration process. Retroreflection effects in the microlenses of the focal plane array can be observed but will to a large extent be remedied by future improvements in the optical filter.

The spectral transmission properties of the filter change with angle of incidence, causing shifts compared with normal incidence and some curvature with respect to image rows. This effect is more pronounced at the edges of the image. The spectral properties also depend, to some degree, on the f-number. Furthermore, vignetting causes a wavelength shift due to shadowing of certain angles of incidence. All these effects have been accurately modeled. A 2-D spectral calibration has been performed using laser diodes, with secondary wavelength calibration using a spectrometer.

Radiometric calibration was performed based on an integrating sphere with a NIST traceable QTH lamp. The radiometric level was determined using a well-calibrated monitoring sensor.

Finally, noise properties of the sensor system are discussed, and SNRs estimated. From the model, it is possible to obtain parametric performance variations based on the properties of key components.

Coregistration errors should also be considered in the analyses.8 It is difficult to give a general estimate of this error since it depends on both the relative motion between the sensor and the target and on the specific sensor system realization. For a situation with translational motion only, the registration error will be very small. The error increases with the degrees of freedom in the motion, and nonlinearities in the motion, requiring more sophisticated transformations.

The spatial resolution can be measured by using resolution targets. The combination of spectral and spatial resolution forms a measure of the information collection capability of the sensor. Efforts to summarize this capability in a spectral quality equation by adding the number of spectral channels to the measures of spatial resolution and SNR have been reported.9 The result is, however, highly context-dependent and further studies are needed in order to be able to prioritize different aspects of hyperspectral imaging depending on the task being considered. A detailed sensor system specification facilitates the possibility to form figures of merit relevant for the specific application.

References

1. 

I. G. E. Renhorn et al., “High spatial resolution hyperspectral camera based on a linear variable filter,” Opt. Eng., 55 (11), 114105 (2016). https://doi.org/10.1117/1.OE.55.11.114105 Google Scholar

2. 

J. Ahlberg et al., “Three-dimensional hyperspectral imaging technique,” Proc. SPIE, 10198 1019805 (2017). https://doi.org/10.1117/12.2262456 Google Scholar

3. 

S. Golowich et al., "Three-dimensional radiative transfer for hyperspectral imaging classification and detection," in IEEE Int. Symp. on Technologies for Homeland Security (2018). https://doi.org/10.1109/ths.2018.8574127 Google Scholar

4. 

I. G. E. Renhorn and G. D. Boreman, “Developing a generalized BRDF model from experimental data,” Opt. Express, 26 (13), 17099 –17114 (2018). https://doi.org/10.1364/OE.26.017099 OPEXFF 1094-4087 Google Scholar

5. 

C. Borel et al., "Adjoint radiosity based algorithms for retrieving target reflectances in urban area shadows," in Proc. 6th EARSeL Imaging Spectroscopy SIG Workshop (2009). Google Scholar

6. 

J. G. Robertson, “Detector sampling of optical/IR spectra: how many pixels per FWHM?,” Publ. Astron. Soc. Aust., 34 e035 (2017). https://doi.org/10.1017/pasa.2017.29 PASAFO 1323-3580 Google Scholar

7. 

T. Goossens et al., “Vignetted-aperture correction for spectral cameras with integrated thin-film Fabry–Perot filters,” Appl. Opt., 58 (7), 1789 –1799 (2019). https://doi.org/10.1364/AO.58.001789 Google Scholar

8. 

H. E. Torkildsen and T. Skauli, “Full characterization of spatial coregistration errors and spatial resolution in spectral imagers,” Opt. Lett., 43 (16), 3814 –3817 (2018). https://doi.org/10.1364/OL.43.003814 Google Scholar

9. 

J. Kerekes and S. Hsu, “Spectral quality metrics for VNIR and SWIR hyperspectral imagery,” Proc. SPIE, 5425 549 –557 (2004). https://doi.org/10.1117/12.542192 PSISDG 0277-786X Google Scholar

Biography

Ingmar G. E. Renhorn is with Renhorn IR Consultant AB, Sweden, and is a cofounder of Glana Sensors AB. He was a staff scientist at the Swedish Defence Research Agency (FOI), Linköping, Sweden, from 1981 to 2014. In 1994, he was appointed director of research at the Department of IR Systems at FOI. He has been working as a consultant from 2014 to the present. His research interests include hyperspectral imaging systems, optical turbulence and aerosol profiling, signature measurement, and sensor systems with applications in reconnaissance, infrared search and track, target acquisition, and optical warning. He is an SPIE Fellow and a member of OSA.

Linnéa Axelsson works at Swedish Defence Research Agency (FOI), Sweden. She has been a research engineer at FOI since 2012, working with EO/IR sensors and signatures.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Ingmar G. E. Renhorn and Linnéa Axelsson "High spatial resolution hyperspectral camera based on exponentially variable filter," Optical Engineering 58(10), 103106 (29 October 2019). https://doi.org/10.1117/1.OE.58.10.103106
Received: 16 July 2019; Accepted: 14 October 2019; Published: 29 October 2019
KEYWORDS: Sensors, optical filters, cameras, vignetting, calibration, spatial resolution, staring arrays
