In seekers that never resolve targets spatially, it may be adequate to calibrate only with sources that have known aperture irradiance. In modern missile interceptors, however, the target becomes spatially resolved at close ranges, and the seeker's ability to accurately measure the radiance at different positions in the scene is also important. Thus, it is necessary to calibrate the seekers with extended sources of known radiance. The aperture irradiance is given by the radiance integrated over the angular extent of the target in the scene; thus, radiance calibration combined with accurate spatial presentation of the targets produces accurate irradiances. The accuracy of the scene radiance is also important in generating synthetic imagery for testing seeker conceptual designs and seeker algorithms, and for hardware-in-the-loop testing with imaging projection systems. The routine procedure at the Air Force Research Laboratory Munitions Directorate (AFRL/MNGG) is to model and project the detailed spatial and radiometric content of the scenes; hence, accurate depiction of the radiance in the scene is important. AFRL/MNGG calibrates the complete projection system (synthetic image generator and scene projector) with extended sources of known radiance, not unresolved sources of known irradiance. This paper demonstrates that accurate radiance calibrations and accurate spatial rendering do provide accurate aperture irradiances in the projection systems. In recent tests conducted by AFRL/MNGG, the projection system was calibrated in terms of radiance, and the aperture irradiances were determined both as observed in the synthetic images that drove the projection system and in the images of the projection system measured by the unit under test. The aperture irradiances were compared with the known truth data, and the errors were determined. This paper presents the results of analyzing the errors associated with the observed aperture irradiances.
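As a worked illustration of the radiance-to-irradiance relationship above, the following sketch (names and values are illustrative, assuming a uniform target of small angular extent) computes aperture irradiance as radiance times subtended solid angle:

```python
import numpy as np

def aperture_irradiance(radiance, solid_angle_sr):
    """Aperture irradiance [W/cm^2] from target radiance [W/cm^2/sr]
    integrated over the target's angular extent [sr].

    For a target of uniform radiance L subtending solid angle Omega,
    E = L * Omega (small-angle approximation).
    """
    return radiance * solid_angle_sr

# Example: a 1 m^2 target of radiance 1e-3 W/cm^2/sr at 10 km range
# subtends Omega = A / R^2 = 1 / (1e4)**2 = 1e-8 sr.
E = aperture_irradiance(1e-3, 1e-8)   # -> 1e-11 W/cm^2
print(f"Aperture irradiance: {E:.3e} W/cm^2")
```

For a spatially resolved scene, the same relation is applied as a sum over the pixels covering the target, which is why an accurate radiance calibration plus accurate spatial rendering yields an accurate aperture irradiance.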
Spatial distortion effects in infrared scene projectors, and methods to correct them, have been studied and reported in several recent papers. Such effects may be important when high angular fidelity is required of a projection test. The modeling and processing methods previously studied, though effective, have not been well suited for real-time implementation. However, the “spatial calibration” must be achieved in real-time for certain testing requirements. In this paper we describe recent efforts to formalize and implement real-time spatial calibration in a scene projector test. We describe the effect of the scene generation software, “distortion compensation”, the projector, the sensor, and sensor processing algorithms on the transfer of spatial quantities through the projection system. These effects establish requirements for spatial calibration. The paper describes the hardware and software recently developed at KHILS to achieve real-time spatial calibration of a projection system. The technique extends previous efforts in its consideration of implementation requirements, and also in its explicit treatment of the spatial effects introduced by each of the distinct components of the overall system, as mentioned above.
One proven technique for nonuniformity correction (NUC) of a resistor array infrared scene projector requires careful measurement of the output-versus-input response for every emitter in a large array. In previous papers, we have discussed methods and results for accomplishing the projector NUC. Two difficulties that may limit the NUC results are residual nonuniformity in the calibration sensor, and nonlinearity in the calibration sensor's response to scene radiance. These effects introduce errors in the measurement of the projector elements' output, which lead to residual nonuniformity. In this paper we describe a recent effort to mitigate both of these problems using a procedure that combines sensor nonuniformity correction and sensor calibration, detector by detector, so that these problems do not contaminate the projector NUC. By measuring a set of blackbody flood-field images at a dozen or so different temperatures, the individual detector output-versus-input radiance responses can be measured. Similar to the projector NUC, we use a curve-fitting routine to model the response of each detector. Using this set of response curves, a post-processing algorithm is used to correct and calibrate the images measured by the sensor. We have used this approach to reduce several sensor error sources by a factor of 10 to 100. The resulting processing is used to correct and calibrate all of the sensor images used to perform the projector NUC, as one step in the projector NUC. The procedure appears to be useful for any application where sensor nonuniformity or response nonlinearities are significant.
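A minimal sketch of the per-detector measurement-and-correction step, assuming flood-field frames at known blackbody radiances and a low-order polynomial response model (the paper's actual curve-fitting routine may differ; all names are illustrative):

```python
import numpy as np

def fit_detector_responses(flood_frames, radiances, order=2):
    """Fit a calibrated-radiance-vs-counts polynomial per detector.

    flood_frames : (n_levels, rows, cols) flood-field images measured
                   at a dozen or so blackbody settings.
    radiances    : (n_levels,) known in-band radiance at each setting.
    Returns coeffs : (rows, cols, order+1), highest power first.
    """
    n, rows, cols = flood_frames.shape
    counts = flood_frames.reshape(n, rows * cols)
    coeffs = np.empty((rows * cols, order + 1))
    for k in range(rows * cols):        # detector-by-detector fit
        coeffs[k] = np.polyfit(counts[:, k], radiances, order)
    return coeffs.reshape(rows, cols, order + 1)

def correct_image(image, coeffs):
    """Apply per-detector response curves: raw counts -> radiance."""
    out = np.zeros_like(image, dtype=float)
    for p in range(coeffs.shape[-1]):   # Horner-style evaluation
        out = out * image + coeffs[..., p]
    return out
```

Applying `correct_image` to every sensor frame simultaneously removes residual nonuniformity and linearizes the response, so neither contaminates the projector NUC.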
Infrared projection systems based on resistor arrays typically produce radiometric outputs with wavelengths that range from less than 3 microns to more than 12 microns. This makes it possible to test infrared sensors with spectral responsivity anywhere in this range. Two resistor-array projectors optically folded together can stimulate the two bands of a 2-color sensor. If the wavebands of the sensor are separated well enough, it is possible to fold the projected images together with a dichroic beam combiner (perhaps also using spectral filters in front of each resistor array) so that each resistor array independently stimulates one band of the sensor. If the wavebands are independently stimulated, it is simple to perform radiometric calibrations of both projector wavebands. In some sensors, the wavebands are strongly overlapping, and driving one of the resistor arrays stimulates both bands of the unit-under-test (UUT). This “coupling” of the two bands causes errors in the radiance levels measured by the sensor if the projector bands are calibrated one at a time. If the coupling between the bands is known, it is possible to preprocess the driving images to effectively decouple the bands. This requires a transformation that reads both driving images (one in each of the two bands) and judiciously adjusts both projectors to give the desired radiance in both bands. With this transformation included, the projection system acts as if the bands were decoupled - varying one input radiance at a time only produces a change in the corresponding band of the sensor. This paper describes techniques that have been developed to perform radiometric calibrations of spectrally coupled, 2-color projector/sensor systems. Also presented in the paper are results of tests performed to demonstrate the performance of the calibration techniques. Possible hardware and algorithms for performing the transformation in real-time are also presented.
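A minimal sketch of the decoupling transformation, under the simplifying assumption that the coupling is linear and spatially uniform, so a single 2 x 2 matrix relates the two drive radiances to the two sensed radiances (coupling values illustrative):

```python
import numpy as np

# Assumed linear coupling: sensed = C @ driven, where the off-diagonal
# terms are the fractional crosstalk between bands (illustrative values).
C = np.array([[1.00, 0.15],    # band-1 sensor response to (proj1, proj2)
              [0.08, 1.00]])   # band-2 sensor response to (proj1, proj2)
C_inv = np.linalg.inv(C)

def decouple(img_band1, img_band2):
    """Pre-transform the two driving images so each sensor band sees
    only its intended radiance despite the spectral coupling."""
    stacked = np.stack([img_band1, img_band2])      # (2, rows, cols)
    driven = np.tensordot(C_inv, stacked, axes=1)   # 2x2 per pixel
    return np.clip(driven, 0.0, None)               # radiance >= 0
```

The clip reflects a physical limit of the approach: the projectors cannot emit negative radiance, so the commanded scene must stay within the range the inverse transformation can realize.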
For many types of infrared scene projectors, differences in the outputs of individual elements are one source of error in projecting a desired radiance scene. This is particularly true of resistor-array based infrared projectors. Depending on the sensor and application, the desired response uniformity may prove difficult to achieve. The properties of the sensor used to measure the projector outputs critically affect the procedures that can be used for nonuniformity correction (NUC) of the projector, as well as the final accuracy achievable by the NUC. In this paper we present a description of recent efforts to perform NUC of an infrared projector under “adverse” circumstances. For example, the NUC sensor may have some undesirable properties, including: significant random noise, large residual response nonuniformity, temporal drift in bias or gain response, vibration, and bad pixels. We present a procedure for reliably determining the output versus input response of each individual emitter of a resistor array projector. This NUC procedure has been demonstrated in several projection systems at the Kinetic Kill Vehicle Hardware-In-the-Loop Simulator (KHILS) including those within the KHILS cryogenic chamber. The NUC procedure has proven to be generally robust to various sensor artifacts.
Infrared detectors operating in two or more wavebands can be used to obtain emissivity-area, temperature, and related parameters. While the cameras themselves may not collect data in the two bands simultaneously in space or time, the algorithms used to calculate such parameters rely on spatial and temporal alignment of the true optical data in the two bands. When such systems are tested in a hardware-in-the-loop (HWIL) environment, this requirement for alignment is in turn imposed on the projection systems used for testing. As has been discussed in previous presentations to this forum, optical distortion and misalignment can lead to significant band-to-band and band-to-truth simulation errors. This paper will address the potential impact of techniques to remove these errors on typical two-color estimation algorithms, as well as improvements obtained using distortion removal techniques applied to HWIL data collected at the Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) facility.
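To illustrate why the alignment requirement matters, consider a common two-color estimator that recovers temperature from the ratio of graybody radiances in the two bands; any band-to-band misregistration mixes radiances from different scene points into the ratio. A sketch under narrowband, graybody assumptions (wavelengths and bounds illustrative):

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, T):
    """Blackbody spectral radiance [W/m^2/sr/m]."""
    x = H * C / (wavelength_m * KB * T)
    return 2 * H * C**2 / wavelength_m**5 / np.expm1(x)

def ratio_temperature(L1, L2, lam1=4.0e-6, lam2=9.0e-6):
    """Temperature from the two-band radiance ratio. For a graybody,
    the emissivity-area factor cancels in the ratio, leaving a
    monotonic function of temperature only (narrowband approximation)."""
    target = L1 / L2
    f = lambda T: planck(lam1, T) / planck(lam2, T) - target
    return brentq(f, 150.0, 3000.0)    # bracket the root in kelvin

# A 600 K graybody: the common emissivity-area factor (0.5) cancels.
T_hat = ratio_temperature(0.5 * planck(4.0e-6, 600.0),
                          0.5 * planck(9.0e-6, 600.0))
print(f"{T_hat:.1f} K")   # -> 600.0 K
```

If L1 and L2 come from misregistered pixels, the ratio (and hence the estimated temperature) is biased, which is the error mechanism the distortion-removal techniques address.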
The effects of distortion in the complex optical system of an IR scene projector have motivated the development of methods for spatial calibration of scene projectors. A typical method utilizes the projection of a set of test images, with careful measurement of the location of points in the image. Given the projected and measured positions, a parametric model is used to describe the spatial “distortion” of the projection system. This distortion model can then be used for a variety of purposes, including pre-processing the images to be projected so that the distortion of the projection system is pre-compensated and thereby negated. This application and specific method have been demonstrated, and can compensate for a variety of distortion and alignment effects in the projector/sensor configuration. Personnel at the Kinetic Kill Vehicle Hardware-in-the-loop Simulator (KHILS) facility have demonstrated compensation and co-alignment of 2-color projection systems with sub-pixel precision using this technique. This paper describes an analysis of a situation in which pre-compensated images are translated (either mechanically or optically) to simulate motion of a target object or adjust alignment of the sensor and projector. The effect of physically translating images that had been pre-compensated for a different projector/sensor alignment was analyzed. We describe the results of a study of the translation and distortion effects, and characterize the expected performance of a testing procedure that requires translation of the pre-compensated images.
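A sketch of the parametric-model step, assuming a second-order two-dimensional polynomial distortion fit by least squares to corresponding projected/measured point pairs (the actual model form used at KHILS may differ):

```python
import numpy as np

def fit_distortion(xy_projected, xy_measured):
    """Fit measured = P(projected) with 2nd-order 2-D polynomials.

    xy_projected, xy_measured : (n_points, 2) corresponding positions.
    Returns (6, 2) coefficients over the basis [1, x, y, x^2, x*y, y^2].
    """
    x, y = xy_projected[:, 0], xy_projected[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y])
    coeffs, *_ = np.linalg.lstsq(A, xy_measured, rcond=None)
    return coeffs

def apply_model(coeffs, xy):
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y])
    return A @ coeffs

# Pre-compensation inverts the model: fit the reverse mapping
# (measured -> projected) and apply it to image coordinates before
# projection, so the optical distortion is negated.
```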
An unexpected effect was observed in a data set recently measured at the Kinetic Kill Vehicle Hardware-in-the-loop Simulator (KHILS) facility. A KHILS projector was driven to illuminate a contiguous block of emitters, with all other emitters turned off. This scene was measured with a two-color IR sensor. A sequence of 100 images was recorded, and certain statistics were computed from the image sequence. After measuring and analyzing these images, a “border” was observed with a particularly large standard deviation around the bright rectangular region. The pixels on the border of the region were much noisier than either inside or outside of the bright region. Although several explanations were possible, the most likely seemed to be a small vibration of either the sensor or projector. The sensor, for example, uses a mechanical cryo-cooler, which produces a vibration that can be felt by hand. Further analyses revealed an erratic motion of the position of objects in the image with an amplitude of a few tenths of the detector pitch. This small motion is sufficient to produce large fluctuations in the image pixel values in regions that have a large radiance gradient, such as the border of the bright region. The results suggest that the standard deviation of a “block image” sequence is easy to compute and will show the characteristic effect in the presence of image motion as small as a fraction of the detector pitch.
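A sketch of the diagnostic described, computing the per-pixel temporal standard deviation of a simulated “block image” sequence with sub-pixel jitter (values illustrative; requires SciPy):

```python
import numpy as np
from scipy.ndimage import shift

def jitter_map(frames):
    """Per-pixel temporal standard deviation of an image sequence.
    For a static scene with small image motion, sigma is roughly
    |spatial gradient| times the motion amplitude, so the border of a
    bright block lights up while uniform regions stay quiet."""
    return np.std(frames, axis=0)

# Simulate a 'block image' sequence with ~0.2-pixel random jitter.
rng = np.random.default_rng(0)
block = np.zeros((64, 64))
block[24:40, 24:40] = 1000.0
frames = [shift(block, rng.normal(0.0, 0.2, size=2), order=1)
          for _ in range(100)]
sigma = jitter_map(frames)   # high values trace the block's border
```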
In some of its infrared projection systems, the Kinetic Kill Vehicle Hardware-In-the-Loop Simulator (KHILS) facility uses two 512 x 512 Wideband Infrared Scene Projector (WISP) resistor arrays to stimulate two different camera wavebands at the same time. The images from the two arrays are combined with a dichroic beam combiner, allowing the two camera bands to be independently stimulated. In early tests it was observed that the projector bands were not completely independent. When one array was projecting, the projected pattern could be seen in the opposite camera band. This effect is caused by spectral “crosstalk” in the camera/projector system. The purpose of this study was to build a mathematical model of the crosstalk, validate the model with measurements of a 2-color projection system, and then use the model as a tool to determine the spectral characteristics of filters that would reduce the crosstalk. Measurements of the crosstalk were made in the KHILS 2-color projector with two different 2-color cameras. The KHILS Quantum Well Infrared Photodetector (QWIP) Mid-Wave (MW)/Long-Wave (LW) camera and the Army Research Laboratory HgCdTe (HCT) MW/LW camera were used in the tests. The model was used to analyze the measurements, thus validating the model at the same time. The model was then used to describe conceptual designs of new 2-color projection configurations, enabling a prediction of crosstalk in the system, and selection of filters that would eliminate the crosstalk.
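A sketch of the kind of crosstalk model described, assuming each coupling term is an in-band overlap integral of the spectral radiance reaching the camera from a projector with the camera band's spectral response (the spectra are inputs; names illustrative):

```python
import numpy as np

def band_coupling(wl, proj_spectra, cam_responses):
    """Coupling matrix M[i, j]: response of camera band i to projector j.

    wl            : (n,) wavelength grid [um]
    proj_spectra  : two arrays, spectral radiance reaching the camera
                    from each projector (emitter x filter x combiner)
    cam_responses : two arrays, relative spectral response per band
    Off-diagonal terms of the normalized matrix are the crosstalk.
    """
    M = np.array([[np.trapz(cam * proj, wl) for proj in proj_spectra]
                  for cam in cam_responses])
    return M / M.diagonal()[:, None]   # normalize each band to itself

# Multiplying a candidate filter spectrum into proj_spectra lets the
# model predict how much the off-diagonal (crosstalk) terms shrink.
```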
As discussed in a previous paper to this forum, optical components such as collimators that are part of many infrared projection systems can lead to significant distortions in the sensed position of projected objects versus their true position. The previous paper discussed the removal of these distortions in a single waveband through a polynomial correction process. This correction was applied during post-processing of the data from the infrared camera-under-test. This paper extends the correction technique to two-color infrared projection. The extension of the technique allows the distortions in the individual bands to be corrected, as well as providing for alignment of the two color channels at the aperture of the camera-under-test. The co-alignment of the two color channels is obtained through the application of the distortion removal function to the object position data prior to object projection.
The Honeywell resistor arrays produce radiance outputs, which are observed to have a strong non-linear dependence on the voltage out of the digital-to-analog-converters (DACs). In order for the projection system to run in a radiometrically calibrated mode, the radiances in the image generator must be transformed with exactly the inverse of the resistor array response function before they are sent to the DACs. Representing the image values out of the image generator and the values into the DACs with quantized, digital values introduces errors in the radiance out of the resistor array. Given the functional form of the emitter array response and the number of bits used to represent the image values, these errors in the radiometric output due to the quantization effects can be calculated. This paper describes the calculations and presents results for WISP, MSSP, and the new extended range and standard range BRITE II arrays.
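A sketch of the quantization-error calculation, assuming for illustration a power-law emitter response (L proportional to V^4) and an n-bit representation of the drive values (the actual WISP, MSSP, and BRITE II response functions and bit depths differ):

```python
import numpy as np

def quantization_radiance_error(n_bits=16, gamma=4.0, n_test=10001):
    """Worst-case fractional radiance error from quantizing the DAC input.

    Model: radiance L(V) = (V / V_max)**gamma, with the image generator
    applying the inverse response L -> V = L**(1/gamma) before an
    n_bits quantizer. Returns max |L_quantized - L| / L over the range.
    """
    L = np.linspace(1e-4, 1.0, n_test)            # commanded radiance
    V = L ** (1.0 / gamma)                        # inverse response
    Vq = np.round(V * (2**n_bits - 1)) / (2**n_bits - 1)
    Lq = Vq ** gamma                              # realized radiance
    return np.max(np.abs(Lq - L) / L)

for bits in (12, 14, 16):
    print(bits, f"{quantization_radiance_error(n_bits=bits):.2e}")
```

Because the fractional error scales as gamma times the fractional voltage error, the relative radiance error is largest at low commanded radiances, where the inverse-response voltage changes slowly per code.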
This paper describes a simulation and analysis of a sensor viewing a 'pixelized' scene projector like the KHILS' Wideband Infrared Scene Projector (WISP). The main objective of this effort is to understand and quantify the effects of different scene projector configurations on the performance of several sensor signal processing algorithms. We present simulation results that quantify the performance of two signal processing algorithms used to estimate the sub-pixel position and irradiance of a point source. The algorithms are characterized for different signal-to-noise ratios, different projector configurations, and two different methods for preparing images that drive the projector. We describe the simulation in detail, numerous results obtained by processing simulated images, algorithms and projector properties, and present conclusions.
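One example of the class of algorithms characterized is a first-moment (centroid) estimator of sub-pixel position and summed-counts irradiance; a sketch (illustrative, not necessarily the algorithms the paper studied):

```python
import numpy as np

def point_source_estimate(image, bg=0.0):
    """Estimate sub-pixel position and relative irradiance of a point
    source via first moments of background-subtracted counts."""
    img = np.clip(image - bg, 0.0, None)
    total = img.sum()                      # proportional to irradiance
    rows, cols = np.indices(img.shape)
    row_c = (rows * img).sum() / total     # sub-pixel centroid
    col_c = (cols * img).sum() / total
    return row_c, col_c, total
```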
The Kinetic Kill Vehicle Hardware-In-the-Loop Simulator, located at Eglin AFB, has developed the capability to perform broadband 2-color testing of guided missile seekers in both ambient and cryogenic environments. The 2-color capability is provided by optically combining two 512 X 512 resistor arrays and projecting through all-reflective optical systems. This capability has raised the following questions: “How would a resistor array, designed to work at ambient conditions, perform when operated in a cryogenic environment?” and “How would a resistor array that was non-uniformity corrected (NUC) at ambient conditions perform when the NUC is applied to the array in a cryogenic environment?” The authors will attempt to address these questions by performing several measurements on a Wideband Infrared Scene Projector (WISP) Phase III resistor array in both ambient and cryogenic conditions. The WISP array performance will be defined in terms of temporal response, spatial non-uniformity, radiometric and thermal resolution, and radiometric and thermal transfer function.
Infrared projection systems commonly use a collimating optical system to make images of a projection device appear far away from the infrared camera observing the projector. These “collimators” produce distortions in the image seen by the camera. For many applications the distortions are negligible, and the major problem is simply shifting, rotating, and adjusting the magnification so that the projector image is aligned with the camera. In a recent test performed in the Kinetic Kill Vehicle Hardware-in-the-Loop Simulator facility, it was necessary to correct for distortions as small as 1/10th the size of the camera pixels across the field of view of the camera. This paper describes measurements and analyses performed to determine the optical distortions, and methods used to correct them.
A challenging problem associated with performing hardware- in-the-loop tests of imaging infrared seekers is projecting images that are spatially realistic. The problem is complicated by the fact that the targets may be small and unresolved at acquisition and grow to fill the field of view before intercept. In previous work, mathematical and computer models of the process of observing a pixelized projector with a camera have been developed, metrics of the spatial realism of the projector have been proposed, and model predictions examined.
The KHILS Vacuum Cold Chamber (KVACC) was developed to provide the capability of performing hardware-in-the-loop testing of infrared seekers requiring scenes involving cold backgrounds. Being able to project cold backgrounds enables the projector to simulate high-altitude exoatmospheric engagements. Previous tests with the KVACC projection system have used only one resistive-array projection device. In order to realistically stimulate a 2-color seeker, it is necessary to project in two independently controlled IR bands. Missile interceptors commonly use two or more colors; thus, a 2-color projection capability has been developed for the KVACC system. The 2-color projection capability is being accomplished by optically combining two Phase 3 WISP arrays with a dichroic beam combiner. Both WISP arrays are cooled to user-selected temperatures ranging from ambient temperature to below 150 K. In order to test the projection system, a special-purpose camera has also been developed. The camera is designed to operate inside the vacuum chamber. It has a cooled, all-reflective broadband optical system to enable the measurement of low radiance levels in the 3 - 12 micrometer spectrum. Camera upgrades later this year will allow measurements in two independent wavebands. Both the camera and the projector will be described in this paper.
Phase 3 WISP arrays and BRITE arrays are currently being used extensively in many projection systems in many different facilities. These arrays have not been annealed at the factory, and previous tests with the arrays have revealed instabilities in the radiometric output when the arrays are driven at higher voltages. In some applications, the instabilities can be avoided by operating the arrays at lower voltages. In many KHILS applications, it is desirable to drive the arrays with the highest possible voltages to simulate hot missile targets. In one KHILS application (the KHILS VAcuum Cold Chamber, KVACC), the arrays are cooled to near cryogenic temperatures and then driven to high voltages. At lower substrate temperatures, the characteristic responses of the emitters change. Thus, it is important that the response and the stability of the radiometric output of the arrays be well understood for various substrate temperatures, and that the arrays either be annealed or operated below the voltage where the emitters begin to anneal. KHILS has investigated annealing procedures in the past, but there was concern that the annealing procedures themselves -- driving the arrays at high voltages for long times -- would damage the arrays. In order to understand the performance of the arrays better, and to reduce risks associated with driving the arrays at high voltages and operating the arrays at low substrate temperatures, a systematic measurement program was initiated. The radiometric output of new Phase 3 WISP arrays was accurately measured as a function of voltage and time. Arrays designated for testing were driven to the higher voltages and the radiometric output was measured for as long as two hours. Curves indicative of the annealing were observed, and it was determined that the maximum stable output without annealing was about 500 K (MWIR apparent temperature). Blocks of emitters were annealed and tested again. It was determined that stable output of as much as 680 K could be obtained with annealed emitters. KHILS personnel worked with Honeywell Technology Center (HTC) to establish annealing procedures that could be done by HTC in the future. Conclusions to date are that once the emitters are sufficiently annealed, their output does not change further with time, except for some small transient effects that will be discussed in the paper.
In the past year, Honeywell has developed a 512 X 512 snapshot scene projector containing pixels with very high radiance efficiency. The array can operate in both snapshot and raster mode. The array pixels have near black body characteristics, high radiance outputs, broad band performance, and high speed. IR measurements and performance of these pixels will be described. In addition, a vacuum probe station that makes it possible to select the best die for packaging and delivery based on wafer level radiance screening, has been developed and is in operation. This system, as well as other improvements, will be described. Finally, a review of the status of the present projectors and plans for future arrays is included.
The KHILS Vacuum Cold Chamber (KVACC) provides the capability of testing IR seekers with scenes involving a “cold” background, more closely simulating a high altitude/exoatmospheric engagement. During the past year, a gaseous helium refrigeration system has been installed to simplify the logistics of cooling the chamber. An antechamber has also been installed to serve as a chamber for the sensor under test. A WISP array was installed in the Source Chamber. A thermal control system was developed by connecting the array to a cold surface by way of a thermal choke, then actively controlling the temperature with heating elements. This made it possible to operate the array at user-selected, stable substrate temperatures ranging from ambient temperature to below 150 K. This capability makes it possible to select the infrared background level at which the array operates, and to operate with background levels that are adequate for testing the high altitude/exoatmospheric engagements. WISP arrays were designed for room temperature operation, but predicted performance at reduced temperatures appears acceptable. Tests were performed with a Phase I prototype WISP array inside the KVACC Source Chamber. Data on this array's radiometric response at various substrate temperatures are presented. It is demonstrated that the arrays can be operated at substrate temperatures as low as 145 K. Currently two Phase 3 WISP arrays and a dichroic beam combiner are being installed in the Source Chamber for 2-color testing.
KEYWORDS: Cameras, Projection systems, Calibration, Black bodies, Quantum well infrared photodetectors, Long wavelength infrared, Infrared radiation, Mid-IR, Temperature metrology, Imaging systems
The Wideband Infrared Scene Projector (WISP) has been undergoing development for the Kinetic-Kill Vehicle Hardware-in-the-Loop Simulator facility at Eglin AFB, Florida. In order to perform realistic tests of an infrared seeker, the radiometric output of the WISP system must produce the same response in the seeker as the real scene. In order to ensure this radiometric realism, calibration procedures must be established and followed. This paper describes calibration procedures that have been used in recent tests. The procedures require knowledge of the camera spectral response in the seeker under test. The camera is set up to operate over the desired range of observable radiances. The camera is then nonuniformity corrected (NUCed) and calibrated with an extended blackbody. The camera drift rates are characterized, and as necessary, the camera is reNUCed and recalibrated. The camera is then set up to observe the WISP system, and calibration measurements are made of the camera/WISP system.
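A sketch of the two-point NUC-and-calibration step with an extended blackbody, deriving per-detector gain and offset from flood fields at two known radiance levels (names illustrative; the actual procedure may use more levels):

```python
import numpy as np

def two_point_nuc(frame_lo, frame_hi, L_lo, L_hi):
    """Per-detector gain/offset from flood-field frames measured at two
    known blackbody radiances, mapping raw counts to radiance."""
    gain = (L_hi - L_lo) / (frame_hi - frame_lo)
    offset = L_lo - gain * frame_lo
    return gain, offset

def calibrate(image, gain, offset):
    return gain * image + offset   # counts -> in-band radiance
```

Re-running this fit after a drift interval gives the drift-rate characterization mentioned above; when the gains or offsets move beyond tolerance, the camera is reNUCed and recalibrated.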
This paper discusses the implementation and evaluation of several different algorithms for image superresolution (SR). Such processing is of interest in many imaging situations where resolution is limited by range, wavelength, aperture size, detector size, or other physical or practical constraints. A relevant example is the application of improved resolution to passive millimeter wave imaging sensors for munitions systems. In this paper, we refer to superresolution as processing which recovers spatial frequency components of a measured image that are completely suppressed by the image formation process. We demonstrate performance of several iterative algorithms, and discuss several aspects of the implementation and evaluation of SR processing.
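As one example of the iterative algorithms in this class, a Richardson-Lucy sketch is shown below (illustrative; not necessarily among the specific algorithms the paper evaluates):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(measured, psf, n_iter=50):
    """Iterative restoration; with a band-limited PSF, the positivity
    implicit in the multiplicative update can regenerate spatial
    frequencies suppressed by the image formation process."""
    psf = psf / psf.sum()
    psf_T = psf[::-1, ::-1]
    est = np.full_like(measured, measured.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)
        est *= fftconvolve(ratio, psf_T, mode="same")
    return est
```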
The Wideband Infrared Scene Projector (WISP) has been undergoing development for the AF Research Laboratory Kinetic Kill Vehicle Hardware-in-the-loop Simulator facility (KHILS) at Eglin AFB, FL. Numerous characterization measurements defining array dynamic range, spectral output, temporal response and nonuniformity have been performed and reported on in the past. This paper addresses the measurements and analyses performed to characterize the radiometric, spatial, and temporal noise errors induced by the array on a unit under test (UUT). An Amber camera was used as the UUT. The Amber camera spectral, spatial and radiometric response characteristics were measured. The camera spatial and temporal noises were measured by observing an extended blackbody. Similar measurements were then made on the WISP/UUT system by projecting uniform scenes. The WISP spatial and radiometric responses and the WISP-induced spatial and temporal noise were determined from the measurements. Although the measurements are unique to the UUT adopted, the WISP contribution to the system noise-equivalent temperature difference (NEDT) was determined. The spatial noise measurements provided data for validating a spatial noise model described in a companion paper. The measurements and models are useful for analyzing future measurements and predicting the impact of WISP on various test articles.
This paper presents mathematical models and measurements of the spatial noise a camera observes as it views a projection system with nonuniform emitter responses. The models account for the effects of the projector and camera spatial resolutions and of the alignment of the emitters with respect to the camera detectors. The models attempt to provide a better understanding of the spatial effects in a projection system and provide mathematical models for analyzing measurements and designing future hardware-in-the-loop tests. In previous work, one of the authors presented a model of the spatial, spectral, and temporal effects in a pixelized projector. In this paper, the previous model is simplified omitting the temporal effects (the scenes are assumed static). The model is then modified to describe random variations (noise) in the responses from one emitter to the next. This paper presents two different methods of modeling these effects. The first involves evaluating the spatial model directly. The second method involves performing a first order error propagation analysis on the spatial model and neglecting alignment effects. Measurements were performed to validate the models. The measurements are described in detail in a companion paper. In this paper, the spatial noise measurements are compared with model results. It was found that alignment effects were negligible, and the resulting predictions of the simplest model were in good agreement with the measured spatial noise.
In a series of measurements made to characterize the performance of a Wideband Infrared Scene Projector (WISP) system, timing artifacts were observed in one set of tests in which the projector update was synchronized with the camera readout. The projector was driven with images that varied from frame to frame, and the measured images were examined to determine if they varied from frame to frame in a corresponding manner. It was found that regardless of the relative time delay between the projector update and sensor readout, each output image was a result of two input images. By analyzing the timing characteristics of the camera integration scheme and the WISP update scheme it was possible to understand effects in the measured images and simulate images with the same effects. This paper describes the measurements and the analyses. Although the effects were due to the unique camera integration and readout scheme, the effects could show up when testing other sensors. Thus also presented in this paper are techniques for testing with resistive array projectors, so that the timing artifacts observed with various kinds of cameras are minimized or eliminated.
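A sketch of the observed effect, assuming the projector updates partway through the camera integration window so that each measured frame is a weighted blend of two consecutive driving frames (weights illustrative):

```python
def measured_frame(frame_k, frame_k1, update_frac):
    """Camera output when the projector updates a fraction update_frac
    of the way through the integration window: the first part of the
    integration sees frame k, the remainder sees frame k+1."""
    return update_frac * frame_k + (1.0 - update_frac) * frame_k1
```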
A challenging problem associated with performing hardware- in-the-loop tests of imaging infrared seekers is projecting images that are spatially realistic. The problem is complicated by the fact that the targets may be small and unresolved at acquisition and grow to fill the field of view during the final guidance updates. Although characteristics of the projection system are usually thought of as determining the spatial realism, the imagery used to drive the projector is also important. For a pixelized projector, the driving imagery must be sampled at a rate determined by the sample spacing of the pixels in the projector. If the scenes contain important information that is small compared to the projector pixel spacing (that is, if they have important information at high spatial frequencies), then information may be lost in the sampling process if the images are not adequately bandlimited. This bandlimiting can be accomplished by prefiltering the scenes. At acquisition, targets are usually small; thus, prefiltering is necessary to preserve information about the target. Without such prefiltering, for example, infinitesimally small targets would never be seen unless they just happened to be at the exact location where the scene is sampled for a projector pixel. This paper reports the results of a study of various filters that might be used for prefiltering synthetic imagery generated to drive projectors in the KHILS facility. Projector and seeker characteristics typical of the KHILS facility were adopted for the study. Since the radiance produced by projectors is always positive, filters that can produce negative values were not considered. Figures of merit were defined based on the sensor-measured quantities such as radiant intensity, centroid, and spot size. The performance of prefilters of various shapes and sizes and for typical projector and seeker characteristics will be reported.
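A sketch of the prefilter-then-sample step, using a Gaussian as one example of a non-negative prefilter (the paper studies several filter shapes and sizes; names illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prepare_drive_image(scene, proj_pitch_scene_samples, sigma_frac=0.5):
    """Band-limit a high-resolution scene with a non-negative prefilter,
    then sample at the projector pixel pitch. Without the prefilter, a
    target smaller than the pitch can fall between sample points and
    vanish from the projected scene."""
    sigma = sigma_frac * proj_pitch_scene_samples
    filtered = gaussian_filter(scene, sigma)   # non-negative kernel
    step = int(proj_pitch_scene_samples)
    return filtered[::step, ::step]
```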
This paper presents an analysis of spatial blurring and sampling effects for a sensor viewing a pixelized scene projector. It addresses the ability of a projector to simulate an arbitrary continuous radiance scene using a field of discrete elements. The spatial fidelity of the projector as seen by an imaging sensor is shown to depend critically on the width of the sensor MTF or spatial response function, and the angular spacing between projector pixels. Quantitative results are presented based on a simulation that compares the output of a sensor viewing a reference scene to the output of the sensor viewing a projector display of the reference scene. Dependence on the blur of the sensor and projector, the scene content, and alignment both of features in the scene and sensor samples with the projector pixel locations are addressed. We attempt to determine the projector characteristics required to perform hardware-in-the-loop testing with adequate spatial realism to evaluate seeker functions like autonomous detection, measuring radiant intensities and angular positions or unresolved objects, or performing autonomous recognition and aimpoint selection for resolved objects.
The reliability of computer models of both signatures and effects introduced as the signatures are measured is of critical importance in the development of seekers and other devices which use EO sensors. Typically the models consist of many independent modules or programs, each of which models many different physical processes and requires many different types of input data. Due to the models' complexity, their reliability is commonly not well established, and the domains within which the models are valid are commonly not understood. Efforts at establishing the reliability of the models frequently only address a very limited number of characteristics of the outputs and seldom characterize the reliability of the input parameters. Figures-of-merit usually address specific characteristics of certain outputs and often do not attempt to evaluate the performance of the model in the complete system. This paper discusses the problem of establishing the reliability of computer models of passive imaging sensors and the signatures they observe. General question about the process of comparing measurements with model results to determine how well the model works are examined, and methods for establishing reliability are presented. The physical models and types of input data required to describe the signatures and sensors are enumerated. Quantities used to describe the outputs of the different physical models are identified, and different types of figures-of-merit are identified to evaluate how well the computer model outputs compare with true/measured values. The Irma passive model is used as an example and several examples of validation completed for this code are presented.
Hardware-in-the-loop (HWIL) simulation combines functional hardware with digital models. This technique has proven useful for test and evaluation of guided missile seekers. In a nominal configuration, the seeker is stimulated by synthetic image data. Seeker outputs are passed to a simulation control computer that simulates guidance, navigation, control, and airframe response of the missile. The seeker can be stimulated either by a projector or by direct signal injection (DSI). Despite recent advancements in scene projection technology, there are practical limits to the scenes produced by a scene projector. Thus, the test method of choice is often DSI. This paper discusses DSI techniques for HWIL. In this mode, the sensor hardware is not used; instead, scene signature data is computed, sensor measurement effects are simulated, and the resulting imagery is provided directly to the seeker signal processor. The computed images include sensor effects such as blurring, sampling, detector response characteristics, and noise. This paper discusses DSI methods for HWIL, with specific applications at the Air Force Kinetic Kill Vehicle Hardware-in-the-loop Simulator facility.
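A sketch of a DSI-style sensor-effects pipeline of the kind described, applying blur, sampling, responsivity, noise, and quantization to an ideal radiance scene (all parameters illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inject_sensor_effects(scene, blur_sigma=2.0, step=4,
                          gain=1.0, read_noise=5.0, rng=None):
    """Turn an ideal radiance scene into seeker-ready digital imagery:
    optical blur -> detector sampling -> responsivity -> shot/read
    noise -> quantization. The output feeds the seeker signal processor
    directly, bypassing the physical focal plane."""
    if rng is None:
        rng = np.random.default_rng()
    img = gaussian_filter(scene, blur_sigma)        # optics MTF
    img = img[::step, ::step] * gain                # detector sampling
    img = rng.poisson(np.maximum(img, 0)) \
        + rng.normal(0.0, read_noise, img.shape)    # shot + read noise
    return np.clip(np.round(img), 0, 2**14 - 1)     # 14-bit ADC (assumed)
```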
The degree to which hardware-in-the-loop tests can be used to replace more expensive flight tests is dependent on how well the tests resemble real flight tests. One of the most challenging problems associated with making realistic hardware-in-the-loop tests is the projection of realistic imagery to the seeker. Since a seeker is limited in its ability to “see” a real scene, projection systems do not have to perfectly replicate real scenes. They only have to produce scenes which appear the same as the real scenes when measured with spatial, spectral, and temporal resolutions that are at least as poor as those of the seekers to be tested. Unfortunately, this means that in order to determine the realism of a given test or class of tests, it is necessary to include in the analysis characteristics of the seekers as well as characteristics of both the real scenes and the projected scenes. For many reasons, the conventional Fourier transform techniques are not adequate for performing these analyses. In this paper, a formalism is given for analyzing spatial, spectral, and temporal effects in a hardware-in-the-loop system involving a pixelized projector and a passive imaging sensor. The fundamental equations are presented describing the measurement of either a real scene or a pixelized projector with a passive imaging sensor. The equations are kept in the space, wavelength, and time domains to avoid the unnecessary restrictions that are encountered when transforming to the Fourier domain. An example is given of an application of the formalism to evaluate the effects of projector pixel spacing and blur effects.
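A representative form of the fundamental measurement equation (notation illustrative, not the paper's exact symbols): the output of detector $d$ over integration period $p$ is the scene radiance weighted by the sensor's spatial, spectral, and temporal response functions,

```latex
s_{d,p} \;=\; \int_{t}\int_{\lambda}\int_{\Omega}
  L(\theta,\phi,\lambda,t)\,
  R_d(\theta,\phi)\, R(\lambda)\, g_p(t)\;
  d\Omega\, d\lambda\, dt
```

where $R_d$ is the detector's spatial response, $R(\lambda)$ the spectral response, and $g_p(t)$ the integration window. For a pixelized projector, $L$ is replaced by a sum over emitters of each emitter's output times the projector pixel's spatial, spectral, and temporal profile, which is what makes the space/wavelength/time-domain treatment more natural than a Fourier-domain one.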
The Navy's Coastal System Station (CSS) at Panama City, Florida has been investigating the use of multispectral, intensified cameras for standoff minefield detection. In support of CSS, Nichols Research Corporation's Shalimar, Florida office has developed a 'minefield image synthesis tool' (MIST), which is capable of simulating UV to near-IR images of minefields. The MIST software is divided into two major modules, an image generator and an intensified camera model. The image generator (IG) software performs 3D graphics rendering of objects in the scene to produce 2D images as an imaging sensor would see them. The IG models diffuse reflection from sunshine, skyshine, and earthshine. Path transmittances and radiances are accounted for. The sensor spectral band is a user input. Other quantities, including reflectances and illumination sources, are input spectrally, making it possible to generate images for different spectral bands, such as those being investigated by CSS. Sensor effects including intensifier/detector response, noise, and analog-to-digital conversion are modeled in the intensified camera model (ICM) software. This paper describes the MIST software and tests that have been performed to validate the software.
The Irma synthetic signature model was one of the first high resolution synthetic infrared (IR) target and background signature models to be developed for tactical air-to-surface weapon scenarios. Originally developed in 1980 by the Armament Directorate of the Air Force Wright Laboratory (WL/MN), the Irma model was used exclusively to generate IR scenes for smart weapons research and development. In 1988, a number of significant upgrades to Irma were initiated including the addition of a laser channel. This two channel version, Irma 3.0, was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model which supported correlated frame-to-frame imagery. This and other improvements were released in Irma 2.2. Recently, Irma 3.2, a passive IR/millimeter wave (MMW) code, was completed. Currently, upgrades are underway to include an active MMW channel. Designated Irma 4.0, this code will serve as a cornerstone of sensor fusion research in the laboratory from 6.1 concept development to 6.3 technology demonstration programs for precision guided munitions. Several significant milestones have been reached in this development process and are demonstrated. The Irma 4.0 software design has been developed and interim results are available. Irma is being developed to facilitate multi-sensor smart weapons research and development. It is currently in distribution to over 80 agencies within the U.S. Air Force, U.S. Army, U.S. Navy, ARPA, NASA, Department of Transportation, academia, and industry.
This paper addresses the process of measuring the output of individual elements of a pixelized scene projector. The in-band scene projector is a key component of a sensor/seeker test facility such as the Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) at Eglin AFB, Florida. Analyses are presented which quantify errors associated with measuring the radiant intensity of individual pixels on a scene projector. The errors are broken down into sampling errors, truncation errors, and random measurement noise. The magnitude of each error source is determined as a function of parameters of the projector and sensor such as the element spacings, and blur. Guidelines for using this information to accurately and efficiently perform nonuniformity correction of a scene projector are presented.
KEYWORDS: Sensors, Projection systems, Signal to noise ratio, Interference (communication), Signal detection, Sensor performance, Error analysis, Signal processing, Staring arrays, Analytical research
This paper examines the relative significance of and dependencies between different noise sources which affect a sensor viewing a scene projector. An analysis is presented which compares the effect of various signal-dependent and signal-independent sensor noises, as well as the effect of projector nonuniformity (NU) on the sensor output. A key result of this analysis is a quantitative means to assess the importance of projector NU on sensor performance for different scene levels and operating conditions. It provides an analytical means to address questions such as: How and how much does projector NU influence the sensor response? At what level does the projector NU become a limiting factor in testing sensor performance? What is the penalty incurred in a particular test if a specified projector NU is not achieved? What effort should be expended to reduce projector NU in relation to other errors for a particular application? Discussion, results, and conclusions for a specific application are presented in addition to the analyses.
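A sketch of the kind of comparison described, assuming a simple budget in which signal-dependent shot noise, signal-independent read noise, and projector NU (a fixed-pattern term proportional to signal) add in quadrature (all values illustrative):

```python
import numpy as np

def noise_budget(signal_e, read_noise_e=50.0, proj_nu_frac=0.005):
    """Return each noise term [electrons] and the NU share of the total
    variance at a given signal level [electrons]. Because the NU term
    scales with signal while shot noise scales with its square root,
    projector NU dominates at high scene levels."""
    shot = np.sqrt(signal_e)
    nu = proj_nu_frac * signal_e
    total = np.sqrt(shot**2 + read_noise_e**2 + nu**2)
    return shot, nu, total, (nu / total) ** 2

for s in (1e3, 1e4, 1e5, 1e6):
    shot, nu, total, frac = noise_budget(s)
    print(f"signal {s:8.0f} e-: NU share of variance {frac:5.1%}")
```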
Generating a high degree of uniformity across the FOV in infrared scene simulators, over a wide dynamic range, is necessary to avoid introducing unintentional structure into the projected image. One challenge for calibration is measuring the radiance outputs of each of the 256,000 individual pixels to the required accuracy at several radiance levels, within a reasonable time, with available instrumentation. Issues affecting measurement accuracy include the aperture, focal length, blur circle, and IFOV characteristics of the non-uniformity calibration (NUC) sensor; geometric and diffraction blur characteristics of the collimator optics (which vary with field position); NUC sensor noise and stability (temporal and spatial); emitter pixel geometry and temperature profile; and the relationship between the spectral characteristics of the NUC sensor and the source. Analyses are presented which determine the limitations on calibration accuracy based on predicted and measured performance of the WISP projector and the NUC sensor components. Some NUC sensor accuracy data, needed to support the determination of the overall process parameters, was collected in special NUC sensor tests and is presented herein. A combination of NUC process parameters is developed which achieves optimum accuracy in performing the NUC calibration, and which is expected to achieve the necessary calibrated uniformity performance of 1% for WISP.
The fundamental quantity used to describe radiative transfer is the radiance or specific intensity. For unpolarized descriptions of surface scattering, the radiance is used in conjunction with the bidirectional reflectance distribution function (BRDF) defined by Nicodemus. Unfortunately, the BRDF does not describe polarization effects. In recent years, polarized descriptions of surface scattering have been developed, but the relationship between the BRDF and the polarized descriptions of surface scattering have not been published. The unpolarized description of surface scattering, which involves radiances and BRDFs, is extended to properly describe polarization effects. The concepts of radiance, irradiance, intensity, and radiant power are redefined as vector quantities. The intensity vectors are defined as conventional Stokes vectors and as the modified Stokes vectors adopted by Ishimaru, Ulaby, and others. The BRDF and directional reflectance matrices are defined and expressed in terms of the amplitude scattering matrix elements and in terms of the intensity scattering or phase matrix. The relationships between the unpolarized and polarized BRDFs and directional reflectances are discussed.
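In this polarized formalism, the scalar BRDF relation generalizes to a matrix equation (notation illustrative): the reflected Stokes radiance vector is the incident Stokes radiance vector weighted by a 4 x 4 BRDF matrix over the incident hemisphere,

```latex
\mathbf{L}_r(\theta_r,\phi_r) \;=\;
\int_{2\pi} \mathbf{F}(\theta_i,\phi_i;\theta_r,\phi_r)\,
\mathbf{L}_i(\theta_i,\phi_i)\,\cos\theta_i\, d\Omega_i
```

where $\mathbf{L}_i$ and $\mathbf{L}_r$ are four-component Stokes radiance vectors and $\mathbf{F}$ is the BRDF matrix; zeroing the polarized components recovers the scalar BRDF relation of Nicodemus.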
The field of radiometry describes the fundamental radiative processes that are commonly involved in the operation of a passive imaging sensor. Scattering from surfaces is described in radiometry by the bidirectional reflectance distribution function (BRDF) defined by Nicodemus. As it is commonly defined, radiometry does not describe polarization effects in the radiative transfer processes. Vector scattering theories have attempted to describe polarized surface scattering with a 2 X 2 BRDF matrix. Polarimetry suggests that a 4 X 4 Mueller-like matrix is required to describe polarized surface scattering. In this paper, radiometric terms are redefined as polarized, vector quantities in a manner consistent with polarimetry. A full 4 X 4 BRDF matrix is derived from the scattering matrix. (Actually a 3 X 3 BRDF matrix with one row and column of complex values is adopted to simplify the equations and to facilitate relating the BRDF matrix to both the scattering cross section and the 2 X 2 BRDF matrix adopted in vector scattering theories.) A directional reflectance matrix and directional emittance vector are defined and their relationship is given. It is observed that the polarization character of surface reflectances and emittances are commonly not measured completely, and it is recommended that measurement programs be initiated to measure the full polarization character of common materials.
Simulation methods offer a time- and cost-effective approach to the evaluation and testing of guided missiles. These include hardware-in-the-loop as well as all-digital simulations which provide information about how a particular existing or proposed missile system might perform in hypothetical situations which may not be practically duplicated in reality. This paper describes an all-digital simulation developed using available components. The paper describes the functional flow of the simulation, and identifies the information which is passed between the independent modules. Several applications are described, and results such as intercept miss distances, line-of-sight pointing error statistics, and sensitivity to certain system parameters are demonstrated.
Hardware-in-the-loop (HWIL) testing can be used as an efficient and effective means for analyzing the performance of guided missile systems. Due to the limits of current technologies, components of the simulation are limited in their capability to simulate real-world conditions for certain test articles. One component which is critical in an HWIL system for strategic guided missiles is the scene projection or delivery device. To stimulate imaging IR sensors, this scene projector (SP) typically consists of a pixelized in-band source which can be modulated both spatially and temporally to simulate the radiance scene which would be observed during an actual engagement. The SP is driven by a scene generator which provides scene radiance information to the SP under control of a simulation computer, which determines the field-of-view (FOV) composition based on a simulated engagement. In using such a system, a primary concern is that the SP is able to create a scene which produces the proper response in the observing sensor. Another effect which bears examination is the SP's projection method, such as scanning an in-band source to cover the projection FOV. The detailed interaction between the modulated source and the timing of the sensor's detection, integration, and readout processes may cause unrealistic or unexpected sensor behavior. In order to assess the compatibility of a specific sensor viewing a specific SP, a detailed simulation has been developed by Nichols Research Corporation under the direction of the Guided Interceptor Technology Branch (WL/MNSI) of the USAF Wright Laboratory Armament Directorate. This simulation was designed primarily to address issues related to scene projector usage in the Kinetic Kill Vehicle Hardware in the Loop Simulator (KHILS) facility at Eglin AFB, Florida. The simulation allows the user to define: the spatial response of the sensor; the spatial properties of the SP (i.e., the radiance distribution arising from a commanded impulse); the illumination timing of the SP, such as scan format, persistence, etc.; and the integration and readout timing of the sensor. Given sampled values of these response functions, and sampled values of the desired radiance scene, the SP simulation computes the detector outputs in the form of a sensed image. This output image can help to assess the suitability of using the modeled SP for testing the modeled sensor by illustrating potential mismatches. It also provides a means to predict the performance to be expected from this module of the HWIL simulation for a particular test scenario. This paper derives equations which express the sensor output as a function of the input scene, the spatial and temporal response functions of the sensor and the SP, and the spectral response functions of the sensor and SP. Assumptions which affect the implementation and the generality of application are stated and discussed. Results and conclusions are presented for a specific application which illustrate the utility of the simulation.