Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603279
Object detection in hyperspectral imagery benefits from the large amount of spectral information available. Effective use of this information is crucial for a detection algorithm to achieve high accuracy under challenging conditions. In this paper, we establish subspace representations for 3D objects and backgrounds to improve discriminability for 3D detection invariant to unknown illumination and atmospheric conditions. Residual variance information is used to generate background and mixed residual statistics that improve the separation of target and background. A new detection algorithm that uses these statistics in conjunction with a likelihood ratio test is proposed for the subpixel detection of complex 3D objects in cluttered backgrounds. Other existing algorithms, e.g., the generalized likelihood ratio test (GLRT), can be derived from this algorithm by introducing the appropriate assumptions. The new algorithm is evaluated on a number of images simulated using DIRSIG and compared with other detection algorithms. The experimental results demonstrate accurate performance on these data sets.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.601834
In this paper we present a nonlinear version of the well-known anomaly detection method referred to as the RX-algorithm. Extending this algorithm to a feature space associated with the original input space via a nonlinear mapping function yields a nonlinear version of the RX-algorithm. This nonlinear RX-algorithm, referred to as the kernel RX-algorithm, is intractable in its direct form, mainly because of the high dimensionality of the feature space produced by the nonlinear mapping. However, we show that the kernel RX-algorithm can easily be implemented by kernelizing it in terms of kernel functions that implicitly compute dot products in the nonlinear feature space. Improved performance of the kernel RX-algorithm over the conventional RX-algorithm is demonstrated on hyperspectral imagery containing military targets.
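A minimal sketch of the conventional RX statistic that the kernel RX-algorithm generalizes may make the starting point concrete (the toy Gaussian data, global mean/covariance, and small regularization term are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def rx_scores(cube):
    """Conventional RX anomaly scores for an (N, B) array of pixel spectra.

    Each score is the Mahalanobis distance of a pixel from the global
    background mean; large scores flag spectral anomalies.
    """
    mu = cube.mean(axis=0)
    cov = np.cov(cube, rowvar=False)
    # Small diagonal loading keeps the inverse well conditioned.
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cube.shape[1]))
    diff = cube - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Toy example: Gaussian background with one injected anomaly.
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(500, 10))
background[42] += 8.0  # anomalous pixel
scores = rx_scores(background)
```

Kernelizing replaces the dot products implicit in the mean and covariance with kernel evaluations k(x, y), so an analogous score can be computed in the nonlinear feature space without ever forming it explicitly.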
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.598850
Using multispectral and hyperspectral data for subpixel target detection allows us to exploit both the spectral signature and the overall brightness of the target pixel. The standard methods for such detection (for example, the RX algorithm) assume that an accurate estimate of the mean and the covariance matrix of the area is available; the deviation of the suspect pixel from these estimates is a measure of the degree to which this pixel is a target.
These algorithms are particularly difficult to implement in images which contain multiple areas with different underlying statistical distributions. Such images need local estimates at each pixel to calculate the correct mean and covariance matrix. Even so, edge points between areas will still be incorrectly evaluated both because the pixels themselves are mixtures of different backgrounds and because the local estimate of the mean and covariance matrix will be faulty due to the presence of pixels from both distributions in the surrounding areas.
We have tried several approaches to lowering the false alarm rates in such images. In particular, we have tried raising the threshold for detection in transition areas; in addition, we have used segmentation to better estimate the covariance matrices of areas of similar pixels. In this work, we propose that applying a spatial filter to the results of the spectral filter will greatly improve our results. Our experience in many of these images is that the locations of the false alarms tend to be grouped spatially. By raising the threshold in this way, we can eliminate the false alarms; although there will be some areas in which the targets are noticeably harder to detect, the overall improvement due to the lower false alarm rate, and the consequent lowering of the threshold for target detection, is notable.
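The idea of spatially filtering the spectral-filter output can be sketched as a neighborhood-count rule; the 3x3 window, the cluster-size cutoff, and the toy score map below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def spatially_filtered_detections(score_map, threshold, max_cluster=3):
    """Suppress detections whose 3x3 neighborhood contains many other
    exceedances: spatially clustered exceedances are treated as background
    false alarms, while isolated (subpixel-target-like) exceedances survive."""
    hits = score_map > threshold
    padded = np.pad(hits, 1, mode="constant")
    # Count exceedances in each pixel's 3x3 neighborhood (including itself).
    counts = sum(
        padded[i:i + hits.shape[0], j:j + hits.shape[1]]
        for i in range(3) for j in range(3)
    )
    return hits & (counts <= max_cluster)

score_map = np.zeros((8, 8))
score_map[1, 1] = 5.0              # isolated exceedance: kept
score_map[4:7, 4:7] = 5.0          # 3x3 cluster of exceedances: suppressed
detections = spatially_filtered_detections(score_map, threshold=1.0)
```

In effect this raises the detection threshold wherever exceedances group spatially, which is exactly where the authors report false alarms concentrating.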
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602315
The objective of this paper is the statistical characterization of natural hyperspectral backgrounds using the multivariate t-elliptically contoured distribution. Traditionally, hyperspectral backgrounds have been modeled using multivariate Gaussian distributions; however, it is well known that real data often exhibit "long-tail" behavior that cannot be accounted for by normal distribution models. The proposed multivariate t-distribution model has elliptical equiprobability contours whose center and ellipticity are specified by the mean vector and covariance matrix of the data. The density of the contours, which is reflected in the distribution of the Mahalanobis distance, is controlled by an extra parameter, the number of degrees of freedom. As the number of degrees of freedom increases, the tails decrease and approach those of a normal distribution with the same mean and covariance. In this work we investigate the application of t-elliptically contoured distributions to the characterization of different hyperspectral background data obtained by visually interactive spatial segmentation ("physically" homogeneous classes), by automated clustering algorithms using spectral similarity metrics (spectrally homogeneous classes), and by fitting normal mixture models (statistically homogeneous classes). These investigations are done using hyperspectral data from the AVIRIS sensor.
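The heavier-than-Gaussian Mahalanobis tails that motivate the t-model are easy to reproduce numerically. This sketch (identity covariance and 4 degrees of freedom are arbitrary toy choices) draws multivariate-t samples as Gaussians divided by a chi-square mixing variable and measures how often the squared Mahalanobis distance exceeds the Gaussian 99th percentile:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p, n, dof = 5, 20000, 4.0

# Multivariate t sample: a Gaussian scaled by a chi-square mixing
# variable; the covariance here is the identity for simplicity.
z = rng.standard_normal((n, p))
w = rng.chisquare(dof, size=n) / dof
x = z / np.sqrt(w)[:, None]

d2 = (x ** 2).sum(axis=1)            # squared Mahalanobis distance
cutoff = stats.chi2.ppf(0.99, df=p)  # 1% tail under a Gaussian model
tail_fraction = (d2 > cutoff).mean()
```

With these settings the empirical tail fraction comes out well above the 1% a Gaussian model predicts, which is exactly the "long-tail" mismatch the paper addresses; raising `dof` pulls it back toward 1%.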
B. K. Feather, S. A. Fulkerson, J. H. Jones, R. A. Reed, M. A. Simmons, D. G. Swann, W. E. Taylor, L. S. Bernstein
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.601904
The authors recently developed a hyperspectral image output option for a standardized government code designed to predict missile exhaust plume infrared signatures. Typical predictions cover the 2- to 5-μm wavelength range (2000 to 5000 cm⁻¹) at 5 cm⁻¹ spectral resolution, and as a result the hyperspectral images have several hundred frequency channels. Several hundred hyperspectral plume images are needed to span the full operational envelope of missile altitude, Mach number, and aspect angle. Since the net disk storage space can be as large as 100 GB, a Principal Components Analysis is used to compress the spectral dimension, reducing the volume of data to just a few gigabytes. The principal challenge was to specify a robust default setting for the data compression routine suitable for general users, who are not necessarily specialists in data compression. Specifically, the objective was to provide reasonable compression efficiency for the hyperspectral imagery while retaining sufficient accuracy for infrared scene generation and hardware-in-the-loop test applications over a range of sensor bandpasses and scenarios. In addition, although the end users of the code do not usually access the detailed spectral information contained in these hyperspectral images, this information must nevertheless be of sufficient fidelity that atmospheric transmission losses between the missile plume and the sensor can be reliably computed as a function of range. Several metrics were used to determine how far the plume signature hyperspectral data could be safely compressed while still meeting these end-user requirements.
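The spectral-dimension compression described above amounts to a truncated PCA; a self-contained sketch on toy low-rank data (not plume imagery) shows the store-scores/reconstruct-spectra round trip:

```python
import numpy as np

def pca_compress(cube, k):
    """Compress the spectral dimension of an (N, B) image to k principal
    components; returns the compressed scores and a reconstruction."""
    mu = cube.mean(axis=0)
    centered = cube - mu
    # SVD of the centered data gives the principal directions in vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                       # (k, B) spectral basis
    scores = centered @ basis.T          # (N, k) -- what gets stored
    recon = scores @ basis + mu          # (N, B) -- decompressed spectra
    return scores, recon

rng = np.random.default_rng(0)
# Toy "hyperspectral" data living near a 3-dimensional spectral subspace.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
cube = latent @ mixing + 0.01 * rng.normal(size=(200, 50))
scores, recon = pca_compress(cube, k=3)
err = np.abs(recon - cube).max()
```

Choosing the default `k` is the paper's real problem: too small loses the spectral fidelity needed for atmospheric-transmission calculations, too large forfeits the compression.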
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604895
Longwave Infrared (LWIR) data sets collected from airborne platforms provide opportunities for study of atmospheric and surface features in the emissive spectral regime. The transfer of radiation for LWIR scenes can be formulated in a manner that allows recovery of the surface-leaving radiance (a result of atmospheric compensation). Using a forward radiative transfer model, a number of modifications to the atmospheric component of the scene can be made and applied to the surface-leaving radiance to predict sensor radiance that reflects a desired scenario. One such modification is the inclusion of a layer of effluent, the structure of which can be simulated by a plume model. Additionally, a different set of atmospheric conditions can be modeled and used to replace the conditions present in the scene. The resultant scene radiance field can be used to test algorithms for effluent characterization since the composition of the effluent layer and the intervening atmosphere is known. This approach allows for the embedding of a plume layer containing any combination of effluents from a set of over 400 gas spectra, the dispersion of which can be simulated using various plume models. Examples of simulated plume scenes are given, one of which contains an existing plume which is replicated using known emission information. Comparison of the real and simulated plume brightness temperatures yielded differences on the order of 0.2 K.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605711
In the world of remote sensing, both radar and EO/IR (electro-optical/infrared) sensors carry unique information useful to the imaging community. Radar can image through all types of weather, day or night. EO/IR produces radiance maps and frequently images at much finer resolution than radar. While each of these systems is valuable on its own, the value added by combining the best of both worlds remains largely unexplored. This work begins to explore the challenges of simulating a scene in both a radar tool called Xpatch and an EO/IR tool called DIRSIG (Digital Imaging and Remote Sensing Image Generation). The capabilities and limitations inherent to radar and EO/IR are similar in the image simulation tools, so work done in a simulated environment will carry over to the real-world environment as well. The goal of this effort is to demonstrate an environment in which EO/IR and radar images of common scenes can be simulated. Once demonstrated, this environment would be used to facilitate trade studies of various multi-sensor instrument designs and exploitation algorithm concepts. The synthetic data generated will be compared to existing measured data to demonstrate the validity of the experiment.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602747
For accurate and robust analysis of remotely sensed imagery it is necessary to combine the information from the spectral and spatial domains in a meaningful manner. The two domains are intimately linked: objects in a scene are defined by both their composition and their spatial arrangement, and cannot be accurately described by information from either domain on its own.
To date there have been relatively few methods for combining spectral and spatial information concurrently. Most techniques involve separate processing for extracting spatial and spectral information. In this paper we describe several extensions to traditional morphological operators that treat the spectral and spatial domains concurrently and can be used to extract relationships between them in a meaningful way. This includes the investigation and development of suitable vector-ordering metrics and of machine-learning-based techniques for optimizing the various parameters of the morphological operators, such as the choice of operator, structuring element, and vector-ordering metric. We demonstrate their application to a range of multi- and hyperspectral image analysis problems.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603938
In this paper, we design a decision rule to select optimized neighbor sets for multispectral images. We assume that multispectral images can be modeled by parametric Gaussian random fields. From a class of such models with different neighbor sets, we choose the best representation using Bayesian methods. The chosen model accounts for interactions within each of the spectral bands as well as interactions between different spectral bands in a multispectral image. We evaluate the performance of the neighbor sets for multispectral texture classification.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603960
We use moment invariants for the recognition of regions in hyperspectral images under different illumination conditions. These moment invariants can be computed efficiently from the spectral histograms of the image regions. We propose methods for improving the recognition rate by choosing bands that improve the accuracy of the model underlying the invariants. These bands are also optimized to distinguish different materials. We demonstrate the use of multiple subsets of bands for invariant recognition. Experiments on DIRSIG images are presented to demonstrate the use of these methods.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602995
Efficient use of hyperspectral (HS) sensors that can selectively activate individual, narrow spectral bands requires the development of optimized band-selection strategies adapted to the needs of specific detection and classification problems. By removing superfluous components of the HS data, optimized band selection significantly reduces the computational burden and improves robustness in classification. In this paper, a new method for selecting a subset of HS bands is proposed that is tailored to the problem of recognizing classes of rocks and minerals. Based on analysis of the distribution of the amplitudes of the principal components (PCs) for a given training data set, this method identifies subsets of HS bands that provide the highest spectral contrast. Three criteria are considered in the band-selection process. The first criterion ranks the HS bands according to the minimum distance among their respective PC amplitudes. The second criterion ranks the HS bands according to a variant of the Kullback-Leibler divergence between a uniform distribution and the distribution of the PC amplitudes for each HS band; this criterion assigns a high score to HS bands whose PC-amplitude distribution exhibits either a wide range or a strong similarity to the uniform distribution. The third criterion ranks the HS bands according to the empirical relative variances of the distances between successive amplitudes at each HS band; it ranks highly those HS bands with small inter-amplitude-distance variance and large amplitude range. These band-selection strategies are applied to laboratory HS data of rocks and minerals, yielding a subset of thirteen optimal multispectral bands. The classification performance with the reduced number of bands is compared with that of the 13-band Multispectral Thermal Imager, showing a moderate improvement in classification.
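The second criterion can be illustrated schematically; `kl_to_uniform`, the 16-bin histogram, and the two synthetic "bands" below are toy stand-ins for the paper's PC-amplitude distributions, not its actual procedure:

```python
import numpy as np

def kl_to_uniform(values, bins=16):
    """KL divergence D(p || u) between the empirical histogram of `values`
    and a uniform distribution over the same support; a value near zero
    means the amplitudes spread evenly across their range."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    u = np.full(bins, 1.0 / bins)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / u[mask])))

rng = np.random.default_rng(0)
# Toy PC amplitudes for two bands: band0 spreads its amplitudes evenly,
# band1 piles them into a narrow range (low spectral contrast).
spread_band = rng.uniform(0.0, 1.0, size=1000)
narrow_band = rng.normal(0.5, 0.02, size=1000)
ranking = sorted([("band0", kl_to_uniform(spread_band)),
                  ("band1", kl_to_uniform(narrow_band))],
                 key=lambda t: t[1])
```

Sorting ascending by divergence puts the band whose amplitude distribution most resembles the uniform distribution first, mirroring the criterion's preference for wide, evenly spread PC amplitudes.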
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604519
We present Genie Pro, a new software tool for image analysis produced by the ISIS (Intelligent Search in Images and Signals) group at Los Alamos National Laboratory. Like the earlier GENIE tool produced by the same group, Genie Pro is a general purpose adaptive tool that derives automatic pixel classification algorithms for satellite/aerial imagery, from training input provided by a human expert. Genie Pro is a complete rewrite of our earlier work that incorporates many new ideas and concepts. In particular, the new software integrates spectral information; and spatial cues such as texture, local morphology and large-scale shape information; in a much more sophisticated way. In addition, attention has been paid to how the human expert interacts with the software: Genie Pro facilitates highly efficient training through an interactive and iterative “training dialog”. Finally, the new software runs on both Linux and Windows platforms, increasing its versatility. We give detailed descriptions of the new techniques and ideas in Genie Pro, and summarize the results of a recent evaluation of the software.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604114
One of the great challenges in neural network-based analysis of remotely sensed imagery is to find an adequate pool of training samples for the network without prior knowledge, so that these unsupervised training samples can describe the data. A judicious selection of training data can be tremendously difficult due to the presence of subpixel targets and mixed pixels, particularly when no prior knowledge is available. Surprisingly, these issues have been largely overlooked in the past, where most efforts have focused on exploring network architecture parameters such as the arrangement and number of neurons in the different layers. Very little has been done regarding the selection of a good set of training samples for networks in mixed pixel classification. This paper revisits neural network-based mixed pixel classification from the aspect of training sample generation and demonstrates that the selection of training samples can be more important than the choice of a specific network architecture. Since the training samples must be obtained directly from the data to be processed in an unsupervised fashion, four types of pixels are used to demonstrate this concept: pure, mixed, anomalous, and homogeneous pixels. A pure pixel is a pixel whose spectral signature is completely represented by a single material substance, as opposed to a mixed pixel, whose spectral signature is made up of more than one material substance. A homogeneous pixel is defined as a pixel whose spectral signature remains nearly constant, subject to small variations, within its surroundings. A homogeneous pixel can therefore be considered the opposite of an anomalous pixel, whose signature is spectrally distinct from the signatures of its neighboring pixels. In this paper, various scenarios are designed for experiments to substantiate the impact of using these four types of pixels as training samples for mixed pixel classification.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604748
Many hyperspectral imagery (HSI) mapping methods currently attempt to determine the dimensionality of a dataset and extract discrete endmembers based on linear spectral mixing theory. The problem with this approach is that these datasets are often of such high dimensionality that it is difficult to extract the level of detail inherent in the data. Most such analysis approaches are simply overwhelmed by the complexity of HSI data. This research describes an approach that uses segmentation and iterative analysis of HSI data to reduce the dimensionality to a manageable level. The methodology involves spectral/spatial segmentation to determine initial groups of materials. The segmentation can be done using a variety of methods, including classical supervised or unsupervised classification methods, the Spectral Angle Mapper (SAM), spectral feature-based methods, or standard endmember determination and mapping approaches. The result of the segmentation is a broadly classified image; there may be significant variation within each class. These segments are then used as the starting point for additional n-dimensional analysis. The HSI data are analyzed for each of the classes or segments using a linear mixing approach, endmembers are determined, and distributions and abundances are mapped. The segmentation reduces the original, complex dataset to a series of less complex problems. Combining the segment results into a composite analysis produces a materials map that includes additional detail beyond that achieved using whole-image approaches. A case history utilizing AVIRIS data is presented.
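The per-segment linear-mixing step can be sketched as an ordinary least-squares unmixing; the two endmembers and abundances below are invented for illustration, and a real implementation would typically add nonnegativity and sum-to-one constraints:

```python
import numpy as np

def unmix(pixels, endmembers):
    """Unconstrained least-squares abundances for the linear mixing model
    pixels ~= abundances @ endmembers, solved jointly for all pixels."""
    # lstsq solves endmembers.T @ a = x for each pixel spectrum x.
    sol, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
    return sol.T

# Toy segment with two endmembers over four bands.
endmembers = np.array([[1.0, 0.0, 0.2, 0.1],
                       [0.1, 1.0, 0.0, 0.3]])
true_abund = np.array([[0.7, 0.3],
                       [0.2, 0.8]])
pixels = true_abund @ endmembers   # noiseless mixed pixels
abund = unmix(pixels, endmembers)
```

Restricting the unmixing to one segment at a time is the paper's point: each segment needs only its own few endmembers, so the least-squares problem stays small and well conditioned.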
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603094
The present paper addresses the extraction, processing, and recognition of information from multispectral observations of our surroundings. A new method of dealing with multispectral recognition problems is developed, in which a physical thermodynamic model is used to describe the properties of the object classes in a multispectral image of a scene. According to the model, different groups of objects in the image are canonical populations that are in thermodynamic equilibrium with each other and with their surroundings. Forces that derive from a potential field act between the objects. Various thermodynamic properties of the populations are calculated. The difference between two populations is evaluated by first bringing them to a common temperature and then using the informational difference as a difference measure.
The approach was implemented for a problem of combined formal and spectral classification of trees in a natural environment. The common temperature of two similar populations was varied until the separation between the populations reached a maximal value. A six-fold increase in the separation between the populations was achieved. In the future, we propose to use the Helmholtz free energy function as a quantity which attains a local minimum within each class of objects. An optimal classification scheme is one that minimizes the total free energy of the system.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604075
To detect weak signals on cluttered backgrounds in high dimensional spaces (such as gaseous plumes in hyperspectral imagery) without excessive false alarms requires that the background clutter be effectively characterized. If the clutter is Gaussian, the well-known linear matched filter optimizes the sensitivity to a given plume signal while suppressing the effect of the background clutter. In practice, the background clutter is rarely Gaussian. Here we illustrate non-linear corrections to the matched filter that are optimal for two non-Gaussian clutter models and we report on parametric and nonparametric characterizations of background clutter.
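A minimal version of the linear matched filter that the paper takes as its Gaussian-clutter baseline (the toy clutter, made-up target signature, and small diagonal loading are illustrative assumptions):

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Linear matched filter for an (N, B) set of spectra and a (B,)
    target signature; optimal when the background clutter is Gaussian."""
    mu = cube.mean(axis=0)
    cov = np.cov(cube, rowvar=False)
    w = np.linalg.solve(cov + 1e-6 * np.eye(cube.shape[1]), target - mu)
    # Normalized so a pixel equal to the target signature scores ~1.
    return (cube - mu) @ w / ((target - mu) @ w)

rng = np.random.default_rng(2)
clutter = rng.normal(0.0, 1.0, size=(1000, 8))
target = np.full(8, 3.0)
clutter[10] = 0.8 * target + 0.2 * clutter[10]   # 80%-fill plume pixel
scores = matched_filter_scores(clutter, target)
```

The nonlinear corrections the paper develops keep the same signal model but replace the implicit Gaussian clutter density with heavier-tailed alternatives, which changes how the background term is whitened.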
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603661
Identification of constituent gases in effluent plumes is performed using linear least-squares regression techniques. Airborne thermal hyperspectral imagery is used for this study. Synthetic imagery is employed as the test case for algorithm development; synthetic images are generated by the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The use of synthetic data provides a direct measure of the success of the algorithm through comparison with truth map outputs. In the image test cases, plumes emanating from factory stacks have already been identified using a separate detection algorithm; the gas identification algorithm developed in this work is applied only to pixels determined to contain the plume. Constrained stepwise linear regression is used in this study. Results indicate that the ability of the algorithm to correctly identify plume gases is directly related to the concentration of the gas. Previous concerns that the algorithm is hindered by spectral overlap were eliminated through the use of constraints on the regression.
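The role of the constraints can be imitated with a nonnegativity-constrained least-squares fit against a small synthetic gas library; the Gaussian-shaped "spectra" are invented, and this does not reproduce the paper's stepwise selection step:

```python
import numpy as np
from scipy.optimize import nnls

# Toy gas library: two heavily overlapping absorption features plus one
# spectrally distinct feature, sampled at 60 wavenumber points.
wavenumber = np.linspace(0.0, 1.0, 60)

def band(center, width):
    return np.exp(-((wavenumber - center) / width) ** 2)

library = np.column_stack([band(0.30, 0.05),
                           band(0.33, 0.05),   # overlaps the first gas
                           band(0.70, 0.05)])

# Plume pixel containing gases 0 and 2 only.
observed = 2.0 * library[:, 0] + 0.5 * library[:, 2]

# The nonnegativity constraint keeps the overlapping gas 1 from soaking
# up variance with a spurious coefficient.
coeffs, residual = nnls(library, observed)
```

The recovered coefficients track the true amounts of gases 0 and 2 and leave the absent, overlapping gas at zero, echoing the paper's finding that constraints resolve the spectral-overlap concern.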
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603940
The ability to detect and identify gaseous effluents is a problem that has been pursued with limited success. Detection has been shown to be possible using the Invariant algorithm on synthetic hyperspectral scenes with a strong single-gas release. That, however, is a very specific case and leaves room for further investigation; this study looks at more realistic detection and release scenarios. Our implementation of the Invariant algorithm uses Singular Value Decomposition (SVD) to select basis vectors from a subspace of target gases, in conjunction with a Generalized Likelihood Ratio Test (GLRT), to determine on a pixel-by-pixel basis how "like" the target gas each pixel is. The target gases are modeled in the image radiance space, including atmospheric effects, and target spectra are modeled in both emission and absorption. This study investigates how well weak plumes are detected; it also tests a mixed gas in a strong plume release and discusses a weak multiple-gas release. Synthetic hyperspectral imagery in the longwave infrared (LWIR) region of the electromagnetic spectrum is the predominant data used in this study. The algorithm has been found to be applicable to these detection and identification scenarios.
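The SVD-plus-subspace idea can be sketched as follows; the energy-ratio statistic below is a simplified stand-in for the paper's GLRT, and the random "gas spectra" are toy data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy target-gas library: three gas spectra sampled at 20 bands.
gas_specs = rng.normal(size=(3, 20))

# SVD of the gas library gives an orthonormal basis for the target subspace.
_, _, vt = np.linalg.svd(gas_specs, full_matrices=False)
basis = vt  # (3, 20), rows orthonormal

def subspace_energy_ratio(pixel):
    """Fraction of a pixel's energy captured by the target subspace;
    a GLRT-style detector thresholds a statistic of this general kind."""
    proj = basis.T @ (basis @ pixel)
    return float(proj @ proj) / float(pixel @ pixel)

in_sub = gas_specs.T @ np.array([0.5, 0.2, 0.3])   # mixture of the gases
off_sub = rng.normal(size=20)                       # random background pixel
```

A pixel that is any mixture of the library gases scores near 1, while an arbitrary background spectrum captures only a small fraction of its energy in the subspace; the real GLRT additionally models the background and atmospheric terms.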
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603585
Several commercial and environmental applications require the detection and quantification of gaseous plumes from airborne platforms. Unlike active LIDAR imaging in a DIAL system, the signal received by a passive sensor depends not only on the gas concentration pathlength, but also on the temperature contrast between the gas column and the background. Further complicating the problem, the at-sensor radiance is a non-linear function of the gas concentration and temperature, both inherently unknown. A method is presented to estimate the gas concentration pathlength and temperature from LWIR hyperspectral imagery (HSI) without any assumptions about the gas properties or background radiance. A non-linear model is fit to the data using a Levenberg-Marquardt fitting procedure. The technique requires a priori knowledge only of the gas species present in the pixel of interest, which reduces the complexity of the model. The resulting concentration pathlength and temperature are reported on a per-pixel basis. Results are shown for application to synthetic imagery created with the DIRSIG simulation. Concentration pathlength results are promising for a gas with strong, moderately broad features (Freon) but less so for a gas with weaker, narrow features (NH3). In neither case is the retrieved gas temperature satisfactory. This is further demonstrated through examination of the residual space in which the minimization is performed, where it is shown that no unique minimum is present.
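The non-linear plume model and fitting step can be sketched on a toy problem. The three-layer radiance model below (plume transmission applied to background radiance plus plume self-emission) is a common simplification; the absorption band, temperatures, and pathlength are all invented, and SciPy's trust-region least-squares stands in for the Levenberg-Marquardt procedure.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy model: L = tau * L_bg + (1 - tau) * B(T_gas), tau = exp(-alpha * cl)
wav = np.linspace(8e-6, 12e-6, 40)               # LWIR wavelengths [m]

def planck(T, lam=wav):
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

alpha = np.exp(-((wav - 10e-6) / 0.5e-6) ** 2) * 2e4   # fake absorption band
L_bg = planck(300.0)                              # 300 K background

def model(params):
    cl, Tg = params
    tau = np.exp(-alpha * cl)
    return tau * L_bg + (1 - tau) * planck(Tg)

obs = model([5e-4, 310.0])                        # noiseless "measurement"
fit = least_squares(lambda p: model(p) - obs, x0=[1e-4, 295.0],
                    bounds=([0.0, 250.0], [1e-2, 400.0]), x_scale='jac')
print(fit.x)
```

With noiseless data and a strong band, the two parameters are recoverable here; the paper's point is that with realistic data the temperature dimension of this residual space lacks a unique minimum.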
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603713
The usual first step in the processing of hyperspectral imagery is to remove the effects of the atmosphere. In the thermal region this usually involves accounting only for the upwelling radiance and the atmospheric transmission, since the downwelling contribution is typically negligible. Improper compensation for the atmosphere, however, can cause difficulties in the quantification of gases. Several algorithms exist for atmospheric compensation, including In-scene Atmospheric Compensation (ISAC) and Autonomous Atmospheric Correction (AAC). This study examines which atmospheric compensation technique, if any, has the least negative impact on gas signatures in the LWIR region (8-12 microns), using synthetic hyperspectral imagery created with the DIRSIG simulation. Two metrics were used to this end: spectral angle and feature depth. Because the depth of a gas feature is directly related to the gas's concentration path length and temperature, it proved to be the more telling of the two. Results show that these atmospheric compensation routines have little effect on the strength of the gas features investigated here and, as such, atmospheric compensation may not be necessary.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605272
Gravimetrically prepared aqueous binary solutions permit the generation of target vapors of methanol and ammonia in a portable vapor cell. A passive Fourier transform infrared (FT-IR) spectrometer monitors a short pathlength optical cell using a calibrated extended-blackbody background source. The temperature of the blackbody ranges from 5°C to 50°C in five degree increments. This temperature range simulates the radiance levels most often encountered for ambient temperature backgrounds in open-air field measurements. The solute liquid mole fractions determine the resultant vapor concentrations. The water component attenuates the target vapor concentration from that of the pure solute component depending on the solute liquid mole fraction. This study demonstrates the utility of a portable vapor cell using a series of binary aqueous solutions per target compound over the Beer’s Law range of infrared absorbances. These Beer’s Law infrared absorbances and blackbody radiance levels are within the linearity range of the passive FT-IR spectrometer and are representative of open-air field conditions.
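One simple ideal-solution reading of how the water component attenuates the target vapor concentration is Raoult's law (partial pressure proportional to liquid mole fraction), combined with a Beer's law absorbance. Whether the study uses this exact relation is not stated, and all numeric values below are illustrative.

```python
# Ideal-solution (Raoult's law) vapor estimate plus Beer's law absorbance.
def vapor_partial_pressure(x_solute, p_pure):
    # Partial pressure above the binary solution scales with mole fraction.
    return x_solute * p_pure

def absorbance(eps, conc, pathlength):
    # Beer's law: A = eps * c * l (valid only in the linear range).
    return eps * conc * pathlength

p = vapor_partial_pressure(0.25, 16.9)   # kPa; 16.9 kPa is illustrative
A = absorbance(eps=2.0, conc=p / 16.9 * 0.1, pathlength=0.05)
print(p, A)
```

The study's point is that the resulting absorbances stay within the Beer's law linear range of the FT-IR spectrometer, so calibration remains valid across the solution series.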
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604435
A high-speed passive FTIR imaging spectrometer has been developed and tested in airborne flight tests on both fixed-wing and helicopter platforms. This sensor, known as the Turbo FT, was developed and flown from 2000 to 2005 in conjunction with various organizations. The Turbo FT is a laser-less, rotary, high-speed Fourier transform infrared (FTIR) spectrometer capable of very high scan speeds, spectral resolution to 1 cm⁻¹, and operation in rugged environments. For these tests, the sensor was run at 8 cm⁻¹ resolution and 50-100 scans per second with either a single-element or a 2x8-element LWIR detector. An on-board auto-calibrating blackbody accessory and automated chemical detection software were also developed; these features allow in-flight calibration, facilitate detection of target gas clouds, and report detections to an on-board targeting computer. This paper discusses the system specifications, sensor performance, and field results from various experiments. Current work on development of an 8x8-pixel Turbo FT system is also presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605885
Imaging Fourier transform spectrometers (IFTS) allow for very high spectral resolution hyperspectral imaging while using moderate-size 2D focal plane arrays in a staring mode. This is not the case for slit-scanning dispersive imaging spectrometers, where spectral sampling is tied to the focal plane pixel count along the spectral dimension of the 2D focal plane used in such an instrument. This can become a major issue in the longwave infrared (LWIR), where the operability and yield of highly sensitive arrays (e.g., HgCdTe) of large dimension are generally poor. However, using an IFTS introduces its own unique set of issues and tradeoffs. In this paper we develop simplified equations for describing the sensitivity of an IFTS, including the effects of data windowing. These equations provide useful insights into the optical, focal plane, and operational design trade space that must be considered when examining IFTS concepts aimed at a specific sensitivity and spectral resolution application. The approach is illustrated by computing the LWIR noise-equivalent spectral radiance (NESR) corresponding to the NASA Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) concept, assuming a proven and reasonable noise-equivalent irradiance (NEI) capability for the focal plane.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604137
Projection pursuit (PP) is an interesting concept that has found many applications. It uses a so-called projection index (PI) as a criterion to seek directions that may lead to interesting findings for data analysts. Unlike principal components analysis (PCA), which uses variance as the measure to find directions that maximize data variance, the PI used by PP can find interesting directions characterized by statistics of order higher than the variance. As a result, PCA is generally considered a special case of PP with the PI specified as the variance. Recently, a PP-based approach was developed by Ifarraguerri and Chang for multispectral/hyperspectral image analysis. This paper revisits their approach and investigates its application in endmember generation, where endmembers can be extracted from a sequence of projections generated by PP.
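The PCA-as-special-case point can be demonstrated with a minimal projection-pursuit loop using variance as the PI. The crude random search below stands in for a real PP optimizer, and the data are synthetic; the recovered direction should align with the first principal component.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.2])   # anisotropic cloud
X -= X.mean(axis=0)

def variance_pi(w):
    # Projection index specified as the variance of the projected data.
    return np.var(X @ w)

# Crude random search over unit vectors (a real PP uses gradient ascent).
best_w, best_pi = None, -np.inf
for _ in range(5000):
    w = rng.normal(size=3)
    w /= np.linalg.norm(w)
    pi = variance_pi(w)
    if pi > best_pi:
        best_pi, best_w = pi, w

# Compare with the leading eigenvector of the covariance matrix (PCA).
evals, evecs = np.linalg.eigh(np.cov(X.T))
pc1 = evecs[:, -1]
print(abs(best_w @ pc1))   # near 1 when the directions agree
```

Swapping in a higher-order PI (e.g., skewness or kurtosis of the projection) turns the same loop into genuine projection pursuit.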
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602373
Many endmember extraction algorithms have been developed to find endmembers, which are assumed to be pure signatures in the image data. One of the most widely used is N-FINDR, developed by Winter et al. This algorithm assumes that, in L spectral dimensions, the L-dimensional volume formed by a simplex whose vertices are specified by the purest pixels is always larger than the volume formed by any other combination of pixels. Although the algorithm has been successfully used in various applications, it does not provide a mechanism to determine how many endmembers are needed. In this work, we use the recently developed concept of virtual dimensionality (VD) to determine how many endmembers N-FINDR should generate. Another implementation issue is that N-FINDR starts from a random set of pixels as the initial endmember set, which cannot be selected by users at their discretion. Since the algorithm does not perform an exhaustive search, it is very sensitive to the selection of initial endmembers, which can affect not only the convergence rate but also the final results. To resolve this dilemma, we use an endmember initialization algorithm (EIA) to select an appropriate set of initial endmembers for N-FINDR. Experiments show that, when N-FINDR is initialized with EIA-generated endmembers, the number of replacements during the search process is substantially reduced.
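The volume comparison at the heart of N-FINDR can be sketched directly: in L dimensions, L+1 candidate endmembers form a simplex whose volume is a scaled determinant. The 2-D points below are toy data illustrating that pure (extreme) pixels enclose a larger simplex than interior mixtures.

```python
import math
import numpy as np

def simplex_volume(vertices):
    # vertices: (L+1, L) array of candidate endmembers.
    v0 = vertices[0]
    M = vertices[1:] - v0
    return abs(np.linalg.det(M)) / math.factorial(M.shape[0])

pure = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # extreme points
mixed = np.array([[0.2, 0.2], [0.6, 0.2], [0.2, 0.6]])   # interior mixtures
print(simplex_volume(pure), simplex_volume(mixed))
```

N-FINDR iterates by swapping each current vertex with each pixel and keeping any swap that enlarges this volume, which is why a good initial set (as from an EIA) reduces the number of replacements.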
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602374
The pixel purity index (PPI) algorithm has been widely used in hyperspectral image analysis for endmember extraction because of its popularity and availability in the Research Systems ENVI software. In this paper, we develop a fast PPI (FPPI) algorithm that provides several significant advantages over the original PPI. First, it uses a newly developed concept, virtual dimensionality (VD), to estimate the number of endmembers the algorithm should generate. Second, it uses an endmember initialization algorithm (EIA) to generate an appropriate set of initial endmembers, which significantly reduces the number of runs required by the PPI. Third, it provides a new iterative rule and a stopping rule to terminate the algorithm, a feature not available in the original PPI, which is not an iterative algorithm. Most importantly, unlike the PPI, which requires a visualization tool to manually select a final set of endmembers, the FPPI is completely automatic and unsupervised. Since the original PPI algorithm has never been fully disclosed in the literature due to its proprietary nature, the step-by-step algorithmic implementation of the FPPI presented in this paper is new and may be very beneficial to users who are interested in this algorithm without soliciting help from particular software.
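The core PPI idea (as commonly described, since the original implementation is proprietary) is to project all pixels onto many random unit vectors, or "skewers", and count how often each pixel lands at an extreme. A toy sketch with synthetic pixels:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, bands, n_skewers = 200, 5, 500
pixels = rng.uniform(size=(n_pix, bands))
pixels[0] = 2.0   # plant an obvious extreme point (a "pure" pixel)

counts = np.zeros(n_pix, dtype=int)
for _ in range(n_skewers):
    skewer = rng.normal(size=bands)
    proj = pixels @ skewer
    counts[proj.argmax()] += 1   # extreme at one end of the skewer
    counts[proj.argmin()] += 1   # and at the other

print(counts[0], counts.argmax())
```

Pixels with high counts are candidate endmembers; the FPPI's contribution is to automate the number of skewers, initialization, and the final selection that ENVI's PPI leaves to manual inspection.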
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603433
In previous research, we introduced a family of simplex projection methods for selection of endmembers in hyperspectral images. In this paper, we define a new member of that family, which we call the Stepwise Simplex Projection (SSP) method. This new method adds and eliminates endmembers based on their distances to simplexes defined by previously chosen endmembers. We compare the SSP method to a previously defined simplex projection method (called the Farthest Pixel Selection method) and to some other methods such as the Pixel Purity Index and Maximum Distance methods. To this end, we introduce several summary measures to describe how well a set of endmembers characterizes the image spectra. We also investigate how well the resulting sets of endmembers perform in subpixel target detection. The numerical results are based on AVIRIS hyperspectral imagery. The SSP method proves to be the most consistently well performing among the investigated methods.
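The quantity these simplex-projection selection rules work with is the distance from each pixel to the hull of the previously chosen endmembers. A minimal sketch using an unconstrained affine-hull projection (a full method would also enforce the simplex constraints; the points are toy data):

```python
import numpy as np

def dist_to_affine_hull(pixels, endmembers):
    # Distance from each pixel to the affine hull of the endmembers.
    e0 = endmembers[0]
    B = (endmembers[1:] - e0).T               # basis of the affine hull
    D = pixels - e0
    coef, *_ = np.linalg.lstsq(B, D.T, rcond=None)
    resid = D.T - B @ coef
    return np.linalg.norm(resid, axis=0)

E = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # current endmembers
P = np.array([[0.5, 0.0, 0.0], [0.5, 1.0, 0.0]])   # candidate pixels
print(dist_to_affine_hull(P, E))
```

A farthest-pixel rule would add the second candidate (distance 1) next; a stepwise rule like SSP additionally reconsiders and drops endmembers whose removal barely changes these distances.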
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602925
In this paper we present the utilization of high-spectral-resolution imagery for improving low-spectral-resolution imagery. In our analysis, we assume that an acquisition of high spectral resolution images provides more accurate spectral predictions of low spectral resolution images than a direct acquisition of low spectral resolution images. We illustrate the advantages by focusing on a specific case of images acquired by a hyperspectral (HS) camera and a color (red, green, and blue, or RGB) camera. First, we identify two directions for utilization of HS images: (a) evaluation and calibration of RGB colors acquired from commercial color cameras, and (b) color quality improvement by achieving sub-spectral resolution. Second, we elaborate on the challenges of RGB color calibration using HS information due to non-ideal illumination sources and non-ideal hyperspectral camera characteristics. We describe several adjustment (calibration) approaches to compensate for wavelength and spatial dependencies of real acquisition systems. Finally, we evaluate two color cameras by establishing ground truth RGB values from hyperspectral imagery and by defining pixel-based, correlation-based, and histogram-based error metrics. Our experiments are conducted with three illumination sources (fluorescent light, Oriel Xenon lamp, and incandescent light), one HS Opto-Knowledge Systems camera, and two color (RGB) cameras (Sony and Canon). We show data-driven color calibration to be a method for improving image color quality. The applications of the developed techniques for HS-to-RGB image calibration and sub-spectral-resolution prediction are related to real-time model-based scene classification and scene simulation.
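A data-driven color calibration of the kind described can be sketched as a least-squares fit of a linear map from camera RGB to HS-derived "ground truth" RGB. All values below are synthetic, and a real calibration would additionally handle the wavelength and spatial dependencies noted above.

```python
import numpy as np

rng = np.random.default_rng(4)
true_rgb = rng.uniform(size=(100, 3))       # HS-derived ground-truth colors
M_true = np.array([[0.9, 0.05, 0.0],        # hypothetical camera distortion
                   [0.1, 0.8, 0.05],
                   [0.0, 0.1, 0.95]])
camera_rgb = true_rgb @ M_true.T + 0.01 * rng.normal(size=(100, 3))

# Least-squares 3x3 calibration matrix mapping camera values to truth.
C, *_ = np.linalg.lstsq(camera_rgb, true_rgb, rcond=None)
calibrated = camera_rgb @ C
err = np.abs(calibrated - true_rgb).mean()
print(err)
```

The mean absolute error after calibration drops to roughly the sensor noise level, which is the kind of improvement the pixel-based error metric would measure.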
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603660
A useful technique in hyperspectral data analysis is dimensionality reduction, which replaces the original high dimensional data with low dimensional representations. Usually this is done with linear techniques such as linear mixing or principal components (PCA). While often useful, there is no a priori reason for believing that the data is actually linear.
Lately there has been renewed interest in modeling high dimensional data using nonlinear techniques such as manifold learning (ML). In ML, the data is assumed to lie on a low dimensional, possibly curved surface (or manifold). The goal is to discover this manifold and therefore find the best low dimensional representation of the data.
Recently, researchers at the Naval Research Lab have begun to model hyperspectral data using ML. We continue this work by applying ML techniques to hyperspectral ocean water data. We focus on water since there are underlying physical reasons for believing that the data lies on a certain type of nonlinear manifold. In particular, ocean data is influenced by three factors: the water parameters, the bottom type, and the depth. For fixed water and bottom types, the spectra that arise by varying the depth will lie on a nonlinear, one dimensional manifold (i.e. a curve). Generally, water scenes will contain a number of different water and bottom types, each combination of which leads to a distinct curve. In this way, the scene may be modeled as a union of one dimensional curves. In this paper, we investigate the use of manifold learning techniques to separate the various curves, thus partitioning the scene into homogeneous areas. We also discuss ways in which these techniques may be able to derive various scene characteristics such as bathymetry.
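The depth-driven curve structure can be shown with a toy water-column model (coefficients invented; a common exponential form, not the paper's model). Along a one-dimensional curve, each spectrum's nearest spectral neighbour should be its neighbour in depth, which is the local structure manifold-learning methods exploit.

```python
import numpy as np

wav = np.linspace(0.4, 0.7, 30)
k = 0.3 + 2.0 * (wav - 0.4)          # fake attenuation coefficients
R_bottom = 0.5 * np.ones_like(wav)   # fake sand-like bottom reflectance
R_deep = 0.02 * np.ones_like(wav)    # optically deep water reflectance

def spectrum(depth):
    # Simple exponential mixing of bottom and deep-water reflectance.
    t = np.exp(-2 * k * depth)
    return R_bottom * t + R_deep * (1 - t)

depths = np.linspace(0.5, 10.0, 40)
X = np.array([spectrum(d) for d in depths])

# Check the 1-D curve structure: nearest spectral neighbour is the
# adjacent depth sample.
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
np.fill_diagonal(D, np.inf)
nn = D.argmin(axis=1)
print(all(abs(nn[i] - i) == 1 for i in range(len(depths))))
```

With several bottom or water types, the scene becomes a union of such curves, and separating them is the partitioning problem the paper addresses.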
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603440
Accurate retrieval of wildland fire temperature from remote imagery would be useful in improving prediction of fire propagation and estimates of fire effects such as burn severity and gas and particle production. The feasibility of estimating temperatures for subpixel fires by spectral unmixing has been established by previous work with the AVIRIS sensor. However, this unmixing approach can also converge on temperature solutions that are not physically related to the fraction of flaming combustion in a pixel. Furthermore, previous techniques have treated fire as a blackbody and have modeled the mixed-pixel transmitted radiance as two blackbody sources; this first-order approximation can also affect the temperature retrieval. Knowledge of emissivity and use of a more complex radiance model should improve the accuracy of the temperature estimation. We therefore propose a technique that improves on the previous approach by using the potassium emission feature to pre-determine which pixels actually contain signal from flaming combustion, together with a modified mixed-pixel radiance model. A non-linear, constrained, multi-dimensional optimization procedure that estimates flame emissivity was applied to the model to estimate fire temperature and its areal extent. Results are shown for AVIRIS data sets acquired over Cuiaba, Brazil (1995) and the San Bernardino Mountains (1999).
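The first-order two-blackbody mixed-pixel model that the paper improves on can be sketched as follows. Temperatures, fractions, and the spectral range are illustrative, and the fire and background are treated as unit-emissivity blackbodies, which is exactly the simplification the proposed technique relaxes.

```python
import numpy as np
from scipy.optimize import least_squares

wav = np.linspace(0.4e-6, 2.5e-6, 60)        # AVIRIS-like range [m]

def planck(T):
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 2 * h * c**2 / wav**5 / np.expm1(h * c / (wav * k * T))

def pixel(params):
    # Mixed pixel: subpixel flaming area plus background, both blackbodies.
    frac, T_fire = params
    return frac * planck(T_fire) + (1 - frac) * planck(300.0)

obs = pixel([0.05, 1100.0])                  # 5% of pixel flaming at 1100 K
fit = least_squares(lambda p: pixel(p) - obs, x0=[0.2, 800.0],
                    bounds=([0.0, 500.0], [1.0, 2000.0]), x_scale='jac')
print(fit.x)
```

On noiseless synthetic data the fraction and temperature are both recoverable; with real data the optimization can land on unphysical temperatures, motivating the potassium-emission pre-screening and emissivity estimation described above.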
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605671
A particular challenge in hyperspectral remote sensing of benthic habitats is that the signal exiting the water is a small component of the overall signal received at the satellite or airborne sensor. Therefore, in order to discriminate different ecological areas in benthic habitats, it is important to have a high signal-to-noise ratio (SNR). The SNR can be improved by building better sensors; we believe, however, that SNR improvements are also achievable by means of signal processing, taking advantage of the unique characteristics of hyperspectral sensors. One approach to SNR improvement is based on signal oversampling. Another is Reduced Rank Filtering (RRF), in which the small singular values of the image are discarded and a lower-rank approximation to the original image is reconstructed. This paper compares oversampling filtering (OF) and RRF as SNR enhancement methods in terms of classification accuracy and class separability when used as a pre-processing step in a classification system. Overall results show that OF improves classification accuracy more than RRF, and at much lower computational cost, making it an attractive technique for hyperspectral image processing.
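Reduced Rank Filtering as described can be sketched on synthetic data: form the bands-by-pixels matrix, keep only the leading singular values, and reconstruct. The rank and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
bands, pixels, true_rank = 30, 400, 4
clean = rng.normal(size=(bands, true_rank)) @ rng.normal(size=(true_rank, pixels))
noisy = clean + 0.1 * rng.normal(size=(bands, pixels))

def rrf(X, rank):
    # Discard small singular values; reconstruct a lower-rank image.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

filtered = rrf(noisy, true_rank)
print(np.linalg.norm(noisy - clean), np.linalg.norm(filtered - clean))
```

The truncation suppresses the noise energy lying outside the signal subspace, at the cost of an SVD per image, which is the computational burden the paper weighs against oversampling filtering.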
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602955
For multispectral sensory and geospatial data to be properly integrated, they must be co-registered with known data, which is a difficult and time-consuming process. A persistent problem with new unregistered data is geometric image distortion. This paper deals with distortion due to disproportional transformation. Images can be disproportionally transformed because of a specific angle of data acquisition, sensor and lens distortions, atmospheric effects, and other factors. This research is focused on developing a method to overcome such distortion effects and on providing computational tools to automate a large portion of the process without relying on sensor geometry and models that may not be known. Current methods of image analysis and feature recognition rely heavily on geometric shapes and/or the topological nature of the data contained within the image. In addition to geometric shapes and topological data, features and images can also be compared algebraically. Algebraic structures have been defined with which comparisons can be made between geometric components such as relative angles and lengths. Invariant point-placement and feature-comparison methods are developed here that can overcome the effects of distortion and disproportional scaling. Deriving a method invariant to disproportional scaling from an algebraic invariant is a new approach to solving this problem and represents a new mathematical language for the processing of image data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604156
The United States Navy has recently shifted focus from open-ocean warfare to joint operations in optically complex nearshore regions. Accurately estimating bathymetry and water-column inherent optical properties (IOPs) from passive remotely sensed imagery can be an important facilitator of naval operations. Lee et al. developed a semianalytical model that describes the relationship between shallow-water bottom depth, IOPs, and subsurface and above-surface reflectance. They also developed a nonlinear optimization-based technique that estimates bottom depth and IOPs using only measured spectral remote sensing reflectance as input. While quite effective, the inversion's accuracy can be limited by noisy field data. In this research, the nonlinear optimization-based Lee et al. inversion algorithm was used as a baseline method, and it provided the framework for a proposed hybrid evolutionary/classical optimization approach to hyperspectral data processing. All aspects of the proposed implementation were held constant with those of Lee et al., except that a hybrid evolutionary/classical optimizer (HECO) was substituted for the nonlinear method. HECO required more computer-processing time. In addition, HECO is nondeterministic, and its termination strategy is heuristic. However, the HECO method makes no assumptions regarding the mathematical form of the problem functions. Also, whereas smooth nonlinear optimization is only guaranteed to find a locally optimal solution, HECO has a higher probability of finding a more globally optimal result. While the HECO-acquired results are not provably optimal, we have empirically found that for certain variables, HECO does provide estimates comparable to nonlinear optimization (e.g., bottom albedo at 550 nm).
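The hybrid evolutionary/classical pattern can be sketched on a toy multimodal objective: a global evolutionary search followed by a classical local polish of the best candidate. The Rastrigin-like objective below is a standard test function standing in for the semianalytical inversion residual; nothing here is the paper's actual HECO implementation.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(x):
    # Rastrigin function: many local minima, one global minimum at 0.
    x = np.asarray(x)
    return float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))

bounds = [(-5.12, 5.12)] * 3
coarse = differential_evolution(objective, bounds, seed=6, maxiter=100)
polished = minimize(objective, coarse.x, method='Nelder-Mead')
print(polished.x.round(3), polished.fun)
```

The evolutionary stage makes no smoothness assumptions and explores the whole box, while the derivative-free polish sharpens the final estimate; the trade-off is the extra processing time and nondeterminism the paper notes.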
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605821
Previous studies introduced, examined, and tested a variety of registration-free transforms, specifically the diagonal, whitening/dewhitening, and target CV (covariance) transforms. These transforms temporally evolve spectral object signatures under varying conditions using imagery of regions with similar objects and content distribution from data sets collected at two different times; the transformed object signature is then inserted into the matched filter to search for targets. Spatially registering two areas and/or finding two suitable candidate regions for the transforms is often problematic. This study finds that the average correlation coefficient between the corrected histograms of multi-spectral image cubes collected at two times can assess the similarity of the areas and predict object detection performance. The metric is applied in four distinct situations and tested on three independently collected data sets. In the first, histograms were taken from an airborne longwave infrared sensor that imaged objects in Florida, and the metric was tested on registered images modified by systematically eliminating opposed ends of the image set. The second data set consisted of images of objects in Yellowstone National Park from a visible/near-IR multi-spectral sensor. The comparison was also applied to images of objects placed at Webster Field in Maryland, collected at an oblique angle (10° depression angle). Candidate heterogeneous image areas were compared to each other using the average correlation coefficient and inserted into the statistical transforms. In addition, correlations were computed between corrected histograms based on the normalized difference vegetation index (NDVI). The net signal-to-clutter ratio depends on the average correlation coefficient, with low p-values (p<0.05).
All statistical transforms (diagonal, whitening/dewhitening, target CV) performed comparably across the various backgrounds and scenarios. Objects spectrally distinct from the backgrounds followed the average correlation coefficient more closely than objects whose spectral signatures contained background components. This study is the first to examine the similarity of the corrected histograms and does not exclude other approaches for comparing areas.
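The whitening/dewhitening signature transform can be sketched with synthetic statistics: whiten a signature with scene-1 statistics, then "dewhiten" it with scene-2 statistics to predict its appearance under the new conditions. Whether this matches the exact normalization used in the studies is an assumption; the covariances and signature below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
bands = 6
A = rng.normal(size=(bands, bands))
cov1 = A @ A.T + bands * np.eye(bands)   # scene-1 covariance (SPD)
B = rng.normal(size=(bands, bands))
cov2 = B @ B.T + bands * np.eye(bands)   # scene-2 covariance (SPD)

def sqrtm_spd(C):
    # Matrix square root of a symmetric positive-definite matrix.
    evals, evecs = np.linalg.eigh(C)
    return (evecs * np.sqrt(evals)) @ evecs.T

W = np.linalg.inv(sqrtm_spd(cov1))       # whitening for scene 1
D = sqrtm_spd(cov2)                      # dewhitening into scene 2
sig1 = rng.normal(size=bands)            # signature measured in scene 1
sig2 = D @ W @ sig1                      # evolved signature for scene 2
print(sig2.round(3))
```

The evolved signature is then what gets inserted into the matched filter; the diagonal transform is the same idea using only per-band variances.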
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604231
The spectral radiance measured by an airborne sensor depends on the spectral reflectance of the ground material, the orientation of the material surface, and the atmospheric and illumination conditions. We present a non-linear algorithm to estimate the surface spectral reflectance given the sensor radiance spectrum corresponding to a single pixel. The algorithm uses a low-dimensional subspace model for the reflectance spectra. The solar radiance, sky radiance, and path-scattered radiance depend on the environmental conditions and viewing geometry, and this interdependence is captured by using a coupled subspace model for these spectra. The algorithm uses the Levenberg-Marquardt method to estimate the subspace model parameters, which are used to determine the reflectance spectrum. We have applied the algorithm to a large set of 0.42-1.74 micron sensor radiance spectra simulated for different atmospheric conditions, materials, and surface orientations. We have also examined the utility of the algorithm for reflectance recovery in digital imaging and remote sensing image generation (DIRSIG) scenes that contain 3D objects.
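The per-pixel inversion described above can be illustrated with a deliberately simplified sketch: the reflectance is constrained to a low-dimensional subspace, and the subspace coefficients are recovered with Levenberg-Marquardt via `scipy.optimize.least_squares`. The basis, the per-band gain/offset forward model, and all numbers below are invented for illustration; the paper's coupled atmospheric subspace model is much richer.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_bands, k = 50, 3

# Hypothetical orthonormal basis for reflectance spectra (in practice
# obtained from a spectral library via PCA/SVD; here random vectors).
B, _ = np.linalg.qr(rng.normal(size=(n_bands, k)))

true_c = np.array([0.8, -0.3, 0.1])
true_reflectance = B @ true_c

# Toy forward model: per-band gain and offset standing in for the solar,
# sky, and path radiance terms (assumed known here for simplicity).
gain = rng.uniform(1.2, 2.0, n_bands)
offset = rng.uniform(0.0, 0.1, n_bands)
radiance = gain * true_reflectance + offset + rng.normal(scale=1e-3, size=n_bands)

def residuals(c):
    # Mismatch between modeled and measured sensor radiance.
    return gain * (B @ c) + offset - radiance

# Levenberg-Marquardt solve for the subspace coefficients.
sol = least_squares(residuals, np.zeros(k), method="lm")
est_reflectance = B @ sol.x
```

With a known atmosphere this particular fit is linear; the nonlinear solver matters once the atmospheric terms are estimated jointly, as in the paper.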
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603172
The problem of imagery registration/conflation and change detection requires sophisticated and robust methods to produce better image fusion, target recognition, and tracking. Ideally these methods should be invariant to arbitrary image affine transformations. A new abstract algebraic structural invariant approach with area ratios can be used to identify corresponding features in two images and use them for registration/conflation. Area ratios of specific features do not change when an image is rescaled or skewed by an arbitrary affine transformation. Variations in area ratios can also be used to identify features that have moved and to provide measures of image registration/conflation quality. Under more general transformations, area ratios are not preserved exactly, but in practice they can often still be used effectively. The theory of area ratios is presented, and three examples of registration/conflation and change detection are described.
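The invariance claim is easy to verify numerically. In the sketch below (illustrative points and transform, not taken from the paper), two triangles are formed from four matched feature points; since an affine map scales every signed area by the same factor det(A), the ratio of the two areas is unchanged.

```python
import numpy as np

def tri_area(p, q, r):
    # Signed triangle area via the 2-D cross product.
    return 0.5 * ((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

# Four hypothetical feature points matched in both images.
pts = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0], [6.0, 3.0]])

# An arbitrary affine map x -> A x + t (scale, shear, rotation, shift).
A = np.array([[1.3, 0.4], [-0.2, 0.9]])
t = np.array([10.0, -3.0])
pts2 = pts @ A.T + t

# Every signed area is multiplied by det(A), so ratios of areas cancel it.
ratio1 = tri_area(pts[0], pts[1], pts[2]) / tri_area(pts[0], pts[1], pts[3])
ratio2 = tri_area(pts2[0], pts2[1], pts2[2]) / tri_area(pts2[0], pts2[1], pts2[3])
```

A ratio that changes appreciably between the two images then signals a feature that has moved, which is the basis of the change-detection use described above.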
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603726
Algorithms for the analysis of spectral information operate almost exclusively with real-number arithmetic on real-number entities, i.e. spectra. This is certainly so in the exploitation of imaging spectrometer data. Motivated by a desire to take advantage of the power of complex mathematics for spectral analysis and an attempt to incorporate additional spectral information into traditional exploitation techniques, a method to transform real-number spectra into complex numbers is presented. The essence of the complex spectral analysis method (CSAM) is the population of the imaginary part of a complex number representation of a spectrum; the original, untransformed spectrum forms the real part of the complex number. Here, the imaginary part is assigned the magnitude of the separation between successive points in a spectrum; its sign is equal to the sign of the slope of the line segment connecting the two points. This parameterization is chosen to pack more information on spectral shape into signatures operated on by algorithms modified to process complex data. A spectral library of mineral signatures was converted to complex spectra. A confusion matrix of complex spectral angles was constructed for the transformed library. A confusion matrix of real spectral angles between the original, untransformed library signatures was generated for comparison. The CSAM signatures provide greater spectral separability with a lower number of hits (i.e. number of confusion matrix cells with angle values less than or equal to the threshold) per threshold angle compared to the real spectral angles. CSAM separability also exceeds that of appended spectra, where an extended real spectrum is created by appending the transformed information onto the end of the original, real spectrum rather than creating a complex spectrum.
Numerous spectral parameterizations (and information sources) for building complex spectra are suggested as is the utility of CSAM for hyperspectral information (HSI) analysis and exploitation.
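As a rough sketch of the transformation described above (the exact CSAM parameterization and angle definition may differ from the paper's), the imaginary part below is the signed first difference of the spectrum, and a complex-valued spectral angle is formed from the Hermitian inner product. All spectra here are synthetic Gaussians standing in for mineral signatures.

```python
import numpy as np

def to_complex(s):
    # Imaginary part: magnitude of the separation between successive samples,
    # signed by the slope -- i.e. the signed first difference, padded to keep
    # the original length.
    d = np.diff(s)
    return s.astype(complex) + 1j * np.append(d, d[-1])

def complex_angle(a, b):
    # One reasonable complex generalization of the spectral angle mapper:
    # angle derived from the magnitude of the Hermitian inner product.
    cos = abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

wl = np.linspace(0.4, 2.5, 100)
s1 = np.exp(-((wl - 1.0) ** 2) / 0.1)   # hypothetical signature
s2 = 0.9 * s1                            # scaled copy: same spectral shape
s3 = np.exp(-((wl - 1.6) ** 2) / 0.1)   # shifted feature: different shape

a1, a2, a3 = map(to_complex, (s1, s2, s3))
# Same-shape spectra give a near-zero angle; different shapes separate.
angle_same, angle_diff = complex_angle(a1, a2), complex_angle(a1, a3)
```

Because the transform is linear, a brightness-scaled spectrum maps to a scaled complex vector, preserving the angle-based notion of shape similarity while the derivative term adds shape information.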
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604493
A methodology capable of quantitatively assessing the quality of hyperspectral data has become increasingly desirable as hyperspectral remote sensing technology migrates into operational systems. The quality of spectral data depends on many factors, including collection parameters characterizing the sensor and the scene, and the desired spectral products. Therefore, there is a recognized urgent need to understand the phenomenology associated with the collection parameters and how they relate to the quality of the information extracted from the spectral data for different applications. If such relationships can be established, data collection requirements and tasking strategies can then be formulated for these applications. A spectral quality equation with an excellent least-squares fit was established for object/anomaly detection in an earlier work. This paper describes a spectral quality equation established for material identification. This spectral quality equation relates the collection parameters (i.e. spatial resolution, spectral resolution, signal-to-noise ratio, and scene complexity) to the probability of correct identification (Pi) of materials at a given probability of false alarm (Pfa).
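The functional form of the paper's quality equation is not reproduced here, but the general procedure of fitting coefficients that relate collection parameters to an identification probability can be sketched with ordinary least squares on synthetic data. Every parameter name, range, and coefficient below is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40  # hypothetical number of simulated collects

# Invented collection parameters for each collect:
gsd = rng.uniform(0.5, 4.0, n)       # spatial resolution (m)
nbands = rng.integers(10, 200, n)    # spectral resolution proxy
snr = rng.uniform(20, 200, n)        # signal-to-noise ratio
clutter = rng.uniform(0.1, 1.0, n)   # scene complexity score

# Design matrix: intercept plus log-transformed parameters.
X = np.column_stack([np.ones(n), np.log(gsd), np.log(nbands),
                     np.log(snr), clutter])

# Invented "ground truth" coefficients used only to synthesize Pi scores.
true_b = np.array([0.2, -0.15, 0.1, 0.2, -0.25])
pid = X @ true_b + rng.normal(scale=0.02, size=n)

# Fit the quality-equation coefficients by ordinary least squares.
b, *_ = np.linalg.lstsq(X, pid, rcond=None)
```

In practice the dependent variable would come from identification experiments at a fixed false-alarm rate, and the goodness of fit would be checked before using the equation for tasking.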
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602746
Published approaches to assessing and predicting spectral image utility are generally based on regression methods which fit coefficients to an equation with terms representing spatial scale, spectral fidelity, and signal-to-noise. Such approaches are patterned after the National Imagery Interpretability Rating Scale General Image Quality Equation (NIIRS GIQE) designed for use with remotely-sensed panchromatic imagery. Preliminary testing of these approaches suggests that they will work for some subsets of spectral imagery applications but are not generally applicable to all spectral imaging problems.
We present here an approach that gets at the heart of the general problem: assessing the confidence of an image analyst in performing a specified task with a specific spectral image. While applicable in other areas such as health imaging, our approach to spectral utility assessment is presented in this paper from a remote sensing point of view. Our approach allows trade-offs in tasking and system design across the “spectrum” of imagers including panchromatic, multispectral, hyperspectral, and even ultraspectral.
Our approach is based on a fusion concept called “semantic transformation.” We assume that spectral and spatial information are largely separable with both contributing to the overall utility of the image. The “semantic transformation” combines the spatial and spectral information in a common term (in our case confidence) to give an overall confidence in performing the specified task.
Addressing the spatial and spectral information separately allows us the freedom to assess the information contained in each in the ways that the information is actually assimilated (i.e., spatial information is usually exploited visually, while spectral information consisting of more than three or four bands is usually exploited by computer algorithms). For the spectral information, we can use either generic exploitation algorithms or the specific algorithms that the image analyst would be expected to use.
Testing of our approach was done with a parametric set of simulated imagery in which Ground Sample Distance (GSD) and the number of spectral bands were varied. Our initial test led to some refinements of our approach, which are discussed.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605916
Quantitative methods to assess or predict the quality of a spectral image are the subject of a number of current research activities. An accepted methodology would be highly desirable for data collection tasking or data archive searches, in a way analogous to the current uses of the National Imagery Interpretability Rating Scale (NIIRS) General Image Quality Equation (GIQE). A number of approaches to the estimation of spectral image quality have been published. An issue with many of these approaches is that they tend to be constructed around specific tasks (target detection, background classification, etc.). While this has often been necessary to make the quality assessment tractable, it is desirable to have a method that is more general. One such general approach is presented in a companion paper (Simmons et al.). This new approach seeks to get at the heart of the general spectral imagery quality analysis problem: assessing the confidence of an image analyst in performing a specified task with a specific spectral image. In this approach the quality from the spatial and spectral aspects of the imagery is treated separately, and then a fusion concept known as “semantic transformation” is used to combine the utility, or confidence, from these two aspects into an overall quality metric. This paper compares and contrasts the various methods published in the literature with this new General Spectral Utility Metric (GSUM). In particular, the methods are applied to a target detection problem using data from the airborne HYDICE instrument collected at Forest Radiance I. While the GSUM approach is seen to lead to intuitively pleasing results, its sensitivity to image parameters was not seen to be consistent with previously published approaches. However, this likely resulted more from limitations of the previous approaches than from problems with GSUM.
Further studies with additional spectral imaging applications are recommended, along with efforts to integrate a performance prediction capability into the GSUM framework.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603977
Remote collections of hyperspectral sensor imagery (HSI) often produce extremely large data sets that make storage and transmission difficult. Smart reduction of such large data sets has been a challenge. Automatic anomaly detection has been cited as a suitable method for remote processing of HSI, although automatic anomaly detection using HSI is itself a challenging problem owing to the impact of the atmosphere on spectral content and the variability of spectral signatures. In this paper, we present the performance of an anomaly detection algorithm known as the approximation to semiparametric (AsemiP) anomaly detector. This detector was conceptualized and developed at the Army Research Laboratory (ARL), where it has become a favored technique for this purpose. The detector uses fundamental theorems of large sample theory to implement a notion of indirect comparison, and it supersedes an earlier ARL technique that uses a semiparametric (SemiP) model as a basis for statistical inference. The strength of both algorithms is that no prior knowledge is assumed about the target and/or clutter statistics, although AsemiP has the advantage over SemiP of not relying on an iterative algorithm that is sensitive to arbitrary initial conditions. The AsemiP anomaly detector was tested using real hyperspectral data and compared to alternative techniques, including a benchmark approach, yielding good results.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605850
This paper develops a hybrid target detector that incorporates structured backgrounds and physics-based modeling together with a geometric infeasibility metric. Detection algorithms are usually applied to atmospherically compensated hyperspectral imagery. Rather than compensate the imagery, we take the opposite approach by using a physics-based model to generate permutations of what the target might look like as seen by the sensor in radiance space. The development and status of such a method is presented as applied to the generation of target spaces. The generated target spaces are designed to fully encompass image target pixels while using a limited number of input model parameters. Background spaces are modeled using a linear subspace (structured) approach characterized by endmembers found using the maximum distance method (MaxD). After augmenting the image data with the target space, 15 endmembers were found that were not related to the target (i.e., background endmembers). A geometric infeasibility metric is developed which enables one to be more selective in rejecting false alarms. Preliminary results in the design of such a metric show that an orthogonal projection operator based on target space vectors can distinguish between target and background pixels. Furthermore, when used in conjunction with an operator that produces abundance-like values, we obtained separation between target, background, and anomalous pixels. This approach was applied to HYDICE image spectrometer data.
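The role of the orthogonal projection operator can be sketched as follows: given an orthonormal basis U for a target space (here a few randomly invented radiance-domain target variants, not the paper's physics-based ones), the projector I - U Uᵀ annihilates any pixel lying in that space while leaving a sizable residual for background pixels. The MaxD endmember step and the abundance-like operator are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bands = 40

# Hypothetical target space: a few radiance-domain target variants (in the
# paper these come from physics-based modeling; here random spectra).
variants = rng.uniform(0.2, 1.0, size=(n_bands, 3))
U, _, _ = np.linalg.svd(variants, full_matrices=False)  # orthonormal basis

# Projector onto the orthogonal complement of the target space.
P_perp = np.eye(n_bands) - U @ U.T

# A pixel inside the target space leaves essentially no residual;
# a generic background pixel leaves a large one.
target_px = variants @ np.array([0.5, 0.3, 0.2])
background_px = rng.uniform(0.2, 1.0, n_bands)

res_target = np.linalg.norm(P_perp @ target_px)
res_background = np.linalg.norm(P_perp @ background_px)
```

Thresholding this residual gives one simple infeasibility-style score for rejecting pixels that cannot be explained by any modeled target permutation.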
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604969
This research addresses the problem of tracking a moving point target from a time sequence of hyperspectral images. We focus on the detection of moving targets with staring technologies such as the ones used in space surveillance and missile tracking applications. In these applications, the images consist of targets moving at sub-pixel velocity in backgrounds which are influenced by both evolving clutter and noise. The demand for a low false alarm rate on one hand and a high probability of detection on the other makes the tracking a challenging task. The use of hyperspectral images should be superior to current technologies due to the benefit of simultaneously exploiting two target specific properties: the spectral target characteristics and the time dependent target behavior.
We propose an algorithm which is performed in two steps. The first step is the transformation of each of the hyperspectral images forming the sequence into a two-dimensional image using a known point target detection acquisition algorithm. In the second step, target detection and tracking are performed by means of time-domain processing. A matched-filter-based technique is developed for the hyperspectral image transformation; a variance-filter-based algorithm is used to detect the presence of targets from the temporal profile of each pixel while suppressing clutter-specific influences.
We then show results obtained on real image sequences.
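The two-step scheme above can be mocked up on synthetic data: a (deliberately simplified, white-noise) matched filter collapses each hyperspectral frame to a 2-D score image, and the temporal variance of each pixel's profile then flags the trajectory of a sub-pixel mover. The cube dimensions, target spectrum, velocity, and amplitudes are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)
T, H, W, B = 20, 16, 16, 30   # frames, rows, cols, spectral bands

target = rng.uniform(0.5, 1.0, B)               # assumed-known target spectrum
cube = rng.normal(0.0, 0.3, size=(T, H, W, B))  # clutter + noise background

# Sub-pixel target drifting across the scene at ~0.25 pixel/frame.
for t in range(T):
    cube[t, 4, 3 + int(0.25 * t)] += 0.6 * target

# Step 1: collapse each hyperspectral frame to a 2-D score image with a
# simplified matched filter along the spectral axis.
mf = cube @ target / np.linalg.norm(target)      # shape (T, H, W)

# Step 2: time-domain processing -- a moving target raises the variance of
# a pixel's temporal profile relative to stationary clutter.
var_map = mf.var(axis=0)
det_r, det_c = np.unravel_index(var_map.argmax(), var_map.shape)
```

A real implementation would whiten against the estimated clutter covariance rather than assume white noise, and would suppress slowly evolving clutter before the variance test, as the abstract notes.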
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604133
Hyperspectral remote sensing imagery has developed rapidly in recent years. Such sensors collect radiance from the ground in hundreds of channels, resulting in hundreds of co-registered images. Processing this huge amount of data is a great challenge, especially when no information about the image scene is available. Under these circumstances, anomaly detection becomes more difficult. Several methods have been devoted to this problem, such as the well-known RX algorithm and high-moment statistics approaches. The RX algorithm can detect all anomalies in a single image, but it cannot discriminate among them. The high-moment statistics approaches, on the other hand, use criteria such as skewness and kurtosis to find projection directions for detecting anomalies. In this paper we propose an effective algorithm for anomaly detection and discrimination extended from the RX algorithm, called the Background Whitened Target Detection Algorithm. It first models the background signature with a Gaussian distribution and applies a whitening process, after which the background is distributed as i.i.d. Gaussian in all spectral bands. Pixels that do not fit this distribution are the anomalies. The Automatic Target Detection and Classification Algorithm (ATDCA) is then applied to search for these distinct spectra automatically and classify them as anomalies. Since ATDCA can also estimate the abundance fraction of each target resident in a pixel by applying sum-to-one and nonnegativity constraints, the proposed method can also be applied in a constrained fashion. The experimental results show that the proposed method improves on the RX algorithm by discriminating among the anomalies and also outperforms the high-moment approaches in terms of computational complexity.
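A minimal sketch of the whitening stage on synthetic pixels follows (the ATDCA classification step is omitted, and all data, dimensions, and thresholds are invented). After whitening with the inverse square root of the background covariance, the RX statistic is simply the squared norm of a pixel, and anomalies fall outside the chi-squared bulk.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_bands = 2000, 20

# Correlated Gaussian background plus a few injected spectral anomalies.
L = rng.normal(size=(n_bands, n_bands))
data = rng.normal(size=(n_pixels, n_bands)) @ L.T
anomaly_idx = [100, 700, 1500]
data[anomaly_idx] += 8.0 * rng.normal(size=(3, n_bands))

# Whitening: afterwards, background pixels are approximately i.i.d. N(0, I).
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # inverse square root of cov
white = (data - mu) @ W

# RX statistic = squared norm of the whitened pixel; flag the extreme tail.
rx = np.sum(white ** 2, axis=1)
threshold = np.percentile(rx, 99.5)
detected = np.flatnonzero(rx > threshold)
```

Discriminating among the detected anomalies (the paper's contribution beyond RX) would then fall to ATDCA operating on the flagged spectra.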
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.609505
A supervised subpixel target detection algorithm based on a parametric general linear model using a whitening transformation for hyperspectral imaging is developed. Statistical tests are described to assess the performance of the algorithm in comparison with the corresponding classical approach. Numerical results are presented to show that the parametric algorithm using low-order models can adequately represent the classical model.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.598866
Spectral signature databases abound in the field of remote sensing. Scientists use these databases to assist in their analysis every day. Many decisions are made about hyperspectral data, and the observations made with these data, based on the assumption that these databases contain "ground truth" representations of the signatures for the materials sensed. For the most part, this is true if the team collecting the signatures that populate these databases follows sound practices when collecting the data. The data do, however, represent a very specific picture of the "truth". Signatures found in databases represent a specific collection configuration or geometry. The source of illumination, whether artificial or natural, is in a very specific location, as is the sensor used to collect radiance for the derivation of the reflectance signatures. A signature found in a database is useful for only a very specific scenario: one that matches the geometry used during ground truth collection. There are other very significant factors regarding the illumination field and the scattering properties of the material and reference standards that influence the computed reflectance signature. This work illustrates some of the dramatic variation that can exist in the reflectance signatures derived for the same material using different techniques. Differences upward of 30% may exist for the same material. These observations are presented so that scientists who look to these databases in the future will consider very carefully the metadata that is presented with the signatures they use, to make sure they are applicable to the phenomenology and collection scenario under study. These observations should also point out that signatures presented without detailed metadata can be very hazardous to use if the outcome of the analysis being performed relies upon the absolute reflectance spectra being known.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603663
Soil surface materials often originate from different sources and are spectrally variable. Their presence will alter soil spectral features and mask the nature of the underlying soil surface horizon. The upper-most, thin, granular layer determines a soil sample's spectrum. This study's objective was to characterize the optical depth of some sandy soils and its relationship to spectral reflectance from 0.35 to 2.50 μm. The reflectance-optical depth relationships were determined for air-dried, granular, sieved samples with particle sizes of 1.0-2.0, 0.5-1.0, 0.25-0.5, 0.125-0.25, 0.075-0.125, or <0.075 mm. Each particle size separate has convergent reflectance spectra associated with an optical depth that ranged from 0.2 to 8.1 mm. The optical depth was greater for larger particles than for smaller particles. Normalizing the sample depth by the mean particle diameter of each sieve fraction showed that the optical depth-spectral feature relationships were determined by a layer of granular material 5-8 particles thick. Three non-sieved, well-graded composite soils were also evaluated, and their optical depths ranged from 1.4 to 3.9 mm. These non-sieved composite soils include a medium fused-silica sand, a medium calcareous sand, and a medium gypsum sand.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605838
This paper describes a collaborative collection campaign to spectrally image and measure a well characterized scene for hyperspectral algorithm development and for validation/verification of scene simulation models (DIRSIG). The RIT Megascene, located in the northeast corner of Monroe County near Rochester, New York, has been modeled and characterized under the DIRSIG environment and has been simulated for various hyperspectral and multispectral systems (e.g., HYDICE, LANDSAT, etc.). Until recently, most of the electro-optical imagery of this area has been limited to very high altitude airborne or orbital platforms with low spatial resolutions. Megacollect 2004 addresses this shortcoming by bringing together, in June of 2004, a suite of airborne sensors to image this area in the VNIR, SWIR, MWIR, and LWIR regions. These include COMPASS (hyperspectral VNIR, SWIR), SEBASS (hyperspectral LWIR), WASP (broadband VIS, SWIR, MWIR, LWIR), and MISI (hyperspectral VNIR; broadband SWIR, MWIR, LWIR). In conjunction with the airborne collections, an extensive ground truth measurement campaign was conducted to characterize atmospheric parameters, select targets, and backgrounds in the field. Laboratory measurements were also made on samples to confirm the field measurements. These spectral measurements spanned the visible and thermal regions from 0.4 to 20 microns. They will help identify imaging factors that affect algorithm robustness and areas of improvement in the physical modeling of scene/sensor phenomena. Reflectance panels were also deployed as control targets to quantify both sensor characteristics and atmospheric effects. A subset of these targets has also been deployed as an independent test suite for target detection algorithms. Details of the planning, coordination, protocols, and execution of the campaign will be discussed, with particular emphasis on the ground measurements. The system used to collect the metadata of the ground truth measurements and disseminate these data will be described. Lastly, lessons learned in the field will be underscored to highlight additional measurements and changes in protocol that would improve future collections of this area.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605856
This work describes the water collection experiment component of the Megacollect 2004 campaign. Megacollect was a collaborative campaign, coordinated by RIT with several institutions, to spectrally measure various target/background scenarios with airborne sensors and ground instruments. An extension to the terrestrial campaign was an effort to simultaneously measure water optical properties in different bodies of water in the Rochester Embayment. This collection updates a previous effort in which water surface measurements were made during an AVIRIS mission over the Rochester Embayment (May 1999). Megacollect 2004 builds on this through an expanded campaign that increased the number of stations sampled, extended the spectral range of the measurements, and improved the spatial resolution of the imagery through the use of multiple sensors (COMPASS, SEBASS, MISI, WASP). A larger set of in-water instruments was deployed on several vessels to sample and measure water optical properties near the shores of Lake Ontario, the northern portions of Irondequoit Bay, and several smaller ponds and bays in the Rochester Embayment. This paper describes the different in-water instruments deployed, the measurements obtained, and how they will be used for future modeling efforts and the development of hyperspectral algorithms.
Atmospheric Measurement Instrumentation and Remote Sensing
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604450
The Atmospheric Infrared Sounder (AIRS) is a hyperspectral infrared sounder which covers the 3.7 to 15.4 micron region with 2378 spectral channels. The AIRS instrument specification called for spatial co-registration of all channels to better than 2% of the field of view. Pre-launch testing confirmed that this requirement was met, since the standard deviations in the centroids were about 1% of the 13.5 km IFOV in scan and 3% in track. Detailed analysis of global AIRS data shows that the typical scene gradient in 10-micron window channels is about 1.3K/km rms. The way these gradients, which are predominantly caused by clouds, manifest themselves in the data depends on the details of the instrument design and the way the spectral channels are used in the data analysis. AIRS temperature and moisture retrievals use 328 of the 2378 channels from 17 independent arrays. As a result, the effect of the boresight misalignment averages to zero mean. Any increase in the effective noise is less than 0.2K. Also, there is no discernible performance degradation of products at the 45 km spatial resolution in the presence of partially cloudy scenes with up to 80% cloudiness. Single-pixel radiometric differences between channels with boresight alignment differences can be appreciable and can affect scientific investigations on a single 15 km footprint scale, particularly near coastlines, thunderstorms, and surface emissivity inhomogeneities.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602546
Simultaneous use of AIRS/AMSU-A observations allows for the determination of accurate atmospheric soundings under partial cloud cover conditions. The methodology involves the determination of the radiances AIRS would have seen if the AIRS fields of view were clear, called clear column radiances, and the use of these radiances to infer the atmospheric and surface conditions giving rise to them. Susskind et al. demonstrate via simulation that accurate temperature soundings and clear column radiances can be derived from AIRS/AMSU-A observations in cases of up to 80% partial cloud cover, with only a small degradation in accuracy compared to that obtained in clear scenes. Susskind and Atlas show that these findings hold for real AIRS/AMSU-A soundings as well. For data assimilation purposes, this small degradation in accuracy is more than offset by a significant increase in spatial coverage (roughly 50% of global cases were accepted, compared to 3.6% of global cases being diagnosed as clear), and assimilation of AIRS temperature soundings in partially cloudy conditions resulted in a larger improvement in forecast skill than when AIRS soundings were assimilated only under clear conditions. Alternatively, derived AIRS clear column radiances under partial cloud cover could also be used for data assimilation purposes. Further improvements in AIRS sounding methodology have been made since the results shown in Susskind and Atlas. A new version of the AIRS/AMSU-A retrieval algorithm, Version 4.0, was delivered to the Goddard DAAC in February 2005 for production of AIRS derived products, including clear column radiances. The major improvement in the Version 4.0 retrieval algorithm is a more flexible, parameter-dependent quality control.
Results are shown of the accuracy and spatial distribution of temperature-moisture profiles and clear column radiances derived from AIRS/AMSU-A as a function of fractional cloud cover using the Version 4.0 algorithm. Use of the Version 4.0 AIRS temperature profiles increased the positive forecast impact arising from AIRS retrievals relative to what was shown in Susskind and Atlas.
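The clear column radiance concept lends itself to a compact illustration. In the classic two-field-of-view formulation, two adjacent fields of view share the same clear-scene radiance but contain different cloud amounts, so an extrapolation R_cc = R1 + eta*(R1 - R2) cancels the cloud contribution. The sketch below is illustrative only: all numbers are hypothetical, and eta is computed from the known cloud fractions rather than estimated from a microwave-derived clear value as would be done in practice.

```python
import numpy as np

def clear_column_radiance(r1, r2, eta):
    """Two-FOV cloud clearing: extrapolate a pair of radiance spectra
    observed with different cloud amounts past the cloud contamination."""
    return r1 + eta * (r1 - r2)

rng = np.random.default_rng(0)
r_clear = rng.uniform(60.0, 100.0, size=5)   # hypothetical channel radiances
r_cloud = 30.0                               # radiance of an opaque cold cloud
f1, f2 = 0.3, 0.5                            # cloud fractions in the two FOVs
r1 = (1 - f1) * r_clear + f1 * r_cloud
r2 = (1 - f2) * r_clear + f2 * r_cloud

# Here eta follows from the known fractions; in practice it is estimated,
# e.g. from a channel whose clear value is known from microwave retrievals.
eta = f1 / (f2 - f1)
print(np.allclose(clear_column_radiance(r1, r2, eta), r_clear))  # -> True
```

Because both fields of view see the same clear radiance, the extrapolation recovers it exactly regardless of the channel.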
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602540
Observing system simulation experiments (OSSE) conducted prior to the launch of AIRS indicated significant potential for AIRS temperature soundings to improve numerical weather prediction (NWP), provided that cloud effects could be cleared effectively. Since the launch of AIRS aboard the AQUA satellite, a detailed geophysical validation of AIRS data has been performed. This included collocations of AIRS temperatures with in situ observations and model analyses, and observing system experiments (OSEs) to evaluate the actual impact of AIRS data on NWP. At the NASA Goddard Space Flight Center, we are evaluating AIRS data in several different forms, and are performing impact studies using multiple data assimilation systems. In general, the results of the OSEs confirm the results of the earlier simulation experiments in that a meaningful positive impact of AIRS data is obtained, and this impact depends strongly upon the assimilation of partially cloudy AIRS data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603163
A novel statistical method for the retrieval of atmospheric temperature and moisture profiles has been developed and evaluated with sounding data from the Atmospheric InfraRed Sounder (AIRS) and the Advanced Microwave Sounding Unit (AMSU). The algorithm is implemented in three stages. First, the infrared radiance perturbations due to clouds are estimated and corrected by combined processing of the infrared and microwave data. Second, a Projected Principal Components (PPC) transform is used to reduce the dimensionality of and optimally extract geophysical profile information from the cloud-cleared infrared radiance data. Third, an artificial feedforward neural network is used to estimate the desired geophysical parameters from the projected principal components. The cloud-clearing of the infrared radiances was performed by the AIRS Science Team using infrared brightness temperature contrasts in adjacent fields of view and microwave-derived estimates of the infrared clear-column radiances to estimate and correct the radiance contamination introduced by clouds. The PPC compression technique was used to reduce the infrared radiance dimensionality by a factor of 100, while retaining over 99.99% of the radiance variance that is correlated to the geophysical profiles. This compression allows the use of smaller, faster, and more robust estimators. A single-layer feedforward neural network with approximately 3000 degrees of freedom was then used to estimate the geophysical profiles at approximately 60 levels from the surface to 20 km. The performance of this method (henceforth referred to as the PPC/NN method) was evaluated using global (ascending and descending) EOS-Aqua orbits co-located with ECMWF fields for a variety of days throughout 2002 and 2003. Over 30,000 fields of regard (3x3 arrays of footprints) over land and ocean were used in the study. 
Retrieval performance compares favorably with that obtained with simulated observations from the NOAA88b radiosonde set of approximately 7500 profiles. The PPC/NN method requires significantly less computation than traditional variational retrieval methods, while achieving comparable performance.
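The compress-then-estimate structure of the PPC/NN pipeline can be sketched in a few lines. This is a toy stand-in, not the authors' implementation: the data and all sizes are synthetic, a plain PCA substitutes for the projected principal components transform, and an ordinary linear least-squares map stands in for the neural network estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (sizes invented): 500 "radiance" spectra over 200
# channels, each tied to a 60-level "profile" through 5 latent factors.
n, channels, levels = 500, 200, 60
W = rng.normal(size=(channels, 5))
Z = rng.normal(size=(n, 5))
radiance = Z @ W.T + 0.01 * rng.normal(size=(n, channels))
profile = Z @ rng.normal(size=(5, levels))

# Compression stage: project radiances onto a few principal components,
# reducing the spectral dimensionality by a factor of channels / k.
mu = radiance.mean(axis=0)
_, _, Vt = np.linalg.svd(radiance - mu, full_matrices=False)
k = 10
pcs = (radiance - mu) @ Vt[:k].T

# Estimation stage: map components to profiles. The paper trains a
# feedforward neural network; a linear least-squares map stands in here.
coef, *_ = np.linalg.lstsq(pcs, profile, rcond=None)
rms = np.sqrt(np.mean((pcs @ coef - profile) ** 2))
print(f"retrieval RMS error: {rms:.4f}")
```

The point of the compression stage is that the downstream estimator sees k inputs instead of hundreds of channels, which is what makes it "smaller, faster, and more robust."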
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603706
The AIRS/AMSU (flying on the EOS-AQUA satellite) sounding retrieval methodology allows for the retrieval of key atmospheric/surface parameters under partially cloudy conditions (Susskind et al., 2003). In addition, cloud parameters are also derived from the AIRS/AMSU observations. Within each AIRS footprint, cloud parameters at up to 2 cloud layers are determined, with differing cloud top pressures and "effective" cloud fractions (the product of the infrared emissivity at 11 microns and the physical cloud fraction). However, so far the AIRS cloud product has not been rigorously evaluated/validated. Fortunately, collocated/coincident radiances measured by MODIS/AQUA (at a much lower spectral resolution but roughly an order-of-magnitude higher spatial resolution than that of AIRS) are used to determine analogous cloud products from MODIS. This affords a rather rare and interesting possibility: the intercomparison and mutual validation of imager- vs. sounder-based cloud products obtained from the same satellite positions. First, we present results of small-scale (granule) instantaneous intercomparisons. Next, we evaluate differences of temporally averaged (monthly) means as well as the representation of inter-annual variability of cloud parameters as presented by the two cloud data sets. In particular, we present statistical differences in the retrieved parameters of cloud fraction and cloud top pressure. We investigate what type of cloud systems are retrieved most consistently (if any) with both retrieval schemes, and attempt to assess the reasons behind statistically significant differences.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604905
As we enter a new era of using satellite hyperspectral sensors for weather and other environmental applications, this paper discusses the applicability of IR hyperspectral data to climate change monitoring; in particular, to quantifying greenhouse effects. While broadband first-order statistics quantify radiative forcings, IR hyperspectral data provide a means of monitoring feedback processes. Radiative transfer modeling of the greenhouse effect is illustrated with examples: varying surface temperature, atmospheric temperature and water vapor. Three spectral greenhouse metrics are discussed: the difference between the surface emission and the outgoing longwave radiation (G), the surface-temperature-normalized greenhouse effect (g), and the vertical profile of the cooling rate (C). Effects of changes in water vapor, clouds, carbon dioxide and methane are modeled and their potential observables identified.
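The two scalar metrics follow directly from their definitions: G is the surface emission minus the outgoing longwave radiation, and g is G normalized by the surface emission. The sketch below uses illustrative global-mean-like numbers, not values from the paper.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def greenhouse_metrics(t_surface_k, olr_wm2):
    """G = surface emission minus outgoing longwave radiation (W/m^2);
    g = G normalized by the surface emission (dimensionless)."""
    emission = SIGMA * t_surface_k ** 4
    G = emission - olr_wm2
    g = G / emission
    return G, g

# Illustrative global-mean-like numbers (not from the paper):
G, g = greenhouse_metrics(288.0, 239.0)
print(f"G = {G:.1f} W/m^2, g = {g:.3f}")
```

With a 288 K surface and 239 W/m^2 of outgoing longwave radiation, the surface emits about 390 W/m^2, giving G near 151 W/m^2 and g near 0.39.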
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602772
With increased spectral, spatial, and temporal resolution, the Hyperspectral Environmental Suite (HES) of the Geostationary Operational Environmental Satellite (GOES)-R Series will contribute to a significant improvement in GOES products, including an increase in the number of products over the current GOES Imager and Sounder, especially when combined with the GOES-R Advanced Baseline Imager (ABI). The planned capabilities of the HES are encompassed by tasks, which describe the performance required when operating at the required scan rates. The scheduling of the HES will be determined by NOAA (National Oceanic and Atmospheric Administration). A range of possible scan scenarios for optimizing the collection of data for users with a variety of geographic or phenomenological concerns is discussed here. One such schedule for the sounding capability of the HES would cover a full "sounding disk" at 10 km (sub-satellite point) resolution every three hours, as well as the contiguous U.S. every hour at 4 km resolution, plus selected other regions of interest. The HES Coastal Waters (CW) capability will provide coverage of the coastal areas every three hours, in addition to other regions such as the Great Lakes, or other features of interest.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.609497
The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) will be flying on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and NASA's NPOESS Preparatory Project (NPP) satellites. Atmospheric vertical temperature and moisture profiles are the two key Environmental Data Records (EDRs) to be measured by CrIMSS. The retrieval algorithm developed to produce these two EDRs has been delivered to Northrop Grumman Space Technology (NGST) and has gone through NGST's rigorous pre-launch algorithm performance verification process. In this paper, we present the methodology and test data used in our verification process. The current estimate of the CrIMSS EDR algorithm's performance is also presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.606026
The MODTRAN™5 radiation transport (RT) model is a major advancement over earlier versions of the MODTRAN™ atmospheric transmittance and radiance model. New model features include (1) finer spectral resolution via the Spectrally Enhanced Resolution MODTRAN (SERTRAN) molecular band model, (2) a fully coupled treatment of auxiliary molecular species, and (3) a rapid, high fidelity multiple scattering (MS) option. The finer spectral resolution improves model accuracy especially in the mid- and long-wave infrared atmospheric windows; the auxiliary species option permits the addition of any or all of the suite of HITRAN molecular line species, along with default and user-defined profile specification; and the MS option makes feasible the calculation of Vis-NIR databases that include high-fidelity scattered radiances.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603359
We describe a new visible-near infrared/short-wavelength infrared (VNIR-SWIR) atmospheric correction method for multi- and hyperspectral imagery, dubbed QUAC (QUick Atmospheric Correction), that also enables retrieval of the wavelength-dependent optical depth of the aerosol or haze and molecular absorbers. It determines the atmospheric compensation parameters directly from the information contained within the scene, using the observed pixel spectra. The approach is based on the empirical finding that the spectral standard deviation of a collection of diverse material spectra, such as the endmember spectra in a scene, is essentially spectrally flat. It allows the retrieval of reasonably accurate reflectance spectra even when the sensor does not have a proper radiometric or wavelength calibration, or when the solar illumination intensity is unknown. The computational speed of the atmospheric correction method is significantly faster than that of first-principles methods, making it potentially suitable for real-time applications. The aerosol optical depth retrieval method, unlike most prior methods, does not require the presence of dark pixels. QUAC is applied to the atmospheric correction of several AVIRIS data sets and a Landsat-7 data set, as well as to simulated HyMap data for a wide variety of atmospheric conditions. Comparisons to the physics-based Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) code are also presented.
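The empirical premise, that the per-band standard deviation of diverse material spectra is roughly flat, implies that the per-band standard deviation of the observed radiance traces the shape of the atmospheric gain. The following is a synthetic sketch of that principle only, not the QUAC algorithm itself; the spectra and the gain curve are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, n_spectra = 50, 500

# Diverse "material" reflectance spectra whose per-band standard
# deviation is roughly flat -- the empirical premise behind QUAC.
reflectance = rng.uniform(0.0, 0.6, size=(n_spectra, bands))

# Hypothetical wavelength-dependent atmospheric gain (transmittance x
# illumination), to be recovered from the scene statistics alone.
gain_true = 0.5 + 0.4 * np.sin(np.linspace(0.0, np.pi, bands))
radiance = reflectance * gain_true

# Flat reflectance stddev means the per-band radiance stddev traces the
# gain shape; only the overall scale remains unknown.
gain_est = radiance.std(axis=0)
gain_est *= gain_true.mean() / gain_est.mean()    # fix scale for comparison

err = np.max(np.abs(gain_est - gain_true) / gain_true)
print(f"max relative gain error: {err:.3f}")
```

Because the per-band statistic is computed from the scene itself, no radiometric calibration or knowledge of the illumination enters the estimate, which mirrors the robustness claim above.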
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603931
First-principles atmospheric correction of earth-viewing spectral imagery requires atmospheric property information derived from the image itself or measured independently. A field experiment was conducted in May, 2003 at Davis, CA to investigate the validity and consistency of atmospheric properties and surface reflectances derived from simultaneous ground-, aircraft- and satellite-based spectral measurements. The experiment involved the simultaneous collection of HyMap and Landsat-7 imagery, in-situ reflectance spectra of calibration surfaces, and sun and sky radiances from ultraviolet and visible multi-filter rotating shadowband radiometers (MFRSRs). This paper briefly describes the experiment, data analysis and key results.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604102
Linear spectral mixture analysis (LSMA) is a widely used technique for subpixel detection and mixed pixel classification. Due to its mathematical tractability, it is generally implemented without constraints. However, it has been shown that the constrained LSMA can improve on the unconstrained LSMA, specifically in quantification, where accurate estimates of abundance fractions are required. When the constrained LSMA is considered, two constraints are generally imposed on the abundance fractions, the abundance sum-to-one constraint (ASC) and the abundance nonnegativity constraint (ANC), referred to as abundance-constrained LSMA (AC-LSMA). A general and common approach to solving the AC-LSMA is to estimate abundance fractions in the sense of least-squares error (LSE) subject to the imposed constraints. Since the LSE is not weighted in accordance with the significance of the bands, the effect of the LSE is assumed to be uniform over all bands, which is not necessarily true. This paper extends the commonly used LSE-based AC-LSMA to weighted LSE-based AC-LSMA, with the weighting matrix derived from various approaches such as parameter estimation, pattern classification and orthogonal subspace projection (OSP). As demonstrated by experiments, the weighted LSE-based AC-LSMA generally performs better than the commonly used LSE-based AC-LSMA, which can be considered a special case of the former with the weighting matrix reduced to the identity matrix.
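A minimal sketch of a weighted, abundance-constrained estimator follows. It enforces the ANC exactly via nonnegative least squares and the ASC softly through a heavily weighted augmentation row, a standard fully-constrained-least-squares device that is not necessarily the paper's solver; the endmember matrix, pixel, and weighting matrix are invented.

```python
import numpy as np
from scipy.optimize import nnls

def weighted_fcls(M, y, W, delta=1e3):
    """Abundance estimate minimizing (y - M a)^T W (y - M a) subject to
    a >= 0, with sum(a) = 1 enforced softly by a heavily weighted row."""
    L = np.linalg.cholesky(W)            # W = L L^T, so L^T x whitens
    Mw = L.T @ M
    yw = L.T @ y
    A = np.vstack([Mw, delta * np.ones((1, M.shape[1]))])
    b = np.append(yw, delta)
    a, _ = nnls(A, b)                    # Lawson-Hanson NNLS
    return a

# Toy example with 3 hypothetical endmembers over 6 bands.
rng = np.random.default_rng(2)
M = rng.uniform(0.1, 0.9, size=(6, 3))
a_true = np.array([0.5, 0.3, 0.2])
y = M @ a_true
W = np.eye(6)                            # identity recovers unweighted FCLS
a_hat = weighted_fcls(M, y, W)
print(np.round(a_hat, 3))
```

With W equal to the identity this reduces to the ordinary LSE case noted in the abstract; a band-dependent W simply reweights the whitened residuals before the constrained solve.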
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605672
This paper presents an approach for the simultaneous determination of endmembers and their abundances in hyperspectral imagery using a constrained positive matrix factorization (PMF). The algorithm presented here solves the constrained PMF by formulating it as a nonnegative least squares problem in which the cost function is expanded with a penalty term to enforce the sum-to-one constraint. Preliminary results using simulated and AVIRIS-Cuprite data are presented. These results show the potential of the method to solve the unsupervised unmixing problem.
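The penalty-term idea can be sketched with plain multiplicative nonnegative-factorization updates: appending a constant row to the data matrix penalizes abundance columns whose entries do not sum to one. This is an illustrative stand-in for the paper's solver, with synthetic endmembers and Dirichlet abundances.

```python
import numpy as np

def penalized_pmf(X, p, delta=5.0, iters=1500, seed=0):
    """Nonnegative factorization X ~ E @ A with a penalty row nudging
    abundance columns toward sum-to-one (Lee-Seung multiplicative
    updates on an augmented matrix; an illustrative solver only)."""
    rng = np.random.default_rng(seed)
    bands, pixels = X.shape
    Xa = np.vstack([X, delta * np.ones((1, pixels))])   # penalty row
    E = rng.uniform(0.1, 1.0, size=(bands + 1, p))
    A = rng.uniform(0.1, 1.0, size=(p, pixels))
    E[-1, :] = delta
    eps = 1e-9
    for _ in range(iters):
        A *= (E.T @ Xa) / (E.T @ E @ A + eps)
        E *= (Xa @ A.T) / (E @ A @ A.T + eps)
        E[-1, :] = delta          # keep the penalty row fixed
    return E[:-1], A

# Synthetic scene: 3 endmembers over 20 bands, 100 mixed pixels.
rng = np.random.default_rng(3)
E_true = rng.uniform(0.1, 0.9, size=(20, 3))
A_true = rng.dirichlet(np.ones(3), size=100).T          # columns sum to 1
X = E_true @ A_true
E_hat, A_hat = penalized_pmf(X, p=3)
err = np.linalg.norm(E_hat @ A_hat - X) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
print(np.round(A_hat.sum(axis=0)[:5], 2))
```

The weight delta trades off data fit against constraint satisfaction, which is exactly the role of the penalty term in the cost function described above.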
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605670
This paper presents an algorithm for abundance estimation in hyperspectral imagery. The fully constrained abundance estimation problem, in which the positivity and the sum-to-less-than-or-equal-to-one (or sum-to-one) constraints are enforced, is solved by reformulating the least squares (LS) problem as a least distance (LSD) problem. The advantage of this reformulation is that the resulting LSD problem can be solved, via duality theory, as a nonnegative LS (NNLS) problem. The NNLS problem can then be solved using the Lawson and Hanson algorithm or one of several multiplicative iterative algorithms presented in the literature. The paper presents the derivation of the algorithm and a comparison to other approaches described in the literature. An application to a Hyperion image taken over La Parguera, Puerto Rico is presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.601738
It is now established that hyperspectral images of many natural backgrounds have fat-tailed statistics. In spite of this, many of the algorithms used to process them appeal to the multivariate Gaussian model. In this paper we consider biologically motivated generative models that might explain observed mixtures of vegetation in natural backgrounds. The degree to which these models match the observed fat-tailed distributions is investigated. Having shown how fat-tailed statistics arise naturally from the generative process, the models are put to work in new anomaly detection and unmixing algorithms. The performance of these algorithms is compared with more traditional approaches.
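One simple generative route to fat tails, illustrative rather than one of the paper's vegetation models, is a scale mixture: if each pixel is conditionally Gaussian but its local scale varies randomly across the scene, the pooled statistics show excess kurtosis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Per-pixel scale (e.g. illumination or canopy texture) varies across
# the scene; conditionally, each pixel is Gaussian.
sigma = rng.lognormal(mean=0.0, sigma=0.5, size=n)
x = sigma * rng.normal(size=n)

# Pooled kurtosis: 3 for a Gaussian, larger for fat-tailed data.
kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
print(f"kurtosis = {kurt:.2f}")   # well above the Gaussian value of 3
```

A Gaussian-based detector calibrated on the pooled statistics would badly misjudge tail probabilities for such data, which is the practical concern motivating the non-Gaussian models above.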
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604471
The Civil Air Patrol (CAP) is procuring Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) systems to increase its search-and-rescue mission capability. These systems are being installed on a fleet of Gippsland GA-8 aircraft, and will position CAP to gain real-world mission experience with the application of hyperspectral sensor and processing technology to search and rescue. The ARCHER system design, data processing, and operational concept leverage several years of investment in hyperspectral technology research and airborne system demonstration programs by the Naval Research Laboratory (NRL) and the Air Force Research Laboratory (AFRL). Each ARCHER system consists of a NovaSol-designed, pushbroom, visible/near-infrared (VNIR) hyperspectral imaging (HSI) sensor, a co-boresighted visible panchromatic high-resolution imaging (HRI) sensor, and a CMIGITS-III GPS/INS unit in an integrated sensor assembly mounted inside the GA-8 cabin. ARCHER incorporates an on-board data processing system developed by Space Computer Corporation (SCC) to perform numerous real-time processing functions, including data acquisition and recording, raw data correction, target detection, cueing and chipping, precision image geo-registration, and display and dissemination of image products and target cue information. A ground processing station is provided for post-flight data playback and analysis. This paper describes the requirements and architecture of the ARCHER system, with emphasis on data processor design, components, software, interfaces, and displays. Key sensor performance characteristics and real-time data processing features are discussed. The use of the system for detecting and geo-locating ground targets in real time is demonstrated using test data collected in Southern California in the fall of 2004.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605674
The Hyperspectral Image Analysis Toolbox (HIAT) is a collection of algorithms that extend the capability of the MATLAB numerical computing environment for the processing of hyperspectral and multispectral imagery. The purpose of the HIAT Toolbox is to provide information extraction algorithms to users of hyperspectral and multispectral imagery in environmental and biomedical applications. HIAT has been developed as part of the NSF Center for Subsurface Sensing and Imaging (CenSSIS) Solutionware that seeks to develop a repository of reliable and reusable software tools that can be shared by researchers across research domains. HIAT provides easy access to supervised and unsupervised classification algorithms developed at LARSIP over the last 8 years.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605889
Detection of a known target in an image can be accomplished using several different approaches. The complexity and number of steps involved in the target detection process make a comparison of the different possible algorithm chains desirable. Of the different steps involved, some have a more significant impact than others on the final result: the ability to find a target in an image. These more important steps often include atmospheric compensation, noise and dimensionality reduction, background characterization, and detection (matched filtering for this research). A brief overview of the algorithms to be compared for each step will be presented. This research seeks to identify the most effective set of algorithms for a particular image or target type. Several different algorithms for each step will be presented, including ELM, FLAASH, MNF, PPI, MAXD, and the structured background matched filters OSP and ASD. The chains generated by these algorithms will be compared using the Forest Radiance I HYDICE data set. Finally, receiver operating characteristic (ROC) curves will be calculated for each algorithm chain and, as an end result, a comparison of the various algorithm chains will be presented.
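The last two links of such a chain, matched filtering and ROC scoring, can be sketched on synthetic data; the background statistics, target signature, and subpixel fill fraction below are invented for illustration and do not correspond to the HYDICE data.

```python
import numpy as np

rng = np.random.default_rng(0)
bands = 20

# Synthetic background pixels and implanted subpixel targets.
mu = np.full(bands, 0.4)
bg = mu + 0.03 * rng.normal(size=(5000, bands))
t = mu + rng.uniform(-0.2, 0.2, size=bands)        # hypothetical signature
tgt = 0.8 * (mu + 0.03 * rng.normal(size=(50, bands))) + 0.2 * t

# Matched filter: w = Sigma^-1 (t - mu), scored against demeaned pixels.
Sigma = np.cov(bg, rowvar=False)
w = np.linalg.solve(Sigma, t - mu)
scores_bg = (bg - mu) @ w
scores_tg = (tgt - mu) @ w

# Empirical ROC curve (threshold sweep) and the area under it.
thr = np.sort(np.concatenate([scores_bg, scores_tg]))[::-1]
tpr = (scores_tg[None, :] >= thr[:, None]).mean(axis=1)
fpr = (scores_bg[None, :] >= thr[:, None]).mean(axis=1)
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
print(f"AUC = {auc:.3f}")
```

Swapping in a different detector or background model changes only the score arrays; the ROC machinery stays fixed, which is what makes ROC curves a natural common yardstick for comparing whole chains.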
Carmen L. Carvajal, Wilfredo Lugo, Wilson Rivera, John Sanabria
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603047
This paper outlines the design and implementation of Grid-HSI, a Service Oriented Architecture-based Grid application that enables hyperspectral imaging analysis. Grid-HSI provides users with a transparent interface to access computational resources and remotely perform hyperspectral imaging analysis through a set of Grid services. Grid-HSI is composed of a Portal Grid Interface, a Data Broker and a set of specialized Grid services. Grid-based applications, contrary to other client/server approaches, provide the capabilities of persistence and potentially transient processes on the web. Our experimental results on Grid-HSI show the suitability of the prototype system for performing hyperspectral imaging analysis efficiently.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604128
Hyperspectral image compression can be performed by either 3-D compression or spectral/spatial compression. It has been demonstrated that, due to the high spectral resolution, hyperspectral image compression can be more effective if compression is carried out spectrally and spatially in two separate stages. One commonly used spectral/spatial compression approach implements principal components analysis (PCA) or wavelets for spectral compression, followed by a 2-D/3-D compression technique for spatial compression. This paper presents another type of spectral/spatial compression technique, which uses Hyvarinen and Oja's fast independent component analysis (FastICA) to perform spectral compression, while JPEG2000 is used for 2-D/3-D spatial compression. In order to determine how many independent components are required, a newly developed concept, virtual dimensionality (VD), is used. Since the VD is determined by the false alarm probability rather than the commonly used signal-to-noise ratio or mean squared error (MSE), our proposed FastICA-based spectral/spatial compression is more effective than PCA-based or wavelet-based spectral/spatial compression in data exploitation.
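The spectral stage can be sketched with a compact deflationary FastICA iteration (tanh nonlinearity) written directly in numpy; the subsequent JPEG2000 spatial coding of the component images is omitted, and the mixing model below is synthetic rather than real imagery.

```python
import numpy as np

def fastica(X, k, iters=200, seed=0):
    """Deflationary FastICA (tanh nonlinearity), in the spirit of the
    Hyvarinen-Oja algorithm: whiten, then extract components one by one."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(Xc.T @ Xc / len(Xc))
    d, E = d[-k:], E[:, -k:]            # top-k eigenpairs for whitening
    Z = Xc @ (E / np.sqrt(d))           # whitened data, n x k
    W = np.zeros((k, k))
    for i in range(k):
        w = rng.normal(size=k)
        w /= np.linalg.norm(w)
        for _ in range(iters):
            g = np.tanh(Z @ w)
            w_new = Z.T @ g / len(Z) - (1.0 - g ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)   # deflate earlier components
            w_new /= np.linalg.norm(w_new)
            done = abs(abs(w_new @ w) - 1.0) < 1e-9
            w = w_new
            if done:
                break
        W[i] = w
    return Z @ W.T                      # estimated independent components

# Demo: 3 independent sub-Gaussian "sources" mixed into 30 spectral bands.
rng = np.random.default_rng(1)
S = rng.uniform(-1, 1, size=(2000, 3))
X = S @ rng.normal(size=(3, 30))
comps = fastica(X, k=3)
print(comps.shape)
```

In the two-stage scheme, the k component images (here k would come from the VD estimate) replace the original bands and are then handed to the spatial coder.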
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605357
Improvements in weather and climate observation, analysis, and prediction will be achieved through advances in contemporary and future ultraspectral infrared sounders such as the Atmospheric Infrared Sounder (AIRS), the Tropospheric Emission Spectrometer (TES), the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and the Hyperspectral Environmental Suite (HES). Given the unprecedented 3D data volumes to be generated each day, the use of robust data compression techniques will be beneficial to data transfer and archiving. Lossless or near-lossless compression of this ultraspectral sounder data is desired to avoid potentially significant degradation of the geophysical parameter retrieval in the associated ill-posed inverse problem. In this paper we investigate various 2D and 3D compression techniques applicable to ultraspectral sounder data. These techniques include transform-based (JPEG2000, 3D-SPIHT), prediction-based (JPEG-LS, CALIC), and clustering-based (PVQ, DPVQ, PPVQ) compression methods. Data preprocessing schemes that improve compression gains are also illustrated.
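As a toy illustration of the prediction-based family (JPEG-LS and CALIC use far richer context models), a spectral first-difference predictor already shows why prediction helps lossless coding of spectrally smooth sounder data. The granule size and values below are made up, and the entropy-coding step is omitted.

```python
import numpy as np

def delta_encode(cube):
    """Predict each channel from the previous one along the spectral axis
    and keep integer residuals (a crude stand-in for JPEG-LS/CALIC-style
    predictors extended spectrally); entropy coding is omitted."""
    residuals = cube.copy()
    residuals[1:] = cube[1:] - cube[:-1]
    return residuals

def delta_decode(residuals):
    # Cumulative sum inverts the first differences exactly (integer math).
    return np.cumsum(residuals, axis=0)

# Toy 5-channel, 8x8 granule: smooth spectral ramp plus small noise.
rng = np.random.default_rng(0)
cube = np.linspace(1000, 1100, 5, dtype=np.int64)[:, None, None] \
       + rng.integers(-3, 4, size=(5, 8, 8))
res = delta_encode(cube)
print(np.array_equal(delta_decode(res), cube))  # True: perfectly lossless
print(int(np.abs(res[1:]).max()))               # residuals stay small vs raw ~1000
```

The small, low-entropy residuals are what the subsequent entropy coder exploits; lossless reconstruction is exact because everything stays in integer arithmetic.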
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.606054
Multispectral sharpening of hyperspectral imagery fuses the spectral content of a hyperspectral image with the spatial and spectral content of the multispectral image. The approach we have been investigating compares the spectral information present in the multispectral image to the spectral content in the hyperspectral image and derives a set of equations to approximately transform the multispectral image into a synthetic hyperspectral image. This synthetic hyperspectral image is then recombined with the original low-resolution hyperspectral image to produce a sharpened product. We evaluate this technique against several types of data, showing good performance across all data sets. Recent improvements in the algorithm allow target detection to be performed without loss of performance even at extreme sharpening ratios.
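One hedged reading of "derives a set of equations to approximately transform the multispectral image into a synthetic hyperspectral image" is a per-pixel least-squares mapping fitted on co-registered low-resolution pixels. Everything below (band counts, the synthetic spectral response `srf`) is invented for illustration and is not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(5)
hs_bands, ms_bands, n_pix = 30, 4, 500

# Co-registered low-resolution pixels observed by both sensors (toy data):
# MS bands are modeled as linear mixtures of the HS bands.
hs_lowres = rng.random((n_pix, hs_bands))
srf = rng.random((hs_bands, ms_bands))         # hypothetical spectral responses
ms_lowres = hs_lowres @ srf

# Least-squares transform from MS space to a synthetic HS spectrum.
T, *_ = np.linalg.lstsq(ms_lowres, hs_lowres, rcond=None)
synthetic_hs = ms_lowres @ T
resid = np.linalg.norm(synthetic_hs - hs_lowres)
print(resid)  # nonzero: 4 MS bands cannot fully span 30 HS bands
```

Applied to the full-resolution MS image, `T` would yield the synthetic hyperspectral image that is then recombined with the low-resolution hyperspectral data.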
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602706
Multispectral (MS) and hyperspectral (HS) sensors can facilitate target or anomaly detection in clutter since natural clutter and man-made objects differ in the energy they radiate across the electromagnetic spectrum. Previous research in anomaly detection has formulated two popular algorithms: those based on Gauss-Markov Random Fields (GMRF) and the so-called RX-detector. Performance of these algorithms depends on a number of issues including spatial resolution, spectral correlation between the imaging bands, clutter/target model accuracy, and the acquired data's signal-to-noise ratio (SNR). This paper provides a comparative study of the anomaly detection performance of the RX-detector and the GMRF-based algorithm using: (1) 4m MS imagery acquired from the IKONOS satellite and (2) pansharpened 1m MS imagery created by fusing the 4m MS and the associated 1m panchromatic image sets. The study is based on the detection performance for stationary and slow-moving targets selected from imagery acquired during training exercises at Canadian Forces Base (CFB) Petawawa and CFB Wainwright, Canada.
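As background for the comparison, the RX-detector scores each pixel by its Mahalanobis distance from the background statistics. This minimal numpy sketch on synthetic 4-band data (not the IKONOS imagery used in the paper) shows the basic mechanics.

```python
import numpy as np

def rx_detector(pixels):
    """Global RX anomaly score: Mahalanobis distance of each pixel
    from the scene mean under the scene covariance."""
    centered = pixels - pixels.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(pixels, rowvar=False))
    # Quadratic form (x - mu)^T C^-1 (x - mu) for every pixel at once.
    return np.einsum("ij,jk,ik->i", centered, inv_cov, centered)

# Toy scene: Gaussian background plus one bright anomaly at index 500.
rng = np.random.default_rng(0)
background = rng.normal(0.2, 0.05, size=(500, 4))   # 500 pixels, 4 bands
scene = np.vstack([background, np.full((1, 4), 0.9)])
scores = rx_detector(scene)
print(int(scores.argmax()))  # → 500: the anomaly gets the largest score
```

The GMRF-based detector differs by also modeling spatial correlation between neighboring pixels, which this purely spectral sketch ignores.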
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.607010
For the past two decades, hydrographic surveyors have used Optech's bathymetric laser technology to accurately measure water depths and to describe the geometry of the shallow-water seafloor. Recently, we have demonstrated the potential to produce bottom images from estimates of SHOALS-1000T green laser reflectance, and spatial variations in the optical properties of the water column by analyzing time-resolved waveforms. We have also performed the electronic and geometric integration of an imaging spectrometer into SHOALS, and have developed a first generation of software which provides for the exploitation of the combined laser and hyperspectral data within a fusion paradigm. In this paper, we discuss relevant sensor and data fusion issues, and present recent 3D benthic mapping results.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.602182
In this paper, we compare several detection algorithms that are based on spectral matched (subspace) filters. Non-linear (kernel) versions of these spectral matched (subspace) detectors are also discussed and their performance is compared with the linear versions. These kernel-based detectors exploit the nonlinear correlations between the spectral bands that are ignored by the conventional detectors. Several well-known matched detectors, such as the matched subspace detector, orthogonal subspace detector, spectral matched filter, and adaptive subspace detector (adaptive cosine estimator), are extended to their corresponding kernel versions using ideas from kernel-based learning theory. In kernel-based detection algorithms the data are implicitly mapped into a high-dimensional kernel feature space by a nonlinear mapping associated with a kernel function. The detection algorithm is then derived in the feature space and expressed in terms of kernel functions, so that explicit computation in the high-dimensional feature space is avoided. Experimental results based on simulated toy examples and real hyperspectral imagery show that the kernel versions of these detectors outperform the conventional linear detectors.
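For reference, the linear spectral matched filter that these kernel detectors generalize can be written in a few lines of numpy; the kernel versions replace the inner products below with kernel evaluations (not shown). The toy data are assumptions, not the imagery used in the paper.

```python
import numpy as np

def spectral_matched_filter(pixels, target):
    """Linear SMF: w = C^-1 s / (s^T C^-1 s) applied to mean-removed pixels,
    so a pixel exactly equal to the target signature scores 1."""
    mu = pixels.mean(axis=0)
    c_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    s = target - mu
    w = c_inv @ s / (s @ c_inv @ s)
    return (pixels - mu) @ w

# Synthetic 6-band scene with the target signature embedded as the last pixel.
rng = np.random.default_rng(0)
bg = rng.normal(0.3, 0.05, size=(300, 6))
target = np.full(6, 0.9)
scene = np.vstack([bg, target[None, :]])
scores = spectral_matched_filter(scene, target)
print(round(float(scores[-1]), 6))  # 1.0: unit gain at the target by construction
```

Because the filter depends on the data only through inner products, substituting a kernel function for those products yields the kernel matched filter discussed above.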
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603768
Many different hyperspectral target detection algorithms have been developed and tested under various assumptions, methods, and data sets. This work examines the spectral angle mapper (SAM), adaptive coherence estimator (ACE), and constrained energy minimization (CEM) algorithms. Algorithm performance is examined over multiple images, targets, and backgrounds. Methods to examine algorithm performance are plentiful, and several different metrics are used here. Quantitative metrics are used to make direct comparisons between algorithms. Further analysis using visual performance metrics is made to examine interesting trends in the data. Results show an increase in detection algorithm performance as image altitude increases and spatial information decreases. Theories to explain this phenomenon are introduced.
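Of the three detectors, SAM is the simplest to state: the angle between a pixel spectrum and a reference signature, which is invariant to illumination scaling. A small sketch with made-up spectra:

```python
import numpy as np

def sam(pixel, reference):
    """Spectral angle (radians) between a pixel and a reference signature;
    smaller angles mean a better spectral match, independent of brightness."""
    cos = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards float round-off

target = np.array([0.1, 0.4, 0.8, 0.3])
brighter = 2.5 * target                      # same spectrum, more illumination
different = np.array([0.8, 0.1, 0.2, 0.9])

print(sam(brighter, target))    # ≈ 0: SAM ignores uniform scaling
print(sam(different, target))   # a much larger angle for a mismatched spectrum
```

ACE and CEM additionally whiten the data with background statistics, which is why they typically outperform SAM in correlated clutter.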
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.605727
Algorithms exploiting hyperspectral imagery for target detection have continually evolved to provide improved detection results. Adaptive matched filters can be used to locate spectral targets by modeling scene background as either structured (geometric) with a set of endmembers (basis vectors) or as unstructured (stochastic) with a covariance or correlation matrix. These matrices are often calculated using all available pixels in a data set. In unstructured background research, various techniques for improving upon scene-wide methods have been developed, each involving either the removal of target signatures from the background model or the segmentation of image data into spatial or spectral subsets. Each of these methods increases the detection signal-to-background ratio (SBR) and the multivariate normality (MVN) of the data from which background statistics are calculated, thus increasing separation between target and non-target species in the detection statistic and ultimately improving thresholded target detection results. Such techniques for improved background characterization are widely practiced but not well documented or compared. This paper provides a review and comparison of methods in target exclusion, spatial subsetting, and spectral pre-clustering, and introduces a new technique which combines these methods. The analysis provides insight into the merit of employing unstructured background characterization techniques, as well as limitations for their practical application.
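A minimal sketch of one of the schemes compared (target exclusion): score the scene once, drop the highest-scoring fraction of pixels, and re-estimate the background statistics from the remainder. The scene, scores, and exclusion fraction below are illustrative assumptions.

```python
import numpy as np

def background_stats(pixels, scores, exclude_frac=0.05):
    """Exclude the top-scoring pixels (likely targets) before estimating
    the background mean and covariance used by an adaptive matched filter."""
    cutoff = np.quantile(scores, 1.0 - exclude_frac)
    bg = pixels[scores <= cutoff]
    return bg.mean(axis=0), np.cov(bg, rowvar=False)

# Synthetic 4-band scene: Gaussian background around 0.2 with 5% bright targets.
rng = np.random.default_rng(0)
scene = rng.normal(0.2, 0.05, size=(400, 4))
scene[-20:] = 0.9
# Crude initial score: distance from the (contaminated) scene mean.
scores = np.linalg.norm(scene - scene.mean(axis=0), axis=1)
mu_bg, cov_bg = background_stats(scene, scores)
# Excluding targets pulls the mean estimate back toward the true background.
print(np.abs(mu_bg - 0.2).max() < np.abs(scene.mean(axis=0) - 0.2).max())  # True
```

Spatial subsetting and spectral pre-clustering pursue the same goal, higher SBR and better multivariate normality of the background sample, by partitioning the scene instead of censoring it.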
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603169
In the current target detection literature, there are two major approaches to evaluating detector performance. One is based on theoretical calculations assuming simple statistical models; the other uses real or simulated spectral images. The former approach is too simplistic, at this point, to address practical needs. On the other hand, the latter approach does not give us a good understanding of why certain detectors work better than others in the context of specific targets and spectral images. Our goal is to initiate research that will combine these two separate approaches. In this paper, we start with a comparison of two well-known detectors: the matched filter detector (MFD) and the orthogonal subspace projection (OSP) detector. We show a surprising result that MFD always outperforms OSP in a traditional theoretical formulation of the detection problem. We also show that this theoretical formulation is not realistic for practical target detection in real spectral images. However, the obtained results suggest more realistic approaches for providing theoretical background for practical target detection. We also point out many detectors introduced in the literature that are equivalent to the MFD or OSP detectors.
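For readers placing the two detectors side by side, the OSP statistic d^T (I - BB^+) x, which projects out the background endmembers B before correlating with the target signature d, can be sketched in a few lines. The endmember matrix and spectra below are toy assumptions chosen so the scores are easy to verify by hand.

```python
import numpy as np

def osp_score(x, d, B):
    """OSP: annihilate the background subspace col(B), then correlate with d."""
    P = np.eye(len(x)) - B @ np.linalg.pinv(B)   # projector onto col(B)'s complement
    return d @ P @ x

d = np.array([0.0, 0.0, 1.0, 0.5])               # target signature
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])                       # background spans bands 1-2
x_bg = np.array([0.7, 0.3, 0.0, 0.0])            # pure background
x_mix = x_bg + 0.6 * d                           # background + 0.6 of the target

print(osp_score(x_bg, d, B))    # analytically 0: background is annihilated
print(osp_score(x_mix, d, B))   # analytically 0.6 * (d^T d) = 0.75
```

The MFD, by contrast, whitens with the background covariance rather than projecting a subspace away, which is the structural difference behind the theoretical comparison above.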
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.603195
Detection of targets at the subpixel level is a very challenging task due to the lack of available spatial properties, the low probability of target occurrence, and background interference. Constrained Energy Minimization (CEM) is a popular technique for target detection in hyperspectral images. It is particularly useful when only the desired target signature is available. When the undesired signatures to be eliminated are also known, the Target-Constrained Interference-Minimized Filter (TCIMF) can be used to minimize the output of the undesired signatures and further improve target discrimination performance. Both CEM and TCIMF are matched-filter-based detectors. In this paper, we further investigate their performance in background suppression and how to improve their generalization capability in the detection of large objects.
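A minimal numpy sketch of CEM under the setting described above (only the desired signature d is known): minimize the average output energy subject to the unit-gain constraint w^T d = 1, giving w = R^-1 d / (d^T R^-1 d) with R the sample correlation matrix. The toy scene is an assumption; TCIMF's extra zero constraints on undesired signatures are not shown.

```python
import numpy as np

def cem_filter(pixels, d):
    """CEM: minimize output energy subject to w^T d = 1, i.e.
    w = R^-1 d / (d^T R^-1 d) with R the sample correlation matrix."""
    R = pixels.T @ pixels / pixels.shape[0]
    r_inv_d = np.linalg.solve(R, d)
    w = r_inv_d / (d @ r_inv_d)
    return pixels @ w

# Synthetic 5-band scene with the target signature as the last pixel.
rng = np.random.default_rng(1)
bg = rng.normal(0.3, 0.05, size=(400, 5))
d = np.array([0.9, 0.1, 0.7, 0.2, 0.8])
scene = np.vstack([bg, d[None, :]])
out = cem_filter(scene, d)
print(round(float(out[-1]), 6))  # 1.0: the unit-gain constraint holds exactly
```

Background suppression shows up as small filter outputs everywhere except at pixels containing the target signature, which is the behavior the paper investigates for large objects.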
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.604834
A spectrometer for real-time differential spectroscopy has been created. The spectrometer provides detection of spectrum derivatives with random spectral access. The instrument is based on an acousto-optic tunable filter (AOTF) with ultrasound phase manipulation.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.610638
A new class of spectrally adaptive infrared detectors has been reported recently that has a spectral response function that can be altered electronically by controlling the bias voltage of the photodetector. Unlike conventional sensors, these new sensors have "bands" that have highly correlated spectral responses. The potential benefit of these sensors is that the number of bands (and their spectral features) used can be adapted to a specific task. The drawback is that there might not be enough spectral diversity to perform detection and classification operations.

In this paper we present a new theory that describes the suitability of an arbitrary spectral sensor to perform a specific spectral detection/classification task. This theory is based on the geometric relationships between the sensor space that describes the spectral characteristics of the detector and a scene space that contains the spectra to be observed. We adapt the theory of canonical correlation analysis to provide a rigorous framework for assessing the utility of spectral detectors. We also show that this general theory encompasses traditional band selection methods, but provides much greater flexibility and a more transparent and intuitive explanation of the phenomenology.
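The canonical-correlation machinery the paper adapts can be illustrated directly: whiten the two data sets and take the singular values of their cross-covariance; a canonical correlation near 1 means the sensor space captures the corresponding scene-space direction. The toy data below (a sensor whose responses are exact linear mixtures of the scene spectra) are an assumption used only to make the geometry visible.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two multivariate data sets: whiten
    each set (orthonormal basis for its centered columns via SVD), then
    take the singular values of the cross-product of the two bases."""
    def whiten(Z):
        Z = Z - Z.mean(axis=0)
        U, _, _ = np.linalg.svd(Z, full_matrices=False)
        return U
    return np.linalg.svd(whiten(X).T @ whiten(Y), compute_uv=False)

rng = np.random.default_rng(3)
scene = rng.random((200, 6))            # "scene space": spectra to be observed
sensor = scene @ rng.random((6, 3))     # "sensor space": 3 bias-tuned responses
corr = canonical_correlations(sensor, scene)
print(corr.round(6))  # all ≈ 1: these scene directions are fully captured
```

Correlations below 1 would quantify exactly the loss of spectral diversity that the paper's drawback paragraph warns about, which is what makes this framework a generalization of band selection.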
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, (2005) https://doi.org/10.1117/12.611238
The Hyperspectral Polarimetric Imaging (HPI) Testbed system combines a dual-band hyperspectral imager (VNIR and SWIR), a 3-axis polarimetric imager, and a high resolution panchromatic imager. All imagers operate through a common fore-optic, and thus have identical fields of view, with simultaneous image capture. The HPI testbed system was developed to aid a sentry in the surveillance of broad sectors for intrusion by ground vehicles or other non-natural objects. The various image components are readily combined through image fusion, which lends itself well to anomaly detection algorithms. This paper describes the general HPI testbed system design and performance, and also provides a detailed description of the polarimetric imaging system, calibration methods, and performance.