Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908801 (2014) https://doi.org/10.1117/12.2073455
This PDF file contains the front matter associated with SPIE Proceedings Volume 9088, including the Title Page, Copyright information, Table of Contents, Introduction, Tribute, and Conference Committee listing.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908802 (2014) https://doi.org/10.1117/12.2050397
Image segmentation and clustering are methods for extracting a set of components whose members are similar in
some way. Instead of focusing on the consistencies of local image characteristics such as borders and regions in a
perceptual way, the spectral graph theoretic approach is based on the eigenvectors of an affinity matrix; therefore
it captures perceptually important non-local properties of an image. A typical spectral graph segmentation
algorithm, normalized cuts, incorporates both the dissimilarity between groups and similarity within groups by
capturing global consistency, making the segmentation process more balanced and stable. For spectral graph
partitioning, we create a graph-image representation wherein each pixel is taken as a graph node, and two pixels
are connected by an edge based on certain similarity criteria. In most cases, nearby pixels are likely to be
in the same region, therefore each pixel is connected to its spatial neighbors in the normalized cut algorithm.
However, this ignores the difference between distinct groups or the similarity within a group. A hyperspectral
image contains high spatial correlation among pixels, but each pixel is better described by its high dimensional
spectral feature vector which provides more information when characterizing the similarities among every pair
of pixels. Also, to exploit the fact that boundaries usually reside in low-density regions of the spectral domain, a
local density-adaptive affinity matrix is presented in this paper. Results are shown for airborne hyperspectral
imagery collected with the HyMAP, AVIRIS, and HYDICE sensors.
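As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below builds a Gaussian affinity matrix with one common form of local density adaptation, in which each pixel's kernel width is set by the distance to its k-th nearest neighbor, and performs a two-way normalized cut by thresholding the second eigenvector of the symmetric normalized Laplacian. The 5-band "pixels" are synthetic, and the parameter choices are illustrative.

```python
import numpy as np

def adaptive_affinity(X, k=7):
    """Gaussian affinity with local density adaptation: each point's kernel
    width is its distance to the k-th nearest neighbor (self-tuning style)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    sigma = np.sort(D, axis=1)[:, min(k, len(X) - 1)]   # k-th NN distance
    W = np.exp(-D**2 / (np.outer(sigma, sigma) + 1e-12))
    np.fill_diagonal(W, 0.0)
    return W

def normalized_cut_bipartition(X, k=7):
    """Two-way normalized cut: split on the second eigenvector of the
    symmetric normalized Laplacian I - D^{-1/2} W D^{-1/2}."""
    W = adaptive_affinity(X, k)
    d_inv = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    L = np.eye(len(X)) - d_inv @ W @ d_inv
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # second-smallest eigenvector
    return (fiedler > np.median(fiedler)).astype(int)

# Two synthetic "spectral classes" of 5-band pixels
rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal(0.0, 0.3, (20, 5)),
                    rng.normal(1.2, 0.3, (20, 5))])
labels = normalized_cut_bipartition(pixels)
```

A real hyperspectral image would supply one graph node per pixel, with the spectral feature vector of each pixel playing the role of the rows of `pixels` here.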
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908803 (2014) https://doi.org/10.1117/12.2050405
We introduce a novel method for image fusion based on wavelet packets. Our ideas yield an approach for pan-sharpening low spatial resolution multispectral images with high spatial resolution panchromatic images. Two distinct fusion algorithms, distinguished by which wavelet packet coefficients are mixed, are investigated. We evaluate our algorithm on images acquired from Landsat 7 ETM+, showing an improvement over results achieved through more basic wavelet algorithms. We also propose the use of spectral concentration during the wavelet packet pan-sharpening process to reduce the dimensionality of the data.
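The general substitution idea behind wavelet fusion can be sketched with a single-level Haar transform (plain wavelets rather than the paper's wavelet packets, and synthetic arrays rather than Landsat 7 ETM+ data): the multispectral band keeps its approximation coefficients while the detail coefficients are taken from the panchromatic image.

```python
import numpy as np

def haar2(x):
    """One level of the 2D Haar transform: returns (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2.0          # vertical average
    d = (x[0::2] - x[1::2]) / 2.0          # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def haar_pansharpen(ms_band, pan):
    """Substitution fusion: keep the MS approximation, inject the pan details."""
    ms_up = np.kron(ms_band, np.ones((2, 2)))   # naive 2x upsample to pan size
    LL_ms, _, _, _ = haar2(ms_up)
    _, LH_p, HL_p, HH_p = haar2(pan)
    return ihaar2(LL_ms, LH_p, HL_p, HH_p)

rng = np.random.default_rng(1)
pan = rng.normal(size=(8, 8))     # high-resolution panchromatic image
ms = rng.normal(size=(4, 4))      # one low-resolution multispectral band
fused = haar_pansharpen(ms, pan)
```

A wavelet packet version would recurse on the detail subbands as well, which is where the paper's coefficient-mixing choices come into play.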
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908804 (2014) https://doi.org/10.1117/12.2050651
Schroedinger Eigenmaps (SE) has recently emerged as a powerful graph-based technique for semi-supervised manifold learning and recovery. By extending the Laplacian of a graph constructed from hyperspectral imagery to incorporate barrier or cluster potentials, SE enables machine learning techniques that employ expert/labeled information provided at a subset of pixels. In this paper, we show how different types of nondiagonal potentials can be used within the SE framework in a way that allows for the integration of spatial and spectral information in unsupervised manifold learning and recovery. The nondiagonal potentials encode spatial proximity, which when combined with the spectral proximity information in the original graph, yields a framework that is competitive with state-of-the-art spectral/spatial fusion approaches for clustering and subsequent classification of hyperspectral image data.
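A minimal sketch of the L + alphaV construction on synthetic data follows. The graph parameters, potential weight, and linked pairs are illustrative, not those of the paper; the nondiagonal "cluster" potential used here (V_ii = V_jj = 1, V_ij = V_ji = -1 for each linked pair) adds the penalty (f_i - f_j)^2 to the quadratic form, pulling the linked pixels together in the embedding.

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Unnormalized Laplacian of a fully connected Gaussian-affinity graph."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    W = np.exp(-D**2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(1)) - W

def schroedinger_eigenmaps(X, pairs, alpha=20.0, dim=2):
    """Eigenvectors of L + alpha*V with nondiagonal cluster potentials:
    each linked pair (i, j) contributes (f_i - f_j)^2 to the energy."""
    n = len(X)
    V = np.zeros((n, n))
    for i, j in pairs:
        V[i, i] += 1.0; V[j, j] += 1.0
        V[i, j] -= 1.0; V[j, i] -= 1.0
    _, vecs = np.linalg.eigh(graph_laplacian(X) + alpha * V)
    return vecs[:, 1:dim + 1]    # skip the constant zero-eigenvalue vector

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (15, 3)),
               rng.normal(2.0, 0.3, (15, 3))])
emb_free = schroedinger_eigenmaps(X, pairs=[], alpha=0.0)      # plain eigenmaps
emb_link = schroedinger_eigenmaps(X, pairs=[(0, 15)], alpha=20.0)
```

In the spatial/spectral fusion setting described above, the pairs would be chosen from spatially adjacent pixels rather than hand-picked as here.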
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908805 (2014) https://doi.org/10.1117/12.2050776
Hyperspectral sensors produce large quantities of data when operating on uninhabited aerial vehicles (UAVs) that can overwhelm available data links. Technical Research Associates, Inc. designed, developed, and implemented a data compression approach that is capable of reducing this data volume by a factor of 100 or more with no loss in the tactical utility of the data. This algorithm, Full Spectrum Wavelet, combines efficient coding of the spectral dimension with a wavelet transformation of the spatial dimension. The approach has been tested on a wide variety of reflection band and thermal band hyperspectral data sets. In addition to such traditional measures as the error introduced by the compression, the performance of the algorithm was evaluated using application-oriented measures such as Receiver Operating Characteristic (ROC) curves and terrain categorization maps. Comparisons between these products showed little or no degradation of performance out to compression factors of 100. The evaluation procedure provided results directly relevant to tactical users of the data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908806 (2014) https://doi.org/10.1117/12.2048639
One common approach to the compression of ultraspectral data cubes is by means of schemes where linear prediction plays an important role in facilitating the removal of redundant information. In general, compression algorithms can be seen as a sequence of stages where the output of one stage is the input of the following one. A stage that implements linear prediction relies heavily on a preprocessing stage that acts as a reversible procedure that rearranges the data cube and maximizes its spectral band correlation. In this paper we focus on AIRS (Atmospheric Infrared Sounder) images, a type of ultraspectral data cube that involves more than two thousand bands and is an excellent candidate for compression. Specifically, we consider several elements that are part of the preprocessing stage of an ultraspectral image. First, we explore the effect of SFCs (Space Filling Curves) as a way to map an m-dimensional space into a highly correlated unidimensional space. In order to improve the overall mapping performance we propose a new scanning procedure that provides a more efficient alternative to the use of traditional state-of-the-art curves. Second, we analyze, compare and introduce modifications to different band ordering and correlation estimation methods presented in the context of ultraspectral image preprocessing. Finally, we apply the techniques presented in this paper to a real AIRS compression architecture to obtain rate-distortion curves as a function of preprocessing parameters and determine the best scenario for a given linear prediction stage.
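The band ordering plus linear prediction combination can be sketched as follows. This is a toy stand-in for the paper's methods: the reordering is a simple greedy correlation chain (not the proposed scanning procedure), the predictor is a one-band least-squares gain, and the cube is synthetic.

```python
import numpy as np

def correlation_band_order(cube):
    """Greedy reordering: start at band 0, repeatedly append the unused band
    most correlated with the last one (a crude stand-in for an optimal tour)."""
    bands = cube.reshape(cube.shape[0], -1)
    C = np.corrcoef(bands)
    order, used = [0], {0}
    while len(order) < len(bands):
        last = order[-1]
        nxt = max((b for b in range(len(bands)) if b not in used),
                  key=lambda b: C[last, b])
        order.append(nxt)
        used.add(nxt)
    return order

def prediction_residuals(cube, order):
    """First-order linear prediction along the band order: each band is
    predicted by a least-squares scaling of its predecessor."""
    bands = cube.reshape(cube.shape[0], -1)[order]
    res = [bands[0]]                           # first band sent as-is
    for prev, cur in zip(bands, bands[1:]):
        a = (prev @ cur) / (prev @ prev)       # LS prediction gain
        res.append(cur - a * prev)
    return np.array(res)

# Synthetic cube: 16 strongly correlated bands of a 32x32 scene
rng = np.random.default_rng(3)
base = rng.normal(size=1024)
scales = np.linspace(1.0, 3.0, 16)
cube = (scales[:, None] * base
        + 0.001 * rng.normal(size=(16, 1024))).reshape(16, 32, 32)
order = correlation_band_order(cube)
res = prediction_residuals(cube, order)
```

The residuals carry far less energy than the raw bands, which is what makes the subsequent entropy-coding stage effective.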
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908807 (2014) https://doi.org/10.1117/12.2050682
In past work, we have shown that density effects in hyperspectral bi-directional reflectance function (BRDF) data are consistent in laboratory goniometer data, field goniometer measurements with the NRL Goniometer for Portable Hyperspectral Earth Reflectance (GOPHER), and airborne CASI-1500 hyperspectral imagery. Density effects in granular materials have been described in radiative transfer models and are known, for example, to influence both the overall level of reflectance as well as the size of specific characteristics such as the width of the opposition effect in the BRDF. However, in mineralogically complex sands, such as coastal sands, the relative change in reflectance with density depends on the composite nature of the sand. This paper examines the use of laboratory and field hyperspectral goniometer data and their utility for retrieving sand density from airborne hyperspectral imagery. We focus on limitations of current models to describe density effects in BRDF data acquired in the field, laboratory setting, and from airborne systems.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908809 (2014) https://doi.org/10.1117/12.2053350
Reflectance spectra of solids are influenced by the absorption coefficient and index of refraction as well as particle size
and morphology. In the infrared, spectral features may be observed as either maxima or minima: in general, the upward-going
peaks in the reflectance spectrum result from surface scattering, i.e., rays that have reflected from the
surface without penetrating, whereas downward-going peaks result from either absorption or volume scattering, i.e., rays
that have penetrated into the sample, been absorbed or refracted into the sample interior, and are not reflected. The light
signal reflected from solids usually encompasses all these effects, which include dependencies on particle size,
morphology, and sample density. This paper reports reflectance spectra measured in the 1.3–16 micron range for various
bulk materials that have a combination of strong and weak absorption bands in order to understand the effects on the
spectral features as a function of the mean grain size of the sample. The bulk materials were ground and sieved to
separate the samples into various size fractions: 0-45, 45-90, 90-180, 180-250, 250-500, and >500 microns. The
directional-hemispherical spectra were recorded using a Fourier transform infrared spectrometer equipped with an
integrating sphere to measure the reflectance for all of the particle-size fractions. We have studied both organic and
inorganic materials, but this paper focuses on inorganic salts, NaNO3, in particular. Our studies clearly show that particle
size has an enormous influence on the measured reflectance spectra for bulk materials and that successful identification
requires sufficient representative reflectance data so as to include the particle size(s) of interest. Origins of the effects are
discussed.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880B (2014) https://doi.org/10.1117/12.2050382
Hyperspectral images comprise, by design, high dimensional image data. However, research has shown that for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of non-linear manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data. With graph theory and manifold learning based models, the only assumption is that the data reside on an underlying manifold. In previous publications, we have shown that manifold coordinate approximation using locally linear embedding (LLE) is a viable pre-processing step for target detection with the Adaptive Cosine/Coherence Estimator (ACE) algorithm. Here, we improve upon that methodology using a more rigorous, data-driven implementation of LLE that incorporates the injection of a "cloud" of target pixels and the Spectral Angle Mapper (SAM) detector. The LLE algorithm, which assumes that the data are locally linear, is typically governed by a user-defined parameter k, indicating the number of nearest neighbors to use in the initial graph model. We use an adaptive approach to building the graph that is governed by the data itself and does not rely upon user input. This implementation of LLE can yield greater separation between the target pixels and the background pixels in the manifold space. We present an analysis of target detection performance in the manifold coordinates using scene-derived target spectra and laboratory-measured target spectra across two different data sets.
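The ACE detector named above is standard and compact enough to sketch directly; the scene data and target signature below are synthetic, and the adaptive-LLE pre-processing step is not reproduced here.

```python
import numpy as np

def ace_detector(cube, target):
    """Adaptive Cosine/Coherence Estimator on mean-removed data:
    ACE(x) = (s^T C^-1 x)^2 / ((s^T C^-1 s) (x^T C^-1 x))."""
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = X.mean(0)
    Xc, s = X - mu, target - mu
    C = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
    Ci = np.linalg.inv(C)
    num = (Xc @ Ci @ s) ** 2
    den = (s @ Ci @ s) * np.einsum('ij,jk,ik->i', Xc, Ci, Xc)
    return (num / np.maximum(den, 1e-12)).reshape(cube.shape[:-1])

# Synthetic 8-band scene with one implanted target pixel
rng = np.random.default_rng(4)
cube = rng.normal(size=(20, 20, 8))
target = np.full(8, 2.0)          # hypothetical laboratory signature
cube[10, 10] = target             # implant
scores = ace_detector(cube, target)
```

ACE scores lie in [0, 1] (the squared cosine of the angle to the target in whitened coordinates), so the implanted pixel scores near 1 while background pixels score low.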
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880C (2014) https://doi.org/10.1117/12.2049146
The additive target model is used routinely in the statistical detection of opaque targets, despite its phenomenological inconsistency. The more appropriate replacement target model is seldom used, because the standard method for producing a detection algorithm from it proves to be intractable, unless narrow restrictions are imposed on the target model. Now however, continuum fusion methods have allowed an expanded solution set to a more general replacement target problem. We derive an example detection algorithm, to illustrate the fusion principles one can use to tailor a method to a particular performance requirement.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880D (2014) https://doi.org/10.1117/12.2053540
In this study, targets and nontargets in a hyperspectral image are characterized in terms of their spectral features. The target detection problem is treated as a two-class classification problem. For this purpose, a vector tunnel algorithm (VTA) is proposed. The vector tunnel is characterized only by the target class information. This method is then compared with the Euclidean Distance (ED), Spectral Angle Mapper (SAM), and Support Vector Machine (SVM) algorithms. To obtain the training data belonging to the target class, the training regions are selected randomly. After determination of the parameters of the algorithms with the training set, detection is performed at each pixel, labeling it as target or background. Detection results are then displayed as thematic maps. The algorithms are trained with the same training sets, and their comparative performances are tested under various cases. During these studies, various threshold levels are evaluated based on the efficiency of the algorithms by means of Receiver Operating Characteristic (ROC) curves as well as visually.
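The abstract does not spell out the tunnel construction, so the following is only a guess at the flavor of such a rule, built solely from target-class training data as the abstract states: a per-band interval ("tunnel") of mean ± k standard deviations is estimated from target training spectra, and a pixel is declared target only if every band falls inside it. All spectra and parameters here are illustrative, and the paper's actual construction may differ.

```python
import numpy as np

def fit_tunnel(train, k=4.0):
    """Per-band tunnel [mu - k*sigma, mu + k*sigma] estimated from
    target-class training spectra only."""
    mu, sd = train.mean(0), train.std(0)
    return mu - k * sd, mu + k * sd

def tunnel_detect(pixels, lo, hi):
    """Declare target only if every band stays inside the tunnel."""
    return np.all((pixels >= lo) & (pixels <= hi), axis=-1)

rng = np.random.default_rng(5)
t = np.linspace(0.2, 0.8, 8)                      # hypothetical target spectrum
train = t + 0.02 * rng.normal(size=(50, 8))       # target training spectra
lo, hi = fit_tunnel(train, k=4.0)
tgt_test = t + 0.02 * rng.normal(size=(10, 8))    # held-out targets
bg_test = t + 0.5 + 0.02 * rng.normal(size=(10, 8))  # offset background
hits_t = tunnel_detect(tgt_test, lo, hi)
hits_b = tunnel_detect(bg_test, lo, hi)
```

Varying k plays the role of the threshold sweep used to trace the ROC curves mentioned in the abstract.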
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880E (2014) https://doi.org/10.1117/12.2048860
This paper describes the application of two non-traditional kinds of machine learning (transductive machine learning
and the more recently proposed matched-pair machine learning) to the target detection problem. The approach combines
explicit domain knowledge to model the target signal with a more agnostic machine-learning approach to characterize
the background. The concept is illustrated with simulated data from an elliptically-contoured background distribution, on
which a subpixel target of known spectral signature but unknown spatial extent has been implanted.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880G (2014) https://doi.org/10.1117/12.2050522
This paper presents a methodology and results for the comparison of simulated imagery to real imagery acquired with
multiple sensors hosted on an airborne platform. The dataset includes aerial multi- and hyperspectral imagery with
spatial resolutions of one meter or less. The multispectral imagery includes data from an airborne sensor with three-band
visible color and calibrated radiance imagery in the long-, mid-, and short-wave infrared. The airborne hyperspectral
imagery includes 360 bands of calibrated radiance and reflectance data spanning 400 to 2450 nm in wavelength.
Collected in September 2012, the imagery is of a park in Avon, NY, and includes a dirt track and areas of grass, gravel,
forest, and agricultural fields. A number of artificial targets were deployed in the scene prior to collection for purposes of
target detection, subpixel detection, spectral unmixing, and 3D object recognition. A synthetic reconstruction of the
collection site was created in DIRSIG, an image generation and modeling tool developed by the Rochester Institute of
Technology, based on ground-measured reflectance data, ground photography, and previous airborne imagery.
Simulated airborne images were generated using the scene model, time of observation, estimates of the atmospheric
conditions, and approximations of the sensor characteristics. The paper provides a comparison between the empirical and
simulated images, including a comparison of achieved performance for classification, detection and unmixing
applications. It was found that several differences exist due to the way the image is generated, including finite sampling
and incomplete knowledge of the scene, atmospheric conditions and sensor characteristics. The lessons learned from this
effort can be used in constructing future simulated scenes and further comparisons between real and simulated imagery.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880H (2014) https://doi.org/10.1117/12.2050433
The MODTRAN6 radiative transfer (RT) code is a major advancement over earlier versions of the MODTRAN
atmospheric transmittance and radiance model. This version of the code incorporates modern software
architecture, including an application programming interface, enhanced physics features including a line-by-line
algorithm, a supplementary physics toolkit, and new documentation. The application programming interface
has been developed for ease of integration into user applications. The MODTRAN code has been restructured
towards a modular, object-oriented architecture to simplify upgrades as well as facilitate integration with other
developers' codes. MODTRAN now includes a line-by-line algorithm for high resolution RT calculations as well
as coupling to optical scattering codes for easy implementation of custom aerosols and clouds.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880I (2014) https://doi.org/10.1117/12.2050596
Most Earth observation hyperspectral imagery (HSI) detection and identification algorithms depend critically upon a robust atmospheric compensation capability to correct for the effects of the atmosphere on the radiance signal. Atmospheric compensation methods typically perform optimally when ancillary ground truth data are available, e.g., high fidelity in situ radiometric observations or atmospheric profile measurements. When ground truth is incomplete or not available, additional assumptions must be made to perform the compensation. Meteorological climatologies are available to provide climatological norms for input into the radiative transfer models; however no such climatologies exist for empirical methods. The success of atmospheric compensation methods such as the empirical line method suggests that remotely sensed HSI scenes contain comprehensive sets of atmospheric state information within the spectral data itself. It is argued that large collections of empirically-derived atmospheric coefficients collected over a range of climatic and atmospheric conditions comprise a resource that can be applied to prospective atmospheric compensation problems. This paper introduces a new climatological approach to atmospheric compensation in which empirically derived spectral information, rather than sensible atmospheric state variables, is the fundamental datum. An experimental archive of airborne HSI data is mined for representative atmospheric compensation coefficients, which are assembled in a scientific database of spectral and sensible atmospheric observations. We present the empirical techniques for extracting the coefficients, the modeling methods used to standardize the coefficients across varying collection and illumination geometries, and the resulting comparisons of adjusted coefficients. 
Preliminary results comparing normalized coefficients from representative scenes across several distinct environments are presented, along with a discussion of the potential benefits, shortfalls and future work to fully develop the new technique.
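The empirical line method referenced above fits, per band, a linear model radiance = gain × reflectance + offset from reference panels of known reflectance; the empirically derived gain/offset pairs are exactly the kind of coefficients the proposed archive would collect. A toy two-panel version (all gains, offsets, and panel values are illustrative):

```python
import numpy as np

def empirical_line(radiance_panels, reflectance_panels):
    """Per-band least-squares fit of radiance = gain*reflectance + offset
    from calibration panels of known reflectance.
    radiance_panels, reflectance_panels: (n_panels, n_bands) arrays."""
    gains, offsets = [], []
    for b in range(radiance_panels.shape[1]):
        A = np.column_stack([reflectance_panels[:, b],
                             np.ones(len(reflectance_panels))])
        (g, o), *_ = np.linalg.lstsq(A, radiance_panels[:, b], rcond=None)
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)

def invert_to_reflectance(radiance, gains, offsets):
    """Apply the fitted coefficients to convert radiance to reflectance."""
    return (radiance - offsets) / gains

# Dark and bright panels in a 3-band toy scene
true_gain = np.array([2.0, 1.5, 3.0])
true_off = np.array([0.10, 0.20, 0.05])
refl_panels = np.array([[0.05, 0.05, 0.05],
                        [0.50, 0.50, 0.50]])
rad_panels = refl_panels * true_gain + true_off
gains, offsets = empirical_line(rad_panels, refl_panels)
```

With two noiseless panels the fit is exact; in practice more panels (or scene-derived references) and noise make the least-squares formulation essential.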
F. A. Kruse, A. M. Kim, S. C. Runyon, Sarah C. Carlisle, C. C. Clasen, C. H. Esterline, A. Jalobeanu, J. P. Metcalf, P. L. Basgall, et al.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880K (2014) https://doi.org/10.1117/12.2049725
The Naval Postgraduate School (NPS) Remote Sensing Center (RSC) and research partners have completed a remote sensing pilot project in support of California post-earthquake-event emergency response. The project goals were to dovetail emergency management requirements with remote sensing capabilities to develop prototype map products for improved earthquake response. NPS coordinated with emergency management services and first responders to compile information about essential elements of information (EEI) requirements. A wide variety of remote sensing datasets including multispectral imagery (MSI), hyperspectral imagery (HSI), and LiDAR were assembled by NPS for the purpose of building imagery baseline data and to demonstrate the use of remote sensing to derive ground surface information for use in planning, conducting, and monitoring post-earthquake emergency response. Worldview-2 data were converted to reflectance, orthorectified, and mosaicked for most of Monterey County, CA. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data acquired at two spatial resolutions were atmospherically corrected and analyzed in conjunction with the MSI data. LiDAR data at point densities from 1.4 points/m2 to over 40 points/m2 were analyzed to determine digital surface models. The multimodal data were then used to develop change detection approaches and products and other supporting information. Analysis results from these data along with other geographic information were used to identify and generate multi-tiered products tied to the level of post-event communications infrastructure (internet access + cell, cell only, no internet/cell). Technology transfer of these capabilities to local and state emergency response organizations gives emergency responders new tools in support of post-disaster operational scenarios.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880L (2014) https://doi.org/10.1117/12.2051049
Optical imaging spectroscopy is investigated as a method to estimate radiological background by spectral identification of soils, sediments, rocks, minerals and building materials derived from natural materials and assigning tabulated radiological emission values to these materials. Radiological airborne surveys are undertaken by local, state and federal agencies to identify the presence of radiological materials out of regulatory compliance. Detection performance in such surveys is determined by (among other factors) the uncertainty in the radiation background; increased knowledge of the expected radiation background will improve the ability to detect low-activity radiological materials. Radiological background due to naturally occurring radiological materials (NORM) can be estimated by reference to previous survey results, use of global 40K, 238U, and 232Th (KUT) values, reference to existing USGS radiation background maps, or by a moving average of the data as it is acquired. Each of these methods has its drawbacks: previous survey results may not include recent changes, the global average provides only a zero-order estimate, the USGS background radiation map resolutions are coarse and are accurate only to 1 km - 25 km sampling intervals depending on locale, and a moving average may essentially low pass filter the data to obscure small changes in radiation counts. Imaging spectroscopy from airborne or spaceborne platforms can offer higher resolution identification of materials and background, as well as provide imaging context information. AVIRIS hyperspectral image data are analyzed using commercial exploitation software to determine the usefulness of imaging spectroscopy to identify qualitative radiological background emissions when compared to airborne radiological survey data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880M (2014) https://doi.org/10.1117/12.2053491
Mapping of benthic habitats from hyperspectral imagery can be achieved by integrating bio-optical models with common techniques for hyperspectral image processing, such as spectral unmixing. Several algorithms have been described in the literature to compensate for or remove the effects of the water column and extract information about benthic habitat characteristics using only measured hyperspectral imagery as input. More recently, the increasing availability of lidar-derived bathymetry information offers the possibility of incorporating these data into existing algorithms, thereby reducing the number of unknowns in the problem, for improved retrieval of benthic habitat properties. This study demonstrates how bathymetry information improves the mapping of benthic habitats using two algorithms that combine bio-optical models with linear spectral unmixing. Hyperspectral data, both simulated and measured, in-situ spectral data, and lidar-derived bathymetry data are used for the analysis. The simulated data are used to study the capabilities of the selected algorithms to improve estimates of benthic habitat composition by combining bathymetry data with the hyperspectral imagery. Hyperspectral imagery captured over Enrique in Puerto Rico using an AISA Eagle sensor is used to further test the algorithms with real data. Results from analyzing this imagery demonstrate increased agreement between algorithm output and existing habitat maps and ground truth when bathymetry data are used jointly with hyperspectral imagery.
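One building block named above, linear spectral unmixing, can be sketched with a soft sum-to-one constraint. The endmember spectra and weight below are illustrative, and the bio-optical water-column correction that would precede this step (the part bathymetry helps constrain) is omitted.

```python
import numpy as np

def unmix_sum_to_one(pixel, endmembers, w=100.0):
    """Least-squares abundance estimate with a soft sum-to-one constraint:
    a heavily weighted extra equation enforces sum(a) = 1.
    endmembers: (n_endmembers, n_bands); pixel: (n_bands,)."""
    A = np.vstack([endmembers.T, w * np.ones(endmembers.shape[0])])
    b = np.append(pixel, w)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

# Hypothetical 4-band bottom-type endmembers (names purely illustrative)
E = np.array([[0.10, 0.30, 0.50, 0.70],   # e.g., sand
              [0.60, 0.50, 0.20, 0.10],   # e.g., seagrass
              [0.30, 0.30, 0.30, 0.30]])  # e.g., coral rubble
a_true = np.array([0.5, 0.3, 0.2])
pixel = a_true @ E                         # noiseless mixed pixel
a_hat = unmix_sum_to_one(pixel, E)
```

With noiseless data and linearly independent endmembers the true abundances are recovered exactly; real pixels would add noise and a nonnegativity constraint.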
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880N (2014) https://doi.org/10.1117/12.2050902
A pixel-level Generalized Likelihood Ratio Test (GLRT) statistic for hyperspectral change detection is developed to mitigate false change caused by image parallax. Change detection, in general, represents the difficult problem of discriminating significant changes from insignificant changes caused by radiometric calibration, image registration issues, and varying view geometries. We assume that the images have been registered and that each pixel pair provides a measurement from the same spatial region in the scene. Although advanced image registration methods exist that can reduce misregistration to subpixel levels, residual spatial misregistration can still be incorrectly detected as significant change. Similarly, changes in sensor viewing geometry can lead to parallax error in a cluttered urban scene, where tall structures, such as buildings, appear to move. Our algorithm exploits the inherent relationship between the image views and the theory of stereo vision to mitigate parallax, searching along the assumed parallax direction. Mitigation of the parallax-induced false alarms is demonstrated using hyperspectral data in the experimental analysis. The algorithm is examined and compared to the existing chronochrome anomalous change detection algorithm to assess performance.
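The chronochrome baseline referenced above can be sketched in a few lines: image Y at time 2 is linearly predicted from image X at time 1 via the cross-covariance, and large residuals flag change. This is a hedged illustration of the standard chronochrome predictor, not the paper's GLRT; all variable names and sizes are illustrative.

```python
import numpy as np

# Chronochrome sketch: learn the least-squares linear predictor of Y
# from X using sample covariances, then take the residual as the
# change image. X, Y are (pixels x bands).

def chronochrome_residual(X, Y):
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my
    Cxx = Xc.T @ Xc / len(X)
    Cyx = Yc.T @ Xc / len(X)
    L = Cyx @ np.linalg.pinv(Cxx)      # least-squares linear predictor
    return Y - (my + (X - mx) @ L.T)   # residual (change) image

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
Y = X @ rng.normal(size=(4, 4)) + 0.01 * rng.normal(size=(500, 4))
r = chronochrome_residual(X, Y)
print(r.std())                         # small: Y is nearly linear in X
```

In a real scene, pixels with large residual norms would be flagged as anomalous changes.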
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880O (2014) https://doi.org/10.1117/12.2049983
Our first observations using the longwave infrared (LWIR) hyperspectral data subset of the Spectral and Polarimetric Imagery Collection Experiment (SPICE) database are summarized in this paper, focusing on the inherent challenges associated with using this sensing modality for the purpose of object pattern recognition. Emphasis is also placed on data quality, qualitative validation of expected atmospheric spectral features, and qualitative comparison against another dataset of the same site acquired with a different LWIR hyperspectral sensor. SPICE is a collaborative effort between the Army Research Laboratory, U.S. Army Armament RDEC, and, more recently, the Air Force Institute of Technology. It focuses on the collection and exploitation of longwave and midwave infrared (LWIR and MWIR) hyperspectral and polarimetric imagery. We concluded from this work that the quality of the SPICE hyperspectral LWIR data is comparable to other datasets recorded by a different sensor of similar specifications, and adequate for algorithm research, given the scope of SPICE. That scope was to conduct a long-term infrared data collection of the same site with targets, using both sensing modalities, under various weather and non-ideal conditions, and then to use the vast dataset and associated ground truth information to assess the performance of state-of-the-art algorithms while determining sources of performance degradation. The expectation is that results from these assessments will spur new algorithmic ideas with the potential to augment pattern recognition performance in remote sensing applications. Over time, we are confident the SPICE database will prove to be an asset to the wider remote sensing community.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880P (2014) https://doi.org/10.1117/12.2050699
The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.
Greg Kopp, Chris Belting, Zach Castleman, Ginger Drake, Joey Espejo, Karl Heuerman, Bret Lamprecht, James Lanzi, Paul Smith, et al.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880Q (2014) https://doi.org/10.1117/12.2053426
The 2007 National Research Council Decadal Survey for Earth Science identified needed measurements to improve understanding of the Earth’s climate system, recommending acquiring Earth spectral radiances with an unprecedented 0.2% absolute radiometric accuracy to track long-term climate change and to improve climate models and predictions. Current space-based imagers have radiometric uncertainties of ~2% or higher limited by the high degradation uncertainties of onboard solar diffusers or calibration lamps or by vicarious ground scenes viewed through the Earth’s atmosphere. The HyperSpectral Imager for Climate Science (HySICS) is a spatial/spectral imaging spectrometer with an emphasis on radiometric accuracy for such long-term climate studies based on Earth-reflected visible and near-infrared radiances. The HySICS’s accuracy is provided by direct views of the Sun, which is more stable and better characterized than traditional flight calibration sources. Two high-altitude balloon flights provided by NASA's Wallops Flight Facility and NASA’s Columbia Scientific Balloon Facility are intended to demonstrate the instrument’s 10× improvement in radiometric accuracy over existing instruments. We present the results of the first of these flights, during which measurements of the Sun, Earth, and lunar crescent were acquired from 37 km altitude. Covering the entire 350-2300 nm spectral region needed for shortwave Earth remote sensing with the HySICS’s single, flight-heritage detector array promises mass, cost, and size advantages for eventual space- and air-borne missions. A 6 nm spectral resolution with a 0.5 km spatial resolution from low Earth orbit helps in determinations of atmospheric composition, land usage, vegetation, and ocean color.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880S (2014) https://doi.org/10.1117/12.2050733
Field measurement campaigns typically deploy numerous sensors having different sampling characteristics for spatial,
temporal, and spectral domains. Data analysis and exploitation is made more difficult and time consuming as the sample
data grids between sensors do not align. This report summarizes our recent effort to demonstrate feasibility of a processing
chain capable of “fusing” image data from multiple independent and asynchronous sensors into a form amenable to
analysis and exploitation using commercially-available tools.
Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2)
image temporal interpolation onto a common time base. The first step leverages existing image matching and registration
algorithms. The second relies upon a novel use of optical flow algorithms to perform accurate temporal
upsampling of slower-frame-rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution
imagery and then used as a basis for temporal upsampling of the slower-frame-rate sensor's imagery.
Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion.
This involves preprocessing imagery to varying resolution scales and initializing new vector flow estimates using that
from the previous coarser-resolution image.
Overall performance of this processing chain is demonstrated using sample data involving complex object motion observed
by multiple sensors mounted to the same base, ranging from a high-speed visible camera to a coarser-resolution LWIR camera.
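The temporal-upsampling idea can be sketched very simply: a flow field estimated from the fast sensor is used to warp the slow sensor's frame to an intermediate time. In this toy version, an assumption for illustration, the flow is a known constant integer shift and the warp is a plain roll; a real chain would interpolate subpixel, per-pixel flow.

```python
import numpy as np

# Toy temporal upsampling: warp the slow sensor's frame at t0 forward
# by half the per-frame flow to synthesize a mid-frame. flow_px is the
# (row, col) displacement per slow-sensor frame, assumed known here.

def upsample_midframe(frame_t0, flow_px):
    half = tuple(int(round(f / 2)) for f in flow_px)
    return np.roll(frame_t0, shift=half, axis=(0, 1))

f0 = np.zeros((8, 8)); f0[2, 2] = 1.0          # object at (2, 2)
mid = upsample_midframe(f0, flow_px=(4, 2))    # moves 4 rows, 2 cols/frame
print(np.argwhere(mid == 1.0))                 # [[4 3]]
```

The multi-scale pyramid mentioned above would supply `flow_px` per pixel, initialized from the coarser resolution and refined at each finer scale.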
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880T (2014) https://doi.org/10.1117/12.2050149
We consider the challenge of detection of chemical plumes in hyperspectral image data. Segmentation of gas is
difficult due to the diffusive nature of the cloud. The use of hyperspectral imagery provides non-visual data for
this problem, allowing for the utilization of a richer array of sensing information. We consider several videos of
different gases taken with the same background scene. We investigate a technique known as “manifold denoising”
to delineate different features in the hyperspectral frames. With manifold denoising, we can bring more pertinent
eigenvectors to the forefront. One can also simultaneously analyze frames from multiple videos using efficient
algorithms for high dimensional data such as spectral clustering combined with linear algebra methods that
leverage either subsampling or sparsity in the data. Analysis of multiple frames by the Nyström extension shows
the ability to differentiate between gases while grouping similar items, such as gases or background
signatures, together.
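The Nyström extension mentioned above can be sketched for a Gaussian affinity matrix: the leading eigenvectors are computed on a small sampled block and then extended to the unsampled points, avoiding the full eigendecomposition. This is a minimal illustration under those assumptions, not the paper's pipeline.

```python
import numpy as np

# Nyström sketch: A is the (m x m) affinity among sampled points, B the
# (m x (n-m)) cross-affinity to the rest. Eigenvectors of A are
# extended to the unsampled rows via B^T U / lambda.

def nystrom(A, B):
    lam, U = np.linalg.eigh(A)
    lam, U = lam[::-1], U[:, ::-1]             # descending eigenvalues
    U_ext = B.T @ U / lam                       # extend to unsampled rows
    return lam, np.vstack([U, U_ext])

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2)                                 # full Gaussian affinity
m = 10
lam, V = nystrom(W[:m, :m], W[:m, m:])
print(V.shape)                                  # approximate eigenvectors for all 40 points
```

Clustering the rows of the leading columns of `V` then groups pixels, which is how the gas and background signatures would be separated.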
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880U (2014) https://doi.org/10.1117/12.2050446
Processing long-wave infrared (LWIR) hyperspectral imagery to surface emissivity or reflectance units via atmospheric
compensation and temperature-emissivity separation (TES) affords the opportunity to remotely classify and identify
solid materials with minimal interference from atmospheric effects. This paper describes an automated atmospheric
compensation and TES method, called FLAASH®-IR (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes-
Infrared), and its application to ground-to-ground imagery taken with the Telops Inc. Hyper-Cam interferometric
hyperspectral imager. The results demonstrate that clean, quantitative surface spectra can be obtained, even with highly
reflective (low emissivity) objects such as bare metal and in the presence of some illumination from the surroundings. In
particular, the atmospheric compensation process suppresses the spectral features due to atmospheric water vapor and
ozone, which are especially prominent in reflected sky radiance.
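The temperature-emissivity relation underlying TES can be made concrete with a hedged sketch: for surface-leaving radiance L = εB(T) + (1-ε)Ld, the emissivity follows once the temperature T and downwelling radiance Ld are known. The Planck function B and the synthetic numbers below are illustrative, not FLAASH-IR's actual processing.

```python
import numpy as np

# Planck spectral radiance and the emissivity solve for the model
# L = e*B(T) + (1-e)*Ld. Wavelengths in microns; radiance per micron.

H, C, K = 6.626e-34, 2.998e8, 1.381e-23        # Planck, light speed, Boltzmann

def planck(wl_um, T):
    wl = wl_um * 1e-6
    return 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * K * T)) - 1) * 1e-6

def emissivity(L, Ld, wl_um, T):
    return (L - Ld) / (planck(wl_um, T) - Ld)

wl = np.linspace(8, 12, 5)                      # LWIR band, microns
Ld = 0.2 * planck(wl, 260.0)                    # assumed downwelling term
L = 0.95 * planck(wl, 300.0) + 0.05 * Ld        # synthetic measurement
print(np.round(emissivity(L, Ld, wl, 300.0), 3))
```

For a low-emissivity metal the (1-ε)Ld term dominates, which is why suppressing the water vapor and ozone features in the reflected sky radiance matters so much.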
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880V (2014) https://doi.org/10.1117/12.2049980
Most hyperspectral chemical gaseous plume quantification algorithms assume a priori knowledge of the plume
temperature either through direct measurement or an auxiliary temperature estimation approach. In this paper,
we propose a new quantification algorithm that can simultaneously estimate the plume strength as well as its
temperature. We impose only a mild spatial assumption, that at least one nearby pixel shares the same plume
parameters as the target pixel, which we believe will be generally satisfied in practice. Simulations show that
the performance loss incurred by estimating both the temperature and plume strength is small, as compared to
the case when the plume temperature is known exactly.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880W (2014) https://doi.org/10.1117/12.2050640
Spectral images have relatively low spatial resolution, compared to high-resolution single band panchromatic (PAN)
images. Therefore, fusing a spectral image with a PAN image has been widely studied to produce a high-resolution
spectral image. However, raw spectral images are too large to process and contain redundant information that is not
utilized in the fusion process. In this study, we propose a novel fusion method that employs a spectral band reduction
and contourlets. The band reduction begins with the best two band combination, and this two-band combination is
subsequently augmented to three, four, and more until the desired number of bands is selected. The adopted band
selection algorithm using the endmember extraction concept employs a sequential forward search strategy. Next, the
image fusion is performed with two different spectral images based on the frequency components that are newly
obtained by contourlet transform (CT). One of the spectral images used as a dataset is a multispectral (MS) image and the
other is a hyperspectral (HS) image. Each original spectral image is pre-processed by spectrally integrating over the entire
spectral range to obtain a PAN source image that is used in the fusion process. This way, we can eliminate the step of
image co-registration, since the obtained PAN image is already perfectly aligned to the spectral image. Next, we fuse the
band-reduced spectral images with the PAN images using a contourlet-based fusion framework. The resultant fusion
image provides enhanced spatial resolution while preserving the spectral information. In order to analyze the band
reduction performance, the original spectral images are fused with the same PAN images to serve as a reference image,
which is then compared to the band-reduced spectral image fusion results using six different quality metrics.
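The sequential forward search for bands can be sketched as follows. This is a simplified stand-in: it scores candidate band sets by between-class mean separation rather than the paper's endmember-extraction criterion, and it grows greedily from a single band instead of the best two-band combination.

```python
import numpy as np

# Greedy forward band selection: add the band that most improves a
# separability score until n_bands are chosen. X is (pixels x bands),
# y is a binary class label; both are synthetic here.

def forward_select(X, y, n_bands):
    chosen, remaining = [], list(range(X.shape[1]))
    def score(bands):
        m0 = X[y == 0][:, bands].mean(0)
        m1 = X[y == 1][:, bands].mean(0)
        return np.linalg.norm(m0 - m1)          # between-class distance
    while len(chosen) < n_bands:
        best = max(remaining, key=lambda b: score(chosen + [b]))
        chosen.append(best); remaining.remove(best)
    return chosen

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))
y = (rng.random(100) < 0.5).astype(int)
X[y == 1, 3] += 5.0                             # band 3 separates the classes
sel = forward_select(X, y, 2)
print(sel)                                      # band 3 is chosen first
```

Any set-valued criterion can be swapped in for `score`, which is the point of the sequential search strategy.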
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880X (2014) https://doi.org/10.1117/12.2051089
This work describes a novel method of estimating statistically optimum pixel sizes for classification. Historically,
more resolution (smaller pixel size) has been considered better, but smaller pixels can cause difficulties in
classification: if the pixel size is too small, the variation among pixels belonging to the same class can be very
large. This work studies the variance of the pixels for different pixel sizes to answer the question of how
small (or how large) the pixel size can be while still achieving good algorithm performance. Optimum pixel size is defined
here as the size when pixels from the same class statistically come from the same distribution. The work first derives
ideal results, then compares this to real data. The real hyperspectral data comes from a SOC-700 stand mounted
hyperspectral camera. The results compare the theoretical derivations to variances calculated with real data in order
to estimate different optimal pixel sizes, and show a good correlation between real and ideal data.
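The variance-versus-pixel-size relationship can be checked numerically with a toy texture: aggregating k×k subpixels into one larger pixel averages n = k² independent samples and so divides the within-class variance by n. The texture below is an assumption for illustration, not the SOC-700 data.

```python
import numpy as np

# Within-class variance as a function of simulated pixel size: average
# the image over k x k blocks and compute the variance of the result.

def block_variance(img, k):
    h, w = (s // k * k for s in img.shape)
    blocks = img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return blocks.var()

rng = np.random.default_rng(3)
img = rng.normal(0.5, 0.1, size=(64, 64))       # one "class" texture
v1, v4 = block_variance(img, 1), block_variance(img, 4)
print(v1 / v4)                                   # near 16, i.e. 4*4 subpixels
```

The paper's optimum is reached when this shrinking within-class variance makes same-class pixels statistically indistinguishable in distribution.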
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90880Z (2014) https://doi.org/10.1117/12.2051434
Nonlinear spectral mixing occurs when materials are intimately mixed. Intimate mixing is a common characteristic of granular materials such as soils. A linear spectral unmixing inversion applied to a nonlinear mixture will yield subpixel abundance estimates that do not equal the true values of the mixture's components. These aspects of spectral mixture analysis theory are well documented. Several methods to invert (and model) nonlinear spectral mixtures have been proposed. Examples include Hapke theory, the extended endmember matrix method, and kernel-based methods. There is, however, a relative paucity of real spectral image data sets that contain well characterized intimate mixtures. To address this, special materials were custom fabricated, mechanically mixed to form intimate mixtures, and measured with a hyperspectral imaging (HSI) microscope. The results of analyses of visible/near-infrared (VNIR; 400 nm to 900 nm) HSI microscopy image cubes (in reflectance) of intimate mixtures of the two materials are presented. The materials are spherical beads of didymium glass and soda-lime glass both ranging in particle size from 63 µm to 125 µm. Mixtures are generated by volume and thoroughly mixed mechanically. Three binary mixtures (and the two endmembers) are constructed and emplaced in the wells of a 96-well sample plate: 0%/100%, 25%/75%, 50%/50%, 80%/20%, and 100%/0% didymium/soda-lime. Analysis methods are linear spectral unmixing (LSU), LSU applied to reflectance converted to single-scattering albedo (SSA) using Hapke theory, and two kernel-based methods. The first kernel method uses a generalized kernel with a gamma parameter that gauges non-linearity, applying the well-known kernel trick to the least squares formulation of the constrained linear model. This method attempts to determine if each pixel in a scene is linear or non-linear, and adapts to compute a mixture model at each pixel accordingly.
The second method uses 'K-hype' with a polynomial (quadratic) kernel. LSU applied to the reflectance spectra of the mixtures produced poor abundance estimates regardless of the constraints applied in the inversion. The 'K-hype' kernel-based method also produced poor fraction estimates. The best performers are LSU applied to the reflectance spectra converted to SSA using Hapke theory and the gamma parameter kernel-based method.
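The reflectance-to-SSA conversion that made LSU work can be sketched in its simplest form: for Hapke's diffusive reflectance r0 the single-scattering albedo w has a closed form, r0 = (1 - g)/(1 + g) with g = sqrt(1 - w). The paper's conversion may use a more complete bidirectional expression; this round trip just shows the idea that mixing is approximately linear in w even when nonlinear in reflectance.

```python
import numpy as np

# Closed-form Hapke relation between diffusive reflectance r0 and
# single-scattering albedo w, in both directions.

def ssa_from_r0(r0):
    g = (1 - r0) / (1 + r0)
    return 1 - g ** 2

def r0_from_ssa(w):
    g = np.sqrt(1 - w)
    return (1 - g) / (1 + g)

w = np.array([0.3, 0.9])
print(np.allclose(ssa_from_r0(r0_from_ssa(w)), w))   # True: exact round trip
```

In the workflow above, measured reflectance spectra are pushed through `ssa_from_r0`, unmixed linearly in SSA space, and the abundances read off directly.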
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908810 (2014) https://doi.org/10.1117/12.2052887
The evaluation and discrimination of similar objects in real versus synthetically generated aerial color images is needed for security and surveillance purposes, among other applications. Identification of appropriate discrimination metrics between real and synthetic images may also help in more robust generation of these synthetic images. In this paper, we investigate the effectiveness of three different metrics, based on Gaussian blur, differential operators, and singular value decomposition (SVD), to differentiate between a pair of the same objects contained in real and synthetic overhead aerial color images. We use nine pairs of images in our tests. The real images were obtained in the visible aerial color image domain. The proposed metrics are used to discriminate between pairs of real and synthetic objects such as cooling units, industrial buildings, houses, conveyors, stacks, piles, railroads, and ponds in the real and synthetically generated images, respectively. The proposed method successfully discriminates between the real and synthetic objects in aerial color images without any a priori knowledge or extra information such as optical flow. We rank these metrics according to their effectiveness in discriminating between synthetic and real objects in overhead images.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908811 (2014) https://doi.org/10.1117/12.2053401
An approach to incorporate spatial information in unmixing using the nonnegative matrix factorization is presented.
We call this method the spectrally adaptive constrained NMF (sacNMF). The spatial information is incorporated by
partitioning hyperspectral images into spectrally homogeneous regions using quadtree region partitioning.
Endmembers for each region are extracted using the nonnegative matrix factorization and then clustered into spectral
endmember classes. The endmember classes better account for the variability of spectral endmembers across the
landscape. Abundances are estimated using all spectral endmembers. Experimental results using AVIRIS data from
Indian Pines demonstrate the potential of the proposed approach. Comparisons with other published
approaches are presented.
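The per-region NMF step can be sketched with the classic Lee-Seung multiplicative updates: X ≈ WH with W holding endmember spectra and H the abundances. This is a generic sketch, not sacNMF itself; sizes and data are illustrative.

```python
import numpy as np

# Multiplicative-update NMF: alternately scale H and W so that the
# Frobenius reconstruction error of X ~ W @ H decreases monotonically.

def nmf(X, k, iters=300, eps=1e-9):
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], k))
    H = rng.random((k, X.shape[1]))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(4)
X = rng.random((6, 2)) @ rng.random((2, 50))     # exact rank-2, nonnegative
W, H = nmf(X, 2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(err)                                       # small residual for rank-2 data
```

In sacNMF this factorization would run once per quadtree region, and the resulting columns of `W` would then be clustered into endmember classes.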
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908812 (2014) https://doi.org/10.1117/12.2050165
Hyperspectral image denoising methods aim to improve the spatial and spectral quality of the image to increase the
effectiveness of target detection algorithms. Comparing denoising methods is difficult because authors often
compare their algorithms only to simple methods such as the Wiener filter and wavelet thresholding. We would like to compare
only the most effective methods for standoff target detection using sampled training spectra. Our overall goal is to
implement an HSI algorithm to detect possible weapons and shielding materials in a scene, using a lab collected library
of materials spectra.
Selection of a suitable method is based on PSNR, classification accuracy, and time complexity. Since our goal is target
detection, classification accuracy is emphasized more heavily; however, an algorithm that requires a long processing time would
not be effective for real-time detection. Elapsed time between HSI data collection and its processing could
allow changes or movement in the scene, decreasing the validity of results. Based on our study, the First Order
Roughness Penalty algorithm provides computation time of less than 2 seconds, but only provides an overall accuracy of
88% for the Indian Pines dataset. The Spectral Spatial Adaptive Total Variation method increases overall accuracy to
almost 97%, but requires a computation time of over 50 seconds. For standoff target detection, Spectral Spatial Adaptive
Total Variation is preferable, because it increases the probability of classification. By increasing the percentage of
weapons materials that are correctly identified, further actions such as inspection or interception can be determined with
confidence.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908813 (2014) https://doi.org/10.1117/12.2050387
Non-linear dimensionality reduction methods have been widely applied to hyperspectral imagery because the information can be represented in a lower-dimensional space without loss, and because the non-linear methods preserve the local geometry of the data while the dimension is reduced. One of these methods is Laplacian Eigenmaps (LE), which assumes that the data lies on a low-dimensional manifold embedded in a high-dimensional space. LE builds a nearest neighbor graph, computes its Laplacian, and performs the eigendecomposition of the Laplacian. These eigenfunctions constitute a basis for the lower-dimensional space in which the geometry of the manifold is preserved. In addition to the reduction problem, LE has been widely used in tasks such as segmentation, clustering, and classification. In this regard, a new Schrodinger Eigenmaps (SE) method was developed and presented as a semi-supervised classification scheme in order to improve classification performance and take advantage of labeled data. SE is an algorithm built upon LE, where the Laplacian operator is replaced by the Schrodinger operator. The Schrodinger operator includes a potential term V that, taking advantage of additional information such as labeled data, allows clustering of similar points. In this paper, we explore the idea of using SE in target detection. We present a framework where the potential term V is defined as a barrier potential: a diagonal matrix encoding the spatial position of the target. Detection performance is evaluated using different targets and different hyperspectral scenes.
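The Schrodinger Eigenmaps construction can be illustrated with a toy graph: a graph Laplacian plus a diagonal barrier potential V. A nonzero potential on a node pushes the low eigenvectors toward zero there, steering the embedding; the graph, the placement of the barrier, and the weight alpha below are all illustrative.

```python
import numpy as np

# Toy Schrodinger Eigenmaps: eigendecompose L + alpha * diag(V), where
# L = D - W is the graph Laplacian of a Gaussian affinity and V is a
# barrier potential placed on the "target" node.

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 3))
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2)
L = np.diag(W.sum(1)) - W                  # graph Laplacian
V = np.zeros(30); V[0] = 1.0               # barrier on node 0 (the "target")
alpha = 100.0
lam, U = np.linalg.eigh(L + alpha * np.diag(V))
print(abs(U[0, 0]))                        # lowest eigenvector is ~0 at node 0
```

Because the lowest eigenvectors are suppressed at the barrier, the target's position is encoded directly in the embedding, which is what the detection framework above exploits.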
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908814 (2014) https://doi.org/10.1117/12.2050638
Band selection is an important unsolved challenge in hyperspectral image processing that has been used for
dimensionality reduction and classification improvement. To date, numerous researchers have investigated the
unsupervised selection of band groups using measures such as correlation and Kullback-Leibler divergence. However,
no clear winner has emerged across data sets and detection tasks. Herein, we investigate the utility of aggregating
different proximity measures for band group selection. Specifically, we employ the Choquet integral with respect to different measures (capacities) as it is able to yield a variety of aggregation functions like t-norms, t-conorms and
averaging operators. We explore the utility of aggregation in the context of single band, single band group, band group
dimensionality reduction and multiple band group combinations in conjunction with support vector machine (SVM)
based classification. Our preliminary experiments indicate there is value in aggregating different proximity measures. In some instances an intersection operator works well while in other cases a union operator is best. As may be expected,
this can, and does, vary per detection task. We also see that, depending on the difficulty of the target detection problem, different aggregation, band grouping, and combination strategies prevail. Advantages of our approach include flexibility:
the aggregation operator can be learned, and the method can default to a single proximity measure if needed, resulting, in the worst case, in no performance loss. Experiments are performed on three hyperspectral benchmark data sets to demonstrate the applicability of the proposed concepts.
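The discrete Choquet integral used for the aggregation above can be written in a few lines: the inputs are sorted ascending and integrated against a set function (capacity) g. The example capacities below show how an additive capacity recovers the weighted mean while a "union-like" capacity recovers the maximum; both are illustrative, not the learned capacities of the paper.

```python
import numpy as np

# Discrete Choquet integral: sum over sorted inputs of the increment
# (h_(i) - h_(i-1)) times the capacity of the set of sources whose
# value is at least h_(i).

def choquet(h, g):
    h = np.asarray(h, dtype=float)
    order = np.argsort(h)                        # ascending values
    total, prev = 0.0, 0.0
    for i in range(len(order)):
        A = frozenset(order[i:].tolist())        # sources with value >= h[order[i]]
        total += (h[order[i]] - prev) * g[A]
        prev = h[order[i]]
    return total

g_add = {frozenset({0, 1}): 1.0, frozenset({0}): 0.5, frozenset({1}): 0.5}
g_max = {frozenset({0, 1}): 1.0, frozenset({0}): 1.0, frozenset({1}): 1.0}
print(choquet([0.2, 0.8], g_add))   # 0.5, the mean
print(choquet([0.2, 0.8], g_max))   # 0.8, the max
```

Learning the capacity `g` is what lets one aggregation operator behave as an intersection on one detection task and a union on another.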
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908815 (2014) https://doi.org/10.1117/12.2051040
Anomaly detection (AD) is an important application for target detection in remotely sensed hyperspectral data.
Accordingly, a variety of methods with different advantages and drawbacks have been proposed over the past two decades.
Recently, kernelized support vector data description (SVDD) based anomaly detection approaches have become
popular, as these methods avoid prior assumptions about the distribution of the data and provide better generalization in
characterizing the background. The global SVDD needs a training set for background modeling; however, it is sensitive
to outliers in the data, so the training set has to be generated from pure background spectra. In general, the training data
are selected by randomly sampling pixel spectra across the entire image. In this study, we propose an approach for better
selection of the training data based on principal component analysis (PCA). A valid assumption for remotely sensed
images is that the principal components (PCs) with higher variance include a substantial amount of background
information. For this reason, a subspace composed of several of the highest-variance PCs of cluttered data can be defined
as the background subspace. Thus, with the proposed algorithm, the selection of background pixels is achieved by projecting
all pixels in the image onto the background subspace and thresholding them with respect to their relative energy in the
background subspace. Experimental results verify that the proposed algorithm delivers promising results in terms of accuracy
and speed in the detection of anomalies.
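The project-and-threshold idea can be sketched as follows; the subspace dimension and energy threshold here are illustrative placeholders, not the paper's values:

```python
import numpy as np

def select_background_pixels(cube, n_pcs=3, energy_thresh=0.9):
    """Select candidate background spectra from a (rows, cols, bands) cube by
    projecting every pixel onto the top-variance PCs and keeping pixels whose
    relative energy in that subspace exceeds a threshold (a sketch of the
    PCA-based selection idea; n_pcs and energy_thresh are illustrative)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    Xc = X - X.mean(axis=0)                      # mean-center the spectra
    # Right singular vectors = principal directions, highest variance first
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pcs]                               # background-subspace basis
    proj = Xc @ P.T                              # coordinates in the subspace
    energy = (proj ** 2).sum(axis=1) / np.maximum((Xc ** 2).sum(axis=1), 1e-12)
    mask = energy >= energy_thresh               # mostly explained by background
    return X[mask], mask.reshape(h, w)
```

Pixels whose spectra lie mostly outside the high-variance subspace (anomaly candidates) are thereby excluded from the SVDD training set.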
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908817 (2014) https://doi.org/10.1117/12.2050353
Chromotomography is a form of hyperspectral imaging that utilizes a spinning diffractive element to resolve a rapidly
evolving scene. The system captures both spatial dimensions and the spectral dimension at the same time. Advanced
algorithms take the recorded dispersed images and use them to construct the data cube in which each reconstructed
image is the recorded scene at a specific wavelength. A simulation tool has been developed which uses Zemax to
accurately trace rays through real or proposed optical systems. The simulation is used here to explore the limitations of
tomographic reconstruction in both idealized and aberrated imaging systems. Results of the study show the accuracy of
reconstructed images depends upon the content of the original target scene, the number of projections measured, and the
angle through which the prism is rotated. For the cases studied here, 20 projections are sufficient to achieve image quality
within 99.51% of the maximum value. Reconstructed image quality degrades with aberrations, but no worse than in equivalent
conventional imagers.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908818 (2014) https://doi.org/10.1117/12.2050361
A fieldable hyperspectral chromotomographic imager has been developed at the Air Force Institute of Technology to refine component requirements for a space-based system. The imager uses a high-speed visible-band camera behind a direct-vision prism to simultaneously record two spatial dimensions and the spectral dimension. Capturing all three dimensions simultaneously allows for the hyperspectral imaging of transient events. The prism multiplexes the spectral and spatial information, so a tomographic reconstruction algorithm is required to separate hyperspectral channels. The fixed dispersion of the prism limits the available projections, leading to artifacts in the reconstruction which limit the image quality and spectrometric accuracy of the reconstructions. The amount of degradation is highly dependent on the content of the scene. Experiments were conducted to characterize the image and spectral quality as a function of spatial, spectral, and temporal complexity. We find that, in general, image quality degrades as the source bandwidth increases. Spectra estimated from the reconstructed data cube are generally best for point-like sources and can be highly inaccurate for extended scenes. In other words, the spatial accuracy varies inversely with the spectral width, and the spectral accuracy varies inversely with the spatial width. Experimental results also demonstrate the ability to reconstruct hyperspectral images from transient combustion events.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 908819 (2014) https://doi.org/10.1117/12.2054477
In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact
spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been
proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As
the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with
varying center wavelength, and a complete spectrum can be assembled. However, if the radiance received from the object
varies with viewing angle, or with time, then the reconstructed spectrum will be distorted. We describe a camera design
where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral
distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by
each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times
so that the spectral data can be checked for internal consistency. Still, the total extent of the filter in the scan direction is
small. Therefore the remainder of the image sensor can be used for conventional imaging with potential for using motion
tracking and 3D reconstruction to support the spectral imaging function. We show detailed characterization of the point
spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction.
A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results.
Elimination of spectral artifacts due to scene motion is demonstrated.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90881A (2014) https://doi.org/10.1117/12.2056722
Hyperspectral imaging systems are currently used for numerous activities related to spectral identification of
materials. These passive imaging systems rely on naturally reflected/emitted radiation as the source of the
signal. Thermal infrared systems measure radiation emitted from objects in the scene. As such, they can
operate both day and night. However, visible through shortwave infrared systems measure solar illumination
reflected from objects. As a result, their use is limited to daytime applications. Omni Sciences has produced
high powered broadband shortwave infrared super-continuum laser illuminators. A 64-watt breadboard system
was recently packaged and tested at Wright-Patterson Air Force Base to gauge beam quality and to serve as a
proof-of-concept for potential use as an illuminator for a hyperspectral receiver. The laser illuminator was placed
in a tower and directed along a 1.4 km slant path to various target materials, with the reflected radiation measured
with both a broadband camera and a hyperspectral imaging system to gauge performance.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90881B (2014) https://doi.org/10.1117/12.2049314
In this paper we describe an algorithm for local image reconstruction from global measurements on the Focal Plane Array (FPA). The global measurements may come from a multiplexed imaging and/or convolution-based sampling model. The algorithm consists of scanning a rectangular segment of the FPA data and reconstructing the image on that segment using a modified Wiener filter adapted to the measurements via a linear operator on the data. This method is essential for the reconstruction of large-format images from large data samples. In particular, in this paper the method is applied to multiplexed, multispectral imaging from a single measurement on the FPA.
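For a linear sampling model y = Ax + n, the flavor of such a Wiener-type reconstruction can be illustrated with the textbook regularized least-squares estimator below; this is a generic sketch, not the authors' modified, segment-scanning filter:

```python
import numpy as np

def wiener_reconstruct(y, A, noise_var, signal_var):
    """Linear MMSE (Wiener-type) estimate of x from measurements y = A x + n,
    assuming white signal and noise priors. The ratio noise_var / signal_var
    acts as a Tikhonov regularizer; for noise_var -> 0 the estimate approaches
    the least-squares solution."""
    m, n = A.shape
    lam = noise_var / signal_var
    # Solve (A^T A + lam I) x = A^T y for the regularized estimate
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

Applying this to one rectangular FPA segment at a time, with A restricted to that segment, mirrors the scanning strategy described in the abstract.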
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90881C (2014) https://doi.org/10.1117/12.2049594
The development of multispectral and hyperspectral systems in the area of civil security has created new opportunities
with regard to mobile sampling as well as rapid detection for on-site analysis. The latest developments, especially in the
area of optical detectors, and the constant improvements in microprocessor computing capacity have had an enormous
influence on coloristic analysis methods. Ongoing optimization of multispectral systems is leading to a vast number of
new application scenarios, owing to the simplification of measuring systems and independence from specialized lab
environments. The objective is to adapt the previously developed algorithm [1] to a selection of suitable spectral bands, from
monochrome powder specimens to panchromatic safety signs.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90881E (2014) https://doi.org/10.1117/12.2050297
In the generation of true-color images from hyperspectral data, some key issues remain. On one hand, it is
difficult to obtain a true-color image that satisfies human visual habits from narrow-bandwidth
hyperspectral data. On the other hand, degradation of the hyperspectral sensor's performance also causes
color distortion. In this paper, a color calibration method based on a physical model for hyperspectral data is proposed
to address these issues. Employing the color matching functions, the method reconstructs the true-color image from
the spectral information in the red, green, and blue bands extracted from the hyperspectral data. Combining in situ
measurements of the reflectance spectra of artificial targets, the true colors of the targets were first deduced with a
radiative transfer model, and the color calibration model was thereby established. The method was validated with
aerial hyperspectral data and shown to be suitable for the calibration of true-color images.
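The band-extraction step can be illustrated with a toy weighting scheme: each RGB channel is the response-weighted mean of the radiance spectrum. The Gaussian band responses below are stand-ins for proper color matching functions, and the centers and width are illustrative choices, not CIE data or the paper's calibrated model:

```python
import numpy as np

def spectrum_to_rgb(wavelengths, radiance, centers=(610.0, 550.0, 465.0), width=30.0):
    """Collapse a per-pixel radiance spectrum to an RGB triple by weighting it
    with Gaussian band-response curves (illustrative stand-ins for color
    matching functions). Wavelengths are in nm; output is normalized to [0, 1]."""
    wl = np.asarray(wavelengths, dtype=float)
    L = np.asarray(radiance, dtype=float)
    rgb = []
    for c in centers:
        resp = np.exp(-0.5 * ((wl - c) / width) ** 2)   # band response curve
        rgb.append((resp * L).sum() / resp.sum())        # response-weighted mean radiance
    rgb = np.array(rgb)
    return rgb / rgb.max() if rgb.max() > 0 else rgb
```

A flat spectrum maps to equal R, G, and B, while a spectrum peaked in the red maps to a red-dominated triple, which is the qualitative behavior the color matching step must preserve.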
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX, 90881F (2014) https://doi.org/10.1117/12.2063812
In this paper we propose an ℓ1-norm penalized sparse support vector machine (SSVM) as an embedded approach
to the hyperspectral imagery band selection problem. SSVMs exhibit a model structure that includes a clearly
identifiable gap between zero and non-zero weights, which permits important bands to be definitively selected in
conjunction with the classification problem. The SSVM is trained using bootstrap aggregating to
obtain a sample of SSVM models and reduce variability in the band selection process. This preliminary sampling
approach for band selection is followed by a secondary band selection, which involves retraining the SSVM to
further reduce the set of retained bands. We propose and compare three adaptations of the SSVM band selection
algorithm for the multiclass problem. Two extensions of the SSVM algorithm are based on pairwise band
selection between classes; their performance is validated using one-against-one (OAO) SSVMs. The third
proposed method combines the filter band selection method WaLuMI in sequence with the OAO
SSVM embedded band selection algorithm. We illustrate the performance of these methods on the AVIRIS
Indian Pines data set and compare the results to other techniques in the literature. Additionally, we illustrate
the SSVM algorithm on a long-wavelength infrared (LWIR) data set consisting of hyperspectral videos of
chemical plumes.
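The embedded selection idea (bands whose SSVM weights are driven exactly to zero are discarded) can be sketched with a small proximal-subgradient solver for the ℓ1-penalized hinge loss. This is illustrative only and does not reproduce the paper's solver, bagging, or OAO machinery; the regularization and step-size values are arbitrary:

```python
import numpy as np

def sparse_svm(X, y, lam=0.05, lr=0.05, iters=500):
    """l1-penalized linear SVM trained by subgradient descent on the averaged
    hinge loss with a soft-thresholding (proximal) step for the l1 penalty.
    Labels y must be in {-1, +1}. Returns the sparse weight vector."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margin = y * (X @ w)
        active = margin < 1                      # samples violating the margin
        grad = -(X[active] * y[active, None]).sum(axis=0) / n
        w -= lr * grad
        # proximal step: soft-threshold toward zero, creating exact zeros
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

def selected_bands(w, tol=1e-6):
    """Indices of bands retained by the sparse model."""
    return np.flatnonzero(np.abs(w) > tol)
```

The soft-thresholding step is what produces the gap between zero and non-zero weights that the abstract relies on for definitive band selection.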