Advances in hyperspectral sensor technology increasingly provide higher resolution and higher quality data for the accurate generation of terrain categorization/classification (TERCAT) maps. The generation of TERCAT maps from hyperspectral imagery can be accomplished using a variety of spectral pattern analysis algorithms; however, the algorithms are sometimes complex, and the training of such algorithms can be tedious. Further, hyperspectral imagery contains a voluminous amount of data, with contiguous spectral bands being highly correlated. These highly correlated bands tend to provide redundant information for classification/feature extraction computations. In this paper, we introduce the use of wavelets to generate a set of Generalized Difference Feature Index (GDFI) measures, which transform a hyperspectral image cube into a derived set of GDFI bands. A commonly known special case of the proposed GDFI approach is the Normalized Difference Vegetation Index (NDVI) measure, which seeks to emphasize vegetation in a scene. Numerous other band-ratio measures that emphasize other specific ground features can be shown to be special cases of the proposed GDFI approach. Generating a set of GDFI bands is fast and simple. However, the number of possible bands is vast, and only a few of these “generalized ratios” will be useful. Judicious data mining of the large set of GDFI bands produces a small subset of GDFI bands designed to extract specific TERCAT features. We extract/classify several terrain features and compare our results with those of a more sophisticated neural network feature extraction routine.
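The NDVI special case mentioned above can be sketched in a few lines. This is a minimal illustration of a normalized band-ratio measure, not the authors' GDFI implementation; the band values and function name are hypothetical.

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Per-pixel normalized difference index (band_a - band_b) / (band_a + band_b).
    NDVI is the case band_a = near-infrared, band_b = red."""
    band_a = np.asarray(band_a, dtype=float)
    band_b = np.asarray(band_b, dtype=float)
    denom = band_a + band_b
    # Avoid division by zero in dark pixels by returning 0 there.
    return np.divide(band_a - band_b, denom,
                     out=np.zeros_like(denom), where=denom != 0)

# Toy 2x2 scene: NIR and red reflectance bands (illustrative values).
nir = np.array([[0.6, 0.5], [0.4, 0.2]])
red = np.array([[0.2, 0.1], [0.4, 0.2]])
ndvi = normalized_difference(nir, red)  # high values flag vegetated pixels
```

A GDFI cube would generalize this by forming many such difference ratios across band pairs and scales, after which only a few informative bands are retained.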
Fine resolution synthetic aperture radar (SAR) and interferometric synthetic aperture radar (IFSAR) have been widely used for the purpose of creating viable terrain maps. A map is only as good as the information it contains. Therefore, it is a major priority of the mapmakers that the data that goes into the process be as complete and accurate as possible. In this paper, we analyze IFSAR correlation/de-correlation data to help extract terrain feature information. The correlation data contains, for each pixel, a 32-bit floating-point number giving the absolute complex correlation coefficient between the signals received in the bottom and top IFSAR radar channels. These values range between zero and unity, where unity indicates 100% correlation and zero indicates no correlation. The correlation is a function of several system parameters, including signal-to-noise ratio (SNR), local geometry, and scattering mechanism. Because the two radar channels are physically close together, their signals are inherently highly correlated, and significant differences appear only beyond the fourth decimal place. We have concentrated our analysis on small features that are easily detectable in the correlation/de-correlation data but not so easily detectable in the elevation or magnitude data.
Remotely sensed imagery can be used to assess the results of natural disasters such as floods. The imagery can be used to predict the extent of a flood, to develop methods to control a flood, and to assess the damage caused by a flood. This paper addresses the information derived from two different sources: Interferometric Synthetic Aperture Radar (IFSAR) and Light Detection and Ranging (LIDAR). The study will show how the information differs and how this information can be fused to better analyze flood problems. LIDAR and IFSAR data were collected over the Lakewood area of Los Angeles, California, as part of a Federal Emergency Management Agency (FEMA)-sponsored data collection. Lakewood is located in a floodplain and is of interest for updating floodplain maps. IFSAR is an active sensor, can penetrate clouds, and provides three separate digital files for analysis: magnitude, elevation, and correlation files. LIDAR provides elevation and magnitude files; however, for this study only the elevation values were provided. The LIDAR elevation data is more accurate and more densely sampled than the IFSAR data. In this study, the above information is used to produce charts with information relevant to floodplain mapping. To produce relevant information, the data had to be adjusted for different coordinate systems, different sampling rates, vertical and horizontal post spacing differences, and orientation differences between the IFSAR and LIDAR data sets. This paper describes the methods and procedures used to transform the data sets to a common reference.
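One step of bringing such data sets to a common reference, resampling the coarser raster onto the finer grid, can be sketched as below. This is a deliberately simple nearest-neighbor resampler under the assumption that the two grids already share an origin and orientation; the post spacings and array values are hypothetical, and the paper's actual procedure also handles coordinate-system and orientation differences.

```python
import numpy as np

def resample_to_grid(data, src_spacing, dst_shape, dst_spacing):
    """Nearest-neighbor resample of a raster with post spacing src_spacing
    (meters) onto a grid of dst_shape with post spacing dst_spacing.
    Assumes both grids share the same origin and orientation."""
    rows = np.minimum((np.arange(dst_shape[0]) * dst_spacing / src_spacing).astype(int),
                      data.shape[0] - 1)
    cols = np.minimum((np.arange(dst_shape[1]) * dst_spacing / src_spacing).astype(int),
                      data.shape[1] - 1)
    return data[np.ix_(rows, cols)]

# Illustrative: a 3x3 raster at 5 m posts resampled onto a 10x10 grid at 1 m posts.
coarse = np.arange(9.0).reshape(3, 3)
fine = resample_to_grid(coarse, src_spacing=5.0, dst_shape=(10, 10), dst_spacing=1.0)
```

Once both elevation rasters live on the same grid, per-pixel differencing and floodplain charting become straightforward; bilinear interpolation would be a natural refinement of the nearest-neighbor choice shown here.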
Optical systems associated with imaging sensors and instruments typically distort the 'true' or object image, I(x), in a manner usually characterized by their point spread function (PSF). Determining I(x) from the measured image data, M(x), using the convolutional relation with the PSF is called deconvolution. This paper proposes what appears to be a new deconvolution technique by taking advantage of a remarkable coincidence: for most optical systems of interest here, the PSF is Gaussian, which is the zeroth-order Hermite function. By expressing I(x) in an orthogonal representation using Hermite functions, which are to be distinguished from Hermite polynomials, the convolution integral can be evaluated exactly in analytical form, perhaps for the first time for the general case. This, in turn, leads to simple, precise linear relations between the coefficients of the Hermite representation of I(x) and that of M(x), while avoiding the common problem of dividing noisy data by small quantities. The coefficients in those linear equations have precise values obtained from the nature of Hermite function interrelations rather than from measured data. These values of I(x) may be more useful than M(x) as the initial iterate in the iteration techniques commonly used for deconvolution.
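The zeroth-order case of the representation above can be checked numerically: when the object I(x) is itself a Gaussian (the zeroth Hermite function, up to normalization) and the PSF is Gaussian, the convolution is again a Gaussian whose variance is the sum of the two variances. This sketch verifies that closed form by discrete convolution; the widths chosen are arbitrary, and this is not the paper's Hermite-coefficient machinery, only its simplest special case.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def gaussian(x, s):
    """Unit-area Gaussian with standard deviation s."""
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

obj = gaussian(x, 1.0)   # "true" image I(x), here a pure Gaussian
psf = gaussian(x, 0.5)   # Gaussian point spread function
# M(x) = (I * PSF)(x), approximated by a discrete convolution on the grid.
measured = np.convolve(obj, psf, mode="same") * dx
# Analytic result: Gaussian with variance 1.0**2 + 0.5**2.
expected = gaussian(x, np.sqrt(1.0**2 + 0.5**2))
err = np.max(np.abs(measured - expected))
```

For higher-order Hermite terms the paper's point is that analogous exact linear relations hold between the coefficients of I(x) and M(x), so no division of noisy spectra by small PSF values is needed.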
This paper discusses nonuniform illumination of individual pixels in a detector array that is intrinsic to the viewed scene, as opposed to turbulence or platform motion, as an error source in quantitative imagery. It describes two classes of algorithms to treat this type of problem. It points out that this problem can be viewed as a type of inverse problem with a corresponding integral equation unlike those commonly treated in the literature. One class allows estimation of the spatial variation of radiance within pixels using the single digital-number irradiances produced by the measurements of the detectors within their instantaneous fields of view (IFOVs). Usually it is assumed without discussion that the intrapixel radiance distribution is constant. Results are presented showing the improvements obtained by the methods discussed.