This PDF file contains the front matter associated with SPIE Proceedings Volume 9874, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Convexity is a major concept used in the design and development of endmember finding algorithms (EFAs). Among abundance-unconstrained techniques, the Pixel Purity Index (PPI) and the Automatic Target Generation Process (ATGP), both of which use Orthogonal Projection (OP) as a criterion, are the most commonly used methods. Among abundance partially constrained techniques, Convex Cone Analysis is generally preferred; it uses convex cones to impose the Abundance Non-negativity Constraint (ANC). For fully constrained abundances, N-FINDR and the Simplex Growing Algorithm (SGA) are the most popular methods; they use simplex volume as a criterion to impose both the ANC and the Abundance Sum-to-one Constraint (ASC). This paper analyzes an issue encountered in volume calculation, introducing a hyperplane to illustrate the idea of a bounded convex cone. Geometric Convex Cone Volume Analysis (GCCVA) projects the boundary vectors of a convex cone orthogonally onto a hyperplane to reduce the effect of background signatures, and a geometric volume approach is applied to address the issue arising in volume calculation and to further improve the performance of convex cone-based EFAs.
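As a minimal illustration of the simplex-volume criterion used by N-FINDR and SGA (a generic sketch based only on standard linear algebra, not the paper's GCCVA procedure), the volume of the simplex spanned by candidate endmembers can be computed from the Gram determinant of its edge vectors:

```python
import numpy as np
from math import factorial

def simplex_volume(endmembers):
    """Volume of the simplex spanned by p endmember vectors.

    endmembers: (p, d) array with one endmember per row, d >= p - 1.
    Uses V = sqrt(det(E^T E)) / (p - 1)!, where E holds the edge
    vectors e_i - e_0 as columns; the Gram-determinant form also
    works when the endmembers live in a higher-dimensional band space.
    """
    e = np.asarray(endmembers, dtype=float)
    edges = (e[1:] - e[0]).T                 # (d, p-1) edge matrix
    gram = edges.T @ edges                   # Gram matrix of the edges
    return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(len(e) - 1)
```

The Gram-determinant form is one common workaround when the plain determinant formula would require a non-square matrix, which is one face of the volume-calculation issue discussed above.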
Traditional Wiener filtering has been widely used to restore single-band images; however, how to apply Wiener filtering along the spectral dimension to restore a three-dimensional hyperspectral image has not yet been discussed. Modeling the measured spectrum as the result of convolution with the Spectral Response Function (SRF) followed by additive noise, a method for applying spectral Wiener filtering to hyperspectral images is proposed. Spectral Wiener filtering aims to obtain an optimal estimate of the true spectrum that accounts for the effects of both noise and the SRF. To do so, the spectral signal-to-noise ratio (SNR) is calculated using a decorrelation method. In an experiment on a simulated hyperspectral image cube, pixel-by-pixel spectral Wiener filtering achieved a 1.38% increase in the average depth of spectral signatures and a 15.4% increase in image sharpness. In comparison, band-by-band spatial Wiener filtering achieved a 0.49% decrease in the average depth of spectral signatures and a 21.6% increase in image sharpness. The results suggest that the spatial and spectral degradations of a hyperspectral image are inter-coupled, and that the spectral Wiener filter is better suited to restoring spectra while the spatial Wiener filter is better suited to restoring single-band images.
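For a single pixel, spectral Wiener filtering of this kind can be sketched in the frequency domain (a minimal illustration that assumes a known circular SRF kernel and a scalar SNR value; the paper estimates the SNR via a decorrelation method):

```python
import numpy as np

def spectral_wiener(measured, srf, snr):
    """Wiener deconvolution of a single pixel's spectrum (sketch).

    measured: observed spectrum, 1-D array of band values.
    srf:      assumed spectral response function (circular kernel,
              same length as the spectrum).
    snr:      assumed signal-to-noise power ratio.
    """
    H = np.fft.fft(srf)
    Y = np.fft.fft(measured)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener gain
    return np.real(np.fft.ifft(G * Y))
```

At high SNR the gain approaches 1/H (inverse filtering); the 1/snr term regularizes bands where the SRF response is weak.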
Non-negative matrix factorization (NMF) has been introduced into the field of hyperspectral unmixing over the last ten years. Although NMF-based approaches have been widely accepted by researchers, their underlying assumptions may not always fit the characteristics of real ground objects, which can produce incorrect results and restrict the applicability of these approaches. This paper proposes a novel semi-supervised NMF (SSNMF) model in which ground-truth information, such as endmembers partially known from ground measurement, is introduced. The relationship between the known and unknown endmembers is explored; a distance function is designed to describe this relationship and is incorporated into the NMF model. In this way, SSNMF can use the known endmembers to help estimate the unknown ones, so that accurate and robust results are obtained. The proposed algorithm was compared with NMFupk, which also considers partially known endmembers, using extensive synthetic data and real hyperspectral data. The experiments show that the proposed algorithm gives better performance.
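For reference, the unconstrained NMF baseline that SSNMF builds on can be sketched with the standard Lee-Seung multiplicative updates (this is only the base model; the paper's contribution adds a distance term tying known to unknown endmembers, which is not reproduced here):

```python
import numpy as np

def nmf(X, r, iters=500, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: X ≈ W @ H.

    X: (bands, pixels) non-negative data matrix; r: number of
    endmembers. Returns W (endmember signatures as columns) and
    H (per-pixel abundances), both non-negative.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12                      # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The multiplicative form keeps both factors non-negative automatically, which is why it is a convenient starting point for constrained variants.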
Most compression methods for hyperspectral images have been optimized to minimize mean squared error. However, this kind of compression method may not retain all discriminant information, which is important if hyperspectral images are to be used to distinguish among classes. In this paper, we propose a two-stage compression method for hyperspectral images that encodes residual discriminant information. In the proposed method, we first apply a compression method to hyperspectral images, producing compressed image data, from which we produce reconstructed images. We then generate residual images by subtracting the reconstructed images from the original images. We also apply a feature extraction method to the original images, which produces a set of feature vectors. By applying these feature vectors to the residual images, we generate discriminant feature images that provide the discriminant information missed by the compression method; these discriminant feature images are also encoded. Experiments with AVIRIS data show that the proposed method provides better compression efficiency and higher classification accuracy than other compression methods.
Hyperspectral data is composed of a set of correlated band images. To compress hyperspectral imagery efficiently, this inherent correlation can be exploited by means of spectral decorrelators. In this paper, a method based on the fractional wavelet transform is introduced for spectral decorrelation of hyperspectral data. As opposed to the regular wavelet transform, which decomposes a given signal into two equal-length sub-signals, the fractional wavelet transform decomposes the signal corresponding to the spectral content into two sub-signals of different lengths. The sub-signal lengths are adapted to the data to achieve better spectral decorrelation. Performance results on AVIRIS datasets are presented in comparison with existing compression methods based on regular wavelet decomposition.
Compressive sensing technology can theoretically be used to develop low-cost, compact spectrometers with the performance of larger and more expensive systems. Indeed, compressive sensing for spectroscopic systems has previously been demonstrated using coded aperture techniques, wherein a mask is placed between the grating and a charge-coupled device (CCD) and multiple measurements are collected with different masks. Although proven effective for some spectroscopic sensing paradigms (e.g., Raman), this approach requires that the signal being measured be static between shots (low noise and minimal signal fluctuation). Many spectroscopic techniques applicable to remote sensing are inherently noisy, and thus coded aperture compressed sensing will likely not be effective. This work explores an alternative approach to compressed sensing that allows for reconstruction of a high-resolution spectrum in sensing paradigms featuring significant signal fluctuations between measurements. This is accomplished through relatively minor changes to the spectrometer hardware together with custom super-resolution algorithms. Current results indicate that a potential overall reduction in CCD size of up to a factor of 4 can be attained without a loss of resolution. This reduction can result in significant improvements in the cost, size, and weight of spectrometers incorporating the technology.
This research investigates the features retained after image compression for automatic pattern recognition purposes. Images were significantly compressed using open-source JPEG and JPEG2000 compression algorithms. The original and compressed images were processed with a Map Seeking Circuit (MSC) pattern recognition algorithm [1]. Across compression rates ranging from 10 to 0.2, the target detection rates for the compressed images were very similar to those for the original images. Target detection location precision and target aspect were degraded at the lowest compression rates.
In this paper, an integrated design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, preventing efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function used during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. To optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted in the image simulation, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for such optical imaging systems are not the best, they can provide image signals that are more suitable for image processing. In conclusion, the integrated design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integrated design strategy has obvious advantages: it simplifies structure and reduces cost while simultaneously yielding high-resolution images, giving it a promising perspective for industrial application.
The Fast Iterative Pixel Purity Index (FIPPI) was previously developed to address two major issues arising in PPI: the use of skewers, whose number must be determined a priori, and inconsistent final results that cannot be reproduced. Recently, a new concept has been developed for hyperspectral data communication according to the Band SeQuential (BSQ) acquisition format, in which data can be collected band by band. By virtue of BSQ, users are able to develop Progressive Band Processing (PBP) versions of hyperspectral imaging algorithms so that data analysts can observe progressive profiles of inter-band changes. The advantages of PBP have been demonstrated in several applications, including anomaly detection, constrained energy minimization, automatic target generation process, orthogonal subspace projection, and PPI. This paper further extends PBP to FIPPI. The idea behind implementing PBP-FIPPI is to use two loops, one over skewers and one over bands. Depending upon which is implemented as the outer loop, two different versions can be designed. When the outer loop iterates band by band, the algorithm is called Progressive Band Processing of FIPPI (PBP-FIPPI); when the outer loop iterates by growing skewers, it is called Progressive Skewer Processing of FIPPI (PSP-FIPPI). Interestingly, the two versions provide different insights into the design of FIPPI but produce close results.
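The skewer-projection step that both PPI and FIPPI iterate over can be sketched as follows (a generic PPI purity count, not the full FIPPI iteration):

```python
import numpy as np

def ppi_counts(pixels, n_skewers=500, seed=0):
    """Pixel Purity Index via random skewers (generic sketch).

    pixels: (N, d) array of pixel vectors. Each pixel's count is the
    number of skewers for which it gives an extreme (min or max)
    orthogonal projection; pure pixels accumulate high counts.
    """
    rng = np.random.default_rng(seed)
    skewers = rng.standard_normal((n_skewers, pixels.shape[1]))
    proj = pixels @ skewers.T            # (N, n_skewers) projections
    counts = np.zeros(len(pixels), dtype=int)
    np.add.at(counts, np.argmax(proj, axis=0), 1)   # max extreme per skewer
    np.add.at(counts, np.argmin(proj, axis=0), 1)   # min extreme per skewer
    return counts
```

The dependence on a randomly chosen, a-priori-sized skewer set is exactly the reproducibility issue FIPPI was designed to remove.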
Anomaly detection (AD) is one of the most fundamental tasks in hyperspectral data exploitation. Since anomalies are generally unknown and unexpected, their detection must be carried out without prior knowledge. Most importantly, when anomalies are weak and moving over time, real-time processing becomes crucial for detecting these anomalous targets. Owing to hyperspectral sensor design, two major formats are used for real-time data acquisition. One is real-time sample processing, which collects data in two different fashions: Band-Interleaved-by-Pixel/Sample (BIP/BIS), sample by sample, and Band-Interleaved-by-Line (BIL), line by line. The other is real-time band processing, which follows the Band SeQuential (BSQ) format. Recently, anomaly detection using both BIP/BIS and BSQ has been reported in the literature. Since a hyperspectral imaging sensor generally collects data in a push-broom manner, the BIL format is preferred to BIP/BIS. Interestingly, however, AD using BIL has not been explored in the past, mainly because it was expected to perform similarly to AD using BIP/BIS. This paper shows otherwise, because the covariance/correlation matrix used by an anomaly detector has a significant impact on the detectability of anomalies. It has been shown that anomaly detection is heavily determined by the ratio of the anomalies to be detected to the size of the image that forms the covariance/correlation matrix. Thus, when AD using BIL is implemented, the covariance/correlation information provided by BIL differs from that provided by BIP/BIS, and AD using BIL may consequently perform differently from AD using BIP/BIS.
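The role of the covariance matrix in anomaly detection can be seen in the classic global RX detector, sketched here (the detectability argument above turns on how this matrix is accumulated from BIP/BIS, BIL, or BSQ data streams):

```python
import numpy as np

def rx_scores(pixels):
    """Global RX anomaly detector (generic sketch).

    pixels: (N, d) array of pixel vectors. The score of each pixel
    is its squared Mahalanobis distance from the global mean under
    the sample covariance matrix.
    """
    mu = pixels.mean(axis=0)
    centered = pixels - mu
    cov = np.cov(centered, rowvar=False)
    cov_inv = np.linalg.inv(cov)
    # delta(x) = (x - mu)^T K^{-1} (x - mu), computed for all pixels
    return np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
```

In a real-time BIL implementation, `mu` and `cov` would be updated line by line rather than computed from the full image, which is precisely why the BIL results can differ from BIP/BIS.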
Vital Sign Signals (VSSs) have been widely used for medical data analysis. One classic approach is to use a Logistic Regression Model (LRM) to describe the data to be analyzed. There are two challenging issues with this approach. One is how many VSSs should be used in the model, since many VSSs are available for this purpose. The other is, once the number of VSSs is determined, which VSSs they should be. To date, these two issues have been resolved by empirical selection. This paper addresses them from a hyperspectral imaging perspective. If we view a patient with different collected vital sign signals as a pixel vector in a hyperspectral image, then each vital sign signal can be considered a particular band. In light of this interpretation, each VSS can be ranked by band prioritization, commonly used for band selection in hyperspectral imaging. To resolve the issue of how many VSSs should be used for data analysis, we further develop Progressive Band Processing of Anomaly Detection (PBPAD), which allows users to detect anomalies in medical data using the prioritized VSSs one after another, so that data changes between bands can be tracked through the profiles provided by PBPAD. As a result, there is no need to determine the number of VSSs, or which VSSs should be used, because all VSSs are used in their prioritized order. To demonstrate the utility of PBPAD in medical data analysis, anomaly detection is implemented progressively to find anomalies corresponding to abnormal patients. The experiments use data collected at the University of Maryland School of Medicine Shock Trauma Center (STC), and the results are evaluated against those obtained by the Logistic Regression Model (LRM).
Among the various computer vision applications, automatic logo recognition has drawn great interest from industry as well as academic institutions. In this paper, we propose an angle-distance map, which we use to develop a robust logo detection algorithm. The proposed angle-distance histogram is invariant to scale and rotation. The proposed method first uses shape information and color characteristics to find candidate regions and then applies the angle-distance histogram. Experiments show that the proposed method detects logos of various sizes and orientations.
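A simplified sketch of the invariance idea, using only the distance component of such a descriptor (the full angle-distance histogram also bins angles, which is not reproduced here):

```python
import numpy as np

def distance_histogram(points, n_bins=16):
    """Distance part of an angle-distance-style descriptor (sketch).

    points: (N, 2) boundary points of a candidate region. Distances
    to the centroid are normalized by their maximum, which makes the
    histogram invariant to translation, rotation, and scale.
    """
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    d = d / d.max()                                  # scale normalization
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()                         # normalized histogram
```

Rotation leaves centroid distances unchanged and scaling cancels in the normalization, so the same region at any size or orientation yields the same descriptor.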
When medical data are collected, there are many Vital Sign Signals (VSSs) that can be used for data analysis. From a hyperspectral imaging perspective, we can consider a patient with different vital sign signals as a pixel vector in a hyperspectral image and each vital sign signal as a particular band. In light of this interpretation, this paper develops two new concepts for the prioritization of VSSs. One is the Orthogonal Subspace Projection Residual (OSPR), which measures the residual of a VSS in the orthogonal complement of the space linearly spanned by the remaining VSSs. The other is to construct a histogram for each VSS that can be used to rank the VSSs according to a certain optimality criterion. Several measures are proposed as criteria for VSS prioritization: variance, entropy, and the Kullback-Leibler (KL) information measure. VSS prioritization can then be used as the VSS selection method for forming a Logistic Regression Model (LRM). To determine how many VSSs should be used, a recently developed concept called Virtual Dimensionality (VD) can be applied. To demonstrate the utility of VSS prioritization, data collected at the University of Maryland School of Medicine Shock Trauma Center (STC) were used for experiments.
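The OSPR criterion can be sketched directly from its definition (a minimal illustration; here `U` stands for the matrix whose columns are the remaining VSSs, and `x` for the signal being scored):

```python
import numpy as np

def osp_residual(x, U):
    """Orthogonal Subspace Projection residual (sketch).

    x: (d,) signal to score; U: (d, k) matrix whose columns span the
    space of the remaining signals. Returns ||P_U^perp x||, the norm
    of the component of x orthogonal to the span of U.
    """
    # P_U^perp = I - U (U^T U)^{-1} U^T
    P = np.eye(U.shape[0]) - U @ np.linalg.solve(U.T @ U, U.T)
    return float(np.linalg.norm(P @ x))
```

A large residual means the signal carries information not explained by the others, which is why it serves as a prioritization score.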
Magnetic Resonance (MR) images can be considered multispectral images, so MR imaging can be processed by multispectral imaging techniques such as maximum likelihood classification. Unfortunately, most multispectral imaging techniques are not specifically designed for target detection. Hyperspectral imaging, on the other hand, was primarily developed to address subpixel detection and mixed pixel classification, for which multispectral imaging is generally not effective. This paper takes advantage of hyperspectral imaging techniques to develop target detection algorithms that find lesions in MR brain images. Since MR images are collected by only three image sequences, T1, T2, and PD, a hyperspectral imaging technique applied to MR images suffers from insufficient dimensionality. To address this issue, two approaches to nonlinear dimensionality expansion are proposed: nonlinear correlation expansion and nonlinear band ratio expansion. Once the dimensionality is expanded, hyperspectral imaging algorithms can readily be applied. The hyperspectral detection algorithm investigated for lesion detection in MR brain images is the well-known subpixel target detection algorithm called Constrained Energy Minimization (CEM). To demonstrate the effectiveness of CEM in lesion detection, synthetic images provided by BrainWeb are used for experiments.
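CEM itself has a closed form, sketched here in its standard formulation: the filter w minimizes the average output energy subject to the constraint w^T d = 1 for the target signature d:

```python
import numpy as np

def cem_filter(X, d):
    """Constrained Energy Minimization detector (standard form).

    X: (N, bands) array of pixel vectors; d: (bands,) target
    signature. The filter is w = R^{-1} d / (d^T R^{-1} d), where R
    is the sample correlation matrix; returns the score w^T x for
    every pixel x.
    """
    R = (X.T @ X) / X.shape[0]             # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)              # enforces w^T d = 1
    return X @ w
```

By construction, a pixel exactly matching the target signature scores 1, while the correlation-weighted objective suppresses the background response.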
In this paper, a weighted reduced multivariate polynomial for class imbalance learning is proposed. When there is a large variation in the numbers of available class samples, the class distribution is said to be imbalanced. In such cases, conventional classifiers may classify most samples into the majority classes to maximize the classification accuracy, which may not be desirable in some applications. Thus, for imbalanced data classification, an additional algorithm may be required to address the low representation of minority classes when the classification performance on those classes is important. We used weighted ridge regression for imbalanced data classification. Experimental results on the UCI database show improved classification of the minority classes.
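The weighted ridge regression step can be sketched directly from its normal equations (a minimal illustration; the per-sample weights are what let minority-class samples count more in the fit):

```python
import numpy as np

def weighted_ridge(X, y, w, lam=1e-2):
    """Weighted ridge regression (sketch).

    Solves (X^T W X + lam*I) beta = X^T W y, where W = diag(w) holds
    per-sample weights; upweighting minority-class samples pulls the
    fit toward them.
    """
    W = np.diag(w)
    A = X.T @ W @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ W @ y)
```

With all weights equal this reduces to ordinary ridge regression; with class-dependent weights, a sample weighted 9x pulls the solution toward its target roughly in proportion to its weight.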
Most non-linear classification methods can be viewed as non-linear dimension expansion methods followed by a linear classifier. For example, the support vector machine (SVM) expands the dimensions of the original data using various kernels and classifies the data in the expanded space using a linear SVM. In the case of extreme learning machines or neural networks, the dimensions are expanded by hidden neurons and the final layer performs the linear classification. In this paper, we analyze the discriminant powers of various non-linear classifiers. Analyses of the discriminant powers of non-linear dimension expansion methods are presented, along with a suggestion of how to improve separability in non-linear classifiers.
To obtain hyperspectral data with high spatial resolution, many studies have examined methods to combine the spectral information contained in a hyperspectral image with the spatial information contained in a multispectral/panchromatic image. This paper develops a new hyperspectral image fusion method based on non-negative matrix factorization (NMF) theory. Data sets obtained by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) were used to evaluate the performance of the method. Experimental results show that the proposed algorithm provides a good way to address the shortage of high spatial resolution hyperspectral data.
Sparse unmixing (SU), a pixel-based technique, has been investigated to select a small number of endmembers from a large spectral library. In image-based collaborative sparse unmixing (CSU) techniques, all pixels are forced to select the same small set of endmembers. In reality, the same small set of endmembers may be responsible for pixel construction within a homogeneous area, but across an entire image the endmember sets are often different. In this paper, we therefore propose a region-based collaborative sparse unmixing (RCSU) algorithm, in which a region may include non-local areas as long as they belong to the same type of homogeneous segment. Experimental results show that the overall performance of the proposed RCSU algorithm is better than that of image-based CSU or pixel-based SU.
In this paper, we propose an approach for lossless image compression. The proposed method is based on the separate processing of two image components: structure and texture. In a subsequent step, the separated components are compressed by standard RLE/LZW coding. We performed a comparative analysis with existing techniques using standard test images. Our approach has shown promising results.
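The RLE stage of the coding step admits a very short sketch (standard run-length coding, lossless by construction; the structure/texture separation and the LZW stage are not reproduced here):

```python
def rle_encode(data):
    """Run-length encode a byte sequence.

    Returns a list of (value, run_length) pairs; lossless, since
    decoding simply repeats each value run_length times.
    """
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([b, 1])         # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Invert rle_encode exactly."""
    out = bytearray()
    for v, n in runs:
        out.extend([v] * n)
    return bytes(out)
```

RLE pays off exactly when a component has long constant runs, which is why separating a smooth structure component from texture before coding can help.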
Minerals are generally present as intimate mixtures. The visible-infrared spectra of intimate mixtures are a complex function of abundance, grain size, optical constants, and other factors, making the linear spectral unmixing model inapplicable. In this paper, we present a nonlinear unmixing method that combines the Shkuratov model (SK99) and the Hapke model (H81) to unmix mineral mixtures. To obtain the abundances of mineral endmembers, we built a look-up table (LUT) in the following steps. First, the optical constants were derived with the SK99 model and the single scattering albedos of the endmembers were computed. Second, the approximation of multiple scattering was derived from the Chandrasekhar H-function. Finally, the LUT was established using the H81 model. The root-mean-square error (RMSE) was calculated to find the best match between the reflectance of the mixtures and the LUT entries. We used laboratory mineral mixtures to verify the accuracy of the abundance estimation. The results show that the RMSEs are less than 1% and the absolute errors of the abundance retrieval are within 5%. The presented method can retrieve mineral abundances effectively and rapidly, and is a potential method for application to hyperspectral images of the Earth and other planetary bodies.
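The final LUT-matching step can be sketched generically (in the paper the table entries come from the Hapke forward model; here any forward model, even a linear stand-in, can fill the table):

```python
import numpy as np

def best_lut_match(measured, lut_spectra, lut_abundances):
    """Pick the LUT entry whose modeled spectrum minimizes RMSE.

    measured:       (bands,) observed mixture spectrum.
    lut_spectra:    (M, bands) modeled mixture spectra.
    lut_abundances: (M, p) abundance vectors that generated them.
    Returns the best-matching abundance vector and its RMSE.
    """
    rmse = np.sqrt(np.mean((lut_spectra - measured) ** 2, axis=1))
    i = int(np.argmin(rmse))
    return lut_abundances[i], float(rmse[i])
```

The retrieval accuracy is then bounded by the abundance grid spacing used when building the table, which trades table size against precision.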
Crop pests and diseases are among the major agricultural disasters, causing heavy losses in agricultural production every year. Hyperspectral remote sensing is one of the most advanced and effective methods for monitoring crop pests and diseases. However, hyperspectral techniques face serious problems such as a low degree of automation in data processing and poor timeliness of information extraction. As a result, we cannot respond quickly to crop pests and diseases in a critical period, and the best time for targeted, quantitative spraying control may be missed. In this study, we take crop pests and diseases as the research focus, using a self-developed line-scanning VNIR field imaging spectrometer. Taking advantage of the progressive image acquisition of a push-broom hyperspectral sensor, a synchronous, real-time progressive hyperspectral algorithm and model will be developed; that is, the object's information is obtained row by row immediately after the data are acquired. This greatly improves operating time and efficiency at the same detection accuracy, and may solve the timeliness problem in using hyperspectral remote sensing for crop pest and disease detection. Furthermore, this method provides a common approach for time-sensitive applications, such as environmental monitoring and disaster response, and may provide methods and technical reserves for the development of real-time detection satellite technology.