This PDF file contains the front matter associated with SPIE
Proceedings Volume 8391, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
The detection, tracking, and classification of humans in video imagery is of obvious military and civilian importance.
The problem is difficult under the best of circumstances. In infrared (IR) imagery, or any grayscale imagery, the problem
is compounded by the lack of color cues. Sometimes, human detection in IR imagery can take advantage of the thermal
difference between humans and background, but this difference is not robust. Varying environmental conditions regularly degrade the thermal contrast between humans and background. In difficult data, humans can be effectively camouflaged by their environment, and standard feature detectors are unreliable. The research described here uses a hybrid approach to human detection, tracking, and classification that combines three methods. The first is a feature-based correlated-body-parts detector. The second is a pseudo-Hough transform applied to the edge images of the video sequence. The third relies on an optical-flow-based vector field transformation of the video sequence. This vector field permits a multidimensional application of the feature detectors initiated in the first two methods. Then a multi-dimensional oriented Haar
transform is applied to the vector field to further characterize potential detections. This transform also shows potential
for distinguishing human behavior.
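The pseudo-Hough step is not specified further in the abstract; as a point of reference, the classical Hough line-voting scheme it presumably builds on can be sketched in a few lines (the edge image and array sizes here are invented):

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Classical Hough line voting: each edge pixel votes for every line
    (rho, theta) through it, with rho = x*cos(theta) + y*sin(theta)."""
    h, w = edges.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((rhos.size, n_theta), dtype=np.int32)
    for y, x in zip(*np.nonzero(edges)):
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# A vertical edge at x = 5: every edge pixel votes for (rho = 5, theta = 0).
edges = np.zeros((20, 20), dtype=bool)
edges[:, 5] = True
acc, rhos, thetas = hough_lines(edges)
row = int(np.where(rhos == 5)[0][0])
print(acc[row, 0], int(edges.sum()))  # → 20 20
```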
Forward Looking Infrared (FLIR) automatic target recognition (ATR) systems depend upon the capacity of the atmosphere
to propagate thermal radiation over long distances. To date, not much research has been conducted on analyzing
and mitigating the effects of the atmosphere on FLIR ATR performance, even though the atmosphere is often the
limiting factor in long-range target detection and recognition. The atmosphere can also cause frame-to-frame inconsistencies
in the scene, affecting the ability to detect and track moving targets. When image quality is limited by turbulence,
increasing the aperture size or improving the focal plane array cannot improve ATR performance. Traditional
single frame image enhancement does not solve the problem.
A new approach is described for reducing the effects of turbulence. It is implemented under a lucky-region-imaging
framework using short integration time and spatial domain processing. It is designed to preserve important target and
scene structure. Unlike previous Fourier-based approaches originating from the astronomical community, this new approach
is intended for real-time processing from a moving platform, with ground as the background. The system produces
a video stream with minimal delay.
A new video quality measure (VQMturb) is presented for quantifying the success of turbulence mitigation on real data
where no reference imagery is available. The VQMturb is the core of the innovation because it allows a wide range of
algorithms to be quantitatively compared. An algorithm can be chosen, and then tuned, to best fit the available processing power, latency requirements, scenarios, and sensor characteristics.
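As an illustration of the lucky-region idea the framework builds on (a sketch under the assumption of a simple gradient-energy sharpness metric, not the VQMturb measure itself):

```python
import numpy as np

def sharpness(frame):
    """Gradient-energy sharpness metric: sum of squared finite differences."""
    return float((np.diff(frame, axis=0) ** 2).sum() + (np.diff(frame, axis=1) ** 2).sum())

def lucky_average(frames, keep_frac=0.1):
    """Average only the sharpest fraction of short-exposure frames."""
    scores = [sharpness(f) for f in frames]
    k = max(1, int(len(frames) * keep_frac))
    best = np.argsort(scores)[-k:]            # indices of the k sharpest frames
    return np.mean([frames[i] for i in best], axis=0), sorted(best.tolist())

# A crisp step edge vs. box-blurred ("turbulent") copies of it.
sharp = np.zeros((16, 16)); sharp[:, 8:] = 1.0
blur = (sharp + np.roll(sharp, 1, axis=1) + np.roll(sharp, -1, axis=1)) / 3
frames = [blur] * 9 + [sharp]
avg, kept = lucky_average(frames)
print(kept, np.allclose(avg, sharp))  # → [9] True
```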
Time series modeling is proposed for identifying targets whose images are not clearly seen. The model accounts for air turbulence, precipitation, fog, smoke, and other factors that obscure and distort the image. A library of reference data (images, etc.) provides the deterministic part of the identification process, while partial image features, distorted parts, irrelevant pieces, and the absence of particular features comprise the stochastic part. A missing-data approach is elaborated that supports the prediction process for image creation or reconstruction. Results are provided.
In this paper, a robust automatic target recognition (ATR) algorithm for FLIR imagery is proposed. The target is first segmented from the background using a parametric Gabor wavelet transformation. Invariant features of the segmented target are then extracted via moments. Higher-order moments, while better at characterizing the image, are more sensitive to noise, so a trade-off study is performed to select a few moments that provide effective performance. Classification uses a Bayes method with the Mahalanobis distance as the classifier, and results are assessed in terms of false alarm rates. The proposed method is shown to be robust to rotation, translation, and scale, and to perform effectively on low-contrast objects in FLIR images. Performance comparisons on both GPU and CPU indicate that the GPU implementation outperforms the CPU implementation.
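The classification step, a minimum-Mahalanobis-distance rule, can be sketched as follows (toy means and covariances; in the paper the features are moments of the Gabor-segmented target):

```python
import numpy as np

def mahalanobis_classify(x, means, covs):
    """Assign x to the class with the smallest Mahalanobis distance.

    For Gaussian classes with equal priors, this is the Bayes decision rule.
    """
    dists = []
    for mu, cov in zip(means, covs):
        d = x - mu
        dists.append(float(d @ np.linalg.inv(cov) @ d))
    return int(np.argmin(dists))

# Two toy classes: feature vectors clustered around different means.
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), np.eye(2)]
print(mahalanobis_classify(np.array([2.5, 2.8]), means, covs))  # → 1
```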
Images obtained by airborne sensors are corrupted by distortions due to turbid and/or turbulent media between the sensors and objects of interest on the ground. In this paper we present automated methods for compensating for this distortion by exploiting information available through multi-aperture passive imaging and the formation of composite, best-preserved, enhanced imagery. Preliminary results of applying these methods to real and simulated data indicate their robustness in enhancing severely degraded imagery.
Vibration signatures sensed from distant vehicles using laser vibrometry systems provide valuable information that may
be used to help identify key vehicle features such as engine type, engine speed, and number of cylinders. While
developing algorithms to blindly extract the aforementioned features from a vehicle's vibration signature, it was shown
that detection of engine speed and number of cylinders was more successful when utilizing a priori knowledge of the
engine type (gas or diesel piston) and optimizing algorithms for each engine type. In practice, implementing different
algorithms based on engine type first requires an algorithm to determine whether a vibration signature was produced by a
gas piston or diesel piston engine. This paper provides a general overview of the observed differences between datasets
from gas and diesel piston engines, and proceeds to detail the current method of differentiating between the two. To date,
research has shown that basic signal processing techniques can be used to distinguish between gas and diesel vibration
datasets with reasonable accuracy for piston engines of different configurations running at various speeds.
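One of the basic signal processing techniques alluded to, estimating engine speed from the dominant spectral line of the vibration signature, can be sketched on synthetic data (the 30 Hz fundamental, sampling rate, and noise level are invented):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Estimate the strongest periodic component of a vibration signal (Hz)."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spec)]

# Synthetic firing signature: 30 Hz fundamental (1800 RPM) plus noise.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 30.0 * t) + 0.3 * rng.standard_normal(t.size)
f0 = dominant_frequency(sig, fs)
print(round(f0 * 60))  # → 1800 (fundamental expressed in RPM)
```

In practice the mapping from firing frequency to engine speed also depends on the number of cylinders and the engine cycle, which is exactly why the paper treats engine-type classification as a prerequisite.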
We present two methods to accurately estimate the location of an emitter from the received signal. The first of these methods is a variation of the standard cross-ambiguity function (CAF), the cross-spectral CAF (CSCAF), for which we introduce a cross-spectral frequency estimation technique to replace the conventional methods based on the power spectrum. We demonstrate the use of the CSCAF in estimating the source location of an RF emission. The CSCAF is computed
as the product of the complex-valued CAF and the conjugate of the time-delayed CAF. The magnitude of the
CSCAF is the conventional CAF energy surface, and the argument of the CSCAF is the unquantized frequency
difference of arrival (FDOA) computed as the phase of the CAF differentiated with respect to time. The
advantage of the CSCAF is that it provides an extremely accurate estimate of FDOA. We demonstrate the use
of the CSCAF in providing emitter location estimates that are superior to those provided by the conventional
CAF. The second method presented provides a precision geolocation of the emitter from the signal received by
a single moving receiver. This method matches the Doppler characteristics of the received signal to the Doppler
characteristics estimated from the geometry of the transmitter and receiver. Both the CSCAF and the single
receiver methods are enabled by cross-spectral frequency estimation methods that provide extremely accurate
frequency estimation and tracking.
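The FDOA-extraction step can be isolated in a short sketch: multiplying the complex CAF track at the TDOA peak by the conjugate of its time-delayed copy and taking the phase yields an unquantized frequency estimate (a clean synthetic track is used here; real CAF surfaces are noisy):

```python
import numpy as np

# Complex CAF samples along time at the TDOA peak, with a 7.3 Hz frequency
# difference of arrival (toy values).
dt = 0.01                                   # time step between CAF evaluations, s
t = np.arange(0, 1.0, dt)
fdoa_true = 7.3
caf_track = np.exp(2j * np.pi * fdoa_true * t)

# Cross-spectral step: product with the conjugate of the time-delayed track.
cscaf = caf_track[1:] * np.conj(caf_track[:-1])

# The argument is the phase increment per step; divide by 2*pi*dt for Hz.
fdoa_est = np.angle(cscaf).mean() / (2 * np.pi * dt)
print(round(fdoa_est, 3))  # → 7.3
```

Because the estimate comes from the phase rather than from locating a peak on a frequency grid, it is not quantized by the FFT bin spacing, which is the advantage the abstract claims for the CSCAF.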
This paper utilizes game-theoretic principles in the automatic recognition of unknown radar targets. This study uses a
non-cooperative matching game where pure strategies are associated with specific items to be matched, and agreement
between possible hypotheses represents the payoff gained when playing a certain strategy against an opponent who is
playing another strategy. The target recognition approach attempts to match scattering centers of an unknown target with
those of library targets as competing strategies. The algorithm is tested using real radar data representing scattering from
commercial aircraft models. Radar data of library targets at various azimuth positions are matched against an unknown
radar target signature at a specific aspect angle. Computer simulations provide an estimate of the error rates in scenarios
of additive Gaussian noise corrupting target signatures.
The paper presents new techniques and processing results for automatic segmentation, shape classification, generic pose
estimation, and model-based identification of naval vessels in laser radar imagery. The special characteristics of focal
plane array laser radar systems such as multiple reflections and intensity-dependent range measurements are incorporated
into the algorithms. The proposed 3D model matching technique is probabilistic, based on the range error distribution,
correspondence errors, the detection probability of potentially visible model points and false alarm errors. The match
algorithm is robust against incomplete and inaccurate models, each model having been generated semi-automatically
from a single range image. A classification accuracy of about 96% was attained, using a maritime database with over
8000 flash laser radar images of 146 ships at various ranges and orientations together with a model library of 46 vessels.
Applications include military maritime reconnaissance, coastal surveillance, harbor security and anti-piracy operations.
We have previously developed a feature extraction process for propagation-invariant
classification of a target from its propagated sonar backscatter. The features are invariant to the
frequency dependent propagation effects of absorption and dispersion, for range-independent channels.
Simulations have shown that these features lose their effectiveness when applied to waves propagating
in a range-dependent environment. In this paper we extend our previous approach to obtain invariant
features for classification in range-dependent environments. Numerical simulations are presented for
the classification of two shells from their acoustic backscatter propagating in an ideal wedge.
We present two methods to obtain a probability distribution from its moments. In the first method, one chooses a set of orthogonal polynomials with a corresponding weighting function. In the second method, one chooses a starting distribution and uses it to construct orthogonal polynomials. We give a number of examples.
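The first method can be sketched for a density supported on [-1, 1] with Legendre polynomials (unit weight): the coefficient of P_n is (n + 1/2) E[P_n(X)], and E[P_n(X)] is a linear combination of the raw moments.

```python
import numpy as np
from numpy.polynomial import legendre as L

def density_from_moments(moments, x):
    """Reconstruct p(x) on [-1, 1] from raw moments m_k = E[X^k].

    p(x) ≈ sum_n (n + 1/2) E[P_n(X)] P_n(x), where E[P_n(X)] is obtained by
    applying P_n's monomial coefficients to the moment sequence.
    """
    n_max = len(moments) - 1
    coefs = np.zeros(n_max + 1)
    for n in range(n_max + 1):
        mono = L.leg2poly([0] * n + [1])       # power-basis coefficients of P_n
        e_pn = sum(c * m for c, m in zip(mono, moments))
        coefs[n] = (n + 0.5) * e_pn
    return L.legval(x, coefs)

# Moments of the uniform density on [-1, 1]: m_k = 1/(k+1) for even k, 0 for odd k.
moments = [1.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(6)]
x = np.linspace(-0.9, 0.9, 5)
print(np.round(density_from_moments(moments, x), 6))  # → 0.5 everywhere
```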
We have calculated the raw and central moments of a noise model where the noise is the sum of
elementary signals which are time and space dependent. We show that the scintillation index is
a constant plus a correction term that goes as 1/N where N is the number of elementary signals.
We study the correction term for a specific elementary signal.
Automated pattern recognition has been around for several decades, and generally we have been successful at tackling a large variety of technical problems. This paper presents the challenges inherent in wide area motion imagery, which is problematic for the normal pattern recognition process and associated pattern recognition systems. The paper describes persistent wide area motion imagery and its role as a manifold for overlaying episodic sensors of various modalities to present a better view of activity to an analyst. An underlying framework, SPADE, is introduced, and a layered sensing viewer, Pursuer, is presented to demonstrate the utility of creating a unified view of the sensing world for an analyst.
The ability to recognize a target in an image is an important problem for machine vision, surveillance systems, and
military weapons. There are many "solutions" to an automatic target recognition (ATR) problem proposed by
practitioners. Often the definition of the problem leads to multiple solutions due to the incompleteness of the definition.
Solutions are also made approximate by resource limitations. Questions concerning the "best" solution and its performance remain open, since problem definitions and solutions are ill-defined. Indeed, from information-based physical measurement theory, such as the Minimum Description Length (MDL) principle, the exact solution is intractable [1]. Generating some clarity by defining problems on restricted sets seems an appropriate approach for
improving this vagueness in ATR definitions and solutions. Given that a one to one relationship between a physical
system and the MDL exists, then this uniqueness allows that a solution can be defined by its description and a norm
assigned to that description. Moreover, the solution can be characterized by a set of metrics that are based on the
algorithmic information of the physical measurements. The MDL, however, is not a constructive theory, but solutions
can be defined by concise problem descriptions. This limits the scope of the problem and we will take this approach
here. The paper will start with a definition of an ATR problem, followed by our proposal of a descriptive solution using a union-of-subspaces model of images based on Lu and Do [2]. This solution implicitly uses the concept of informative representations [3], which we review briefly. Then we will present some metrics to be used to
characterize the solution(s) which we will demonstrate by a simple example. In the discussions following the example
we will suggest how this fits in the context of present and future work.
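A toy two-part description length for polynomial order selection illustrates the MDL idea the paper leans on (a standard textbook form, not the paper's formulation): parameter cost of roughly (k/2) log n plus residual cost of (n/2) log(RSS/n).

```python
import numpy as np

def mdl_score(x, y, order):
    """Two-part MDL for a degree-`order` polynomial fit: residual cost + parameter cost."""
    coef = np.polyfit(x, y, order)
    rss = float(((np.polyval(coef, x) - y) ** 2).sum())
    n, k = len(x), order + 1
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 100)
y = 2 * x ** 2 - x + 0.5 + 0.1 * rng.standard_normal(x.size)   # quadratic + noise

scores = {k: mdl_score(x, y, k) for k in range(1, 9)}
best = min(scores, key=scores.get)
print(best)  # the quadratic should beat the underfit linear model
```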
Techniques such as SIFT and SURF facilitate efficient and robust image processing operations through the use of sparse
and compact spatial feature descriptors and show much potential for defence and security applications. This paper
considers the extension of such techniques to include information from the temporal domain, to improve utility in
applications involving moving imagery within video data. In particular, the paper demonstrates how spatio-temporal
descriptors can be used very effectively as the basis of a target tracking system and as target discriminators which can
distinguish between bipeds and quadrupeds. Results using sequences of video imagery of walking humans and dogs are
presented, and the relative merits of the approach are discussed.
We present a general-purpose, inherently robust system for object representation and recognition. The system is model-based and knowledge-based, with knowledge derived from analysis of objects and images, unlike many current methods that rely on generic statistical inference. This knowledge is intrinsic to the objects themselves, based on geometric and semantic relations among objects. Therefore the system is insensitive to external interferences such as viewpoint changes (scale, pose, etc.), illumination changes, occlusion, shadows, and sensor noise. It also handles variability in the object itself, e.g. articulation or camouflage.
We represent all available models in a graph containing two independent but interlocking hierarchies. One of these intrinsic hierarchies is based on parts, e.g. a truck has a cabin, a trunk, wheels, etc. The other hierarchy we call the "Level of Abstraction" (LOA), e.g. a vehicle is more abstract than a truck, and a rectangle is more abstract than a door. This enables us to represent and recognize generic objects just as easily as specific ones. A new algorithm for traversing our graph, combining the advantages of both top-down and bottom-up strategies, has been implemented.
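A minimal sketch of the two interlocking hierarchies (all node names invented):

```python
# Toy version of the two hierarchies over one node set: a part-of hierarchy
# and a level-of-abstraction (LOA) hierarchy.
parts = {                       # part-of edges: object -> its parts
    "truck": ["cabin", "trunk", "wheel"],
    "cabin": ["door", "window"],
}
abstraction = {                 # LOA edges: specific -> more abstract
    "truck": "vehicle",
    "door": "rectangle",
}

def has_part(obj, part):
    """Bottom-up transitive check through the part hierarchy."""
    subparts = parts.get(obj, [])
    return part in subparts or any(has_part(p, part) for p in subparts)

def is_a(obj, concept):
    """Top-down transitive check through the abstraction hierarchy."""
    while obj in abstraction:
        obj = abstraction[obj]
        if obj == concept:
            return True
    return False

print(has_part("truck", "window"), is_a("truck", "vehicle"))  # → True True
```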
We present a technique for small watercraft detection in a littoral environment characterized by multiple targets
and both land- and sea-based clutter. The detector correlates a tailored wavelet model trained from previous
imagery with newly acquired scenes. An optimization routine is used to learn a wavelet signal model that
improves the average probability of detection for a fixed false alarm rate on an ensemble of training images. The resulting wavelet is shown to improve detection on a previously unseen set of test images. Performance is quantified with ROC curves.
Efficient moving object tracking requires near flawless detection results to establish correct correspondences between
frames. This is especially true in the defense sector where accuracy and speed are critical factors of success. However,
problems such as camera motion, lighting and weather changes, texture variation and inter-object occlusions result in
misdetections or false positive detections which, in turn, lead to broken tracks. In this paper, we propose to use background subtraction and an optimized version of the Horn–Schunck optical flow algorithm in order to boost detection response. We use the frame differencing method, followed by morphological operations, to show that it works in many scenarios, and the optimized optical flow technique serves to complement the detector results. The Horn–Schunck method yields color-coded motion vectors for each frame pixel. To segment the moving regions in the frame, we apply
color thresholding to distinguish the blobs. Next, we extract appearance-based features from the detected object and
establish the correspondences between objects' features, in our case, the object's centroid. We have used the Euclidean
distance measure to compute the minimum distances between the centroids. The centroids are matched by using
Hungarian algorithm, thus obtaining point correspondences. The Hungarian algorithm's output matrix dictates the
objects' associations with each other. We have tested the algorithm to detect people in corridor, mall and field sequences
and our early results with an accuracy of 86.4% indicate that this system has the ability to detect and track objects in
video sequences robustly.
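The centroid-matching step can be sketched with SciPy's implementation of the Hungarian algorithm, with the cost matrix built from pairwise Euclidean distances (the detection coordinates below are invented):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Centroids of detected blobs in frame t and frame t+1 (toy coordinates).
prev = np.array([[10.0, 12.0], [40.0, 44.0], [70.0, 18.0]])
curr = np.array([[41.0, 45.0], [69.0, 19.0], [11.0, 13.0]])

# Cost matrix: Euclidean distance between every (previous, current) pair.
cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)

# Hungarian algorithm: minimum-total-distance one-to-one assignment.
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows.tolist(), cols.tolist())))  # → [(0, 2), (1, 0), (2, 1)]
```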
The set of orthogonal eigenvectors built via principal component analysis (PCA), while very effective for compression, can often lead to loss of crucial discriminative information in signals. In this work, we build a new basis set using synthetic aperture radar (SAR) target images via non-negative matrix approximations (NNMAs). Owing to the underlying physics, we expect a non-negative basis and an accompanying non-negative coefficient set to be a more accurate generative model for SAR profiles than the PCA basis, which lacks direct physical interpretation. The NNMA basis vectors, while not orthogonal, capture discriminative local components of SAR target images. We test the merits of the NNMA basis representation for the problem of automatic target recognition using SAR images with a support vector machine (SVM) classifier. Experiments on the benchmark MSTAR database reveal the merits of basis selection techniques that can model imaging physics more closely and can capture inter-class variability, in addition to identifying a trade-off between classification performance and availability of training data.
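A minimal non-negative factorization via the Lee–Seung multiplicative updates illustrates the NNMA idea on random data (not MSTAR imagery; the rank and iteration count are arbitrary):

```python
import numpy as np

def nnma(V, r, iters=200, seed=0):
    """Factor V ≈ W @ H with W, H >= 0 via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-9                                  # avoids division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
V = rng.random((30, 20))                        # non-negative stand-in for image data
W, H = nnma(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(bool(W.min() >= 0 and H.min() >= 0), round(float(err), 2))
```

The multiplicative updates preserve non-negativity by construction, which is the property the abstract ties to the physics of SAR profiles; the columns of W play the role of the non-orthogonal basis fed to the SVM.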
Automatic object recognition capabilities are traditionally tuned to exploit the specific sensing modality they were designed for. Their successes (and shortcomings) are tied to object segmentation from the background; they typically require highly skilled personnel to train them, and they become cumbersome with the introduction of new objects. In this paper we describe a sensor-independent algorithm based on the biologically inspired technology of map seeking circuits (MSC) which overcomes many of these obstacles. In particular, the MSC concept offers transparency in object recognition from a common interface to all sensor types, analogous to a USB device. It also provides a common core framework that is independent of the sensor and expandable to support high-dimensionality decision spaces. Ease of training is ensured by using commercially available 3D models from the video game community. The search time remains linear no matter how many objects are introduced, ensuring rapid object recognition. Here, we report results of an MSC algorithm applied to object recognition and pose estimation from high range resolution radar (1D), electro-optical imagery (2D), and LIDAR point clouds (3D) separately. By abstracting the sensor phenomenology from the underlying a priori knowledge base, MSC shows promise as an easily adaptable tool for incorporating additional sensor inputs.
Compressive imaging is an emerging field which allows one to acquire far fewer
measurements of a scene than a standard pixel array and still retain the information
contained in the scene. One can use these measurements to reconstruct the original
image or even a processed version of the image. Recent work in compressive imaging
from random convolutions is extended by relaxing some model assumptions and
introducing the latest sparse reconstruction algorithms. We then compare image
reconstruction quality of various convolution mask sizes, compression ratios, and
reconstruction algorithms. We also expand the algorithm to derive a pattern recognition system which operates on a compressively sensed measurement stream. The developed compressive pattern recognition system reconstructs the detection map of the scene without the intermediate step of image reconstruction. A case study is presented where
pattern recognition performance of this compressive system is compared against a full
resolution image.
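A generic sparse-reconstruction step of the kind referred to can be sketched with ISTA on a random (non-convolutional) measurement matrix; this is an illustrative stand-in, not the paper's algorithm:

```python
import numpy as np

def ista(A, y, lam=0.02, iters=1000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # shrinkage step
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                            # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [1.5, -2.0, 1.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement operator
y = A @ x_true                                  # compressive measurements (m < n)
x_hat = ista(A, y)
print(round(float(np.linalg.norm(x_hat - x_true)), 3))   # small recovery error
```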
Covariance estimation is a key step in many target detection algorithms. To distinguish target from background
requires that the background be well-characterized. This applies to targets ranging from the precisely known
chemical signatures of gaseous plumes to the wholly unspecified signals that are sought by anomaly detectors.
When the background is modelled by a (global or local) Gaussian or other elliptically contoured distribution
(such as Laplacian or multivariate-t), a covariance matrix must be estimated. The standard sample covariance
overfits the data, and when the training sample size is small, the target detection performance suffers.
Shrinkage addresses the problem of overfitting that inevitably arises when a high-dimensional model is fit from
a small dataset. In place of the (overfit) sample covariance matrix, a linear combination of that covariance with a
fixed matrix is employed. The fixed matrix might be the identity, the diagonal elements of the sample covariance,
or some other underfit estimator. The idea is that the combination of an overfit with an underfit estimator
can lead to a well-fit estimator. The coefficient that does this combining, called the shrinkage parameter, is
generally estimated by some kind of cross-validation approach, but direct cross-validation can be computationally
expensive.
This paper extends an approach suggested by Hoffbeck and Landgrebe, and presents efficient approximations
of the leave-one-out cross-validation (LOOC) estimate of the shrinkage parameter used in estimating the
covariance matrix from a limited sample of data.
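The linear shrinkage estimator itself is compact; a sketch with a scaled-identity target shows that the combination is full-rank even when the sample covariance is singular (the shrinkage parameter is fixed by hand here, not estimated by LOOC):

```python
import numpy as np

def shrinkage_covariance(X, alpha):
    """Linear shrinkage: (1 - alpha) * sample covariance + alpha * scaled identity."""
    S = np.cov(X, rowvar=False)
    T = np.trace(S) / S.shape[0] * np.eye(S.shape[0])   # underfit target matrix
    return (1 - alpha) * S + alpha * T

rng = np.random.default_rng(0)
n, p = 20, 50                       # fewer samples than dimensions
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)
C = shrinkage_covariance(X, alpha=0.3)
print(np.linalg.matrix_rank(S) < p, bool(np.all(np.linalg.eigvalsh(C) > 0)))  # → True True
```

The sample covariance is rank-deficient (and hence not invertible for a Mahalanobis-type detector), while the shrunk estimate is positive definite for any alpha > 0, since every eigenvalue is bounded below by alpha times the average variance.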
Target detection and classification are crucial in wireless sensor networks (WSNs) for outdoor security applications. This paper presents a novel concept for detecting and discriminating dynamic objects (persons and vehicles) in a WSN using geophones that work as dynamic presence detectors in a given field of interest. The basic concept for detecting and classifying a target is to treat an individual footstep of a person and/or the motion of a vehicle as an event in the detection range of the geophone. In an on-going project, the design of a wireless geophone sensor node has been implemented. The design is based on the Gumstix Overo Fire computer-on-module, which provides high processing performance and built-in wireless capabilities. A high-resolution analog-to-digital converter is integrated with the module to acquire data from the geophone. The raw data are processed on the sensor node to detect and classify a target. An adaptive wavelet denoising algorithm is applied in real time to extract the target signal from real noisy environments; this algorithm adjusts the threshold based on the energy of the wavelet series coefficients. Timestamps of the events are extracted using an event detection method, and these events are used to classify a target at the node level.
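A single-level Haar soft-thresholding pass illustrates the flavor of wavelet denoising (the universal-threshold rule used here is an assumption standing in for the paper's adaptive, energy-based rule):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar decomposition, soft-threshold the details, reconstruct."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)        # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)        # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 4 * t)               # slow, smooth ground-motion signal
noisy = clean + 0.2 * rng.standard_normal(t.size)
# Universal threshold sigma * sqrt(2 ln N) on the detail band (sigma assumed known).
thresh = 0.2 * np.sqrt(2 * np.log(noisy.size)) / np.sqrt(2)
den = haar_denoise(noisy, thresh)
print(bool(np.abs(den - clean).mean() < np.abs(noisy - clean).mean()))
```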
Target detection is an important research topic in hyperspectral remote sensing, which is widely used in security and defense. Many target detection algorithms have been proposed, and one of the key indicators of their performance is the false-alarm rate. Feature-level fusion of different target detection results is a simple and effective way to reduce the false-alarm rate, but the different value ranges of different algorithms make data fusion difficult. This paper proposes a feature-level fusion method based on the RXD detector: multiple target detection results are stacked into a multi-band image, and the detection results are fused using the principle of anomaly detection. Experiments reveal that this method is not restricted by the number of target detection algorithms and is not influenced by their different value ranges, and that it can reduce the false-alarm rate effectively.
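The RXD (Reed–Xiaoli) detector underlying the fusion is itself a Mahalanobis-distance anomaly score; a global-background sketch on a synthetic cube:

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean, under the background covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return scores.reshape(h, w)

rng = np.random.default_rng(0)
cube = rng.standard_normal((32, 32, 8))          # toy background, 8 bands
cube[5, 7] += 6.0                                # implant a spectral anomaly
scores = rx_detector(cube)
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print(int(i), int(j))  # → 5 7
```

In the fusion scheme described above, the "bands" of the cube would be the stacked outputs of the individual detection algorithms rather than sensor wavelengths, which is why differing value ranges stop mattering.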