A method of infrasonic signal classification using Hermite polynomials for signal preprocessing is presented. Infrasound is a low-frequency acoustic phenomenon, typically in the frequency range 0.01 Hz to 10 Hz. Data collected from infrasound sensors are preprocessed using a Hermite orthogonal basis inner-product approach. The Hermite-preprocessed signals yield feature vectors that are used as input to a parallel bank of radial basis function neural networks (RBFNNs) for classification. The spread and threshold values for each of the RBFNNs are then optimized. The robustness of this classification method is tested by introducing unknown events outside the training set and counting errors. The Hermite preprocessing method is shown to have superior performance compared to a standard cepstral preprocessing method.
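The Hermite inner-product preprocessing described above might be sketched as follows. This is a minimal illustration, not the paper's implementation: the orthonormal Hermite-function basis, the symmetric sampling grid, and its ±4 extent are assumptions.

```python
import math

def hermite_features(signal, n_basis):
    """Project a sampled signal onto the first n_basis orthonormal Hermite
    functions and return the inner products as a feature vector.

    psi_n(x) = H_n(x) * exp(-x**2 / 2) / sqrt(2**n * n! * sqrt(pi)),
    with H_n generated by the recurrence H_{k+1} = 2x H_k - 2k H_{k-1}.
    """
    m = len(signal)
    # Assumed design choice: map sample indices onto a symmetric grid
    # [-4, 4] where the low-order Hermite functions have decayed.
    xs = [4.0 * (2.0 * i / (m - 1) - 1.0) for i in range(m)]
    features = []
    for n in range(n_basis):
        norm = math.sqrt((2.0 ** n) * math.factorial(n) * math.sqrt(math.pi))
        acc = 0.0
        for x, s in zip(xs, signal):
            if n == 0:
                hn = 1.0
            elif n == 1:
                hn = 2.0 * x
            else:
                h_prev, h_cur = 1.0, 2.0 * x
                for k in range(1, n):
                    h_prev, h_cur = h_cur, 2.0 * x * h_cur - 2.0 * k * h_prev
                hn = h_cur
            acc += hn * math.exp(-x * x / 2.0) / norm * s
        features.append(acc)
    return features
```

Because the Hermite functions alternate even/odd symmetry, an even signal on a symmetric grid produces near-zero odd-order coefficients, which is one way such features encode signal shape.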
A novel Artificial Neural Network (ANN) is presented, which has been designed for computationally intensive problems, and applied to the optimization of electromagnetic devices such as antennas and microwave devices. The ANN exploits a unique number representation in conjunction with a more standard neural network architecture. An ANN consisting of hetero-associative memory provided a very efficient method of computing the necessary geometrical values for the devices, when used in conjunction with a new randomization process. The number representation used provides significant insight into this new method of fault-tolerant computing. Further work is needed to evaluate the potential of this new paradigm.
A system is developed for tracking moving objects through natural scenery. A technique is presented for performing change detection on imagery to determine the difference between two images or a sequence of images. From there, an algorithm is presented to detect the appearance of new objects and/or the disappearance of existing objects. Then the application of a Variable Structure Interacting Multiple Model tracking filter is presented. The method of performing change detection is based upon the concept of image subspace projection. A set of basis image maps is formed that, when combined with a mixing matrix, can recreate the original image. Subsequent images are then projected into this basis. Each projected image is then subtracted from the original image to perform the change detection. Spatial filtering is applied to increase the contrast between the change and the background, and an adaptive filter is then applied to pass the locations of changes in the images into the tracking filter. Tracking is performed through the use of multiple motion models. The filter's motion models are adaptively added or deleted as required by the moving object's dynamics. The moving object's state is estimated through extended Kalman filtering.
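A minimal sketch of the subspace-projection change detection step, assuming a least-squares projection onto Gram-Schmidt-orthonormalized basis images; the paper's mixing-matrix formulation and spatial/adaptive filtering stages are not reproduced here.

```python
def detect_change(basis_images, new_image, threshold):
    """Flag pixels of new_image whose residual, after projection onto the
    subspace spanned by the basis images, exceeds threshold.
    Images are flattened into plain lists of pixel values."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Gram-Schmidt orthonormalization of the basis images.
    ortho = []
    for img in basis_images:
        v = list(img)
        for q in ortho:
            c = dot(v, q)
            v = [vi - c * qi for vi, qi in zip(v, q)]
        n = dot(v, v) ** 0.5
        if n > 1e-12:
            ortho.append([vi / n for vi in v])

    # Project the new image into the basis subspace.
    proj = [0.0] * len(new_image)
    for q in ortho:
        c = dot(new_image, q)
        proj = [pi + c * qi for pi, qi in zip(proj, q)]

    # Residual = new image minus its projection; threshold it
    # to obtain a binary change map.
    return [1 if abs(x - p) > threshold else 0
            for x, p in zip(new_image, proj)]
```

Pixels well explained by the background subspace produce small residuals; a new object shows up as a localized residual spike that the tracking filter can then be fed.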
KEYWORDS: Independent component analysis, Interference (communication), Principal component analysis, Signal processing, Neural networks, Signal detection, Signal to noise ratio, Sensors, Signal analyzers, Nonlinear optics
An important element of monitoring compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT) is an infrasound network. For reliable monitoring, it is important to distinguish between nuclear explosions and other sources of infrasound. This will require signal classification after a detection is made.
An integral part of Comprehensive Nuclear Test Ban Treaty monitoring is an international infrasonic monitoring network that is capable of detecting and verifying nuclear explosions. Reliable detection of such events must be made from data that may contain other sources of infrasonic phenomena. Infrasonic waves can also result from volcanic eruptions, mountain associated waves, auroral waves, earthquakes, meteors, avalanches, severe weather, quarry blasting, high-speed aircraft, gravity waves, and microbaroms. This paper shows that a feedforward multi-layer neural network discriminator, trained by backpropagation, is capable of distinguishing between two unique infrasonic events from single-station recordings with a relatively high degree of accuracy. The two types of infrasonic events used in this study are volcanic eruptions and a set of mountain associated waves recorded at Windless Bight, Antarctica. An important element for the successful classification of infrasonic events is the preprocessing technique used to form a set of feature vectors that can be used to train and test the neural network. The preprocessing steps used in our analysis of the infrasonic data are similar to those used in speech processing, specifically speech recognition. From the raw time-domain infrasonic data, a set of mel-frequency cepstral coefficients and their associated derivatives for each signal is used to form a set of feature vectors. These feature vectors capture the pertinent characteristics of the data and are used to classify the events of interest in place of the raw data. A linear analysis was first performed on the feature vector space to determine the best combination of mel-frequency cepstral coefficients and derivatives. Then several simulations were run to distinguish between two different volcanic events, and mountain associated waves versus volcanic events, using their infrasonic characteristics.
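The cepstral preprocessing idea can be illustrated with a toy real cepstrum plus delta coefficients. This is a simplification, not the authors' pipeline: the paper uses full mel-frequency cepstral coefficients, and the mel filterbank stage is omitted here.

```python
import cmath
import math

def cepstral_features(signal, n_coeffs):
    """Toy real cepstrum: log-magnitude DFT spectrum followed by an
    inverse cosine transform (valid because the log spectrum of a real
    signal is even).  O(N**2) DFT for clarity, not speed."""
    N = len(signal)
    spec = [abs(sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N)]
    # Small floor avoids log(0) at exact spectral zeros.
    logspec = [math.log(s + 1e-12) for s in spec]
    return [sum(logspec[k] * math.cos(2.0 * math.pi * k * n / N)
                for k in range(N)) / N
            for n in range(n_coeffs)]

def delta(frames):
    """First-order (delta) coefficients: frame-to-frame differences of a
    sequence of cepstral feature vectors."""
    return [[b - a for a, b in zip(f0, f1)]
            for f0, f1 in zip(frames, frames[1:])]
```

The deltas capture how the cepstral envelope evolves over time, which is the role the derivative features play in the paper's feature vectors.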
Principal component analysis (PCA) plays an important role in various areas. In many applications it is necessary to adaptively compute the principal components of the input data. Over the past several years, there have been numerous neural network approaches to adaptively extract principal components for PCA. One of the most popular learning rules for training a single-layer linear network for principal component extraction is Sanger's generalized Hebbian algorithm (GHA). We have extended the GHA (EGHA) by including a positive-definite symmetric weighting matrix in the representation error-cost function that is used to derive the learning rule to train the network. The EGHA presents the opportunity to place different weighting factors on the principal component representation errors. Specifically, if prior knowledge is available pertaining to the variances of each term of the input vector, this statistical information can be incorporated into the weighting matrix. We have shown that by using a weighted representation error-cost function, where the weighting matrix is diagonal with the reciprocals of the standard deviations of the input on the diagonal, more accurate results can be obtained using the EGHA over the GHA.
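Sanger's GHA, which the paper extends, can be sketched as below. The optional `weights` argument is a hypothetical approximation of the EGHA's diagonal weighting, implemented by pre-scaling each input term (e.g. by the reciprocal of its standard deviation); it is not the paper's exact EGHA learning rule.

```python
import random

def gha_train(data, n_components, lr=0.01, epochs=200, weights=None):
    """Train a single-layer linear network with Sanger's generalized
    Hebbian algorithm; rows of the returned W converge to the leading
    principal directions of the (uncentered) data."""
    dim = len(data[0])
    if weights is not None:
        # Hypothetical EGHA-style weighting: pre-scale each input term.
        data = [[wi * xi for wi, xi in zip(weights, x)] for x in data]
    rng = random.Random(0)
    W = [[rng.uniform(-0.1, 0.1) for _ in range(dim)]
         for _ in range(n_components)]
    for _ in range(epochs):
        for x in data:
            y = [sum(W[i][j] * x[j] for j in range(dim))
                 for i in range(n_components)]
            for i in range(n_components):
                for j in range(dim):
                    # Sanger's rule: Hebbian term minus the reconstruction
                    # from components 0..i (deflation).
                    recon = sum(y[k] * W[k][j] for k in range(i + 1))
                    W[i][j] += lr * y[i] * (x[j] - recon)
    return W
```

For a single component this reduces to Oja's rule, and the weight vector converges to a unit-norm estimate of the dominant eigenvector of the input covariance.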
The Partial Least-Squares Regression (PLSR) approach to statistical calibration model development has been formulated using an inverse model. The inverse model PLSR algorithm is implemented using the Partial Least Squares neural NETwork (PLSNET) architecture. Generalized neural network learning rules derived from a statistical representation error criterion are presented. These learning rules will accommodate a quadratic optimization criterion, providing the linear solution. Optimization functions which grow less than quadratically can also be used to provide a robust solution when the empirical data contains impulsive and colored noise and outliers. The robust optimization criterion also accounts for the higher-order statistics associated with the input data. The inverse model PLSNET learning rules require fewer mathematical operations per weight update than the forward model robust PLSNET algorithms, resulting in faster convergence in many cases.
We have developed a robust Partial Least-Squares Regression (PLSR) neural network approach to statistical calibration model development. Generalized neural network learning rules derived from a weighted statistical representation error criterion that grows less than quadratically are presented. This optimization criterion allows for higher-order statistics associated with the inputs to be taken into account and also serves to robustify the results when the empirical data contains impulsive and colored noise and outliers. The learning rules presented are considered generalized because they can be used to implement several specialized cases including: robust PLSR, linear PLSR, weighted least-squares, and variance scaling. The same learning rules also implement steepest descent or Newton's method. Newton's method can be used to formulate an adaptive learning rate for training the network.
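A common example of a criterion that grows less than quadratically is the Huber loss; the toy steepest-descent fit below uses it as a hypothetical stand-in for the paper's robust representation-error criterion, to show why such criteria damp the influence of outliers.

```python
def huber_grad(residual, delta=1.0):
    """Gradient of the Huber loss: quadratic near zero, linear in the
    tails, so large residuals (outliers) contribute a bounded gradient."""
    if abs(residual) <= delta:
        return residual
    return delta * (1.0 if residual > 0 else -1.0)

def robust_fit_line(xs, ys, lr=0.01, epochs=500):
    """Fit y ~ a*x by per-sample steepest descent on the Huber criterion.
    A toy one-parameter model, not the paper's PLSR network."""
    a = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            r = a * x - y
            a -= lr * huber_grad(r) * x
    return a
```

With a squared-error criterion a single gross outlier can pull the slope far from the true value; under the Huber criterion its gradient saturates at ±delta, so the fit stays close to the inlier trend.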
This paper presents a neurocomputing approach for solving the algebraic matrix Riccati equation. This approach is able to utilize a good initial condition to reduce the computation time in comparison to standard methods for solving the Riccati equation. The repeated solution of closely related Riccati equations appears in homotopy algorithms for certain problems in fixed-architecture control. Hence, the new approach has the potential to significantly speed up these algorithms. It also has potential applications in adaptive control. The structured neural network architecture is trained using error backpropagation based on a steepest-descent learning rule. An example is given which illustrates the advantage of utilizing a good initial condition (i.e., initial setting of the neural network synaptic weight matrix) in the structured neural network.
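For the scalar case, the steepest-descent idea reduces to gradient descent on the squared Riccati residual, seeded with an initial condition `p0`; this is a sketch of the principle only, since the paper's structured network handles the full matrix equation.

```python
def riccati_scalar(a, b, q, r, p0, lr=0.01, steps=5000):
    """Solve the scalar algebraic Riccati equation
        2*a*p - (b**2 / r) * p**2 + q = 0
    by steepest descent on f(p) = residual**2 / 2, starting from p0
    (the analogue of initializing the network's synaptic weights)."""
    p = p0
    for _ in range(steps):
        res = 2.0 * a * p - (b * b / r) * p * p + q
        # Chain rule: df/dp = res * d(res)/dp.
        grad = res * (2.0 * a - 2.0 * (b * b / r) * p)
        p -= lr * grad
    return p
```

Starting near the stabilizing root, the iteration contracts toward it; a good `p0` (e.g. the solution of a nearby Riccati equation in a homotopy sweep) cuts the number of steps needed.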
We present in this paper an adaptive linear neural network architecture called PLSNET. This network is based on partial least-squares (PLS) regression. The architecture is a modular network with stages that are associated with the desired number of PLS factors that are to be retained. PLSNET actually consists of two separate but coupled architectures, PLSNET-C for PLS calibration, and PLSNET-P for prediction (or estimation). We show that PLSNET-C can be trained by supervised learning with three standard Hebbian learning rules that extract the PLS weight loading vectors, the regression coefficients, and the loading vectors for the univariate output component case (single target values). The PLS information that is extracted by PLSNET-C after training, i.e., three sets of synaptic weights, is used by PLSNET-P as fixed weights (through the coupling) in its architecture. PLSNET-P can then yield predictions of the output variable given test measurements as its input. Two examples are presented; the first illustrates the typically improved predictive capability of PLSNET compared to classical least-squares, and the second shows how PLSNET can be used for parametric system identification.
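The calibration/prediction split can be illustrated with a plain NIPALS PLS1 sketch for a univariate output. The direct algebraic extraction below stands in for PLSNET-C's Hebbian training, producing the same three sets of quantities (weight loading vectors, loading vectors, regression coefficients) that PLSNET-P would then use as fixed weights; it is not the network architecture itself.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pls1_fit(X, y, n_factors):
    """Calibration stage (PLSNET-C analogue): extract per-factor weight
    loadings Ws, loadings Ps, and regression coefficients cs by NIPALS
    with deflation.  X is a list of rows; y is a list of targets."""
    X = [row[:] for row in X]
    y = y[:]
    Ws, Ps, cs = [], [], []
    for _ in range(n_factors):
        # Weight-loading vector: covariance direction between X and y.
        w = [dot([row[j] for row in X], y) for j in range(len(X[0]))]
        nw = dot(w, w) ** 0.5
        w = [wj / nw for wj in w]
        t = [dot(row, w) for row in X]                     # scores
        tt = dot(t, t)
        p = [dot([row[j] for row in X], t) / tt for j in range(len(X[0]))]
        c = dot(y, t) / tt                                 # regression coeff
        # Deflate X and y before extracting the next factor.
        X = [[row[j] - ti * p[j] for j in range(len(row))]
             for row, ti in zip(X, t)]
        y = [yi - c * ti for yi, ti in zip(y, t)]
        Ws.append(w); Ps.append(p); cs.append(c)
    return Ws, Ps, cs

def pls1_predict(x, Ws, Ps, cs):
    """Prediction stage (PLSNET-P analogue): apply the fixed extracted
    weights to a new measurement vector, mirroring the deflation."""
    x = x[:]
    yhat = 0.0
    for w, p, c in zip(Ws, Ps, cs):
        t = dot(x, w)
        yhat += c * t
        x = [xj - t * pj for xj, pj in zip(x, p)]
    return yhat
```

When the number of retained factors equals the rank of X and the target is an exact linear function of the inputs, the PLS1 prediction reproduces the least-squares solution exactly; with fewer factors it trades a little bias for variance reduction, which is where PLS typically beats classical least-squares on collinear data.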