This report deals with issues and concerns regarding target prioritization given a multi-target environment and a certain amount of information from a sensor suite. In the Hunter-Killer mode the commander is required to perform the critical task of determining the rank or priority of targets in a multi-target environment for handover to the gunner for negation. In the past, prioritization was simply based on one criterion, namely the target class (i.e., tank, truck, jeep, etc.), where the tank would be designated as the highest priority. However, we believe that present and future battlefields will require higher-order ranking schemes and criteria for ranking the targets in a multi-target scenario. This report describes two concepts that can be utilized as a framework to perform ranking of targets. Both concepts were implemented and the performance of each was evaluated with respect to a limited training set. Initial performance results obtained with a multi-ordered mapping technique were adequate to warrant a more extensive study. The data needed to perform target prioritization was assumed to be available from a potential on-board sensor suite.
Traditionally, image motion information is extracted from two successive frames or even a sequence of frames. We show that with an image sensor that simultaneously samples the light intensity and its time derivative, image motion can be extracted from a single expanded frame. Image motion sensing can be performed by a set of Gabor filters and their gradient filters. A computational test on synthetic data shows that the computation of instantaneous motion from one expanded frame is more accurate than from successive images.
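As an illustration of recovering motion from a single frame's intensity and its time derivative (not the paper's Gabor-filter formulation, but the underlying brightness-constancy idea), the following Python sketch solves Ix*vx + Iy*vy + It = 0 in a least-squares sense over the frame; the function name and the toy ramp image are hypothetical.

import numpy as np

def motion_from_single_frame(intensity, dI_dt):
    """Estimate a translational velocity (vx, vy) from one frame's intensity
    and its simultaneously sampled time derivative, using the
    brightness-constancy constraint Ix*vx + Iy*vy + It = 0 in least squares."""
    Iy, Ix = np.gradient(intensity.astype(float))   # spatial gradients
    It = dI_dt.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # N x 2 system
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                                        # [vx, vy], pixels per unit time

# toy check: a ramp image translating at 1 px/frame in x has dI/dt = -Ix
img = np.tile(np.arange(64, dtype=float), (64, 1))
Iy, Ix = np.gradient(img)
print(motion_from_single_frame(img, -Ix))           # approximately [1, 0]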
Existing tracking algorithms have difficulties with multiple objects in heavy clutter [1]. As the number of clutter objects increases, it becomes increasingly difficult to maintain and especially to initiate tracks. A near-optimal algorithm, Multiple Hypothesis Tracking (MHT) [2], initiates tracks by considering all possible associations between multiple objects and clutter events on multiple frames. This, however, requires a combinatorially large amount of computation, which is difficult to handle even for neural networks when the number of clutter objects is large. A partial solution to this problem is offered by the Joint Probabilistic Data Association (JPDA) tracking algorithm [3], which performs fuzzy associations of objects and tracks, eliminating combinatorial search. However, the JPDA algorithm performs associations only on the last frame using established tracks and is, therefore, unsuitable for track initiation. The problem becomes even more complicated for imaging, incoherent sensors, where direct measurement of object velocity via the Doppler effect is unavailable. We have applied a previously developed MLANS neural network [5, 6] to the problem of tracking multiple objects in heavy clutter. In our approach the MLANS performs a fuzzy classification of all objects in multiple frames into multiple classes of tracks and random clutter. This novel approach to tracking using an optimal classification algorithm results in a dramatic improvement of performance: MLANS tracking combines the advantages of both the JPDA and the MHT; it is capable of track initiation by considering multiple frames, and it eliminates combinatorial search via fuzzy associations.
An adaptive and trainable hierarchical nearest neighbor controller (HNNC) is presented for performing difficult control functions. The controller combines concepts from the theory of finite automata, nearest neighbor decision theory, and control theory. In the initial implementation, the top level uses a finite state machine to assess the control situation and select an appropriate nearest neighbor controller in the second level to control the system using the nearest neighbor concept. The controllers in the second level are very simple "neural-like" controllers that perform very simple control tasks. A training procedure is used to generate the supervisory finite state machine and the second-level nearest neighbor control points in the state space that define the control law. A hierarchical nearest neighbor controller is presented to balance an inverted pendulum mounted on a moveable cart and to remotely position a trailer truck at a specified position in a constrained region using a video tracking system. These problems demonstrate the power and simplicity of the hierarchical nearest neighbor controller for nonlinear systems.
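A minimal Python sketch of a second-level nearest neighbor controller of the kind described above: stored state-space points carry control actions, and at run time the controller returns the action of the nearest stored point. The cart-pole numbers, state layout, and class name are illustrative, not taken from the paper.

import numpy as np

class NearestNeighborController:
    """Lowest-level controller of the hierarchy: stores training points in
    state space with an associated control action and, at run time, returns
    the action attached to the nearest stored point."""
    def __init__(self, states, actions):
        self.states = np.asarray(states, float)      # (N, state_dim)
        self.actions = np.asarray(actions, float)    # (N,) or (N, act_dim)
    def control(self, state):
        d = np.linalg.norm(self.states - np.asarray(state, float), axis=1)
        return self.actions[np.argmin(d)]

# toy example (hypothetical control points): push the cart toward the pole's lean
nnc = NearestNeighborController(
    states=[[0.1, 0.0], [-0.1, 0.0], [0.0, 0.5], [0.0, -0.5]],   # [angle, angular velocity]
    actions=[+10.0, -10.0, +10.0, -10.0])                        # cart force
print(nnc.control([0.08, -0.02]))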
Considerations in the design and implementation of a real-time video tracking system are presented. The availability of low-cost, high-speed programmable logic arrays (PLAs) has reduced the complexity and restrictions normally associated with video tracker design. The use of advanced Digital Signal Processor (DSP) devices allows for software programming of sophisticated tracking algorithms. A design is presented that tracks objects using a centroid algorithm within an RS-170 video signal in real time, with dynamically adjustable parameters such as track window size, thresholds, and edge sensitivity.
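A minimal sketch of one threshold-and-centroid track update of the kind named above, in Python with NumPy; the window format, threshold, and bright blob are hypothetical, and the real-time DSP/PLA hardware details are of course not represented.

import numpy as np

def centroid_track(frame, window, threshold):
    """One update of a simple centroid tracker: threshold the pixels inside
    the current track window and return the intensity-weighted centroid.
    window = (row0, col0, height, width); returns (row, col) in frame coords."""
    r0, c0, h, w = window
    roi = frame[r0:r0 + h, c0:c0 + w].astype(float)
    mask = roi > threshold
    if not mask.any():
        return None                       # lost track; caller can coast or re-acquire
    weights = np.where(mask, roi, 0.0)
    rows, cols = np.indices(roi.shape)
    total = weights.sum()
    return (r0 + (rows * weights).sum() / total,
            c0 + (cols * weights).sum() / total)

# example: a bright blob near (12, 20) inside a 32x32 window at (0, 0)
frame = np.zeros((240, 320))
frame[10:14, 18:22] = 200
print(centroid_track(frame, (0, 0, 32, 32), threshold=100))   # ~ (11.5, 19.5)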
TV tracking techniques are important in view of their passive nature. They are generally used to augment radar trackers whenever the radar receivers are jammed by the enemy. This paper discusses a TV tracker scheme employed for missile guidance to Line Of Sight (LOS). The tracker utilises contrast and edge tracking techniques to track the bright exhaust plume of the missile. The simple edge-based tracker fails to generate accurate guidance commands when the missile plume appears large in size. An auto edge selector is therefore employed to overcome the problem. The camera is zoomed in continuously to keep the plume size within a trackable limit, and the appropriate zoom corrections are applied in real time to the guidance error signals. The tracker acquires the missile as it enters the Field Of View (FOV) of the TV sensor with a high degree of confidence and subsequently tracks it to generate guidance errors. The tracker hardware has been developed and tested.
Multiple target tracking (MTT) has received much attention recently for various applications in the military as well as the Strategic Defense Initiative areas. Data association is one of the critical computations in MTT problems, because erroneous data associations often result in lost tracks. The joint probabilistic data association (JPDA) algorithm is a good approach to solving the data association problem. However, the computational complexity of this algorithm increases rapidly with the number of targets and radar returns. Neural networks have been considered to approximate the JPDA and ease the computational burden through parallel processing. In this paper, we propose a neural network data association (NNDA) algorithm for the solution of data association problems. Simulation results show three notable points: first, NNDA can track multiple targets with performance comparable to JPDA. Second, when the prediction filter cannot provide sufficiently accurate predictions (this sometimes occurs when the prediction model cannot match the tracking environment precisely enough), the neural computation provides better performance than JPDA. Third, the performance of NNDA is not affected by the number of targets, while that of JPDA degrades as the number of targets increases. As a whole, this paper presents a neural network which not only possesses the intrinsic ability of parallel computation but also provides conditionally better tracking performance than JPDA.
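The NNDA network itself is not reproduced here, but the following sketch shows the kind of soft association weights that JPDA-style methods compute: Gaussian measurement likelihoods per track, normalized per measurement, with a constant clutter term. The function name, covariance, false-alarm constant, and all coordinates are illustrative stand-ins, not the paper's formulation.

import numpy as np

def soft_association(predictions, measurements, cov, p_false=1e-3):
    """Simplified soft association weights between predicted track positions
    and measurements: Gaussian likelihood of each measurement under each
    track prediction, normalized per measurement, with a clutter column."""
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt(np.linalg.det(2 * np.pi * cov))
    L = np.zeros((len(measurements), len(predictions) + 1))
    for i, z in enumerate(measurements):
        for j, zhat in enumerate(predictions):
            d = np.asarray(z, float) - np.asarray(zhat, float)
            L[i, j] = norm * np.exp(-0.5 * d @ inv @ d)
        L[i, -1] = p_false                      # "associated with clutter"
    return L / L.sum(axis=1, keepdims=True)

preds = [[0.0, 0.0], [10.0, 10.0]]
meas  = [[0.3, -0.2], [9.6, 10.4], [25.0, 3.0]]
print(np.round(soft_association(preds, meas, np.eye(2)), 3))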
A data association technique is presented that is based on the utilization of fused multisensor data, which provides a compact description, consisting of both numerical and non-numerical attributes, of the targets present in the surveillance volume. The data association technique is comprised of two processes. The first is the process of seeded clustering, in which a cluster is sought around the predicted measurement from the existing track. The data contained in this cluster is fused to obtain a compact description of the targets constituting the cluster. The second is the process of target type matching, in which the target types contained in an existing track are matched with the target types contained in the seeded cluster. The presented method for data association provides a means by which a measure of confidence is assigned to each track (based on the evidence received), and it can be extended in a straightforward manner to handle data association in the context of multi-target tracking.
A multisensor fusion algorithm to classify the inputs (data or images) into classes (targets, backgrounds) is presented. The algorithm forms clusters and is trained without supervision. The clustering is done on the basis of the statistical properties of the set of inputs. The clustering procedure is very similar to the simple sequential leader clustering algorithm and the Carpenter/Grossberg net algorithm (CGNA). The algorithm differs from the CGNA in that (1) the data inputs and data pointers may take on real values, (2) it features an adaptive mechanism for selecting the number of clusters, and (3) it features an adaptive threshold. The problem of threshold selection is considered and the convergence of the algorithm is shown. An example is given to show the application of the algorithm to multisensor fusion for classifying targets and backgrounds.
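A minimal sketch of sequential leader-style clustering on real-valued inputs, which the algorithm above is said to resemble; the paper's adaptive threshold and adaptive cluster-count mechanisms are not reproduced (a fixed distance threshold is used), and the two-blob data set is synthetic.

import numpy as np

def leader_cluster(samples, threshold):
    """Sequential leader clustering on real-valued feature vectors: each
    sample joins the nearest existing cluster if it is within `threshold`
    of that cluster's running mean, otherwise it starts a new cluster."""
    centers, counts, labels = [], [], []
    for x in samples:
        x = np.asarray(x, dtype=float)
        if centers:
            d = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(d))
            if d[j] <= threshold:
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]   # update running mean
                labels.append(j)
                continue
        centers.append(x.copy())
        counts.append(1)
        labels.append(len(centers) - 1)
    return np.array(centers), labels

data = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 6.0])
centers, labels = leader_cluster(data, threshold=3.0)
print(len(centers), "clusters found")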
Naval forces will encounter threats in the air, surface, underwater, electro-optic/infrared (EO/IR), communications, radar, and electronic warfare domains. Technological advancements of future threats to the navy will place heavy demands (quicker reaction to faster, stealthier threats) upon the ability to process and interpret tactical data provided by multiple and often dissimilar sensors. This emphasizes the need for a naval platform employing an automated distributed command and control system (CCS) which includes a multi-sensor data fusion (MSDF) function to increase the probability of mission success against the threats of the future. The main advantage of a distributed CCS is redundancy and reconfigurability, resulting in a high degree of survivability and flexibility while accomplishing the mission. The MSDF function provides the combat system with a capability to analyze sensor data from multiple sensors and derive contact/track solutions which would not be derived by the individual sensors. The command and control (C2) functions, including the MSDF function, operate within a number of general purpose C2 processors, communicating with each other and the sensor systems via a high speed data bus. Different sensors are more effective in different environmental conditions and for different geometrical parameters (elevation, distance, bearing, etc.). The MSDF function combines the capabilities of all the sensors, providing the operators and other CCS functions with more accurate solutions faster than each sensor system operating alone. An architecture of a distributed CCS using an MSDF function to increase the probability of mission success of a naval platform is presented.
Sensor level fusion performs the detection and recognition of targets in the realm of automatic target recognition (ATR) much more efficiently than pixel level fusion. The advantages and disadvantages of a combined sensor system containing both forward looking infrared (FLIR) and laser radar (LADAR) sensors are discussed in detail, and an architecture for this dual mode sensing system is proposed. The processes of detection and recognition in such a combined system are also examined in detail.
The Wiener filter is the optimum linear deconvolution filter for a single image. As the results of Wiener filtering are not always satisfactory, we changed the problem to that of estimating the object from two or more images made with different point spread functions. Very simple formulae result which have the intuitively required properties. Computer experiments show that distinct improvements can be made in this way.
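One standard way to combine two observations of the same object made with different point spread functions is the regularized least-squares (Wiener-type) Fourier-domain estimator sketched below; it is not necessarily the authors' exact formula, and the noise-to-signal constant nsr, PSF conventions, and toy box-blur demo are assumptions for illustration.

import numpy as np

def two_image_wiener(g1, g2, h1, h2, nsr=1e-2):
    """Combine two blurred observations g1, g2 made with different point
    spread functions h1, h2 (same shape as the images, peak at the center):
        F = (conj(H1)*G1 + conj(H2)*G2) / (|H1|^2 + |H2|^2 + nsr)."""
    G1, G2 = np.fft.fft2(g1), np.fft.fft2(g2)
    H1 = np.fft.fft2(np.fft.ifftshift(h1))
    H2 = np.fft.fft2(np.fft.ifftshift(h2))
    num = np.conj(H1) * G1 + np.conj(H2) * G2
    den = np.abs(H1) ** 2 + np.abs(H2) ** 2 + nsr
    return np.real(np.fft.ifft2(num / den))

# toy demo: blur a square with two different box PSFs, then combine
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
psf1 = np.zeros_like(img); psf1[32, 28:37] = 1 / 9.0      # horizontal blur
psf2 = np.zeros_like(img); psf2[28:37, 32] = 1 / 9.0      # vertical blur
g1 = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf1))))
g2 = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf2))))
print(np.abs(two_image_wiener(g1, g2, psf1, psf2) - img).max())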
The sensor data fusion problem can be formulated as a combinatorial optimization problem. Simulated annealing is a technique, based on an analogy with the physical process of annealing, which can find solutions to such problems arbitrarily close to an optimum. However, the computational effort involved can be prohibitive, especially to obtain high-quality solutions of large problems. Parallel processing offers the capability to provide the required computational power for real-time performance in sensor data fusion applications by taking advantage of the massive parallelism and distributed representations of neural networks. Several types of neural networks, e.g., Gaussian, Boltzmann, and Cauchy machines, have been proposed to implement the technique of simulated annealing in parallel according to different cooling schedules, but such neural networks have not previously been analyzed in terms of their capabilities for the specific problem of sensor data fusion. This paper presents the results of research conducted in order to evaluate the neural network approach to the combinatorial optimization problem intrinsic in real-time sensor data fusion. A comparison with another advanced technique, genetic algorithms, is also investigated.
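A minimal simulated-annealing sketch on a toy report-to-track assignment cost matrix with geometric cooling; the cost matrix, cooling constants, and swap move are hypothetical stand-ins for the fusion problem discussed above, and no neural-network (Boltzmann/Cauchy machine) parallelization is shown.

import numpy as np

def anneal_assignment(cost, T0=1.0, alpha=0.995, steps=20000, seed=0):
    """Simulated annealing for assigning N reports to N tracks given a cost
    matrix.  The state is a permutation; a move swaps two assignments;
    worse moves are accepted with probability exp(-delta/T) as T cools."""
    rng = np.random.default_rng(seed)
    n = cost.shape[0]
    perm = rng.permutation(n)
    energy = cost[np.arange(n), perm].sum()
    T = T0
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        delta = (cost[i, perm[j]] + cost[j, perm[i]]
                 - cost[i, perm[i]] - cost[j, perm[j]])
        if delta <= 0 or rng.random() < np.exp(-delta / T):
            perm[i], perm[j] = perm[j], perm[i]
            energy += delta
        T *= alpha
    return perm, energy

cost = np.random.default_rng(1).random((8, 8))
print(anneal_assignment(cost))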
Ventricular fibrillation is a potentially fatal medical condition in which the flow of blood through the body is terminated due to the lack of an organized electric potential in the heart. Automatic implantable defibrillators are becoming common as a means for helping patients confronted with repeated episodes of ventricular fibrillation. Defibrillators must first accurately detect ventricular fibrillation and then provide an electric shock to the heart to allow a normal sinus rhythm to resume. The detection of ventricular fibrillation by using an array of multiple sensors to distinguish between signals recorded from single (normal sinus rhythm) or multiple (ventricular fibrillation) sources is presented. An idealized model is presented, and the analysis of data generated by this model suggests that the approach is promising for accurately and quickly detecting ventricular fibrillation from signals recorded by sensors placed on the epicardium.
A combination serial and parallel configuration of n sensors for decision fusion is analyzed. Unlike a parallel fusion scheme, which fuses data only when it has received information from all sensors, this configuration does not require data from all sensors before fusion. This configuration is therefore better than parallel fusion in terms of speed, since it can process data without time delay. Furthermore, it is better than serial distributed decision fusion in the sense that it removes the possibility of link failure.
This paper presents an analysis of the ability to classify fixed-wing aircraft based on their acoustic signatures. Since only a small amount of data was available, the paper focuses on feature extraction. We analyzed a data set for a single propeller aircraft and a single jet aircraft. Both spectral and cepstral analyses were performed on the data, and both nonparametric and parametric methods were used to estimate the power spectrum. For the propeller aircraft, the frequency ratio between spectral lines was found to be a useful feature for classification. The cepstra of both the propeller and jet aircraft acoustic data were found to contain features related to engine rotation rates.
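A minimal sketch of the nonparametric spectral and cepstral feature computations mentioned above; the sampling rate, window choice, and harmonic toy signal are illustrative, not taken from the paper's data.

import numpy as np

def spectrum_and_cepstrum(x, fs):
    """Nonparametric power spectrum (periodogram) and real cepstrum of an
    acoustic record; harmonic line spacing in the spectrum and peaks in the
    cepstrum both relate to engine/propeller rotation rates."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    X = np.fft.rfft(x * np.hanning(len(x)))
    psd = np.abs(X) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cep = np.fft.irfft(np.log(psd + 1e-12))       # real cepstrum
    quefrency = np.arange(len(cep)) / fs
    return freqs, psd, quefrency, cep

# toy example: a 60 Hz fundamental with harmonics (propeller-like signature)
fs = 4000
t = np.arange(0, 2.0, 1.0 / fs)
sig = sum(np.sin(2 * np.pi * 60 * k * t) / k for k in range(1, 6)) + 0.1 * np.random.randn(len(t))
freqs, psd, q, cep = spectrum_and_cepstrum(sig, fs)
print("strongest line at %.1f Hz" % freqs[np.argmax(psd[1:]) + 1])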
We describe a neural network based recognition scheme for 2-D objects that works directly from boundary information. The encoded boundary of the object is fed directly as input to the neural network, cutting out the feature extraction stage and hence making the scheme computationally simpler. The described scheme is also invariant to translation, rotation, and scale changes of the objects. Using isolated hand-written digits, we show that the proposed scheme provides recognition accuracy of up to 87%. The error backpropagation method is used as the learning algorithm for the neural network.
Teledyne Brown Engineering has developed a technique for building rugged, solid-optic processors that can withstand the rigors of battlefield operations. These optical processors have an extremely high processing throughput due to the inherent parallelism of optical systems. Their architecture is especially appropriate for target recognition applications when using the cross-correlation as the similarity measure. What's more, the optical architecture can accommodate fused sensor data as well as signals from each individual sensor. This paper discusses the target recognizer problem and the theoretical reasons why conventional processing systems have limited ability to perform sensor fusion for target recognition and describes the way an optical processor can provide real time target recognition with fused sensor data.
An approach to aircraft silhouette recognition using a genetic algorithm for pattern analysis and search tasks and a bimorph shape classifier is presented. The bimorph classifier produces an assortment of shapes derived from a medial axis transform language (MAT) by establishing a set of genes, a chromosome, that portrays the genetic makeup of each shape produced. Each gene represents a unique shape feature for that object and each chromosome a unique object. The chromosomes are used to generate the shapes embodying the classification space. The genetic algorithm then performs a search on the space until the exemplar shape is found that matches an unknown aircraft. The outcome of the search is a chromosome that constitutes the aircraft shape characteristics. The chromosome may then be compared to that of known aircraft to determine the type of aircraft in question. The procedures and results of utilizing this classification system on various aircraft silhouettes are presented.
A common mathematical foundation for the comparative analysis of statistical, fuzzy, and artificial neural pattern recognition or decision making systems is presented. This development uses abstract algebraic techniques to characterize the functions generating decision surfaces and the learning/training processes involved in each technique.
This paper describes the use of Bayesian belief networks for the fusion of continuous and discrete information. Bayesian belief networks provide a convenient and straightforward way of modeling the relationships between uncertain quantities. They also provide efficient computational algorithms. Most current applications of belief networks are restricted to either discrete or continuous quantities. We present a methodology that allows both discrete and continuous variables in the same network. This extension makes possible the fusion of information from, or inferences about, such diverse quantities as sensor output, target location, target type or ID, intent, operator judgment, behavior profile, etc.
This paper presents a methodology for the classification problem based upon a pseudo k-means algorithm. Both supervised and unsupervised classification algorithms are presented here to show the flexibility of the pseudo k-means algorithm. The supervised algorithm is computationally efficient compared with the k-nearest-neighbor (k-NN) algorithm. The unsupervised algorithm avoids the errors and time-consuming problems caused by improper selection of initial class centers in the k-means algorithm. The pseudo k-means algorithm is easy to extend to high-dimensional situations. Examples are presented to illustrate the effectiveness of the approach.
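For reference, a standard (Lloyd's) k-means in Python is sketched below; the pseudo k-means modifications described above, in particular its handling of initial class centers, are not reproduced, and the two-blob data set is synthetic.

import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Standard (Lloyd's) k-means: alternate nearest-center assignment and
    center recomputation until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5])
centers, labels = kmeans(X, 2)
print(centers)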
We consider the problem of identifying each of multiple objects in a scene with object distortions and background clutter present. A unified correlator architecture is used with inference filters that hierarchically process the input image scene to perform detection, enhancement, recognition, and finally identification. The different levels of the processor use various processing techniques: hit-miss rank-order and erosion/dilation morphological filtering, distortion-invariant filtering, feature extraction, and neural net classification.
Detection, location, and target recognition require scanning and processing of large images and are computationally expensive. Their implementation in real time is thus quite difficult. The Zoom-lens model is suggested in this paper as a possible solution for decreasing the computational burden usually associated with automatic target recognition (ATR). The original field of view (FOV) is partitioned into overlapping regions of lower resolution whose likelihood to contain targets of interest is evaluated using prespecified attention thresholds and distributed associative memories (DAMs). The Zoom-lens system iteratively focuses its attention at ever-increasing resolution on those regions selected by the preceding preattentive stages until final recognition (or rejection) occurs. Experimental results reported herein demonstrate the feasibility of this approach.
This paper describes work undertaken by British Aerospace (BAe) on the development of a neural network classifier for automatic recognition of land based targets in infrared imagery. The classifier used a histogram segmentation process to extract regions from the infrared imagery. A set of features were calculated for each region to form a feature vector describing the region. These feature vectors were then used as the input to the neural classifier. Two neural classifiers were investigated based upon the multi-layer perceptron and radial basis function networks. In order to assess the merits of a neural network approach, the neural classifiers were compared with a conventional classifier originally developed by British Aerospace (Systems and Equipment) Ltd., under contract to RARDE (Chertsey), for the purpose of infrared target recognition. This conventional system was based upon a Schurman classifier which operates on data transformed using a Hotelling Trace Transform. The ability of the classifiers to perform practical recognition of real-world targets was evaluated by training and testing the classifiers on real imagery obtained from mock land battles and military vehicle trials.
An important research objective is to develop systems which automatically generate target recognition programs. This paper presents evidence that such general goals are not feasible. Specifically, the problem of automatically synthesizing target recognition programs is shown to be NP-Complete. The intractability of this problem motivates a problem specification which is tolerant of errors. Although easier, this too is shown to be NP-Complete. These results indicate that automatic target recognition has computational limitations which are inherent in the problem specification, and not necessarily a lack of clever system designs.
Most automatic target recognition (ATR) systems are based upon measuring a set of predetermined features that someone has decided will separate the classes of targets from one another. However, such a system requires the user to decide which features will work best. An alternative is to look at the targets themselves and determine what differs between them. This is the motivation behind taking the Karhunen-Loeve Transform (KLT) of the images. The KLT finds the directions of greatest variance among the images, thus leaving to the computer the decision of where the difference between the targets lies. In this paper, two approaches to feature generation for target classification of infrared images are addressed: a standard feature set approach and a KLT approach. Each method is explained and results are included.
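A minimal sketch of KLT feature generation as described above: vectorize the images, remove the mean, and project onto the leading principal directions obtained from an SVD. The image sizes, number of components, and random test images are arbitrary placeholders.

import numpy as np

def klt_features(images, n_components):
    """Karhunen-Loeve transform of a set of images: decompose the covariance
    of the vectorized, mean-removed images (via SVD) and project onto the
    leading eigenvectors (directions of greatest variance) to get features."""
    X = np.array([im.ravel() for im in images], dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt = KLT basis images
    basis = Vt[:n_components]
    features = Xc @ basis.T                             # one feature vector per image
    return features, basis, mean

imgs = [np.random.rand(32, 32) for _ in range(20)]
feats, basis, mean = klt_features(imgs, n_components=5)
print(feats.shape)   # (20, 5)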
This paper presents a methodology for the evaluation of sensor performance utilizing the uniform theory of diffraction and method of moments. The target model is based on a combinatorial geometry representation and the sensor is described by its system response function. Data which can be obtained from the model are radar cross section, power spectral density, images, and range profiles.
The development and evaluation of multi-source, multi-spectral, all-aspect airborne target identification algorithms has proven to be cumbersome as well as disjointed. The algorithm development capability under this testbed concept encompasses model-based reasoning, information fusion, airborne target identification, and target/sensor phenomenology analysis. The evaluation capability assembles multiple sensor and target types coupled with all-aspect viewing in an operationally representative air-to-air environment. Developing better techniques for establishing positive target identification at beyond-visual ranges has increased in tactical importance as a result of the Persian Gulf War. In addition to supporting the evaluation of algorithms and associated sensors, this testbed will support ongoing R&D in the air-to-air Non-Cooperative Target Recognition (NCTR) arena.
Radar-based noncooperative target recognition (NCTR) can be attempted based on various types of radar signatures. One natural choice for such a signature domain is that of ultra-high range resolution (UHRR) profiles. These profiles provide a one-dimensional `map' of the target scatterers in the range dimension (with respect to the radar line of sight). Research in this area has shown that this signature domain seems to hold much promise, but that there are concomitant challenges that arise in connection with its exploitation. Radar waveform development, algorithm selection and development, data generation, and algorithm training and testing are all areas that can present acute challenges. This paper focuses on the issues related to the extreme variability that one may expect in UHRR signatures of fixed-wing targets. The degree of variability is quantified and it is shown that this extreme variability contributes to challenges related to algorithm development, training, and testing.
Determining the spatial image resolution required to perform automated target recognition (ATR) can provide crucial information to designers of sensors and systems employing ATR technology. We present an analytic framework for performing this determination based on a functional decomposition of the algorithmic portion of an ATR system. The effects of changes in resolution in each component are taken into consideration separately, then combined to provide a hierarchical description of the required spatial resolution.
This paper demonstrates the application of fractal random process models and their related scaling parameters as features in the analysis and segmentation of clutter in high-resolution polarimetric synthetic aperture radar (SAR) imagery. Specifically, the fractal dimension of natural clutter sources, such as grass and trees, is computed and used as a texture feature for a Bayesian classifier. The SAR shadows are segmented in a separate manner using the original backscatter power as a discriminant. The proposed segmentation process yields a three-class segmentation map for the scenes considered in this study (with three clutter types: shadows, trees and grass). The difficulty of computing texture metrics in high-speckle SAR imagery is also addressed. In particular, a two-step preprocessing approach consisting of polarimetric minimum speckle filtering followed by non-coherent spatial averaging is used. The relevance of the resulting segmentation maps to constant-false-alarm-rate (CFAR) target detection techniques is also discussed.
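A sketch of one common way to compute a fractal-dimension texture feature for an image patch, the differential box-counting estimator; the paper's exact estimator, its box sizes, and the random test patch shown here may differ, so this is purely illustrative.

import numpy as np

def fractal_dimension_dbc(patch, sizes=(2, 4, 8, 16)):
    """Differential box-counting estimate of the fractal dimension of a
    gray-level image patch, usable as a texture feature: count the number of
    gray-level boxes needed per grid cell at several grid sizes, then fit the
    slope of log N(s) against log(1/s)."""
    patch = patch.astype(float)
    M = min(patch.shape)
    G = patch.max() - patch.min() + 1e-9
    counts = []
    for s in sizes:
        h = G * s / M                      # box height for this grid size
        n = 0
        for r in range(0, M - M % s, s):
            for c in range(0, M - M % s, s):
                block = patch[r:r + s, c:c + s]
                n += int(np.ceil((block.max() - block.min()) / h)) + 1
        counts.append(n)
    return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

patch = np.random.rand(64, 64)             # rough, noise-like texture
print(fractal_dimension_dbc(patch))        # higher for rougher surfaces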
Multisensor data fusion is concerned with the integration and extraction of information from data obtained from two or more sources. Assuming the data are contaminated with noise, we present the necessary definitions and concepts to formulate multisensor data fusion as a problem of inference; specifically, we show in general and in a simple example how to assign probabilities for hypotheses expressed as propositions when data from two sources supply information relevant to the hypotheses. The example is concerned with target identification with a pulsed radar and a continuous-wave radar.
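A minimal sketch of the inference step described above: the posterior over target-identity hypotheses is the normalized product of the prior and the likelihoods of the two radar reports. All probabilities below are illustrative, not taken from the paper's example.

import numpy as np

def fuse_two_sensors(prior, lik_sensor1, lik_sensor2):
    """Bayesian fusion of two independent sensor reports for target
    identification: posterior over hypotheses is proportional to
    prior * P(report1 | hypothesis) * P(report2 | hypothesis)."""
    post = (np.asarray(prior, float)
            * np.asarray(lik_sensor1, float)
            * np.asarray(lik_sensor2, float))
    return post / post.sum()

# toy example with three target hypotheses (all numbers illustrative)
prior = [1 / 3, 1 / 3, 1 / 3]
pulsed_radar = [0.7, 0.2, 0.1]     # P(pulsed-radar report | hypothesis)
cw_radar     = [0.6, 0.3, 0.1]     # P(CW-radar report | hypothesis)
print(fuse_two_sensors(prior, pulsed_radar, cw_radar))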
In this paper, we present a suboptimal decision-theoretic estimator for differential delay and differential Doppler which is appropriate for narrowband signals, accommodates relatively unconstrained noise environments and operates without prior knowledge of the signal's spectral or temporal characteristics. Under certain adverse environmental conditions, this estimator is significantly more accurate than a conventional ambiguity surface estimator. The computational penalty for this improved performance is relatively minor.
Mappings from multiple dimensions to one dimension, and their inverses, are theoretically described by space-filling curves, i.e., Peano curves or Hilbert curves. The Peano Scan is an application of the Peano curve to the scanning of images, and it is used for analyzing, clustering, or compressing images, and for limiting the number of colors used in an image. In this paper an efficient method for visual data compression is presented, combining a generalized Peano Scan, wavelet decomposition, and an adaptive subband coding technique. The Peano Scan is incorporated into the encoding scheme in order to cluster highly correlated pixels. Using wavelet decomposition, an adaptive subband coding technique is developed to encode each subband separately with an optimum algorithm. The Discrete Cosine Transform (DCT) is applied to the low spatial frequency subband, and the high spatial frequency subbands are encoded using a run-length encoding technique.
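A minimal sketch of a space-filling-curve scan, using the Hilbert variant and the standard distance-to-coordinate conversion; the generalized Peano Scan, wavelet decomposition, and subband coders of the paper are not shown, and the 8x8 test image is a placeholder.

import numpy as np

def d2xy(n, d):
    """Convert a distance d along the Hilbert curve into (x, y) coordinates
    on an n x n grid (n must be a power of two); standard iterative form."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # rotate the quadrant
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(img):
    """Reorder the pixels of a square power-of-two image along the Hilbert
    curve so neighboring samples in the 1-D stream are also neighbors in
    2-D, keeping correlated pixels together before subsequent coding."""
    n = img.shape[0]
    return np.array([img[y, x] for x, y in (d2xy(n, d) for d in range(n * n))])

img = np.arange(64, dtype=float).reshape(8, 8)
print(hilbert_scan(img)[:10])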
A sonar bandwidth compression (BWC) technique which, unlike conventional methods, adaptively varies the coding resolution in the compression process based on a priori information is described. This novel approach yields a robust compression system whose performance exceeds the conventional methods by factors of 2-to-1 and 1.5-to-1 for display- formatted and time series sonar data, respectively. The data is first analyzed by a feature extraction routine to determine those pixels of the image that collectively comprise intelligence-bearing signal features. The data is then split into a foreground image which contains the extracted source characteristic and a larger background image which is the remainder. Since the background image is highly textured, it suffices to code only the local statistics rather than the actual pixels themselves. This results in a substantial reduction of the bit rate required to code the background image. The feature-based compression algorithm developed for sonar imagery data is also extended to the sonar time series data via a novel approach involving an initial one-dimensional DCT transformation of the time series data before the actual compression process. The unique advantage of this approach is that the coding is done in an alternative two-dimensional image domain where, unlike the original time domain, it is possible to observe, differentiate, and prioritize essential features of data in the compression process. The feature-based BWC developed for sonar data is potentially very useful for applications involving highly textured imagery. Two such applications are synthetic aperture radar and ultrasound medical imaging.
We describe several noise reduction algorithms for signals which contain nonlinear (chaotic) components. The most promising method utilizes empirical global equations of motion as an underlying predictive model. Numerical results of the algorithm are presented, demonstrating significant improvements in SNR (up to 30 dB in a single pass) even when the input SNR is very low (0 dB or lower). Ramifications of the technique and comparisons with other methods for chaotic signal processing are discussed.
This report describes a study to investigate the lossless compression of multispectral visible and thermal imagery data. The imagery is obtained from remotely sensed data acquired from airborne scanners maintained and operated by NASA at the Stennis Space Center. The aim is to determine the degree of compression possible and then implement algorithms that perform lossless data compression on images. The application of this technique lies in further compressing data that has already been subjected to a lossy technique called vector quantization. The output data from the vector quantization algorithm was compressed without any further increase in the RMS error. Initially, the data was mapped through a difference transform. This transformed image was then converted into symbols using shift-extended codes of a specific bit-size. These symbols were then coded using Huffman coding. The complexity of the implementation increases with the bit-size; hence the effect of the bit-size on the compression ratio was also examined. The data from a NASA 6-channel sensor called the Thermal Infrared Multispectral Scanner (TIMS) resulted in additional compression of 5.33 (for an image vector quantized with four codewords) to 1.28 (for an image vector quantized with 128 codewords). The data from a 7-channel sensor called the Calibrated Airborne Multispectral Scanner (CAMS) resulted in additional compression of 7 (for an image vector quantized with four codewords) to 2.22 (for an image vector quantized with 128 codewords). The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. These modules are very general in nature and are thus capable of analyzing any sets or types of images or voluminous data sets. Supporting software to perform image processing for visual display and interpretation of the compressed/classified images was also developed.
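A minimal sketch of the two coding stages described above applied to a single band: a horizontal difference transform followed by a plain Huffman coder, which here stands in for the shift-extended-code stage. The random test band is synthetic, so the printed bits/pixel figure is only illustrative.

import heapq, collections
import numpy as np

def difference_transform(band):
    """Replace each pixel (except the first column) by the difference from
    its left neighbor; differences cluster near zero and entropy-code well."""
    d = band.astype(np.int32)
    out = d.copy()
    out[:, 1:] = d[:, 1:] - d[:, :-1]
    return out

def huffman_code(symbols):
    """Build a Huffman table {symbol: bitstring} from a 1-D symbol array."""
    freq = collections.Counter(symbols.tolist())
    heap = [[f, i, [s, ""]] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {heap[0][2][0]: "0"}
    next_id = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], next_id] + lo[2:] + hi[2:])
        next_id += 1
    return {sym: code for sym, code in heap[0][2:]}

band = np.random.randint(0, 256, (64, 64))
symbols = difference_transform(band).ravel()
table = huffman_code(symbols)
bits = sum(len(table[s]) for s in symbols.tolist())
print("average bits/pixel:", bits / band.size)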
Preprocessing is beneficial before classification with neural networks because eliminating irrelevant data produces faster learning due to smaller datasets and reduces the confusion caused by irrelevant data. In this paper we demonstrate a further benefit due to smoothing that may be accomplished at the same time. A common trade-off with neural networks is between accuracy of classification of training sets versus accuracy of classification of testing sets not used for training. Classification of testing sets requires the network to interpolate. We show that the smoothing obtained by data compression, by omitting high-frequency components of the wavelet transform, can enhance interpolation, thus producing improved classification on testing data sets. A wavelet transform decomposes a signal obtained from a radar simulator into frequency and spatial domains using a Mexican hat wavelet. Varying cut-off frequencies are used in omitting higher-frequency components of the wavelet transform. An inverse wavelet transform shows the least-squares degradation in signal due to smoothing. We demonstrate that omitting high-frequency terms results in faster computation in neural network learning and provides better interpolation, that is, it increases classification performance with testing data sets. The reasons are explained. The wavelet compression results are compared with low-pass filtering.
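A sketch of the smoothing-by-compression idea, assuming one keeps only coarse-scale Mexican hat (Ricker) wavelet coefficients as the network input; the scales, test signal, and function names are hypothetical, and the paper's inverse transform and cut-off selection are not reproduced.

import numpy as np

def ricker(points, a):
    """Mexican hat (Ricker) wavelet sampled at `points` points, width a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (2 / (np.sqrt(3 * a) * np.pi ** 0.25)) * (1 - (t / a) ** 2) \
           * np.exp(-(t ** 2) / (2 * a ** 2))

def coarse_wavelet_features(signal, scales=(8, 16, 32)):
    """Mexican-hat wavelet coefficients at coarse scales only; dropping the
    fine (high-frequency) scales smooths the signal and shrinks the feature
    vector fed to the neural network."""
    signal = np.asarray(signal, float)
    return np.vstack([np.convolve(signal, ricker(10 * a, a), mode='same')
                      for a in scales])

sig = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.5 * np.random.randn(512)
feats = coarse_wavelet_features(sig)
print(feats.shape)     # (3, 512): one smoothed trace per retained scale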
Experiments were performed to demonstrate a new sonar concept. The concept involves assuming locations at which scatterers may exist and computing the strengths of scatterers at these locations that best match the data. This involves precomputing a matrix using the geometry of the sensors and transmission pulse information. The matrix is then reduced in size using singular value decomposition, which also takes care of ill-conditioning due to locating potential scatterers close together. In real-time operation of the sonar the incoming data is premultiplied by the matrix to produce a map of scatterers. A computer simulation showed the effects of the matrix rank on the scatterer map. As the rank moves away from full rank, the scatterer positions become blurred. In a laboratory experiment, we used one transmitting transducer and two receiving transducers. The signal was passed through an analog to digital converter. We demonstrate that simple scatterers can be located from this data using the new approach.
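A minimal sketch of the rank-truncated pseudoinverse step described above, applied to a toy geometry matrix; the matrix dimensions, chosen rank, scatterer strengths, and noise level are hypothetical.

import numpy as np

def scatterer_map(A, data, rank):
    """Estimate scatterer strengths at assumed locations from sonar data by
    applying a rank-truncated pseudoinverse of the precomputed geometry
    matrix A; truncation handles ill-conditioning from closely spaced
    candidate locations, at the cost of some blurring as rank is reduced."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv = Vt[:rank].T @ np.diag(1.0 / s[:rank]) @ U[:, :rank].T
    return inv @ data          # in real-time use, `inv` is precomputed once

# toy example: 40 data samples, 15 candidate scatterer locations (assumed)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 15))
truth = np.zeros(15)
truth[[3, 9]] = [1.0, 0.5]
data = A @ truth + 0.01 * rng.standard_normal(40)
print(np.round(scatterer_map(A, data, rank=12), 2))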
In this paper, we discuss an initial effort to generate pattern recognizers using a multi- resolution Gabor stack of filtered images and a simple evolutionary search algorithm. The generated feature detectors are sets of pixel detectors that measure intensities and pass these values as feature vectors to neural net classifiers. We demonstrate the use of random search to solve a discrimination problem in which tank images are separated from other military vehicle images. The techniques and results used in this paper for discrimination of grey-scale images are reminiscent of similar approaches used to generate pattern recognizers for binary images. A sparse sampling of the Gabor image stack, using only 35 pixel detectors, produces feature vectors which are readily separated by linear perceptrons.
A modern identification algorithm to reduce the complexity of estimating parameters for discrete time-invariant linear systems and nonlinear systems is presented. The algorithm requires no a priori knowledge of the input or of the order of the system. An unbiased identification estimator is presented which reduces the computational complexity of covariance matrix inversion. Probability-one convergence of the estimated parameters to their true values is presented, and stability of the identification algorithm is discussed. An example is presented to illustrate the results.
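The paper's estimator is not reproduced here, but the sketch below shows recursive least squares, a standard identification method that likewise avoids explicit covariance matrix inversion by using the matrix-inversion lemma; the second-order example system and all constants are hypothetical.

import numpy as np

class RecursiveLeastSquares:
    """Recursive least-squares estimator for y[k] = phi[k]^T theta + noise;
    the covariance P is updated with the matrix-inversion lemma, so no
    explicit matrix inversion is needed."""
    def __init__(self, n, p0=1e3):
        self.theta = np.zeros(n)
        self.P = p0 * np.eye(n)
    def update(self, phi, y):
        phi = np.asarray(phi, float)
        Pphi = self.P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)
        self.theta += gain * (y - phi @ self.theta)
        self.P -= np.outer(gain, Pphi)
        return self.theta

# identify y[k] = 0.8*y[k-1] + 0.5*u[k-1] from input/output data
rng = np.random.default_rng(0)
u, y = rng.standard_normal(500), np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
rls = RecursiveLeastSquares(2)
for k in range(1, 500):
    est = rls.update([y[k - 1], u[k - 1]], y[k])
print(np.round(est, 3))   # ~ [0.8, 0.5]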
Image registration techniques have gained importance for many applications such as area correlation tracking, handing over recognized target scenes from one sensor to another, tracking an area of interest before detection, tracking point targets, and, most recently, multi-target handling capability for defense needs, scene stabilization, etc. No single image registration algorithm (IRA) can work satisfactorily for all applications and in all environments. This paper analyzes the suitability of image registration algorithms for infrared images. Infrared images are characterized by low contrast and sensor nonuniformities such as offset errors and gain variations resulting in fixed pattern noise (FPN). Particularly in focal plane arrays (FPAs), the output of each detector is characterized by `ax + b', where `a' and `b' are the gain and dc offset terms respectively, and `x' is the photon flux level falling on the detector. These parameters, and especially their variations from detector element to detector element, affect the performance of image registration algorithms. In this paper, the basic IRAs are analyzed in this context of infrared images with low contrast and FPN. Simulated results using real-world images are presented. Novel and inexpensive confidence and redundancy measures are proposed to improve performance by detecting misregistrations.
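A minimal sketch of one registration measure that is insensitive to a global `ax + b' response: zero-mean normalized cross-correlation over integer shifts. It handles a single global gain and offset, not per-detector nonuniformity, and the shift range and random test images below are hypothetical.

import numpy as np

def ncc_register(ref, live, max_shift=8):
    """Integer-pixel registration by maximizing zero-mean normalized
    cross-correlation; subtracting the mean and dividing by the norm makes
    the score insensitive to a global gain and offset in the live image."""
    best, best_score = (0, 0), -np.inf
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(live, dr, axis=0), dc, axis=1)
            a = ref - ref.mean()
            b = shifted - shifted.mean()
            score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if score > best_score:
                best_score, best = score, (dr, dc)
    return best, best_score

ref = np.random.rand(64, 64)
live = 1.7 * np.roll(np.roll(ref, 3, axis=0), -2, axis=1) + 0.4   # gain and offset
print(ncc_register(ref, live))      # shift estimate ~ (-3, 2)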
This paper describes matrix-based algorithms for computing wavelet transform representations with application to multiresolution analysis. The structure of the algorithm presented is well suited for programming purposes and also for implementation on VLSI processors. By using overlap-add or overlap-save techniques, a constant matrix size can be used to accommodate arbitrary data lengths. Performance of the algorithm described in this paper is illustrated by decomposing an image into detail and smoothed components.
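A minimal matrix-based sketch using a one-level orthonormal Haar analysis matrix, a convenient stand-in for the paper's wavelet family: multiplying an image by the matrix on both sides splits it into smoothed and detail subbands. The overlap-add/overlap-save blocking for arbitrary data lengths is not shown.

import numpy as np

def haar_matrix(n):
    """One-level orthonormal Haar analysis matrix for even n: the first n/2
    rows average neighboring pairs (smooth part), the last n/2 rows take
    their differences (detail part)."""
    H = np.zeros((n, n))
    for k in range(n // 2):
        H[k, 2 * k:2 * k + 2] = [1, 1]
        H[n // 2 + k, 2 * k:2 * k + 2] = [1, -1]
    return H / np.sqrt(2.0)

def haar_decompose_image(img):
    """Matrix-based one-level 2-D decomposition: rows and columns are each
    multiplied by the Haar matrix, producing a quarter-size smoothed
    approximation and three detail subbands."""
    n, m = img.shape
    return haar_matrix(n) @ img @ haar_matrix(m).T

img = np.random.rand(8, 8)
coeffs = haar_decompose_image(img)
smooth = coeffs[:4, :4]                 # approximation (smoothed) subband
print(np.allclose(haar_matrix(8) @ haar_matrix(8).T, np.eye(8)))  # orthonormal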
This paper describes a method to estimate the bounds of temperatures and emissivities from thermal data. The method is tested with remotely sensed data obtained from NASA's Thermal Infrared Multispectral Scanner (TIMS), a 6-channel thermal sensor. Since this is an under-determined set of equations, i.e., there are seven unknowns (six emissivities and one temperature) and six equations (corresponding to the 6 channel fluxes), there exists theoretically an infinite number of combinations of emissivity and temperature values that can satisfy these equations. However, using some realistic initial bounds on the emissivities, bounds on the temperature are calculated. These bounds on the temperature are then refined to estimate a tighter bound on the emissivity of the source. An error analysis is also carried out to quantitatively determine the extent of uncertainty introduced in the estimates of these parameters. This method is useful only when a realistic set of bounds can be obtained for the emissivities of the data. In the case of water, the lower and upper bounds were set at 0.97 and 1.00, respectively. A set of images obtained with the TIMS is then used as real imagery data. The data was acquired over Utah Lake, a large freshwater lake near Salt Lake City, Utah, in early April 1991. It will be used to identify water temperatures for detection of underwater thermal, saline, and fresh water springs. An image consisting mostly of water is analyzed. The temperatures of the pixels are calculated to an accuracy of less than 1 deg K and the emissivities to an accuracy of less than 0.01.
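A sketch of the bounding idea for a single channel, assuming inversion of Planck's law: a measured radiance together with emissivity bounds translates into temperature bounds (for the same measured radiance, a lower emissivity implies a hotter surface). The wavelength, emissivity bounds, and simulated radiance are illustrative, and the paper's multi-channel refinement is not reproduced.

import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23      # Planck, speed of light, Boltzmann

def inverse_planck(radiance, wavelength):
    """Temperature (K) of a blackbody emitting the given spectral radiance
    (W m^-2 sr^-1 m^-1) at `wavelength` (m), by inverting Planck's law."""
    c1 = 2 * H * C ** 2
    c2 = H * C / K
    return c2 / (wavelength * np.log(1 + c1 / (wavelength ** 5 * radiance)))

def temperature_bounds(radiance, wavelength, eps_lo, eps_hi):
    """Bound the surface temperature from one channel's measured radiance
    given bounds on emissivity."""
    return (inverse_planck(radiance / eps_hi, wavelength),   # coolest consistent T
            inverse_planck(radiance / eps_lo, wavelength))   # warmest consistent T

# example: a water-like pixel at 10.5 um with emissivity between 0.97 and 1.00
wl = 10.5e-6
L_meas = 0.98 * (2 * H * C ** 2) / (wl ** 5 * (np.exp(H * C / (K * wl * 293)) - 1))
print(temperature_bounds(L_meas, wl, 0.97, 1.00))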
This paper demonstrates the successful application of a pattern recognition technique to detect abnormal operating conditions in power systems. Specifically, the paper discusses the application of the minimum entropy method to derive a 2-class classifier system that enables classification of power system behavior into either the secure or insecure class. Security violations are detected on the basis of a line-overload criterion. Classifier results for the New England Power test system are provided. The major benefits obtained by the application of pattern recognition techniques are the rapid detection of abnormal operating conditions and the substantial reduction in computation as compared to traditional methods.
This paper reports on the development of a hybrid optical/electronic signal processor for laser radar signals in fire control applications. The breadboard system being developed consists of three subsystems: (1) a signal generator producing target-representative signals, (2) the signal processor consisting of a radiometric channel and a Doppler channel, and (3) a data acquisition, analysis, and display subsystem. The radiometric channel provides target ladar cross section (LCS) resolved in crossrange, while the Doppler channel provides target radial velocity, also resolved in crossrange. Data from the two channels is fused and processed within the data analysis subsystem. Results are to be displayed in near real-time. The breadboard system will be used to demonstrate the capabilities of hybrid signal processor technology and to investigate processing laser radar returns for noncooperative target recognition, target orientation determination, and target trajectory estimation functions. It is anticipated that these functions will enhance the effectiveness of advanced fire control systems in future helicopters and ground vehicles.