The objective of this paper is to discuss the issues that are involved in the design of a multisensor fusion system and provide a systematic analysis and synthesis methodology for the design of the fusion system. The system under consideration consists of multifrequency (similar) radar sensors. However, the fusion design must be flexible to accommodate additional dissimilar sensors such as IR, EO, ESM, and Ladar. The motivation for the system design is the proof of the fusion concept for enhancing the detectability of small targets in clutter. In the context of down-selecting the proper configuration for multisensor (similar and dissimilar, and centralized vs. distributed) data fusion, the issues of data modeling, fusion approaches, and fusion architectures need to be addressed for the particular application being considered. Although the study of different approaches may proceed in parallel, the interplay among them is crucial in selecting a fusion configuration for a given application. The natural sequence for addressing the three different issues is to begin from the data modeling, in order to determine the information content of the data. This information will dictate the appropriate fusion approach. This, in turn, will lead to a global fusion architecture. Both distributed and centralized fusion architectures are used to illustrate the design issues along with Monte-Carlo simulation performance comparison of a single sensor versus a multisensor centrally fused system.
A nonlinear adaptive detector/estimator is introduced for single and multiple sensor data processing. The problem of target detection from returns of monostatic sensor(s) is formulated as a nonlinear joint detection/estimation (JDE) problem on the unknown parameters in the signal return. The unknown parameters involve the presence of the target, its range, and azimuth. The problems of detecting the target and estimating its parameters are considered jointly. A bank of spatially and temporally localized nonlinear filters is used to estimate the a posteriori likelihood of the existence of the target in a given space-time resolution cell. Within a given cell, the localized filters are used to produce refined spatial estimates of the target parameters. A decision logic is used to decide on the existence of a target within any given resolution cell based on the a posteriori estimates derived from the likelihood functions. The inherent spatial and temporal referencing in this approach provides the automatic referencing required when data from multiple sensors are fused together.
There has been a great deal of theoretical study into decentralized detection networks composed of similar (often identical), independent sensors, and this has produced a number of satisfying theoretical results. At this point it is perhaps worth asking whether or not there is a great deal of point to such study -- certainly two sensors can provide twice the illumination of one, but what does this really translate to in terms of performance? We shall take as our metric the ground area covered with a specified Neyman-Pearson detection performance. To be fair, the comparison is of a multi-sensor network to a single-sensor system where both have the same aggregate transmitter power. The situations examined are by no means exhaustive but are, we believe, representative. Is there a case? The answer, as might be expected, is `sometimes.' When the statistical situation is well-behaved there is very little benefit to a fused system; however, when the environment is hostile the gains can be significant. We shall see, depending on the situation, gains from co-location, gains from separation, optimal gains from operation at a `fusion range,' and sometimes no gains at all.
We address the application of the backpropagation neural network to data fusion for automatic target recognition using three knowledge sources: a continuous wave (cw) coherent (X-band) radar, which provides high resolution Doppler signature measurements; a surveillance radar, which provides positional information on airborne targets; and a priori information on the flight times of targets flying regular flight paths, obtained from Adelaide Airport flight timetables.
This paper describes an investigation of the discrimination of heavy and light space objects based on infrared (IR) multi-spectral surveillance data. A time-sequence model of sensor measurements was used to produce test data, upon which the signal processing and discrimination algorithms were to be tested. The signal processing algorithms are based primarily on the estimation of the sinusoid that modulates the signal in the three IR bands, this frequency being one useful discrimination feature. Both frequency-domain and novel time-domain techniques were investigated. The time-domain technique employs binary median filtering of the original time sequence of measurements with its quadratically modeled trend removed. A second feature for discrimination is also proposed, based upon the quality of fit of the estimated sinusoid to the original time sequence. This combination of features from multiple IR bands was fused using the back-propagation neural network (BPNN) and the polynomial neural network (PNN), which were shown to provide excellent discrimination of the two target classes of interest.
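As a rough illustration of the frequency-domain variant mentioned above (not the authors' code, and with an assumed sampling scheme), the sketch below removes a quadratically modeled trend from one IR-band time series and estimates the modulating frequency from the FFT peak of the residual:

```python
# Hedged sketch: quadratic detrend + FFT peak as the modulation-frequency feature.
import numpy as np

def modulation_frequency(t, y):
    """Estimate the dominant modulation frequency of y(t) after removing
    a quadratically modeled trend. t must be uniformly sampled."""
    coeffs = np.polyfit(t, y, deg=2)          # fit the quadratic trend
    residual = y - np.polyval(coeffs, t)      # detrended series
    dt = t[1] - t[0]
    spectrum = np.abs(np.fft.rfft(residual))
    freqs = np.fft.rfftfreq(len(residual), d=dt)
    return freqs[1 + np.argmax(spectrum[1:])]  # skip the DC bin

# Synthetic example: quadratic trend plus a 0.8 Hz modulation.
t = np.arange(0.0, 30.0, 0.05)
y = 2.0 + 0.1 * t - 0.002 * t**2 + 0.3 * np.sin(2 * np.pi * 0.8 * t)
print(modulation_frequency(t, y))   # close to 0.8
```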
The processor resource requirements for a central-level multi-hypothesis tracking (MHT) fusion system have been estimated to be beyond most of the currently known general purpose processors for naval applications. A benchmark MHT fusion system has been selected for the Command and Control System (CCS) of a frigate-class naval platform of the year 2000 and beyond. The system parameters have been selected to support the Anti-Air Warfare (AAW) mission requirements of a frigate which has a long range radar (LRR), a medium range radar (MRR), an electronic support measure (ESM) sensor, and an infra-red search and track (IRST) sensor. Appropriate fusion parameters have been selected to support the frigate mission, and, to assess the real-time capability of running the algorithms, the time required to perform a cycle of the central-level MHT fusion system has been estimated for a general purpose processor. This paper presents a comparative analysis of the two implementation strategies for the two modes of operation of the central-level benchmark MHT fusion system, by analyzing the system and fusion parameters selected in this study, estimating peak and average processor resource requirements, and evaluating the timing delays between contact detection and fusion for the two approaches. Based on the estimated processor and timing requirements of these approaches, this paper also presents a concurrent computing implementation that is expected to permit the real-time execution of the central-level MHT fusion system for the AAW frigate within currently available computer technology for naval applications.
A multisensor tactical threat assessment system is being developed by the US Army CECOM RDEC Intelligence and Electronic Warfare Directorate (IEWD). Roughly speaking, the current system consists of a tactical hypothesis generator and a hypothesis evaluation algorithm. The hypothesis generator is capable of developing complex multi-Actor tactical plans. Plans have a symbolic representation which is then instantiated with actual tactical data to produce an activity model. The hypothesis evaluation algorithm uses the activity model to synthesize possible Actor actions which may satisfy the plan goals. Multi-Actor plans are synthesized individually for each Actor. Thus, each Actor maintains his own peculiar "world-view" of the battlefield during the problem solving process. The resulting implementation is a highly distributed and parallel problem solver. This methodology assumes a deterministic relationship between Actor action execution and the Actor's digital feature space, which is a raster picture element (pixel). Spatial changes caused by Actor action execution within a pixel are broadcast to other Actors within the domain using a standard AI blackboard (BB). Nondeterministic relationships between Actors are produced whenever the local spatial or temporal goals of two or more Actors conflict with one another. For example, inter-Actor conflict may be produced when two or more Actors attempt to move to the same pixel at the same instant of time. Actor movement can become deadlocked if both Actors cannot co-occupy the same pixel, i.e., the McNeil paradox. Recently, a realistic solution to the McNeil paradox was developed by the author. The solution algorithm uses Actor action execution information which has been posted on the BB to develop a global strategy which effectively mediates Actor conflicts. The approach is general enough to encourage the development of a general theory of Actor conflict, including both cooperative and adversarial Actor relationships. The author will show that for certain types of Actor conflicts the complexity of the BB inter-Actor conflict mediation algorithm is O(n).
This paper addresses the feasibility of applying data fusion techniques to resolve the sensitivity versus false alarm dilemma of rotorcraft transmission diagnostics. Traditionally, data fusion techniques have been applied almost exclusively to automatic target recognition problems, but the processing concepts are well-suited to address complex machine diagnostics and prognostics. Processing methods such as data alignment and association, hierarchical inferencing, situation assessment, threat assessment, time predictions, sensor management, and human-computer interface are common to both automatic target recognition and transmission diagnostics. The benefits of this approach include: (1) improved decision support, (2) reduced false alarm rates, (3) improved mission effectiveness, and (4) enhanced safety.
This paper examines a schema for the fusion of simultaneous measurements of LWIR and visible signature data. This phenomenology-based schema is composed of LWIR and visible synergistic algorithms developed by the authors for implementation on the Midcourse Space Experiment (MSX). LWIR and visible synergistics are presented with Optical Signatures Code (OSC) predictions of the Spirit 3 LWIR radiometer and the Space-Based Visible (SBV) sensor, each resident on the MSX spacecraft. Fundamental LWIR and visible signature phenomenologies are examined with respect to multisensor fusion functions such as thermal mass and dynamics exploitation. Methods to extract LWIR and visible signature information are outlined and an example is included for the engagement given in the MSX Late Midcourse Principal Investigator MSX V experiment plan.
The remotely measured surface vibration signatures of tactical military ground vehicles are investigated for use in target classification and identification friend or foe (IFF) systems. The use of remote surface vibration sensing by a laser radar reduces the effects of partial occlusion, concealment, and camouflage experienced by automatic target recognition systems using traditional imagery in a tactical battlefield environment. Linear Predictive Coding (LPC) efficiently represents the vibration signatures and nearest neighbor classifiers exploit the LPC feature set using a variety of distortion metrics. Nearest neighbor classifiers achieve an 88 percent classification rate in an eight class problem, representing a classification performance increase of thirty percent from previous efforts. A novel confidence figure of merit is implemented to attain a 100 percent classification rate with less than 60 percent rejection. The high classification rates are achieved on a target set which would pose significant problems to traditional image-based recognition systems. The targets are presented to the sensor in a variety of aspects and engine speeds at a range of 1 kilometer. The classification rates achieved demonstrate the benefits of using remote vibration measurement in a ground IFF system. The signature modeling and classification system can also be used to identify rotary and fixed-wing targets.
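A hedged sketch of the kind of pipeline described here, with illustrative choices of model order and distortion metric (the authors evaluated several metrics): LPC coefficients from the autocorrelation (Levinson-Durbin) method, followed by a nearest-neighbor decision.

```python
# Illustrative LPC feature extraction and 1-nearest-neighbor classification;
# model order and Euclidean metric are assumptions, not the authors' exact choices.
import numpy as np

def lpc(signal, order=10):
    """Return LPC coefficients a[1..order] from the autocorrelation method."""
    x = signal - np.mean(signal)
    r = np.correlate(x, x, mode="full")[len(x) - 1: len(x) + order]
    a = np.zeros(order)
    err = r[0]
    for i in range(order):                       # Levinson-Durbin recursion
        acc = r[i + 1] - np.dot(a[:i], r[i:0:-1])
        k = acc / err
        a_new = a.copy()
        a_new[i] = k
        a_new[:i] = a[:i] - k * a[i - 1::-1]
        a = a_new
        err *= (1.0 - k * k)
    return a

def nearest_neighbor(query, references, labels):
    """Classify a query LPC vector by the closest reference vector."""
    d = np.linalg.norm(references - query, axis=1)
    return labels[int(np.argmin(d))]
```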
As range images obtained by active laser radar (LADAR) contain the 3-D information necessary for 3-D environment understanding, great attention has been attracted, in the field of computer vision, to the processing of range images in order to extract the 3-D features of the environment. Unfortunately, most of the previously proposed processing methods for range images are extremely time-consuming, so the use of range images to obtain 3-D information about the environment is largely limited. This presentation proposes a method, based on data fusion, to obtain 3-D features of polyhedrons using co-registered range and intensity images. First, feature points and edges of the candidate planes of the objects in the intensity image are acquired by analyzing the intensity variation. Then, the candidate 3-D vertices, edges, and planes in the range image can be obtained using the correspondence of the two co-registered images. Next, the candidate planes are verified by computing and analyzing the curvatures and normals at some feature points and edges on the candidate planes in the range image. Finally, the verified candidates are regarded as actual planes of the sensed object and are used to construct a hierarchical representation of the object. Experimental results on simulated data are given to show the feasibility of the proposed approach.
The problem of intelligent use of sensors in a multi-sensor, multi-target surveillance system is discussed. The problem is to make the optimal assignment of targets to sensors subject to given constraints on sensor capacity and for a given definition of optimal. We have found previous work on the sensor management problem to have deficiencies due to the way information is used to optimize the assignment. There are numerous formulations of such `information' based approaches in the literature. This paper attempts to put the problem on a first-principles basis. The approach taken here is to determine the predicted gain in information content of a track j after it is updated with data from sensor i, for all pairs (i, j). This information content can be predicted without making the actual observation by using the properties of the Kalman covariance matrix. The particular assignment of tracks to sensors that maximizes the total information gain subject to the constraints on the sensors is then generated using linear programming methods.
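The information-gain criterion lends itself to a short sketch. Below, assuming a linear-Gaussian measurement model (H, R) per sensor, the predicted gain for a sensor/track pair is taken as half the log-ratio of prior to posterior covariance determinants, computable from the Kalman covariance update alone; the assignment step is illustrated with SciPy's Hungarian solver rather than a general linear program.

```python
# Hedged sketch of covariance-based information gain plus optimal assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def predicted_gain(P_prior, H, R):
    """Information gain for one sensor/track pair from the covariance update."""
    S = H @ P_prior @ H.T + R                      # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)           # Kalman gain
    P_post = (np.eye(len(P_prior)) - K @ H) @ P_prior
    return 0.5 * np.log(np.linalg.det(P_prior) / np.linalg.det(P_post))

def assign(track_covs, sensors):
    """sensors: list of (H, R) pairs. Returns sensor index assigned to each track."""
    gains = np.array([[predicted_gain(P, H, R) for (H, R) in sensors]
                      for P in track_covs])
    rows, cols = linear_sum_assignment(-gains)     # maximize total gain
    return dict(zip(rows, cols))
```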
Following the acceptance of the linear Gauss-Markov paradigm pioneered by Kalman, the engineering practice for the design of target tracking applications has been maturing over the last two decades. In recent years, however, two emerging facts have called for renewed attention from the research community: (1) the generalization of multiple sensor architectures, motivated by higher requirements in terms of target description and robustness to electronic warfare, and (2) the availability of affordable imaging sensors, following progress in infrared detector technology. The purpose of this communication is to report on some recent work addressing the issues raised by these two new aspects of tracking application design. Ideas are illustrated using an air defense scenario.
The central problem in multitarget-multisensor tracking is the data association problem of partitioning the observations into tracks and false alarms so that an accurate estimate of the true tracks can be recovered. Many previous and current methodologies are based on single scan processing, which is real-time, but often leads to a large number of partial and incorrect assignments, and thus incorrect track identification. The fundamental difficulty is that data association decisions once made are irrevocable. Deferred logic methods such as multiple hypothesis tracking allow correction of these misassociations and are thus considered to be the method for tracking a large number of targets. The corresponding data association problems are however NP-hard and must be solved in real-time. The current work develops a class of algorithms that produce near-optimal solutions in real-time and are potentially orders of magnitude faster than existing methods.
A parallel algorithm is presented for the extraction of the optical flow in an image which has a noisy background. The approach generalizes to higher dimensional spaces so that it can also be used to model the evolution of an n-dimensional noisy signal, by assuming that valid signal points adhere to a restricted domain of temporal models, while noise points do not. By analyzing the flow and input field, hypotheses regarding targets are determined. These hypotheses can be fused with other sensors and the global hypothesis fed back to the mapping unit to improve the determination of the optical flow.
Segmentation is a first step towards successful tracking and object recognition in 2-D pictures. Mostly the pictures are segmented with respect to quantities such as range, intensity, etc. Here a method is presented for segmentation of 2-D laser range pictures with respect to both range and variance simultaneously. This is very useful since man-made objects differ from the background in the terrain by their smoothness. The approach is based on modeling horizontal scans of the terrain as piecewise constant functions. Since the environment has a complicated and irregular structure, we use multiple models for modeling different segments in the laser range image. The switching between different models, i.e., ranges belonging to different segments in a horizontal scan, is modeled by a hidden Markov model. The method is of relatively low computational complexity and the maximal complexity can be controlled by the user. Real data is used for illustration of the method.
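The switching idea can be illustrated with a deliberately simplified two-state version (the paper uses multiple models): label each range sample as smooth (man-made) or rough (terrain) from its local variance, with HMM transition probabilities that discourage rapid switching, decoded by the Viterbi algorithm. All numerical values below are placeholders.

```python
# Simplified two-state HMM segmentation of one horizontal range scan.
import numpy as np

def local_variance(scan, half_window=3):
    pad = np.pad(scan, half_window, mode="edge")
    return np.array([np.var(pad[i:i + 2 * half_window + 1])
                     for i in range(len(scan))])

def viterbi_segment(feature, means=(0.01, 1.0), sigmas=(0.02, 1.0), p_stay=0.95):
    """Two-state Viterbi decode; state 0 = smooth, state 1 = rough."""
    n_states = 2
    log_trans = np.log(np.array([[p_stay, 1 - p_stay],
                                 [1 - p_stay, p_stay]]))
    # Gaussian log-likelihood of the variance feature under each state.
    ll = np.stack([-0.5 * ((feature - m) / s) ** 2 - np.log(s)
                   for m, s in zip(means, sigmas)], axis=1)
    delta = ll[0].copy()
    back = np.zeros((len(feature), n_states), dtype=int)
    for t in range(1, len(feature)):
        cand = delta[:, None] + log_trans            # cand[prev, next]
        back[t] = np.argmax(cand, axis=0)
        delta = cand[back[t], range(n_states)] + ll[t]
    path = np.zeros(len(feature), dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(len(feature) - 2, -1, -1):        # backtrace
        path[t] = back[t + 1, path[t + 1]]
    return path  # 0/1 label per range sample
```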
We discuss three measures to determine whether a given noise time sequence or time-varying image has a deterministically generated chaotic component and the strength of that component: Lyapunov coefficients, Kolmogorov entropy, and fractal dimension. Results of computer experiments show that either a neural network or a polynomial model may be successfully used to model a logistic function chaotic sequence generator. Polynomials are also shown to model a Lorenz system. In all cases, the model generates chaotic noise with the same measures as the real noise data.
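For one of the three measures, a minimal sketch (not the authors' experiments): generate a logistic-map sequence and estimate its Lyapunov exponent as the orbit average of log|f'(x)|, which for r = 4 should come out near ln 2.

```python
# Logistic-map generator and a direct Lyapunov-exponent estimate.
import numpy as np

def logistic_sequence(r=4.0, x0=0.2, n=10000, burn_in=100):
    x, out = x0, []
    for i in range(n + burn_in):
        x = r * x * (1.0 - x)
        if i >= burn_in:
            out.append(x)
    return np.array(out)

def lyapunov_logistic(seq, r=4.0):
    """Mean log-derivative of the logistic map along the observed orbit."""
    return np.mean(np.log(np.abs(r * (1.0 - 2.0 * seq))))

seq = logistic_sequence()
print(lyapunov_logistic(seq))   # close to ln(2) ~ 0.693 for r = 4
```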
A multi-target and multi-background classification algorithm using neural networks is presented. The algorithm uses a feedforward neural network algorithm, a double window filter, and thresholds to classify an image into targets and backgrounds. This algorithm's performance differs from that of the K-nearest neighbor (K-NN) classifier algorithm in that (1) it provides noiseless classification, (2) it is faster, and (3) it provides better accuracy. Examples are given to illustrate the results.
A new data clustering algorithm using a self-organizing method is presented. This algorithm forms clusters and is trained without supervision. The clustering is done on the basis of the statistical properties of the set of data. This algorithm differs from the K-means algorithm and other clustering algorithms in that the number of desired clusters is not required to be known a priori. It also removes noise and is fast. The convergence of the algorithm is shown. An example is given to show the application of the algorithm to clustering data and to compare the results obtained using this algorithm with those obtained using the K-means algorithm.
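The abstract does not give the algorithm itself, so the following is only an illustrative stand-in with the same two properties, namely no preset number of clusters and noise removal: a leader/threshold clustering that opens a new cluster when a sample is far from all existing centroids and discards tiny clusters as noise.

```python
# Illustrative leader/threshold clustering (not the authors' algorithm).
import numpy as np

def leader_cluster(data, radius=1.0, min_size=5):
    centroids, members = [], []
    for x in data:
        if centroids:
            d = np.linalg.norm(np.array(centroids) - x, axis=1)
            k = int(np.argmin(d))
            if d[k] <= radius:
                members[k].append(x)
                centroids[k] = np.mean(members[k], axis=0)  # running update
                continue
        centroids.append(np.array(x, dtype=float))          # open a new cluster
        members.append([x])
    # Discard sparsely populated clusters as noise.
    keep = [i for i, m in enumerate(members) if len(m) >= min_size]
    return [np.array(centroids[i]) for i in keep], [members[i] for i in keep]
```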
This paper deals with the problem of robust fast recognition of partially occluded or incomplete views of `flat' objects. Robustness is accomplished through hypothesis confirmation using complementary or supporting information available for the current hypothesis and by model-based hypothesis verification. Classification speed is obtained by pruning the hypothesis hierarchy using simple pruning procedures based on structural properties derived from current object representation. In addition, classification speed is also improved through the use of simple model-based decision making procedures instead of computationally expensive transformations. Normalized Interval Vertex Descriptors (NIVD) are used to represent objects. NIVDs are representations derived from the physical characteristics of an object (vertices and sides) that are easy to obtain, especially for polygon like shapes. They provide not only a compact representation, but they also allow the definition of features that can be used to speed up the classification process. Experimental results of this process are also included.
Classification of acoustic signatures of airborne targets using a Hybrid Artificial Neural System (ANS) is described in this paper. The acoustic data used is field data taken from various helicopters. Data used in this study was composed of multiple classes of helicopter signatures, each having several time-series segments. Test results indicate greater than 96 percent correct classification on multiple helicopter classes. The results also show that the ANS can generalize, when trained using reduced time-series segments sampled from original signatures of a target.
The U.S. Army has a critical need for the capability provided by a multifunction sensor. This is (in effect) a smart sensor system that can adapt to environmental conditions and adjust its mode of operation to effectively counter any threat it meets. It will have an intelligent signal processor which has all of the system's sensor signals to choose from. The processor chooses the appropriate signal information to rapidly detect, acquire, track, and automatically identify all targets in the vicinity of the sensor under a wide variety of battlefield scenarios and environmental conditions. The multiphenomenology signal information provides the flexibility to overcome the adverse effects of clutter, countermeasures (both active and passive), illumination, obscurants, target orientation, and weather. It should be noted, however, that the types of sensory information required are dependent on the mission and the operating environment. For instance, a strategic defense sensor operating in space can use (and will need) different types of sensor data than the multifunction sensor employed on an attack helicopter. In fact, the sensor configuration on a helicopter operating in Saudi Arabia may be quite different from one that is deployed to Vietnam. For the purpose of this paper we generalize about the technologies desired for an adaptable, `smart' sensor system. We do not specify a particular mission nor define a specific threat. However, in any case, we can assume the need to fuse sensor signal information in an intelligent processor to provide robust performance in the battlefield environment.
Teledyne Brown Engineering (TBE) has developed a high fidelity, staring, seeker modeling capability to support simulation, analysis, realtime hardware testing, and man-in-the-loop testing of strategic defense concepts. TBE's Seeker Analysis Toolbox (SAT) is an integrated collection of algorithms for seeker modeling, including: scene generation (via Strategic Scene Generation Model), optical transfer phenomena, focal plane behavior, and analog as well as digital signal processing. A discussion of SAT capabilities and results is given.
Laser Radar (LADAR) is considered by researchers in the Automatic Target Recognition (ATR) field to be the best sensor for accurate target identification. While it is known that a classifier utilizing both boundary and depth contour information can outperform a classifier which uses only the boundary contour information, the precise contribution of target depth information beyond that of boundary contour data in LADAR imagery has not been thoroughly investigated. This paper addresses that question. The test set used in the experiment includes approximately 50 LADAR images acquired in 1986 at the U.S. Army A. P. Hill test site using Raytheon's Tri-Service Laser Radar Sensor. Through the testing, each image is reduced in size by a factor of 4, 9, and 16 to simulate longer-range data and to expand the test set.
The spatio-temporal constraint equation for computation of the optical flow holds only over local spatio-temporal regions where motion is translational with constant velocity or over non-moving background regions where the image velocity is zero. Where the expression is true, it is possible to estimate the image motion vector of local image regions by minimizing the squared error over the local region. The expression is not true in the moving boundaries of moving objects over a stationary background, over regions with multiple moving objects, or over objects not in purely translational motion. Under these conditions, the accurate computation of motion vectors is not possible using this method. However, the error squared term, itself, may be used as a moving target indicator able to segment moving targets from noise and background clutter. This paper proposes and assesses the feasibility of using the error measure to detect moving boundaries in high noise images. We assess the performance of this error-squared measure in localizing object motion in high noise environments for two filtering functions G(x,y): the Gaussian function and the Gabor function.
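A minimal rendering of the proposed measure, with an assumed window size and plain finite-difference gradients in place of the G(x,y) prefiltering: fit the constraint equation by least squares in each window and keep the squared residual as the moving-target indication.

```python
# Least-squares fit of Ix*u + Iy*v + It = 0 per window; residual as the indicator.
import numpy as np

def motion_residual(frame0, frame1, win=7):
    """Squared residual of the local least-squares optical-flow fit."""
    g_row, g_col = np.gradient(frame0.astype(float))   # spatial gradients
    g_t = frame1.astype(float) - frame0.astype(float)  # temporal difference
    half = win // 2
    h, w = frame0.shape
    residual = np.zeros((h, w))
    for r in range(half, h - half):
        for c in range(half, w - half):
            sl = (slice(r - half, r + half + 1), slice(c - half, c + half + 1))
            A = np.stack([g_col[sl].ravel(), g_row[sl].ravel()], axis=1)
            b = -g_t[sl].ravel()
            uv, *_ = np.linalg.lstsq(A, b, rcond=None)
            err = b - A @ uv
            residual[r, c] = float(err @ err)   # large where the flow model fails
    return residual
```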
We investigate the compression of high-dimensional sensor data using vector quantization. Two metrics are presented for compression with the frequency sensitive competitive learning (FSCL) vector quantization (VQ) algorithm, and several indices of partitional validity are used to analyze the resulting VQ codebook clusters. Cluster analysis is used to determine the compressibility of the data. The results of this cluster analysis will help determine the effect of data compression on the performance of a target recognition system.
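A compact sketch of FSCL vector quantization as it is commonly formulated (codebook size, learning rate, and schedule here are illustrative assumptions): the winner minimizes win-count times distance, so rarely used codewords stay competitive.

```python
# Illustrative frequency-sensitive competitive learning (FSCL) codebook training.
import numpy as np

def fscl_codebook(data, n_codes=16, epochs=5, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    counts = np.ones(n_codes)
    for _ in range(epochs):
        for x in rng.permutation(data):
            d = np.linalg.norm(codes - x, axis=1)
            winner = int(np.argmin(counts * d))      # frequency-sensitive match
            codes[winner] += lr * (x - codes[winner])
            counts[winner] += 1
    return codes

def quantize(data, codes):
    """Index of the nearest codeword for each input vector."""
    d = np.linalg.norm(data[:, None, :] - codes[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```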
Herein is described a method whose objective is to enhance the speed at which terrain intervisibility calculations are performed for unknown or "pop-up" ground based radar threats. The problem with current intervisibility techniques is their inability to perform in real time, thereby jeopardizing the survivability and success of the mission. The main computational hurdle is the intensive checking of the radar extensions against the surrounding terrain. These very repetitive line-of-sight calculations result in a detailed mapping of volumes within which an object is non-detectable. We outline a set of high-level methods which check a selected set of lines for intersection with a subset of terrain elements.
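The repeated line-of-sight test at the heart of the problem can be sketched as follows (grid spacing, sampling density, and the synthetic ridge are assumptions for illustration): sample the ray between radar and query point over an elevation grid and check whether any terrain sample rises above it.

```python
# Basic line-of-sight test over a terrain elevation grid.
import numpy as np

def line_of_sight(terrain, radar_rc, radar_h, target_rc, target_h, n_samples=200):
    """terrain: 2-D elevation grid; *_rc: (row, col); *_h: height above terrain."""
    r0, c0 = radar_rc
    r1, c1 = target_rc
    z0 = terrain[r0, c0] + radar_h
    z1 = terrain[r1, c1] + target_h
    t = np.linspace(0.0, 1.0, n_samples)[1:-1]        # skip the endpoints
    rows = np.rint(r0 + t * (r1 - r0)).astype(int)
    cols = np.rint(c0 + t * (c1 - c0)).astype(int)
    sight = z0 + t * (z1 - z0)                         # height of the sight line
    return bool(np.all(terrain[rows, cols] < sight))   # True -> intervisible

# Example on a synthetic ridge between the two points.
terrain = np.zeros((100, 100))
terrain[50, :] = 30.0                                  # ridge across the grid
print(line_of_sight(terrain, (10, 10), 2.0, (90, 90), 2.0))   # False: blocked
```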
We have developed a computationally rapid method of target motion detection using the spatio-temporal constraint equation [1]. This method requires significantly fewer computations than standard methods based on the same equation, and appears to work well even under poor image conditions, such as low contrast, small object size, camera jitter, poor focus, and minimal in-plane motion. This paper will describe both the standard method and our more rapid approach, and will discuss enhancements added to mitigate the effects of registration error and camera jitter. Results, indicating object motion in infrared video images, will be presented.
An optimal time-frequency receiver based on wavelet transforms is presented in this paper. To characterize the transient in time-varying radar signals, a more generalized representation which can reflect both the time and the frequency behavior of signals is desirable. The joint time-frequency representation provides a way to localize information in both the time and the frequency domain, simultaneously. We use wavelets sampled in both the time and the frequency domain as an orthonormal basis to represent radar signals in the joint time-frequency domain. With the wavelet representation, the optimum time-frequency receiver can be derived. The joint time-frequency representation allows us to separate signals and interferences which may not be separable by conventional time gating or frequency filtering. In this paper, we also describe some general issues on the detection of known signals, the detection of signals with unknown parameters, and the wavelet-based estimation of parameters.
High speed spatial light modulators are typically binary in operation, which complicates their use as spatial filter masks when large dynamic range filters are needed. Wavelet image transformations may be implemented by a series of spatial filtering operations, but these filters typically must satisfy minimum uncertainty-principle or other constraints. For these constraints to hold, the amplitudes of the filter masks must closely follow the amplitudes of the appropriate wavelet kernel masks, and therefore simple use of binary spatial light modulator (SLM) filter masks is impossible. A bit-slice method is therefore presented which can circumvent the dynamic range limitations of typical SLMs when used in coherent optical processors. The method takes advantage of the parallelism and linearity of optical processing and only results in a (log2 N)^2 increase in complexity for images with N gray levels. Computer modeled results are presented.
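A short sketch of the bit-slice idea on the image side (the optical filtering itself is not modeled here): an image with N = 2^B gray levels decomposes into B binary planes that a binary SLM could display, and the powers-of-two recombination recovers the original.

```python
# Bit-plane decomposition and reconstruction of a gray-level image.
import numpy as np

def bit_slices(image, bits=8):
    """Return a list of binary planes, LSB first."""
    img = image.astype(np.uint32)
    return [((img >> b) & 1).astype(np.uint8) for b in range(bits)]

def recombine(planes):
    """Weighted sum of the binary planes recovers the gray-level image."""
    return sum((plane.astype(np.uint32) << b) for b, plane in enumerate(planes))

img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
planes = bit_slices(img, bits=8)
print(np.array_equal(recombine(planes), img))   # True
```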
A method for determining the viewing aspect of an arbitrary image is presented. The image must be of a shape that has bilateral symmetry. The method involves finding the transformation of the input image to a symmetric representation. This transform can then be used to derive a template corresponding to the particular aspect of the image.
The processing of Boolean imagery compressed by runlength encoding (RLE) frequently exhibits greater computational efficiency than the processing of uncompressed imagery, due to the data reduction inherent in RLE. In a previous publication, we outlined general methods for developing operators that compute over RLE Boolean imagery. In this paper, we present sequential and parallel algorithms for a variety of operations over RLE imagery, including the customary arithmetic and logical Hadamard operations, as well as the global reduce functions of image sum and maximum. RLE neighborhood-based operations, as well as the more advanced RLE operations of linear transforms, connected component labelling, and pattern recognition are presented in the companion paper.
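To give the flavor of computing directly on the compressed representation (this is an illustration, not the paper's algorithm suite), the sketch below run-length encodes Boolean rows, forms the Hadamard (elementwise) AND by merging run lists without decompressing, and reads the image sum straight off the runs.

```python
# RLE encoding, elementwise AND, and image sum without decompression.
import numpy as np

def rle_encode(row):
    runs, start = [], 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((bool(row[start]), i - start))
            start = i
    return runs

def rle_and(a, b):
    """Elementwise AND of two RLE rows with equal decoded length."""
    out, ia, ib, ra, rb = [], 0, 0, a[0], b[0]
    while ia < len(a) and ib < len(b):
        n = min(ra[1], rb[1])
        v = ra[0] and rb[0]
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + n)            # merge equal-valued runs
        else:
            out.append((v, n))
        ra = (ra[0], ra[1] - n)
        rb = (rb[0], rb[1] - n)
        if ra[1] == 0:
            ia += 1
            ra = a[ia] if ia < len(a) else ra
        if rb[1] == 0:
            ib += 1
            rb = b[ib] if ib < len(b) else rb
    return out

def rle_sum(runs):
    """Number of True pixels, computed without decompression."""
    return sum(length for value, length in runs if value)

row1 = np.array([1, 1, 1, 0, 0, 1, 0, 1], dtype=bool)
row2 = np.array([1, 0, 1, 1, 0, 1, 1, 1], dtype=bool)
print(rle_sum(rle_and(rle_encode(row1), rle_encode(row2))))   # 4
```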
This is the second of two papers which describe algorithms for the processing of Boolean imagery compressed by runlength encoding (RLE). In the previous paper, we presented sequential and parallel algorithms for a variety of operations over RLE imagery, including the customary arithmetic and logical Hadamard operations, as well as the global reduce functions of image sum and maximum. In this paper, we discuss RLE neighborhood-based operations, as well as the more advanced RLE operations of linear transforms, connected component labelling, and pattern recognition.
Line drawings can be obtained from digital images using a combination of various edge-processing techniques such as edge detection, edge thinning, perceptual organization, the Hough transform, and others. Our interest has been the extraction of surfaces for use in subsequent object recognition algorithms. Current approaches to surface extraction require a pre-defined data structure of the vertices and edges of an object. Using this data structure, all edge directions are taken clockwise; thus if an edge is counted twice with different directions, the edge is considered the common edge of two different surfaces. Consequently, the computational cost is very high and increases tremendously for complex objects. In this paper, we propose a very simple algorithm to extract both whole (bounding) and component surfaces. Our approach is based on the spatial position of contours without any geometric constraints. The approach locates boundaries of lines in an image, which are easily measured by a city-block distance transformation. The surface is then obtained by peeling off the outside boundary of the contour. The component surfaces are then separated by the set of inside boundaries, if present.
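The city-block distance measurement mentioned above is standard; a two-pass chamfer implementation is sketched below purely to illustrate that step (the peeling and surface separation are not shown).

```python
# Two-pass city-block (Manhattan) distance transform of a binary contour image.
import numpy as np

def cityblock_distance_transform(contour):
    """contour: binary image, True on contour pixels. Returns, for every pixel,
    the city-block distance to the nearest contour pixel."""
    h, w = contour.shape
    inf = h + w
    dist = np.where(contour, 0, inf).astype(int)
    # Forward pass: propagate distances from the top-left.
    for r in range(h):
        for c in range(w):
            if r > 0:
                dist[r, c] = min(dist[r, c], dist[r - 1, c] + 1)
            if c > 0:
                dist[r, c] = min(dist[r, c], dist[r, c - 1] + 1)
    # Backward pass: propagate distances from the bottom-right.
    for r in range(h - 1, -1, -1):
        for c in range(w - 1, -1, -1):
            if r < h - 1:
                dist[r, c] = min(dist[r, c], dist[r + 1, c] + 1)
            if c < w - 1:
                dist[r, c] = min(dist[r, c], dist[r, c + 1] + 1)
    return dist
```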
A new adaptive thresholding technique is presented that maximizes the contour edge information within an image. Early work by Attneave suggested that visual information in images is concentrated at the contours. He concluded that the information associated with these points and their nearby neighbors is essential for image perception. Resnikoff has suggested a measurement of information gain in terms of direction. This measurement determines information gained from a measure of an angle direction along image contours relative to other measures of information gain for other positions along the curve. Hence, one form of information measure is the angular entropy of contours within an image. Our adaptive thresholding algorithm begins by varying the threshold value between a minimum and a maximum threshold value and then computing the total contour entropy over the entire binarized edge image. Next, the threshold value that yields the highest contour entropy is selected as the optimum threshold value. It is at this threshold value that the binarized image contains the greatest amount of image features.
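An illustrative rendering of the search, with assumed gradient and binning details: sweep the threshold over the gradient magnitude, histogram the edge directions of the surviving contour pixels, and keep the threshold whose angular histogram entropy is largest.

```python
# Threshold sweep that maximizes the angular entropy of the binarized contours.
import numpy as np

def best_threshold(image, n_thresholds=32, n_angle_bins=16):
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)                   # -pi .. pi
    best_t, best_h = None, -np.inf
    for t in np.linspace(magnitude.min(), magnitude.max(), n_thresholds + 2)[1:-1]:
        mask = magnitude > t                         # binarized edge image
        if mask.sum() == 0:
            continue
        hist, _ = np.histogram(direction[mask], bins=n_angle_bins,
                               range=(-np.pi, np.pi))
        p = hist / hist.sum()
        entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
        if entropy > best_h:
            best_t, best_h = float(t), float(entropy)
    return best_t
```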
A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC processor for processing, VHSIC ASICs for high speed, reliable, inter-node communications, and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM, and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit as a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.
Generally, 3-D information is obtained through the stereo-vision method, which needs two or more images of an object at different view positions. In this paper, we apply the fractal method to obtain 3-D information from remote sensing images of landscape, which needs only one image. Fractals are a useful method for describing very irregular cases. First, we prove that remote sensing images of landscape satisfy fractal characteristics. Then, based on this conclusion and according to the relationship between the Holder constant (alpha) and the scale T, we calculate the relative depth of each point and further estimate the surface normal direction and area value.
This paper addresses the problem of multichannel signal detection in additive correlated non-Gaussian noise using the innovations approach. While this problem has been addressed extensively for the case of additive Gaussian noise, the corresponding problem for the non-Gaussian case has received limited attention. This is due to the fact that there is no unique specification for the joint probability density function (PDF) of N correlated non-Gaussian random variables. We overcome this problem by using the theory of spherically invariant random processes (SIRP) and derive the innovations based detectors. It is found that the optimal estimators for obtaining the innovations processes are linear and that the resulting detector is canonical for the class of PDFs arising from SIRPs.
Fusion of information from multiple sources is an increasingly important area of research and application. This problem is often complicated by various sensors having different limitations and fields of view. Further complications result from the absence of prior knowledge. In addition to fusing diverse information, it is also necessary to manage multiple sensors with various limitations efficiently for optimal overall system performance. We have solved this set of problems using the MLANS neural network, which employs a model-based approach and fuzzy decision logic.
The use of fractal statistics for characterizing and synthesizing scenes and signals has recently been demonstrated to be feasible. Traditionally, global fractal dimensions based on morphological coverings were used to quantify the texture of sampled data sets. This texture could be used to describe the second order statistics in 2D scenes or the jaggedness and fine structure of time series. With the realization of the benefits of fractal analysis has come a need for faster and more efficient computational algorithms. ROSETA is an algorithm which yields substantial computational performance improvements by calculating entropy based statistics instead of morphological geometric statistics. ROSETA may be used as a robust general purpose analytical tool and several examples of its implementation are described.