This PDF file contains the front matter associated with SPIE Proceedings Volume 7338, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
The objective of this research is to develop control methods to attenuate laser beam jitter using a fast-steering mirror.
Adaptive filter controllers using Filtered-X least mean square and Filtered-X recursive least square algorithms are
explored. The disturbances that cause beam jitter include mechanical vibrations on the optical platform (narrowband)
and atmospheric turbulence (broadband). Both feedforward filters (with the use of auxiliary reference sensor(s)) and
feedback filters (with only output feedback) are investigated. Hybrid adaptive filters, which are a combination of
feedback and feedforward, are also examined. For situations when obtaining a coherent feedforward reference signal is
not possible, methods for incorporating multiple semi-coherent reference signals into the control law are developed. The
controllers are tested on a jitter control testbed to prove their functionality. The testbed is equipped with shakers
mounted to the optical platform and a disturbance fast-steering mirror to simulate the effects of atmospheric propagation.
Experimental results showed that the feedback adaptive filter controller was superior to the feedforward technique, and
the hybrid method achieved the best overall results.
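As a rough illustration of the feedforward approach, the sketch below runs a scalar FxLMS loop against a single narrowband (sinusoidal) jitter reference. The path models, filter length, and step size are illustrative assumptions rather than the testbed's parameters, and a perfect secondary-path estimate is assumed.

```python
import math

N = 8            # adaptive FIR length (assumption)
mu = 0.01        # LMS step size (assumption)
P = [0.0, 0.9, 0.4]   # assumed primary (disturbance) path FIR
S = [0.0, 0.7]        # assumed secondary (mirror-to-sensor) path FIR
S_hat = list(S)       # perfect secondary-path estimate for this sketch

def fir(h, buf):
    # convolve taps h with the newest-first history buf
    return sum(h[k] * buf[k] for k in range(len(h)))

w = [0.0] * N                 # adaptive filter taps
xbuf = [0.0] * (N + len(S))   # reference history, newest first
ybuf = [0.0] * len(S)         # control-output history
fxbuf = [0.0] * N             # filtered-reference history

errs = []
for n in range(4000):
    x = math.sin(2 * math.pi * 0.05 * n)   # narrowband jitter reference
    xbuf = [x] + xbuf[:-1]
    d = fir(P, xbuf)                       # jitter reaching the sensor
    y = fir(w, xbuf[:N])                   # steering-mirror command
    ybuf = [y] + ybuf[:-1]
    e = d - fir(S, ybuf)                   # residual jitter (error signal)
    fx = fir(S_hat, xbuf[:len(S_hat)])     # reference filtered by S_hat
    fxbuf = [fx] + fxbuf[:-1]
    for k in range(N):                     # FxLMS tap update
        w[k] += mu * e * fxbuf[k]
    errs.append(e)
```

With a coherent reference, the residual jitter decays as the taps converge; the FxRLS variant replaces the gradient update with a recursive least-squares one for faster convergence at higher cost.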
Conventional mirror line-of-sight stabilization approaches, such as the heliostat and coelostat, are configured such that
the sensor input line-of-sight is always oriented parallel to a gimbal axis. While provisions must be made to
accommodate the inherent two-to-one mirror kinematics in these approaches, the resulting angular rates orthogonal to
the line-of-sight are linear functions of the angular rates of the gimbals and of the base to which the sensor and gimbals
are attached, and they are uncoupled such that the two orthogonal line-of-sight axes can be controlled independently. If,
however, the sensor and gimbal cannot be oriented such that the sensor input line-of-sight is parallel to one of the gimbal
axes, the line-of-sight angular rate kinematics become non-linear and coupled. The purpose of this paper is to present
the development of the angular rate kinematic equations for such a system. The angular rate equations which result are
coupled and non-linear but account for the line-of-sight motion caused by both the angular motion of the gimbals and the
angular motion of the base and thus can be used to inertially stabilize and point the line-of-sight about two orthogonal
axes using measurements from the two gimbal angle transducers and three orthogonal gyros.
Gimbaled planar mirrors are used to point and stabilize a camera's or laser's line-of-sight (LOS). The mirror, with its
reflection property, adds another degree of complexity to the already complex area of LOS pointing and stabilization
modeling and control. For example, when the optics and detector are located off gimbal and a two-axis gimbaled mirror
is used to point the LOS, the image at the detector rolls one-for-one with the outer gimbal rotation. This behavior is
difficult to anticipate unless the governing equations are developed.
The LOS pointing kinematic equations for a planar gimbaled mirror begin with the
mirror reflectance equation [2,3]. This equation describes the reflected ray or vector in
terms of the incoming ray and the mirror unit normal, and ultimately creates the
mathematical relationship between the reflected ray, the base (or a user-defined reference
frame), and the detector reference frame. This kinematic relationship is differentiated to
form the LOS rate equations from which one can easily see what states are necessary, and
how these states are combined, for inertially stabilizing the LOS about its roll, pitch, and
yaw axes.
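The mirror reflectance equation referenced above can be written as r = i − 2(i·n)n for an incoming ray i and mirror unit normal n. A minimal sketch:

```python
def reflect(i, n):
    # mirror reflectance equation: r = i - 2 (i.n) n, with n a unit normal
    dot = sum(a * b for a, b in zip(i, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(i, n))

# A ray travelling along +x hits a mirror whose normal is tilted 45 degrees
# in the x-z plane; the reflected ray is turned 90 degrees, toward +z.
r = reflect((1.0, 0.0, 0.0), (-2 ** -0.5, 0.0, 2 ** -0.5))
```

Rotating the mirror normal by an angle rotates the reflected ray by twice that angle, which is the two-to-one mirror kinematics discussed earlier; differentiating this relation yields the LOS rate equations.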
Deformable bimorph mirrors with high damage threshold dielectric coatings are demonstrated as both intra- and extracavity
components with a pulsed energy diode-pumped Nd:YAG zigzag slab laser operating at the 150 mJ level. Two
resonator configurations were tested for intra-cavity operation; with a plane-plane resonator the far-field brightness was
enhanced by up to a factor of 1.5, whilst with a cross-Porro resonator an enhancement of up to 2.2 was achieved. As an
extracavity component far-field beam steering of ±2 mrad was demonstrated.
Conventional air-to-ground target acquisition processes treat the image stream in isolation from external data sources.
This ignores information that may be available through modern mission management systems which could be fused into
the detection process in order to provide enhanced performance. By way of an example relating to target detection, this
paper explores the use of a-priori knowledge and other sensor information in an adaptive architecture with the aim of
enhancing performance in decision making. The approach taken here is to use knowledge of target size, terrain elevation,
sensor geometry, solar geometry and atmospheric conditions to characterise the expected spatial and radiometric
characteristics of a target in terms of probability density functions. An important consideration in the construction of the
target probability density functions is accounting for the known errors in the a-priori knowledge. Potential targets are identified in the
imagery and their spatial and expected radiometric characteristics are used to compute the target likelihood. The adaptive
architecture is evaluated alongside a conventional non-adaptive algorithm using synthetic imagery representative of an
air-to-ground target acquisition scenario. Lastly, future enhancements to the adaptive scheme are discussed as well as
strategies for managing poor quality or absent a-priori information.
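As a simplified illustration of fusing a-priori knowledge into a detection likelihood (not the paper's actual architecture), Gaussian PDFs for a candidate's size and radiance can be combined, with the known a-priori errors inflating each variance:

```python
import math

def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def target_likelihood(size, radiance,
                      size_mean, size_var, rad_mean, rad_var,
                      size_prior_var=0.0, rad_prior_var=0.0):
    # Known a-priori errors widen each PDF: under a Gaussian model the
    # prediction variance and the error variance simply add.
    l_size = gauss_pdf(size, size_mean, size_var + size_prior_var)
    l_rad = gauss_pdf(radiance, rad_mean, rad_var + rad_prior_var)
    return l_size * l_rad   # spatial/radiometric independence assumed
```

Larger a-priori errors flatten the PDFs, so uncertain prior knowledge correctly carries less weight in the decision.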
An adaptive image pre-processor has been developed as a high-performance front-end for a next-generation multi-target
tracking (MTT) system. The tracking system is designed to track targets across potentially multiple and distributed
electro-optic video sensors. Typically, a pre-processor operates to enhance targets and assist the tracking; however,
such pre-processors frequently rely on expert knowledge to configure the algorithm for the particular application and
hence do not cope adequately with unexpected variations or generic application. The pre-processor developed for our MTT system
achieves a significantly improved and robust performance by using an adaptive approach based on wavelet
decomposition and a "supporting-classifier" method. It is capable of detecting and dynamically maintaining a target
definition optimized for tracking, whilst maximally suppressing non-related clutter. This paper presents an overview of
the architecture and demonstrates its performance on real video scenes.
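The wavelet decomposition underlying such a pre-processor can be illustrated with a one-level 2-D Haar transform (an unnormalized average/difference form; the paper's actual wavelet and classifier stages are not specified here):

```python
def haar1d(v):
    # one-level Haar split (average/difference form): approximation + detail
    a = [(v[2 * i] + v[2 * i + 1]) / 2.0 for i in range(len(v) // 2)]
    d = [(v[2 * i] - v[2 * i + 1]) / 2.0 for i in range(len(v) // 2)]
    return a, d

def haar2d(img):
    # rows then columns -> LL (coarse), LH, HL, HH (detail) subbands
    rows = [haar1d(r) for r in img]
    L = [r[0] for r in rows]
    H = [r[1] for r in rows]

    def along_columns(m):
        pairs = [haar1d(list(col)) for col in zip(*m)]
        lo = [list(row) for row in zip(*[p[0] for p in pairs])]
        hi = [list(row) for row in zip(*[p[1] for p in pairs])]
        return lo, hi

    LL, LH = along_columns(L)
    HL, HH = along_columns(H)
    return LL, LH, HL, HH
```

Small targets concentrate energy in the detail subbands at a characteristic scale, which is what makes a multiscale decomposition a natural front end for target/clutter separation.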
Shadows constitute a problem in many moving object detection and tracking algorithms in video. Usually, moving
shadow regions lead to larger regions for detected objects. Shadow pixels have almost the same chromaticity as the
original background pixels but lower brightness values, and shadow regions usually retain the underlying
texture, surface pattern, and color value. Therefore, a shadow pixel can be represented as a·x, where x is the actual
background color vector in 3-D RGB color space and a is a positive real number less than 1. In this paper, a shadow
detection method based on two-dimensional (2-D) cepstrum is proposed.
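A minimal per-pixel test of this shadow model (the thresholds here are illustrative assumptions, not the paper's 2-D cepstrum method) checks that the three channel ratios agree and that their common factor a lies below 1:

```python
def is_shadow(pixel, background, a_min=0.4, a_max=0.95, chroma_tol=0.05):
    # Shadow model: pixel ~= a * background with 0 < a < 1.
    # Equal chromaticity -> the three RGB channel ratios are nearly equal;
    # lower brightness -> their common value a is below 1.
    if any(b <= 0 for b in background):
        return False
    ratios = [p / b for p, b in zip(pixel, background)]
    a = sum(ratios) / 3.0
    if not (a_min < a < a_max):
        return False
    return max(ratios) - min(ratios) < chroma_tol
```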
One of the important tasks in video surveillance is to detect and track targets moving independently in a scene.
Most real-time research to date has focused on scenarios from stationary cameras where there is limited movement
in the background, such as videos taken at traffic lights or from buildings, where no background objects are proximal
to the camera. A more robust method is needed when there are moving background objects, such as trees
or flags, close to the camera, or when the camera is moving. In this paper we first introduce a variant of the
multimodal mean (MM) background model that we call the spatial multimodal mean (SMM) background model
that is better suited for these scenarios while improving the speed of the mixture of Gaussians (MoG) background
model. It approximates the multimodal MoG background with the generalization that each pixel has a random
spatial distribution. The SMM background model is well suited for real-time nonstationary scenes since it models
each pixel with a spatial distribution and the simplifications make it computationally feasible to apply image
transformations. We then describe how this can be integrated into a real-time MTI system that does not require
the estimation of depth.
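The per-pixel multimodal idea can be sketched as follows (a simplified, exponential-mean variant; the SMM additionally lets each pixel match its modes over a spatial neighbourhood, which is omitted here):

```python
class MultimodalMean:
    # Per-pixel set of running means ("modes"); a value matching no mode is
    # flagged foreground. A simplified exponential-mean variant of the MM model.
    def __init__(self, max_modes=4, match_tol=20.0, alpha=0.05):
        self.max_modes = max_modes
        self.tol = match_tol
        self.alpha = alpha
        self.modes = []           # list of [mean, hit_count]

    def update(self, value):
        for m in self.modes:
            if abs(value - m[0]) < self.tol:
                m[0] += self.alpha * (value - m[0])   # adapt the matched mode
                m[1] += 1
                return False                          # background
        self.modes.append([float(value), 1])          # start a new mode
        if len(self.modes) > self.max_modes:
            self.modes.sort(key=lambda m: -m[1])
            self.modes.pop()                          # drop the rarest mode
        return True                                   # foreground
```

A swaying branch that alternates between two intensities simply occupies two modes and stays background, which is the behavior the MoG model buys at much higher per-pixel cost.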
This paper solves four problems associated with typical correlation tracking systems. The first problem is that
uncertainty in the position observation of an object is not propagated from the detection stage to the tracking stage. The
second problem is that the shape of the reference template always lags the actual shape of the object. The third problem
is the need for a separate acquisition process to generate the initial reference template. The fourth problem is the inability
to track multiple objects. To overcome these problems we developed the Shape Estimating Filter (SEF), a homogeneous
extension of the basic correlation tracker; and its multi-target counterpart the Competitive Attentional Correlation
Tracker using Shape (CACTuS).
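The basic correlation tracker that SEF extends can be sketched as a zero-mean normalized cross-correlation search; a 1-D scan line is used here for brevity:

```python
import math

def ncc(patch, template):
    # zero-mean normalized cross-correlation of two equal-length patches
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return num / (dp * dt) if dp > 0 and dt > 0 else 0.0

def correlation_track(frame, template):
    # exhaustive template search; returns the best offset and its score
    tw = len(template)
    best_off, best_score = 0, -2.0
    for off in range(len(frame) - tw + 1):
        s = ncc(frame[off:off + tw], template)
        if s > best_score:
            best_off, best_score = off, s
    return best_off, best_score
```

The four problems listed above all stem from this basic scheme: the peak location carries no uncertainty, and the fixed template neither adapts nor supports multiple objects.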
An advanced image registration technique with sub-pixel accuracy has been developed and applied for the TD (time-differencing)
process [1]. The TD process can help to suppress heavy background clutter for improved moving
target detection. After processing a CFAR (constant false alarm rate) thresholding detector on the time-differenced
image frames, we have developed and applied an image domain moving target tracking (IDMTT) process for robust
moving target tracking. The IDMTT process uses a unique location feature by mapping and associating the real
moving targets in the previous time-differenced frame with the ghost moving targets in the current time-differenced
frame. The accurate location mapping and associating information between time frames is provided by the
registration process. Preliminary tests of the IDMTT process are promising. Robust moving target tracking can be
achieved even under a quite low signal-to-clutter noise ratio (SCNR = 0.5).
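The TD-plus-CFAR front end can be sketched in one dimension (the window sizes and scale factor are illustrative assumptions):

```python
def time_difference(prev, curr):
    # Registered frame difference: static clutter cancels, while a mover leaves
    # a positive response at its new position and a negative "ghost" at its
    # old one, the feature the IDMTT association exploits.
    return [c - p for p, c in zip(prev, curr)]

def ca_cfar(signal, guard=1, train=4, scale=3.0):
    # cell-averaging CFAR: flag cells whose magnitude exceeds a scaled mean
    # of the surrounding training cells (guard cells excluded)
    hits = []
    for i in range(len(signal)):
        cells = [abs(signal[j])
                 for j in range(i - guard - train, i + guard + train + 1)
                 if 0 <= j < len(signal) and abs(j - i) > guard]
        if cells and abs(signal[i]) > scale * (sum(cells) / len(cells)):
            hits.append(i)
    return hits
```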
A method for detecting an object's motion in images that suffer from camera shake or images with camera egomotion
is proposed. This approach is based on edge orientation codes and on the entropy calculated from a histogram of the edge
orientation codes. Here, entropy is extended to spatio-temporal entropy. We consider that the spatio-temporal entropy
calculated from time-series orientation codes can represent motion complexity, e.g., the motion of a pedestrian. Our
method can reject false positives caused by camera shake or background motion. Before the motion filtering, object
candidates are detected by a frame-subtraction-based method. After the filtering, over-detected candidates are evaluated
using the spatio-temporal entropy, and false positives are then rejected by a threshold. This method could reject 79%
to 96% of all false positives in road-roller and escalator scenes. The motion filtering somewhat decreased the detection
rate because of motion coherency or the small apparent motion of a target. In such cases, we need to introduce a tracking method
such as Particle Filter or Mean Shift Tracker. The running speed of our method is 32 to 46 ms per frame with a 160×120
pixel image on an Intel Pentium 4 CPU at 2.8 GHz. We think that this is fast enough for real-time detection. In addition,
our method can be used as pre-processing for classifiers based on support vector machines or Boosting.
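The entropy measure can be computed as the Shannon entropy of a histogram of orientation codes pooled over a spatio-temporal window (the code count of 16 is an assumption):

```python
import math

def spatio_temporal_entropy(codes, n_codes=16):
    # Shannon entropy of the histogram of edge-orientation codes pooled over a
    # spatio-temporal window (a local region across several frames).
    hist = [0] * n_codes
    for c in codes:
        hist[c] += 1
    total = float(len(codes))
    h = 0.0
    for count in hist:
        if count:
            p = count / total
            h -= p * math.log2(p)
    return h
```

Coherent motion such as camera shake shifts all codes together and yields low entropy, while articulated motion such as a walking pedestrian mixes codes over time and yields high entropy, which is the basis of the false-positive rejection.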
A Liquid Crystal Tunable Filter (LCTF) camera could form part of an inexpensive system for color-aided target
tracking; however, the standard tracking techniques will need to be adapted to the cyclic color information where
only one wavelength is measured at each timestep. A Bayesian multi-hypothesis tracking algorithm is well adapted
for this scenario, as it allows track association decisions to be delayed until complete spectra are gathered.
The design and tuning of the Bayesian multi-hypothesis tracker will be described and its behavior demonstrated for
a scenario in which a fixed-mounted LCTF camera is used in vehicle tracking.
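The cyclic measurement constraint can be sketched as follows: a track hypothesis only has a complete spectrum once every LCTF band has been visited, so association decisions are deferred until then (the band count and round-robin tuning schedule are assumptions):

```python
def band_for_timestep(t, n_bands=8):
    # cyclic LCTF tuning schedule: one wavelength band per timestep
    return t % n_bands

class TrackHypothesis:
    def __init__(self, n_bands=8):
        self.n_bands = n_bands
        self.spectrum = [None] * n_bands   # one measured value per band

    def add_measurement(self, t, value):
        self.spectrum[band_for_timestep(t, self.n_bands)] = value

    def spectrum_complete(self):
        # association decisions are deferred until every band has been observed
        return all(v is not None for v in self.spectrum)
```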
In radar tracking, the Preferred Ordering Theorem for updating the state vector in rectangular coordinates using an
Extended Kalman Filter states that the measurement components of a detection should be used sequentially in the order
azimuth first, then elevation, and range last. This is counterintuitive to the common belief that the most accurate
measurement, usually range, should be used first. However, it is observed here that the theorem can lose its
efficacy as a track converges, and that the expected value of an EKF update is not always well defined. An "extension"
is therefore given, which is dubbed the DKF after Desargues, since it is based on an analysis involving projective
geometry. With this approach the conditional update of a track in rectangular coordinates becomes well defined in the
sense that a preferred order is obviated, and it can now be updated using a range or angle observation, separately or
sequentially in either order, with less error. In this presentation the basic issues are illustrated, and the DKF is defined
and contrasted with the EKF for the two-dimensional motionless-target case.
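For reference, a standard sequential-scalar EKF update for a stationary 2-D target (the conventional EKF being critiqued here, not the DKF) processes azimuth and then range as two scalar updates, relinearizing in between; all numbers are illustrative assumptions:

```python
import math

def scalar_update(x, P, z, hx, H, r):
    # one scalar EKF update: state x (len 2), covariance P (2x2), Jacobian H (1x2)
    PHt = [P[0][0] * H[0] + P[0][1] * H[1],
           P[1][0] * H[0] + P[1][1] * H[1]]
    S = H[0] * PHt[0] + H[1] * PHt[1] + r        # innovation variance
    K = [PHt[0] / S, PHt[1] / S]                 # Kalman gain
    y = z - hx                                   # innovation
    x = [x[0] + K[0] * y, x[1] + K[1] * y]
    IKH = [[1.0 - K[0] * H[0], -K[0] * H[1]],
           [-K[1] * H[0], 1.0 - K[1] * H[1]]]
    P = [[IKH[0][0] * P[0][0] + IKH[0][1] * P[1][0],
          IKH[0][0] * P[0][1] + IKH[0][1] * P[1][1]],
         [IKH[1][0] * P[0][0] + IKH[1][1] * P[1][0],
          IKH[1][0] * P[0][1] + IKH[1][1] * P[1][1]]]
    return x, P

def update_az_then_range(x, P, az, rng, r_az, r_rng):
    # azimuth first, per the preferred ordering, then range (independent noises)
    d2 = x[0] ** 2 + x[1] ** 2
    H_az = [-x[1] / d2, x[0] / d2]               # Jacobian of atan2(y, x)
    x, P = scalar_update(x, P, az, math.atan2(x[1], x[0]), H_az, r_az)
    d = math.hypot(x[0], x[1])
    H_rng = [x[0] / d, x[1] / d]                 # Jacobian of sqrt(x^2 + y^2)
    x, P = scalar_update(x, P, rng, d, H_rng, r_rng)
    return x, P
```

Because each scalar update relinearizes about the current estimate, the result depends on the processing order, which is exactly the ordering sensitivity the DKF is designed to remove.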
The multiple hypotheses tracker (MHT) is recognized as an optimal tracking method due to the enumeration
of all possible measurement-to-track associations, which does not involve any approximation in its original
formulation. However, its practical implementation is limited by the NP-hard nature of this enumeration. As
a result, a number of maintenance techniques such as pruning and merging have been proposed to bound the
computational complexity. It is possible to improve the performance of a tracker, MHT or not, using feature
information (e.g., signal strength, size, type) in addition to kinematic data. However, in most tracking systems,
the extraction of features from the raw sensor data is typically independent of the subsequent association and
filtering stages. In this paper, a new approach, called the Judicious Multi Hypotheses Tracker (JMHT), whereby
there is an interaction between feature extraction and the MHT, is presented. The measure of the quality of feature
extraction is input into measurement-to-track association while the prediction step feeds back the parameters
to be used in the next round of feature extraction. The motivation for this forward and backward interaction
between feature extraction and tracking is to improve the performance in both steps. This approach allows for
a more rational partitioning of the feature space and removes unlikely features from the assignment problem.
Simulation results demonstrate the benefits of the proposed approach.
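One simple way to feed a feature-extraction quality measure into association (a hedged sketch, not the JMHT's actual measure) is to scale the feature log-likelihood term in the pairing score:

```python
import math

def association_score(kin_dist2, kin_var, feat_dist2, feat_var, feat_quality):
    # Combined log-likelihood for one measurement-to-track pairing: a kinematic
    # term plus a feature term scaled by the extraction-quality measure in [0, 1],
    # so poorly extracted features contribute little to the assignment.
    ll_kin = -0.5 * kin_dist2 / kin_var - 0.5 * math.log(2.0 * math.pi * kin_var)
    ll_feat = -0.5 * feat_dist2 / feat_var - 0.5 * math.log(2.0 * math.pi * feat_var)
    return ll_kin + feat_quality * ll_feat
```

When the quality measure is zero the score degrades gracefully to a purely kinematic one, which is the safe behavior when feature extraction fails.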
An air-to-air missile is subjected to extreme temperature conditions, from a hot desert runway to very cold conditions
at high altitude. The optical system must therefore provide satisfactory image quality under any of these circumstances
without major image degradation. With this in mind, two different optical system designs will be considered for this
missile: one catadioptric, using a modified Cassegrain telescope, and another purely dioptric. Both optical systems must
be able to focus energy onto two different detector arrays, one for near-infrared radiation and the other for
mid-infrared. Because of the special missile flight profile, the operational temperature range will be determined and
taken into account in designing and athermalizing the optical systems. Over this large temperature range, the missile
optical system will experience deformation effects that cause defocus and image degradation. A correct choice of
materials, including those for the telescope body and dome shroud, must be made to minimize the defocus effect, and a
thermal compensator ought to be strategically placed in both designs to provide focus correction over the entire
temperature range. The optical designs will then be analyzed for stray light and ghost images to determine the most
suitable absorbing paint and anti-reflective coatings. In the last step, both systems will be ranked according to their
performance, weight, size, viability, and price, and the best will be integrated into the missile optical system.
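A first-order sense of the athermalization problem can be obtained from a thin-lens thermal focal-shift estimate; the germanium-like dn/dT, aluminium housing expansion, and all numbers below are illustrative assumptions, not this design's values:

```python
def thermal_focal_shift(f, dn_dT, n, alpha_housing, dT):
    # Thin-lens estimate of net defocus over a temperature change dT:
    #   focal-length change:  df = -f * (dn/dT) / (n - 1) * dT
    #   housing growth:       dl = alpha_housing * f * dT
    # A negative result means the focus pulls inside the (expanded) housing,
    # which is what the thermal compensator must take up.
    df_lens = -f * (dn_dT / (n - 1.0)) * dT
    dl_housing = alpha_housing * f * dT
    return df_lens - dl_housing
```

For a 100 mm germanium-like singlet in an aluminium barrel over a 50 K rise, the lens shortens far more than the housing grows, illustrating why infrared systems with high-dn/dT materials need deliberate athermalization.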
Optical systems may contain mechanical structure, optics, sensors, and active control to improve image quality or to
point and stabilize the line-of-sight. A single model that includes structural, optical, and active control elements is
beneficial for trade studies and defining hardware requirements. The process and benefits of representing structural and
optical elements as a state space model are discussed. A state space model is derived for a reaction-less steering mirror.
Steering mirror control and performance are discussed. A method for creating state space models directly from finite
element normal modes is also described. A single closed loop model that represents both structural and optical effects
in the state space form can be used to quickly evaluate system performance.
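The modal form of such a state-space model can be sketched directly: each FE normal mode contributes a 2×2 block built from its natural frequency and damping ratio, with mode-shape values at the input and output points forming B and C (the scalar input/output and the mode data below are assumptions):

```python
import math

def modal_state_space(freqs_hz, zetas, phi_in, phi_out):
    # Per mode i (modal coordinate q_i):
    #   qdd_i + 2*z_i*w_i*qd_i + w_i^2*q_i = phi_in[i] * u,  y = sum_i phi_out[i]*q_i
    # assembled into block-diagonal A with states [q_i, qd_i].
    m = len(freqs_hz)
    A = [[0.0] * (2 * m) for _ in range(2 * m)]
    B = [[0.0] for _ in range(2 * m)]
    C = [[0.0] * (2 * m)]
    for i in range(m):
        w = 2.0 * math.pi * freqs_hz[i]
        z = zetas[i]
        A[2 * i][2 * i + 1] = 1.0
        A[2 * i + 1][2 * i] = -w * w
        A[2 * i + 1][2 * i + 1] = -2.0 * z * w
        B[2 * i + 1][0] = phi_in[i]     # mode shape at the actuator
        C[0][2 * i] = phi_out[i]        # mode shape at the optical output
    return A, B, C
```

Because A is block diagonal, modes can be truncated or added without disturbing the rest of the model, which is what makes this form convenient for closed-loop trade studies.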
This paper provides comparative evaluations of two visual object tracking algorithms - the Shape Estimating Filter
(SEF), a homogeneous extension of the basic correlation tracker; and its multi-object counterpart the Competitive
Attentional Correlation Tracker using Shape (CACTuS). CACTuS is evaluated against its
predecessor to show direct improvement in tracking effectiveness. Our approach involves an evaluation framework
consisting of a range of modern, peer reviewed tracking performance metrics, allowing for a detailed multi-faceted
analysis of tracking results. As such we provide an overview of current performance evaluation methods, including
techniques for multi-object tracker evaluation.
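A representative frame-level metric of the kind used in such evaluation frameworks (a generic IoU-based match rate, not one of the paper's specific metrics):

```python
def iou(a, b):
    # intersection-over-union of two boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def frame_match_rate(gt_boxes, trk_boxes, tau=0.5):
    # fraction of ground-truth objects covered by some track box with IoU >= tau
    if not gt_boxes:
        return 1.0
    matched = sum(1 for g in gt_boxes if any(iou(g, t) >= tau for t in trk_boxes))
    return matched / len(gt_boxes)
```

Multi-object metrics additionally penalize identity swaps and fragmentation, which a per-frame match rate alone cannot see.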
The time-differencing process, with the help of image registration techniques, is useful for image-domain moving target
detection under heavy clutter conditions. Time-differencing between two well-registered image frames can
significantly suppress the heavy static background clutter, and thus improve moving target detection. However, we
may still lose detection of a moving target from time to time under heavy clutter conditions (Pd < 100%), and also
we may lose detection of a moving target when this target stops moving. For example, a moving vehicle will
temporarily stop moving in front of a red light or a stop sign. In general, the performance of a conventional tracking
process depends on the performance of the detection process. In this paper, we present our newly developed image-domain
moving target tracking process using an adaptive local target correlation tracker. Once we start to
track a target, the correlation tracker can continue to track it whether or not we can still detect it
in future image frames. Both single- and multiple-target tracking capabilities using the correlation
tracker have been developed. Furthermore, while continuing to track a moving vehicle, we apply a super-resolution
image enhancement (SRIE) process developed at SAIC [1] to improve the vehicle resolution and signal-to-noise
ratio (SNR) for better automatic target recognition (ATR) or human/pilot-monitored recognition performance.
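The adaptive part of a local correlation tracker is often an exponential template update that is frozen when match confidence drops, which is how tracking survives missed detections or a stopped target; the blend rate and confidence gate below are illustrative assumptions:

```python
def update_template(template, patch, alpha=0.1, score=1.0, gate=0.6):
    # Exponential template adaptation: blend the latest matched patch into the
    # reference so it tracks slow appearance change. When match confidence is
    # below the gate (occlusion, a temporarily stopped or obscured target),
    # keep the old template and coast on it instead of corrupting it.
    if score < gate:
        return template
    return [(1.0 - alpha) * t + alpha * p for t, p in zip(template, patch)]
```

Freezing rather than updating on low-confidence frames is what lets the tracker hold a vehicle that stops at a red light, as described above.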