We show design and performance results for an Unattended Ground Sensors (UGS) Automatic Target Recognition
(ATR) target classifier using infrared (IR) imagery. Our goal was to develop a basic ATR capability to separate human
vs. animal vs. vehicle vs. non-target. Our current UGS video capability accurately detects, tracks, and transmits
target-centered long-wave infrared and visible imagery to a base station. We demonstrate an ATR capability to classify and
transmit only targets of interest to the user while excluding others. We describe the ATR development process, which
includes data collection, building a truthed dataset, feature development, classifier training, and performance evaluation.
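The development process above can be illustrated with a minimal sketch. The feature set (chip aspect ratio and thermal contrast) and the nearest-centroid classifier here are illustrative assumptions, not the paper's actual features or trained classifier:

```python
import numpy as np

# Hypothetical features for an IR target chip: aspect ratio (height/width)
# and mean thermal contrast against the local background estimate.
def extract_features(chip, background_mean):
    h, w = chip.shape
    return np.array([h / w, chip.mean() - background_mean])

# Nearest-centroid classifier trained from truthed feature vectors
# (a stand-in for whatever classifier the real system uses).
def train_centroids(features, labels):
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def classify(feature, centroids):
    # Assign the class whose centroid is closest in feature space.
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))
```

A truthed dataset supplies the labeled feature vectors for `train_centroids`; performance evaluation would then compare `classify` outputs against held-out truth labels.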
Over the past decade, technological advances have enabled the use of increasingly intelligent systems for battlefield
surveillance. These systems are triggered by a combination of external devices including acoustic and seismic sensors.
Such products are mainly used to detect vehicles and personnel.
These systems often use infrared imagery to record environmental information, but Textron Defense Systems' Terrain
Commander is one of a small number of systems which analyze these images for the presence of targets. The Terrain
Commander combines acoustic, infrared, magnetic, seismic, and visible spectrum sensors to detect nearby targets in
military scenarios. When targets are detected by these sensors, the cameras are triggered and images are captured in the
infrared and visible spectrum.
In this paper we discuss a method through which such systems can perform target tracking in order to record and
transmit only the most pertinent surveillance images. This conserves bandwidth, which is crucial because these systems often
use communication links with throughputs below 2400 bps. The method is expected to be executable on low-power
processors at frame rates exceeding 10 Hz.
We accomplish this by applying target-activated frame capture algorithms to infrared video data. These algorithms
combine edge detection and motion detection to determine the best frames to transmit to the end user, which keeps
power consumption and bandwidth requirements low. Finally, we analyze the algorithm's results.
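One way to combine the two cues is to score each frame and keep only the top scorers. This is a minimal sketch under assumed details: finite-difference gradients stand in for a real edge detector, frame differencing for a real motion detector, and the equal weighting is arbitrary:

```python
import numpy as np

def edge_energy(frame):
    # Mean gradient magnitude (finite differences) as an edge-content proxy.
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy).mean()

def motion_energy(frame, prev):
    # Mean absolute frame difference as a simple motion cue.
    return np.abs(frame.astype(float) - prev.astype(float)).mean()

def score_frames(frames, w_edge=0.5, w_motion=0.5):
    # Combine edge and motion cues into a per-frame score; the
    # highest-scoring frames are the candidates for transmission.
    scores = [0.0]  # first frame has no predecessor, so no motion cue
    for prev, cur in zip(frames, frames[1:]):
        scores.append(w_edge * edge_energy(cur)
                      + w_motion * motion_energy(cur, prev))
    return scores
```

Because both cues are cheap per-pixel operations, a scheme like this fits the low-power, low-frame-rate constraints described above.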
This project focuses on developing electro-optic algorithms which rank images by their likelihood of containing
vehicles and people. These algorithms have been applied to images obtained from Textron's Terrain Commander 2
(TC2) Unattended Ground Sensor system.
The TC2 is a multi-sensor surveillance system used in military applications. It combines infrared, acoustic, seismic,
magnetic, and electro-optic sensors to detect nearby targets. When targets are detected by the seismic and acoustic
sensors, the system is triggered and images are taken in the visible and infrared spectrum.
The original Terrain Commander system occasionally captured and transmitted an excessive number of images,
sometimes triggered by undesirable targets such as swaying trees. This wasted communications bandwidth, increased
power consumption, and resulted in a large amount of end-user time being spent evaluating unimportant images. The
algorithms discussed here help alleviate these problems.
These algorithms are currently optimized for infrared images, which give the best visibility in a wide range of
environments, but could be adapted to visible imagery as well. It is important that the algorithms be robust, with minimal
dependency on user input. They should be effective when tracking varying numbers of targets of different sizes and
orientations, despite the low resolutions of the images used. Most importantly, the algorithms must be appropriate for
implementation on a low-power processor in real time, maintaining frame rates of 2 Hz for effective surveillance
operations.
Throughout the project we implemented several algorithms and developed a methodology to compare their
performance quantitatively; both are discussed in this paper.
The design of an Unattended Ground Sensor (UGS) requires a tradeoff between cost and performance. For designs using a low-cost IR microbolometer, an array size of 160×120 pixels is a cost-effective solution. However, this array size has limited resolving capability. Our goal is to make the best use of the available pixel information from this sensor. There are many reports describing super-resolution (SR) processing as a way to improve image resolution. The definition of SR adopted here is a process in which a single high-resolution image is created from a sequence of low-resolution, sub-pixel-shifted images. The authors demonstrate the implementation of one SR algorithm from the literature and its benefits to UGS systems using both IR and visible imagery. We describe a software application in which the analyst can input a low-resolution image frame sequence to produce a high-resolution output. The input can be a globally shifted frame sequence, a static scene with moving objects, or both.
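As a concrete illustration of the SR definition above, a basic shift-and-add scheme places each low-resolution sample onto an upsampled grid at its known sub-pixel offset and averages overlaps. This is a simplified sketch, not the algorithm implemented in the paper, and it assumes the sub-pixel shifts are already known (real systems must estimate them by registration):

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, scale):
    # lr_frames: list of HxW low-resolution arrays.
    # shifts: per-frame (dy, dx) offsets in LR pixel units.
    # scale: integer upsampling factor of the output grid.
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # Map each LR sample to its nearest position on the HR grid.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)  # accumulate samples
        np.add.at(cnt, (hy, hx), 1)      # count contributions per HR pixel
    # Average where samples landed; HR pixels with no samples stay zero
    # (a full method would interpolate these holes).
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

With four frames shifted by half a pixel in each direction and `scale=2`, every high-resolution pixel receives exactly one sample, which is the ideal case this scheme exploits.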