In this paper, we describe efforts to implement multiperspective mosaicking of infrared and color video data for the purpose of under-vehicle inspection. The goal is to create a large, high-resolution mosaic that can be used to quickly visualize the entire scene captured by a camera making a single pass underneath the vehicle. Several constraints are placed on the video data to support the assumption that the entire scene in the sequence lies on a single plane, so a single mosaic represents a single video sequence. Phase correlation is used to perform the motion analysis.
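As context for the motion-analysis step, below is a minimal sketch of standard FFT-based phase correlation, which estimates the dominant frame-to-frame translation of a planar scene from the peak of the inverse-transformed cross-power spectrum. This is the textbook form of the technique, not the authors' implementation.

```python
import numpy as np

def phase_correlation(frame_a, frame_b):
    """Estimate the integer (dy, dx) translation between two grayscale frames."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(cross))      # impulse at the relative shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                         # map peak index to a signed shift
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```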
Most mobile robots use a combination of absolute and relative sensing techniques for position estimation. Relative positioning techniques are generally known as dead-reckoning. Many systems use odometry as their only dead-reckoning means. However, in recent years fiber optic gyroscopes have become more affordable and are being used on many platforms to supplement odometry, especially in indoor applications. Still, if the terrain is not level (i.e., rugged or rolling terrain), the tilt of the vehicle introduces errors into the conversion of gyro readings to vehicle heading. To overcome this problem, vehicle tilt must be measured and factored into the heading computation. A unique new mobile robot is the Segway Robotics Mobility Platform (RMP). This functionally close relative of the innovative Segway Human Transporter (HT) dynamically stabilizes a statically unstable single-axle robot, based on the principle of the inverted pendulum. While this approach works very well for human transportation, it introduces a unique set of challenges to navigation equipment using an onboard gyro, because in operation the Segway RMP constantly changes its forward tilt to keep itself from falling over. This paper introduces our new Fuzzy Logic Expert rule-based navigation (FLEXnav) method for fusing data from multiple gyroscopes and accelerometers in order to accurately estimate the attitude (i.e., heading and tilt) of a mobile robot. The attitude information is then further fused with wheel encoder data to estimate the three-dimensional position of the mobile robot. We have further extended this approach to cover the special conditions of operation on the Segway RMP. The paper presents experimental results of a Segway RMP equipped with our system and running over moderately rugged terrain.
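The tilt error the abstract refers to can be seen in the standard Euler-angle rate transform, sketched below: body-frame gyro rates map to a heading rate only through the current roll and pitch. This is generic attitude kinematics shown for illustration, not the FLEXnav rule base itself.

```python
import math

def heading_rate(p, q, r, roll, pitch):
    """Yaw rate (rad/s) from body-frame gyro rates p, q, r (rad/s).

    On level ground (roll = pitch = 0) this reduces to the z-gyro reading r;
    on a constantly pitching platform such as the Segway RMP the
    tilt-dependent correction terms matter.
    """
    return (q * math.sin(roll) + r * math.cos(roll)) / math.cos(pitch)
```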
Stereo is a key component in many autonomous navigation tasks. These applications demand real-time performance, and consequently the state of the art uses local correlation-based algorithms that lend themselves to algorithmic and hardware optimization. These systems perform well in simple terrain or on open ground. However, when discrete objects such as trees are present in the scene, correlation-based approaches exhibit inherent difficulties. Some of these difficulties are introduced during the preprocessing stage that attempts to compensate for photometric variations between the cameras. Others occur during the correlation stage due to occlusion. As a result, object portions appear enlarged, contracted, or missing as the range data bleeds between the foreground object and the background. This complicates subsequent obstacle detection, representation, and modeling. These problems have been addressed by more sophisticated stereo algorithms based on energy minimization and global optimization schemes. Such complex algorithms, however, are computationally demanding and not amenable to real-time implementation. Our solution uses a better preprocessing method, intelligent use of edge cues, and a variation of the traditional shiftable-window approach to enhance stereo correlation at and near depth discontinuities. There is additional computational overhead involved, but we are able to maintain real-time performance. We present details of our new algorithm and several results in complex natural environments.
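For readers unfamiliar with the shiftable-window idea the abstract builds on, here is a toy sketch of the assumed baseline (not the paper's variant): after fixed-window cost aggregation, a minimum filter over the aggregated cost is equivalent to letting the window shift around each pixel, which reduces bleeding at depth discontinuities.

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def shiftable_cost(left, right, d, win=7):
    """SAD cost map at disparity d with a shiftable aggregation window."""
    diff = np.abs(left.astype(np.float32) - np.roll(right, d, axis=1))
    sad = uniform_filter(diff, size=win)      # fixed-window aggregation
    return minimum_filter(sad, size=win)      # best shifted window placement

def disparity_map(left, right, d_max=64, win=7):
    """Winner-take-all disparity over the shiftable-window costs."""
    costs = np.stack([shiftable_cost(left, right, d, win) for d in range(d_max)])
    return np.argmin(costs, axis=0)
```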
In this paper we present a new method for the registration of multiple sensors applied to a mobile robotic inspection platform. Our main technical challenge is automating the integration of various multimodal inputs, such as depth maps and multi-spectral images. This task is approached through a unified framework based on a new registration criterion that can be employed for both 3D and 2D datasets. The system embedding this technology reconstructs 3D models of scenes and objects that are inspected by an autonomous platform in high-security areas. The models are processed and rendered with corresponding multi-spectral textures, which greatly enhances both human and machine identification of threat objects.
A UGV (unmanned ground vehicle) can better accomplish autonomous mobility through fusion of multiple sensors that provide information concerning the vehicle's location and orientation within the world. Fusion requires an understanding of the constituent elements. This paper describes the application of an egomotion estimator to a sequence of data from a forward-facing video camera on a surrogate UGV, compares the resulting egomotion estimates to those from a co-located inertial navigation system, and defines measures of consistency of the paired observations.
The French Military Robotic Study Program (introduced at Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes that can provide an immediate "operational add-on value." This paper details the "road and track following" theme (named AUT2), whose main purpose was to develop a vision-based subsystem to automatically detect the roadsides of an extended range of roads and tracks suitable for military missions. To achieve this goal, efforts focused on three main areas:
(1) Improvement of image quality at the algorithms' inputs, through the selection of suitable video cameras and the development of a THALES-patented algorithm that removes, in real time, most of the disturbing shadows in images taken in natural environments, enhances contrast, and reduces reflection effects caused by films of water.
(2) Selection and improvement of two complementary algorithms (one segment-oriented, the other region-based).
(3) Development of a fusion process between the two algorithms, which feeds a road model in real time with the best available data.
Each of these steps was developed so that the global perception process is reliable and safe: for example, the process continuously evaluates itself and outputs confidence criteria qualifying the roadside detection. The paper presents the processes in detail, along with the results obtained from the military acceptance tests that were passed, which trigger the next step: autonomous track following (named AUT3).
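The abstract does not disclose the fusion process itself; as a purely illustrative reading of "feeds a road model with the best available data," a confidence-weighted combination of the two detectors' outputs might look like the sketch below (all names and thresholds are hypothetical).

```python
def fuse_roadside(seg_pos, seg_conf, reg_pos, reg_conf, min_conf=0.2):
    """Blend the segment- and region-based roadside estimates (lateral offset
    in meters) by their self-assessed confidences in [0, 1]."""
    total = seg_conf + reg_conf
    if total < min_conf:
        return None, 0.0                 # neither detector trusted: no update
    fused = (seg_pos * seg_conf + reg_pos * reg_conf) / total
    return fused, max(seg_conf, reg_conf)
```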
In order for an Unmanned Ground Vehicle (UGV) to operate effectively, it must be able to perceive its environment in an accurate, robust, and effective manner. This is done by creating a world representation which encompasses all the perceptual information necessary for the UGV to understand its surroundings. These perceptual needs are a function of the robot's mobility characteristics, the complexity of the environment in which it operates, and the mission with which the UGV has been tasked. Most perceptual systems are designed with a predefined vehicle, environment, and mission complexity in mind. This can lead the robot to fail when it encounters a situation for which it was not designed, since its internal representation is insufficient for effective navigation. This paper presents a research framework currently being investigated by Defence R&D Canada (DRDC), which will ultimately relieve robotic vehicles of this problem by allowing the UGV to recognize representational deficiencies and change its perceptual strategy to alleviate them. This will allow the UGV to move in and out of a wide variety of environments, such as outdoor rural to indoor urban, at run time without reprogramming. We present sensor and perception work currently being done and outline our future research in this area.
Mobile robot designers frequently look to computer vision to solve navigation, obstacle avoidance, and object detection problems. Potential solutions using low-cost video cameras are particularly alluring. Recent results in 3D scene reconstruction from a single moving camera, i.e., structure from motion (SFM), seem particularly relevant, but robot designers who attempt to use such 3D techniques have uncovered a variety of practical concerns. We present lessons learned from developing a single-camera 3D scene reconstruction system that provides both a real-time camera motion estimate and a rough model of major 3D structures in the robot's vicinity. Our objective is to use the motion estimate to supplement GPS (indoors in particular) and to use the model to provide guidance for further vision processing (look for signs on walls, obstacles on the ground, etc.). The computational geometry involved is closely related to traditional two-camera stereo; however, a number of degenerate cases exist. We also demonstrate how SFM can be used to improve the performance of two specific robot navigation tasks.
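A minimal sketch of the two-view geometry the abstract alludes to, using real OpenCV calls but as an illustrative pipeline rather than the authors' system: the essential matrix recovered from matched features between consecutive frames yields the camera rotation and a translation known only up to scale (one reason single-camera motion estimation has degenerate cases, e.g., pure rotation).

```python
import cv2
import numpy as np

def egomotion(pts_prev, pts_curr, K):
    """Relative rotation R and unit-scale translation t between two frames.

    pts_prev, pts_curr: Nx2 float arrays of matched pixel coordinates.
    K: 3x3 camera intrinsic matrix.
    """
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    return R, t   # |t| = 1: absolute scale is unobservable from one camera
```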
Casual daily observation provides convincing evidence that animals offer a wealth of inspiration for legged machines. However, the lessons of animal motor science are largely written in the grammar of materials properties, and their meaning is hidden by the complex interaction of multiply layered functional hierarchies. This paper reviews some of the lessons of biological running that we have been able to articulate and begin to prescribe rigorously, as manifest in the hexapod robot RHex. Although there is a long way to go before our mathematical analysis catches up with the full range of behaviors this remarkable machine exhibits, we are nevertheless able to make increasingly precise statements about certain control principles and the role they may play in RHex's performance. This ongoing research effort serves as a test case to underscore the huge and still largely untapped potential for mining bioinspiration in legged locomotion systems.
The Defense Advanced Research Projects Agency (DARPA) has been looking to biology to provide insights into how organisms efficiently navigate in their environment. Specifically, we have sought to understand how organisms, from mammals to insects, can perform rapid locomotion over rough, uneven terrain. These biomimetic principles have been applied to various robotic platforms and the development continues as engineers and roboticists strive to emulate the behavior of natural systems. The following attempts to briefly capture the technology progression to current funded efforts, with a vision towards future robotic interests.
Over the last decade the world has seen numerous autonomous vehicle programs. Wheel and track designs are the basis for many of these vehicles, primarily for four main reasons: a vast preexisting knowledge base for these designs, the energy efficiency of power sources, the scalability of actuators, and the lack of control systems technologies for handling the alternative, highly complex distributed systems. Though large efforts seek to improve the mobility of these vehicles, many limitations still exist for these systems within unstructured environments, e.g., limited mobility within industrial and nuclear accident sites where existing plant configurations have been extensively changed. These unstructured operational environments include missions for exploration, reconnaissance, and emergency recovery of objects within reconfigured or collapsed structures, e.g., bombed buildings. More importantly, these environments present a clear and present danger for direct human interaction during the initial phases of recovery operations. Clearly, the current classes of autonomous vehicles are incapable of performing in these environments. Thus the next generation of designs must include highly reconfigurable and flexible autonomous robotic platforms, both highly flexible and environmentally adaptable. Presented in this paper is one of the most successful designs from nature, the snake-eel-worm (SEW). This design implements shape memory alloy (SMA) actuators, which allow robotic SEW designs to scale from the sub-micron level to heavy industrial implementations without the major conceptual redesigns required in traditional hydraulic, pneumatic, or motor-driven systems. Autonomous vehicles based on the SEW design possess the ability to move easily between air-based and fluid-based environments with limited or no reconfiguration. With a SEW-designed vehicle, one not only achieves vastly improved maneuverability within a highly unstructured environment, but also gains robotic manipulation abilities, normally relegated to secondary add-ons within existing vehicles, all within one small, condensed package. The prototype design presented includes a Beowulf-style computing system for advanced guidance calculations and visualization computations. All of the design and implementation pertaining to the SEW robot discussed in this paper is the product of a student team under the summer fellowship program at the DOE's INEEL.
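The abstract does not describe the SEW control law; for orientation, snake-like robots are classically driven by Hirose's serpenoid gait, in which each joint tracks a phase-shifted sinusoid. The sketch below is that generic gait, not the SEW implementation, and all parameters are illustrative.

```python
import math

def serpenoid_angles(t, n_joints=12, amp=0.5, freq=1.0, phase_step=math.pi / 6):
    """Target joint angles (rad) at time t for lateral undulation."""
    return [amp * math.sin(2.0 * math.pi * freq * t + i * phase_step)
            for i in range(n_joints)]
```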
Mandelbrot, through his analysis of fractals (Mandelbrot, 1977), has shown that the complexity of the physical geometry of nature is similar at all scales. This implies that a robot of fixed dimensions will always be too big to get through some passageways and too small to get over some other obstacles. However, as others have demonstrated, increasing the number of the vehicle's motion degrees of freedom (dof) may permit it to change its conformation and dimensions, affording it a greater range of environmental dimensionality through which it may move. This paper contains a description of our multi-dof unmanned ground vehicle (UGV), including the variety of basic behaviors of which it is capable. Our UGV is a six-dof, sensor-rich small mobile robot composed of three segments: a central core and two tracked pods. The rotations of the pod tracks are the primary mobility mode (2-dof) of the vehicle. The pods are attached to the core at opposite ends, each by a single "L" axle that rotates through 180 degrees (2-dof), serving to improve balance and leverage. The pods can rotate 360 degrees about their end of the axle (2-dof), providing increased mobility over obstacles. The UGV in compact form is 17.6" long, 16.2" wide, and 4.6" tall, but can extend to 49" long to climb over obstacles or cross chasms, or rise to 16" high to straddle low obstacles. In its extended mode its maximum width is 9.5", permitting it to squeeze through an opening of that size. The UGV can independently draw in its two outer pods to grasp and longitudinally traverse horizontal pipes or logs, or to travel within a narrow culvert.
Series Elastic Actuators provide many benefits for force control of robots in unconstrained environments, including high force fidelity, extremely low impedance, low friction, and good force-control bandwidth. Series Elastic Actuators employ a novel mechanical architecture that goes against the common machine design principle of "stiffer is better": a compliant element is placed between the gear train and the driven load to intentionally reduce the stiffness of the actuator. A position sensor measures the deflection, and the force output is accurately calculated using Hooke's Law (F = Kx). A control loop then servos the actuator to the desired output force. The resulting actuator has inherent shock tolerance, high force fidelity, and extremely low impedance. These characteristics are desirable in many applications, including legged robots, exoskeletons for human performance amplification, robotic arms, haptic interfaces, and adaptive suspensions. We describe several variations of Series Elastic Actuators that have been developed using both electric and hydraulic components.
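The force-servo structure the abstract describes can be summarized in a few lines. The sketch below uses a PI loop for concreteness; the actual compensator in the authors' designs is not specified here.

```python
class SeriesElasticForceServo:
    """Force control via spring deflection: F = K * x, then servo to F_desired."""

    def __init__(self, spring_k, kp, ki, dt):
        self.spring_k, self.kp, self.ki, self.dt = spring_k, kp, ki, dt
        self.integral = 0.0

    def update(self, deflection, desired_force):
        measured_force = self.spring_k * deflection   # Hooke's law on the spring
        error = desired_force - measured_force
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral  # motor effort command
```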
The DARPA-sponsored Compliant Surface Robotics (CSR) program pursues development of a high-mobility, lightweight, modular, morphable robot for military forces in the field and for other industrial uses. The USTLAB effort builds on proof-of-concept feasibility studies and demonstration of a 4-, 6-, or 8-wheeled modular vehicle with articulated leg-wheel assemblies. In Phase I, basic open-plant stability was proven for climbing over obstacles ~18 inches high and traversing ~75-degree inclines (up, down, or sideways) in a platform of approximately 15 kilograms. At the completion of Phase II, we have finished the mechanical and electronics engineering design and achieved changes that enable future work in active articulation, allowing autonomous reconfiguration for a wide variety of terrains, including upside-down operation (in case of flip-over), and we have reduced platform weight by one third. Currently the vehicle weighs 10 kilograms and will grow marginally as additional actuation, MEMS-based organic sensing, payload, and autonomous processing are added. The CSR vehicle's modular, spider-like configuration facilitates adaptation to many uses and compliance over rugged terrain. The developmental process and the vehicle characteristics will be discussed.
The German experimental robotics program PRIMUS (PRogram for Intelligent Mobile Unmanned Systems) has focused for more than 12 years, over several project phases with specific realization goals, on solutions for autonomous driving in unknown open terrain. The main task of the program is to develop algorithms for a high degree of autonomous navigation skill using off-the-shelf hardware/sensor technology and to integrate them into military vehicles. For obstacle detection, a Dornier 3D-LADAR is integrated on a tracked vehicle, the "Digitized WIESEL 2". For road following, a digital video camera and a visual perception module from the Universität der Bundeswehr München (UBM) have been integrated. This paper gives an overview of the PRIMUS program with a focus on the last program phase, D (2001-2003). This includes the system architecture, a description of the modes of operation, and the technology development, focusing on obstacle avoidance and obstacle classification using the 3D LADAR. A collection of experimental results and a short look at the next steps in the German robotics program will conclude the paper.
Vision evolved as a sensory system for reaching, grasping, and other motion activities. In advanced creatures, it has become a vital component of situation awareness, navigation, and planning systems. Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. It is hard to split such a system apart. A biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible approach for natural processing of visual information. It converts visual information into relational Network-Symbolic models, avoiding artificial precise computation of 3-dimensional models. The logic of visual scenes can be captured in such models and used for disambiguation of visual information. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create unambiguous network-symbolic models. This approach is consistent with NIST RCS. A UGV equipped with such smart vision will be able to plan paths and navigate in a real environment, perceive and understand complex real-world situations, and act accordingly.
Among the numerous challenges of building autonomous/unmanned vehicles is that of reliable and autonomous localization in an unknown environment. In this paper we present a system that can efficiently and autonomously solve the robotics SLAM problem, in which a robot placed in an unknown environment must simultaneously localize itself and build a map of the environment. The system is vision-based and makes use of Evolution Robotics' powerful object recognition technology. As the robot explores the environment, it continuously performs four tasks, using information from acquired images and the drive-system odometry. The robot: (1) recognizes previously created 3-D visual landmarks; (2) builds new 3-D visual landmarks; (3) updates the current estimate of its location, using the map; (4) updates the landmark map. In indoor environments, the system can build a map of a 5 m by 5 m area in approximately 20 minutes, and can localize itself with an accuracy of approximately 15 cm in position and 3 degrees in orientation relative to the global reference frame of the landmark map. The same system can be adapted for outdoor, vehicular use.
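The four-task loop reads naturally as a per-frame update cycle. The skeleton below is only a paraphrase of the abstract with hypothetical interfaces (map_, pose, and their methods are invented names); the actual vSLAM implementation is not shown here.

```python
def slam_step(image, odometry, map_, pose):
    """One cycle of the four tasks listed in the abstract (hypothetical API)."""
    pose = pose.propagate(odometry)            # dead-reckon from wheel odometry
    landmark = map_.recognize(image)           # (1) match known 3-D landmarks
    if landmark is not None:
        pose = pose.correct(landmark)          # (3) update the location estimate
        map_.refine(landmark, pose)            # (4) update the landmark map
    elif map_.is_novel_view(image):
        map_.create_landmark(image, pose)      # (2) build a new 3-D landmark
    return pose
```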
The on-board television system TELAN includes one or several monitors with a partitioned screen (for example, liquid-crystal displays), three or more small video cameras (color and/or monochrome), adaptive means of switching among them, and, possibly, means of video recording. The adaptive switching automatically routes to the monitor the information best suited to the current traffic situation (a toy switching rule is sketched after the list below). The advantages of such a television system are:
(1) practically all-around coverage, i.e., absence of "blind/dead" zones;
(2) a substantial increase in driving safety, since it speeds the driver's proper response in pre-emergency and other critical situations;
(3) effective protection against blinding by the headlights of following and/or overtaking automobiles;
(4) high image quality even under poor viewing conditions (for example, complete darkness or fog);
(5) broad functionality, including automatic recording of pre-emergency conditions, automatic recording triggered by an alarm-system command, etc.
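As flagged above, the selection logic behind the adaptive switching is not specified in the abstract; the sketch below is a purely hypothetical rule set showing the style of decision such a system might make (all camera names and thresholds are invented).

```python
def select_camera(gear, speed_kmh, glare_from_behind):
    """Choose which camera feeds the main monitor pane (illustrative only)."""
    if gear == "reverse":
        return "rear"                    # backing up: show the rear view
    if glare_from_behind:
        return "rear_antiglare"          # counter blinding by following headlights
    return "front_tele" if speed_kmh > 60 else "front_wide"
```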
The excimer laser has proven to be the laser of choice in various biomedical applications for both soft and hard tissues. The excimer laser-tissue interaction is vastly different from that of other lasers due to the high energy of each photon, the short pulse duration, and the small volume of tissue affected. In addition to particle ejection, heat generation, and spectral emission, the interaction also produces acoustical disturbances both in the air and in the tissue. The plume dynamics were detected with a second laser (Nd:YAG at 532 nm) illuminating the particles and a CCD camera detecting the (90°) scattered radiation to form an image. A similar setup was used to detect the acoustical disturbances, but this time detecting the forward-scattered radiation; to obtain information about the acoustical disturbances in the tissue, we designed and built an ultrasonic probe. The luminescence was measured with a time-resolved spectroscopy system, and the thermal effects with a thermal camera. By measuring these different effects, our understanding of the interaction is enhanced, the parameters for a specific medical laser application can be optimized for the best results, and each effect can be used as a real-time (before the next pulse) feedback control signal.
A small-scale supervised autonomous bulldozer at a remote site was developed to gain experience with agent-based human intervention. The model is based on a Lego Mindstorms kit and represents combat equipment whose job performance does not require high accuracy. The model enables evaluation of system response for different operator interventions, as well as for a small colony of semi-autonomous dozers. The supervising human may react better than a fully autonomous system to unexpected contingent events, which are a major barrier to implementing full autonomy. The automation is introduced as an improved Man-Machine Interface (MMI) by developing control agents as intelligent tools that negotiate between human requests and task-level controllers, as well as with other elements of the software environment. Current UGVs demand significant communication resources and constant human operation; they will therefore be replaced by semi-autonomous, human-supervisory-controlled (telerobotic) systems. For human intervention at the low layers of the control hierarchy, we suggest a task-oriented control agent to manage the fluent transition between the state in which the robot operates and the one imposed by the human. This transition should handle the imperfections responsible for the improper operation of the robot by disconnecting them or adapting them to the new situation. Preliminary conclusions from the small-scale experiments are presented.
Teleoperation is currently the accepted method of controlling UGVs in dynamic and unpredictable real-world situations. It is easily understood and trusted. But teleoperation can be difficult due to limited sensory feedback, and it requires constant, continuous, one-on-one operator attention and interaction. It is difficult, if not impossible, for one operator to control multiple independent vehicles or perform parallel mission tasks. Supervisory control is an approach to reducing operator burden while maintaining confidence and trust. In supervisory control, the operator establishes control measures, then monitors progress and refines guidance while the UGV navigates with limited autonomy. Supervisory control is intended to complement, not replace, teleoperation. It is intended for coarse or approximate navigation on relatively benign terrain, reserving teleoperation for treacherous terrain, delicate maneuvers, and fine adjustment of position. In this paper we develop a framework of guidelines for effective navigation supervisory control design. We present a "One Touch" Point-and-Go supervisory control interface, and we demonstrate visual ground location tracking, the key enabling technology for small UGVs whose navigation sensors are limited to a single camera.
Mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths; this is why mobile robotics problems are complex, with many unanswered questions. To reach a high degree of autonomous operation, a new level of learning is required. On the one hand, promising learning theories such as adaptive critic and creative control have been proposed; on the other hand, the human brain's processing ability has amazed and inspired researchers in the area of Unmanned Ground Vehicles but has been difficult to emulate in practice. A new direction in fuzzy theory attempts to deal with the perceptions conveyed by natural language. This paper combines these two fields and presents a framework for autonomous robot navigation. The proposed creative controller, like the adaptive critic controller, has information stored in a dynamic database (DB), plus a dynamic task control center (TCC) that functions as a command center to decompose tasks into sub-tasks with different dynamic models and multi-criteria functions. The TCC module utilizes the computational theory of perceptions to deal with high-level task planning. The authors are currently implementing the model on a real mobile robot, and preliminary results are described in this paper.
The iRobot PackBot is a combat-tested, man-portable UGV that has been deployed in Afghanistan and Iraq. The PackBot is also a versatile platform for mobile robotics research and development that supports a wide range of payloads suitable for many different mission types. In this paper, we describe four R&D projects that developed experimental payloads and software using the PackBot platform. CHARS was a rapid development project to develop a chemical/radiation sensor for the PackBot. We developed the CHARS payload in six weeks and deployed it to Iraq to search for chemical and nuclear weapons. Griffon was a research project to develop a flying PackBot that combined the capabilities of a UGV and a UAV. We developed a Griffon prototype equipped with a steerable parafoil and gasoline-powered motor, and we completed successful flight tests including remote-controlled launch, ascent, cruising, descent, and landing. Valkyrie is an ongoing research and development project to develop a PackBot payload that will assist medics in retrieving casualties from the battlefield. Wayfarer is an applied research project to develop autonomous urban navigation capabilities for the PackBot using laser, stereo vision, GPS, and INS sensors.
As the Army transforms to the Objective Force, particular attention must be paid to operations in complex and urban terrain. Because our adversaries realize that we do not have battlefield dominance in the urban environment, and because population growth and migration to urban environments are still on the increase, our adversaries will continue to draw us into operations there. The Army Research Laboratory (ARL) is developing technology to equip our soldiers for the urban operations of the future. Sophisticated small robotic platforms with diverse sensor suites will be an integral part of the Future Force and must be able to collaborate not only among themselves but also with their manned partners. ARL has developed a Reconnaissance, Surveillance, and Target Acquisition (RSTA) sensor payload for integration onto an iRobot PackBot. The RSTA sensor payload is equipped with an acoustic array that detects and localizes impulsive noise events, such as a sniper's weapon firing. Additionally, the robot sensor head is equipped with visible and thermal cameras for operation both day and night. The PackBot equipped with the RSTA sensor head can be deployed by dismounted soldiers to enhance their situational awareness in the urban environment, and the information from one PackBot can be fused with other sensors as part of a sensor network. Sensor-equipped PackBots provide a powerful capability to the future dismounted infantry soldier during warfighting and peacekeeping operations in complex and urban terrain by enhancing situational awareness and improving survivability.
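The abstract does not detail the array processing; as background, impulsive-event localization with a microphone array is commonly based on time difference of arrival (TDOA). A minimal far-field sketch for a single microphone pair follows (an assumed method, not ARL's implementation).

```python
import math

def bearing_from_tdoa(tdoa_s, mic_spacing_m, c=343.0):
    """Bearing (rad, off array broadside) of a far-field impulse from the
    arrival-time difference between two microphones a known distance apart."""
    s = c * tdoa_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))        # clamp measurement noise into asin's domain
    return math.asin(s)
```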
As reported by Blitch, current search and rescue robots have proven inadequate in the field. Shortfalls in mobility include an inadequate relationship between traction and drag, inadequate self-righting, inadequate sensor protection, and too many protrusions that snag. Because autonomous navigation is often impossible and tele-operation may be difficult, sliding autonomy is critical. In addition, next-generation SR robots need plug-n-play sensor options and modular cargo holds to deliver daughter-bots or other specialized rescue equipment. Finally, dust and smoke have caused both sensors and robots to fail in the field. Many of the needs of search and rescue teams are shared by all emergency response robots: EOD, SWAT, HazMat, and other law enforcement officers. We discuss how next-generation designs solve many of the problems currently facing ER robots.
Over a six-year period, the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program sponsored research to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles. In this paper we describe the program's research accomplishments at Utah State University's (USU) Center for Self-Organizing and Intelligent Systems (CSOIS). The CSOIS program was based on USU's "smart wheel" technology, which enables the design of omni-directional vehicles (ODVs). Through the course of the program, USU researchers built thirty robots using eight distinct ODV robot designs, and these robots were demonstrated in a number of application scenarios. The program culminated in the actual fielding of the final robot developed, the ODIS-T2, which was designed for under-vehicle inspection at security checkpoints. The design and deployment of these robots required research advances in mechanical and vetronics design, sensor integration, control engineering, intelligent behavior generation algorithms, system integration, and human interfaces. An overview of the USU-developed robotics technology is presented that details the technology development and technical accomplishments of the TACOM-USU Intelligent Mobility Program, with a focus on the actual hardware produced.
In August 2003 at Fort Benning, Georgia, a team from the Army Research Lab (ARL) developed and tested a "system of systems" that connects soldiers, robots, and unattended ground sensors (UGS) into a network in order to increase situational awareness. This paper details the demonstration of data fusion across the network, in particular from the acoustic and robotic sensors, as well as the digital mapping capabilities.
Simple radio control cars commonly sold as toys can provide a viable starting platform for the development of low-cost intelligent Unmanned Ground Vehicles (UGVs) for the study of robot collectives. In a collaborative effort, Sandia National Labs and New Mexico Tech have successfully demonstrated proof-of-concept by utilizing low-cost radio control cars manufactured by Nikko. Initial tests have involved using a small number (two to ten) of these UGVs to successfully demonstrate both collaborative and independent behavior simultaneously. In the tests individuals share their locations with the collective to cover an area, thus demonstrating collaborative behavior. Independent behavior is demonstrated as each member of the collective maintains a desired compass heading while simultaneously avoiding obstacles in its path. These UGVs are powered by high-capacity rechargeable batteries and equipped with a custom-designed microcontroller board with a stackable modular interface and wireless communication. The initial modular sensor configuration includes a digital compass and GPS unit for navigation as well as ultrasonic sensors for obstacle avoidance. This paper describes the design and operations of these UGVs, their possible uses, and the advantages of using a radio control car platform as a low-cost starting point for the development of intelligent UGV collectives.
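The simultaneous independent behavior the abstract describes (hold a compass heading, override to avoid obstacles) fits a simple arbitration loop. The sketch below is a hypothetical illustration; sensor and command names are invented, not the cars' actual firmware API.

```python
def steer_command(compass_deg, desired_deg, sonar_ranges_m, avoid_dist_m=0.5):
    """Steering angle (deg): obstacle avoidance overrides the heading hold."""
    nearest = min(sonar_ranges_m)
    if nearest < avoid_dist_m:                       # obstacle: turn away from it
        on_left = sonar_ranges_m.index(nearest) < len(sonar_ranges_m) // 2
        return -30.0 if on_left else 30.0
    error = (desired_deg - compass_deg + 180.0) % 360.0 - 180.0  # wrap to +/-180
    return max(-30.0, min(30.0, 0.5 * error))        # proportional heading hold
```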
The Army and the Office of the Secretary of Defense agreed in May 2003 that the Future Combat Systems (FCS) Program had achieved sufficient maturity to pass what is referred to as "Milestone B." This milestone cleared the way for the Army and DARPA to award the Boeing/SAIC FCS Lead System Integrator (LSI) a 7-year System Design and Development contract, with options leading to production of systems and a fielded operational capability in the year 2012. The breadth of the FCS Program is unique for DoD: it encompasses at least 7 variants of manned ground vehicles, 6 variants of unmanned ground vehicles, 4 unmanned aerial vehicles, unattended sensors, and the critical integration of these assets through a common Command/Control/Communications (C4ISR) backbone and protocol. As such, it has both internal program developments and strong linkages with existing programs in weapons, communications, sensors, command and control, and soldier-integrated systems. An important new capability area for FCS is the integrated use of unmanned systems, both air and ground. This paper deals with the LSI efforts associated with the UGV systems; additional detail will be available from the contractor teams working with us on each of these systems in later talks.
The Armed Robotic Vehicle (ARV) is a medium unmanned ground vehicle with semi-autonomous and operator-controlled capability. Two different variants provide Intelligence, Surveillance, Reconnaissance/Target Acquisition (ISR/TA) and direct/indirect fires in support of mounted and dismounted operations. This paper will describe the current system engineering studies proceeding on the ARV and provide insight into key characteristics of the ARV concept.
During FY03, the U.S. Army Research Laboratory undertook a series of experiments designed to assess the maturity of autonomous mobility technology for the Future Combat Systems Armed Robotic Vehicle concept. The experiments assessed the technology against the level 6 standard in the technology readiness level (TRL) maturation schedule identified in a 1999 General Accounting Office report. During the course of experimentation, 646 missions were conducted over a total distance of ~560 km and a total time of ~100 hr; autonomous operation represented 96% and 88% of total distance and time, respectively. To satisfy the TRL 6 "relevant environment" standard, several experimental factors were varied over the three-site test as part of a formal statistical experimental design. This paper reports the specific findings pertaining to the relevant-environment questions posed for the study and lends additional support to the Lead System Integrator's decision that TRL 6 has been attained for the autonomous navigation system.
In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded, and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions, such as movement detection, and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are then detailed; more precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, and waypoint and planned travel. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.
The French Military Robotic Study Program (introduced at Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes that can provide an immediate "operational added value." This paper details the "robotic convoy" theme (named TEL1), whose main purpose is to develop a robotic leader-follower function so that several unmanned vehicles can autonomously follow a teleoperated, autonomous, or manually driven leader. Two modes have been implemented. Perceptive follower: each autonomous follower anticipates the trajectory of the vehicle in front of it using dedicated perception equipment. This mode is based mainly on perceptive data, without any communication link between leader and follower (to lower the cost of future mass production and extend the operational capabilities). Delayed follower: the leader records its path and transmits it to the follower, which can then follow the recorded trajectory at any later time. This mode uses localization data obtained from inertial measurements. The paper presents both modes, with detailed algorithms and the results obtained from the military acceptance tests performed on wheeled 4x4 vehicles (the French DARDS ATD).
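For intuition about the delayed-follower mode, the sketch below pairs a path-recording step with a pure-pursuit style tracking step. Pure pursuit is a standard path-tracking law assumed here purely for illustration; the program's actual algorithms are not disclosed in the abstract.

```python
import math

def record_fix(path_log, x, y):
    """Leader: append an inertial localization fix to the transmitted path."""
    path_log.append((x, y))

def pursue(path_log, x, y, heading_rad, lookahead_m=3.0):
    """Follower: steering correction (rad) toward the first logged point
    at least one lookahead distance away (pure-pursuit style)."""
    for gx, gy in path_log:
        if math.hypot(gx - x, gy - y) >= lookahead_m:
            bearing = math.atan2(gy - y, gx - x)
            return (bearing - heading_rad + math.pi) % (2.0 * math.pi) - math.pi
    return 0.0                           # recorded path exhausted: hold course
```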
The Joint Architecture for Unmanned Systems (JAUS) Operator Control Units and Payloads Committee (OPC) will be conducting a series of experiments to expedite the production of cost-effective, interoperable unmanned systems, user control interfaces, payloads, et cetera. The objective of the initial experiment is to demonstrate teleoperation of heterogeneous unmanned systems. The experiment will test Level 1 compliance between multiple JAUS subsystems and will include unmanned air, ground, and surface vehicles developed by vendors in the government and commercial sectors. Insight gained from participants' initial planning, development, and integration phases will help identify areas of the JAUS standard that can be improved to better facilitate interoperability between Operator Control Units (OCUs) and unmanned systems. The process of preparing Mobius, an OCU developed by Autonomous Solutions, Inc., for JAUS Level 1 compliance is discussed.
The need for enhanced tactical force protection capabilities is evident from our recent experiences in Iraq and Afghanistan, and it arises wherever U.S. forces maintain a forward presence in a potentially hostile environment. Levels of force protection proficiency vary widely, from combat units whose mission is to close with and destroy the enemy to combat support/combat service support units performing maintenance and logistics functions. We must provide force protection capabilities that are not only good enough to get the job done, but affordable for the entire force. Addressing the force protection challenge requires an investment in research and development to deliver affordable, scalable, modular, and sustainable force protection equipment. This can be accomplished through an evolutionary acquisition strategy of capability upgrades in the near, mid, and far terms that leverages the Army's investments in unattended ground sensors (UGS), unmanned ground vehicles (UGVs), and surveillance radar and imaging technology. This approach addresses the field's immediate tactical force protection requirements while working toward full integration with the Future Combat Systems. Futuristic tactical force protection will consist of a fully integrated system-of-systems architecture that includes UGVs, UGS, and Unmanned Aerial Vehicles (UAVs) networked with the Future Force.
Current man-portable robotic systems are too heavy for troops to pack during extended missions in rugged terrain and typically require more user support than can be justified by their limited return in force multiplication or improved effectiveness. As a consequence, today’s systems appear organically attractive only in life-threatening scenarios, such as detection of chemical/biological/radiation hazards, mines, or improvised explosive devices. For the long term, significant improvements in both functionality (i.e., perform more useful tasks) and autonomy (i.e., with less human intervention) are required to increase the level of general acceptance and, hence, the number of units deployed by the user. In the near term, however, the focus must remain on robust and reliable solutions that reduce risk and save lives. This paper describes ongoing efforts to address these needs through a spiral development process that capitalizes on technology transfer to harvest applicable results of prior and ongoing activities throughout the technical community.
The combination of Command and Control (C2) systems with Unmanned Ground Vehicles (UGVs) provides Integrated Force Protection from the Robotic Operations Command Center. Autonomous UGVs are directed as Force Projection units. UGV payloads and fixed sensors provide situational awareness, while unattended munitions provide a less-than-lethal response capability. Remote resources serve as automated interfaces to legacy physical devices such as manned response vehicles, barrier gates, fence openings, garage doors, and remote power on/off capability for unmanned systems. The Robotic Operations Command Center executes the Multiple Resource Host Architecture (MRHA) to simultaneously control heterogeneous unmanned systems. The MRHA graphically displays video, map, and status for each resource, using wireless digital communications for integrated data, video, and audio. Events are prioritized, and the user is prompted with audio alerts and text instructions for alarms and warnings. A control hierarchy of missions and duty rosters supports autonomous operations. This paper provides an overview of the key technology enablers for Integrated Force Protection, with details on a force-on-force scenario to test and demonstrate the concept of operations using Unmanned Ground Vehicles. Special attention is given to development and applications for the Remote Detection Challenge and Response (REDCAR) initiative for Integrated Base Defense.
The capacity to rapidly and continuously screen convoys and staged, exposed assets using robots with onboard visual, NBC, and HAZMAT sensors would be a force multiplier and would measurably improve base and force protection at both inbound and outbound DOD and commercial facilities. This paper chronicles our experiment with the ODIS robot at the Ports of Los Angeles (POLA) and Long Beach (POLB) in July of 2003. POLA and POLB are responsible for moving over 30% of the United States' trade goods. Queues of 54' container trucks routinely exceed 100 trucks, extending for over a mile from the port entrances. Spotted equipment and convoys at staging areas are high-visibility, high-value targets for a terrorist incident. The POLA/POLB scenario is also representative of TRANSCOM operations at the port of Basra during current operations in Iraq. The California Highway Patrol is responsible for physically inspecting these vehicles for roadworthiness and contraband, a dangerous and dirty job. We also discuss the use of ODIS robots for this task.
The viability of Unmanned Systems as tools is increasingly recognized in many domains. As technology advances, the autonomy on board these systems also advances. In order to evaluate the systems in terms of their levels of autonomy, it is critical to have a set of standard definitions that support a set of metrics. Since autonomy cannot be evaluated quantitatively without a sound and thorough technical basis, the development of autonomy levels for unmanned systems must take into account many factors, such as task complexity, human interaction, and environmental difficulty. An ad hoc working group of government practitioners has been formed to address these issues. The ultimate objectives for the working group are: (1) to determine the requirements for metrics for autonomy levels of unmanned systems; (2) to devise methods for establishing metrics of autonomy for unmanned systems; and (3) to develop a set of widely recognized standard definitions for the levels of autonomy for unmanned systems. This paper describes the interim results from the first four workshops that the group has held. We report on the initial findings of the workshops toward developing a generic framework for the Autonomy Levels for Unmanned Systems (ALFUS).
A need exists for United States military forces to perform collaborative engagement operations between unmanned systems. This capability has the potential to contribute significant tactical synergy to the Joint Force operating in the battlespace of the future. Collaborative engagements potentially offer force conservation, perform timely acquisition and dissemination of essential combat information, and can eliminate high-value and time-critical targets. Collaborative engagements can also add considerably to force survivability by reducing soldier and equipment exposure during critical operations. This paper addresses a multiphase U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC) Joint Technology Center (JTC) Systems Integration Laboratory (SIL) program to assess information requirements, the Joint Architecture for Unmanned Systems (JAUS), and ongoing Science and Technology initiatives, and to conduct simulation-based experiments to identify and resolve technical risks involved in conducting collaborative engagements using unmanned aerial vehicles (UAV) and unmanned ground vehicles (UGV). The schedule outlines an initial effort to expand, update, and exercise JAUS; provide early feedback to support user development of Concept of Operations (CONOPs) and Tactics, Techniques and Procedures (TTPs); and develop a Multiple Unified Simulation Environment (MUSE) system with the JAUS interfaces necessary to support an unmanned system-of-systems collaborative engagement.
The problem of integrating new payloads into current Unmanned Vehicle Systems is becoming more prevalent as the vehicles become more capable. The increase in payload capacity allows new types of payloads to be carried and facilitates the option of carrying multiple payloads per mission. This added capability creates integration difficulties. Modifying the vehicle and ground station software for each new payload is unacceptable: software integration, regression, and safety concerns make the integration effort costly and lengthy. The Modular Mission Payload Architecture addresses these issues with a web-centric, network-based solution that migrates the command and control interface of modular mission payloads from inside the ground control station and vehicle management systems to a separate controller in the vehicle. This migration and a common interface minimize the hardware and software modifications needed to incorporate new payloads, thus reducing engineering development time, facilitating quicker integration at lower cost, and allowing payload commonality across services and platforms.
One of the key issues in military urban operations is the ability to obtain timely situational awareness of the target area. One solution utilizes Unmanned Ground Vehicles (UGVs) to provide this information, but challenges remain as to how to accurately emplace and control these vehicles from extended ranges. This research and development project, Stork, demonstrated the capability to aerially insert a UGV from a UAV into an area of operations and then use a communications relay pod on the UAV to extend the range of control of UGVs. The UGV insertion was performed using a parachute delivery system from an altitude of 400 feet. The communications relay pod effectively increased the tele-operated control range of the UGV beyond the typical 1-2 km line-of-sight limitation; tele-operated control was demonstrated out to a distance of 26 km. Transparent to the physical elements of the demonstration was the integration of the Joint Architecture for Unmanned Systems (JAUS) on the UGVs, which allows a single operator control unit (OCU) to control multiple disparate UGVs simply by selecting a particular UGV from a drop-down menu. The ability to control multiple vehicles on the ground at the extended range, and to switch control from one vehicle to the next and back, was also successfully demonstrated.
This paper discusses a comprehensive vision for unmanned systems that will shape the future of Naval Warfare within a larger Joint Force concept, and examines the broad impact that can be anticipated across the Fleet. The vision has been articulated from a Naval perspective in NAVSEA technical report CSS/TR-01/09, Shaping the Future of Naval Warfare with Unmanned Systems, and from a Joint perspective in USJFCOM Rapid Assessment Process (RAP) Report #03-10, Unmanned Effects (UFX): Taking the Human Out of the Loop. Here, the authors build on this foundation by reviewing the major findings and laying out the roadmap for achieving the vision and truly transforming how we fight wars. The focus is on broad impact across the Fleet, but the implications reach across all Joint forces. The term "unmanned system" means different things to different people. Most think of remotely teleoperated vehicles that perform tasks under direct human control. In fact, unmanned systems are stand-alone systems that can execute missions and tasks without direct physical manned presence, under varying levels of human control ranging from teleoperation to full autonomy. It is important to note that an unmanned system comprises far more than just a vehicle: it includes payloads, command and control, and communications and information processing.
Mobile robots have important applications in high speed, rough-terrain scenarios. In these scenarios, unexpected and hazardous situations can occur that require rapid hazard avoidance maneuvers. At high speeds, there is limited time to perform re-planning based on detailed vehicle and terrain models. Furthermore, detailed models often do not accurately predict the robot’s performance due to model parameter and sensor uncertainty. This paper presents a method for high speed hazard avoidance. The method is based on the concept of the trajectory space, which is a compact model-based representation of a robot’s dynamic performance limits in uneven, natural terrain. A Monte Carlo method for analyzing system performance despite model parameter uncertainty is briefly presented, and its integration with the trajectory space is discussed. Simulation results for the hazard avoidance algorithm are presented and demonstrate the effectiveness of the method.
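To make the Monte Carlo idea above concrete, here is a minimal sketch, assuming a hypothetical simulate_traversal(trajectory, params) predicate and illustrative parameter distributions; it simply rejects any candidate trajectory that fails under some sampled realization of the uncertain model:

```python
import random

# Hypothetical illustration of the Monte Carlo analysis described above:
# sample uncertain model parameters, simulate each candidate trajectory,
# and keep only trajectories that stay safe across all samples.

def monte_carlo_safe(trajectories, simulate_traversal, n_samples=100):
    """Return the subset of candidate trajectories that remain hazard-free
    for every sampled realization of the uncertain vehicle/terrain model."""
    safe = []
    for traj in trajectories:
        ok = True
        for _ in range(n_samples):
            # Sample uncertain parameters (assumed illustrative distributions).
            params = {
                "friction": random.gauss(0.6, 0.1),
                "mass_kg":  random.gauss(800.0, 50.0),
            }
            if not simulate_traversal(traj, params):  # hypothetical simulator
                ok = False
                break
        if ok:
            safe.append(traj)
    return safe
```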
The mobility requirement for Unmanned Ground Vehicles (UGVs) is expected to increase significantly as conflicts shift from open-terrain operations to the increased complexity of urban settings. In preparation for this role, Defence R&D Canada-Suffield is exploring novel mobility platforms utilizing intelligent mobility algorithms that will each contribute to improved UGV mobility. The design of a mobility platform significantly influences its ability to maneuver in the world; highly configurable and mobile platforms are typically best suited for unstructured terrain. Intelligent mobility algorithms seek to exploit the inherent dexterity of the platform and the available world representation of the environment to allow the vehicle to engage extremely irregular and cluttered environments. As a result, vehicles designed with novel platforms utilizing intelligent mobility algorithms will outperform larger vehicles without these capabilities. However, many challenges remain in the development of UGV systems that satisfy the increased mobility requirement for future military operations. This paper discusses a research methodology proposed to overcome these challenges, which primarily involves the definition and development of novel mobility platforms for intelligent mobility research. It addresses intelligent mobility algorithms and the incorporation of world representation and perception research in the creation of the necessary synergistic systems. In addition, it presents an overview of the novel mobility platforms and research activities at Defence R&D Canada-Suffield aimed at advancing UGV mobility capabilities in difficult and relevant military environments.
As part of the Raptor system developed for DARPA's PerceptOR program, three path planning methods have been integrated in the framework of a command-arbitration-based architecture. These methods combine reactive and deliberative elements, performing path planning within different planning horizons. Short-range path planning (<10 m) is done by a module called OAradials. OAradials is purely reactive, evaluating arcs corresponding to possible steering commands for the proximity of discrete obstacles, abrupt elevation changes, and unsafe slope conditions. Medium-range path planning (<30 m) is performed by a module called Biased Random Trees - Follow Path (BRT-FP). Based on LaValle and Kuffner's rapidly exploring random trees planning algorithm, BRT-FP continuously evaluates the local terrain map in order to generate a good path that advances the robot towards the next intermediate waypoint in a user-specified plan. A pure-pursuit control algorithm generates candidate steering commands intended to keep the robot on the generated path. Long-range path planning is done by the Dynamic Planner (DPlanner) using Stentz's D* algorithm. Use of D* allows efficient exploitation of prior terrain data and dynamic replanning as terrain is explored. Outputs from DPlanner generate intermediate goal points that are fed to the BRT-FP planner. A command-level arbitration scheme selects steering commands based on the weighted sum of the steering preferences generated by the OAradials and BRT-FP path planning behaviors. This system has been implemented on an ATV platform that has been actuated for autonomous operation, and tested on realistic cross-country terrain in the context of the PerceptOR program.
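The command-level arbitration can be illustrated with a short sketch. The vote representation, candidate set, and weights below are assumptions for illustration, not the Raptor system's actual values:

```python
import numpy as np

# Illustrative weighted-sum command arbitration in the spirit described above.
# Each behavior votes over a discrete set of candidate steering angles;
# the arbiter blends the votes and picks the best-scoring command.

STEERING_CANDIDATES = np.linspace(-0.5, 0.5, 21)  # radians (assumed range)

def arbitrate(oaradials_votes, brtfp_votes, w_reactive=0.6, w_planner=0.4):
    """Combine per-candidate preference votes (values in [0, 1]) from the
    reactive and planner behaviors and return the winning steering angle."""
    combined = w_reactive * np.asarray(oaradials_votes) + \
               w_planner * np.asarray(brtfp_votes)
    return STEERING_CANDIDATES[int(np.argmax(combined))]

# Example: reactive layer vetoes hard-left arcs, planner prefers slight left.
reactive = np.ones(21); reactive[:5] = 0.0
planner = np.exp(-0.5 * ((STEERING_CANDIDATES + 0.1) / 0.1) ** 2)
print(arbitrate(reactive, planner))
```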
In order to maneuver autonomously on rough terrain, a mobile robot must constantly decide whether to traverse or circumnavigate terrain features ahead. This ability is called Obstacle Negotiation (ON). A critical aspect of ON is the so-called traversability analysis, which evaluates the level of difficulty associated with the traversal of the terrain. This paper presents a new method for traversability analysis, called the T-transformation. It is implemented in a local terrain map as follows: (1) for each cell in the local terrain map, a square terrain patch is defined that symmetrically overlays the cell; (2) a plane is fitted to the data points in the terrain patch using a least-squares approach, and the slope of the least-squares plane and the residual of the fit are computed and used to calculate the Traversability Index (TI) for that cell; (3) after each cell is assigned a TI value, the local terrain map is transformed into a traversability map. The traversability map is further transformed into a traversability field histogram in which each element represents the overall level of difficulty of moving along the corresponding direction. Based on the traversability field histogram, our reactive ON system then computes the steering and velocity commands to move the robot toward the intended goal while avoiding areas of poor traversability. The traversability analysis algorithm and the overall ON system were verified by extensive simulation. We verified our method partially through experiments on a Segway Robotics Mobility Platform (RMP), albeit only on flat terrain.
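Steps (1)-(2) of the T-transformation admit a compact sketch. The exact TI formula is not given in the abstract, so the weighting of slope against residual below is an assumption:

```python
import numpy as np

# Sketch of the per-cell traversability computation described above:
# fit a least-squares plane z = a*x + b*y + c to the points of one terrain
# patch, then derive a Traversability Index from the plane's slope and the
# residual (roughness) of the fit.

def traversability_index(points, w_slope=1.0, w_rough=1.0):
    """points: (N, 3) array of x, y, z samples inside one terrain patch."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coef, _, _, _ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, _ = coef
    slope = np.arctan(np.hypot(a, b))                 # plane inclination (rad)
    residual = np.sqrt(np.mean((A @ coef - points[:, 2]) ** 2))  # roughness
    return w_slope * slope + w_rough * residual       # higher = harder to cross
```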
Military and security operations often require that participants move as quickly as possible, while avoiding harm. Humans judge how fast they can drive, how sharply they can turn and how hard they can brake, based on a subjective assessment of vehicle handling, which results from responsiveness to driving commands, ride quality, and prior experience in similar conditions. Vehicle handling is a product of the vehicle dynamics and the vehicle-terrain interaction. Near real-time methods are needed for unmanned ground vehicles to assess their handling limits on the current terrain in order to plan and execute extreme maneuvers. This paper describes preliminary research to develop on-the-fly procedures to capture vehicle-terrain interaction data and simple models of vehicle response to driving commands, given the vehicle-terrain interaction data.
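As one hedged illustration of what such a "simple model of vehicle response" might look like, the sketch below fits a first-order yaw-rate response to logged steering commands by least squares; the model form and variable names are assumptions, not the authors' published procedure:

```python
import numpy as np

# Illustrative on-the-fly model identification: estimate, from logged data,
# a first-order yaw response
#     yaw_rate[k+1] = a * yaw_rate[k] + b * steer_cmd[k]
# by ordinary least squares.

def fit_yaw_response(yaw_rate, steer_cmd):
    """yaw_rate, steer_cmd: 1-D arrays logged at a fixed sample rate."""
    X = np.c_[yaw_rate[:-1], steer_cmd[:-1]]
    y = yaw_rate[1:]
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b  # a: decay of current yaw rate, b: gain on steering input
```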
Defence Research and Development Canada (DRDC) has been given strategic direction to pursue research to increase the independence and effectiveness of military vehicles and systems. This has led to the creation of the Autonomous Intelligent Systems (AIS) program, which is notionally divided into air, land, and marine vehicle systems as well as command, control, and decision support systems. This paper presents an overarching description of AIS research issues, challenges, and directions, as well as a nominal path that vehicle intelligence will take. The AIS program requires very close coordination between research and implementation on real vehicles, and this paper briefly discusses the symbiotic relationship between intelligence algorithms and implementation mechanisms. Also presented is representative work from two vehicle-specific research programs: the Autonomous Air Systems program discusses the development of effective cooperative control for multiple air vehicles, and the Autonomous Land Systems program discusses its developments in platform and ground vehicle intelligence.
The value of unmanned vehicles is directly related to the applications to which they can be successfully applied. Many applications exist and have been identified as suitable for unmanned vehicles, especially those involving dull, dirty, difficult, and dangerous tasks. This paper highlights applications, missions, and capabilities that have been demonstrated on the TAGS platform to date, as well as future application and mission considerations. When evaluating real-world applications for this type of vehicle, one must take into account and balance the complexity inherent in the control and safeguarding requirements of a large autonomous ground vehicle with the simplicity required for commercial or military field use. In addition, suitability for a particular application may be limited by the size, weight, fuel consumption, reliability, terrain-crossing capability, and other abilities of a vehicle and of the intelligent software system and sensors commanding it.
This paper presents an approach to modeling and simulation of vehicles interacting with the environment (terrain) in a realistic, three-dimensional setting, and to assessing vehicle mobility based on simulation results. To reliably predict vehicle performance under realistic off-road conditions, the lumped-parameter models commonly used in vehicle dynamics are not adequate. In this work, a high-fidelity multibody dynamics approach is employed to capture the vehicle's nonlinear dynamic characteristics. Because all vehicle control forces and moments are generated at the patch where tire and terrain interact, tire modeling, soil modeling, and tire-soil interaction modeling are critical. Here the tire is modeled as a multiple-input-multiple-output system with parameters determined via a high-fidelity, physics-based finite element model and/or test data; the soil is modeled using the Bekker-Wong approach, with parameters determined using a high-fidelity, physics-based finite element soil model and/or test data. Although the Bekker-Wong approach is relatively old, implementations that achieve its full potential have become possible only recently, with the advent of the so-called dynamic terrain database. A computational algorithm for such an implementation is presented. Dynamic terrain allows natural treatment of the multiple-pass problem in a spatial and dynamic fashion, as opposed to the approaches found in the literature, which can only deal with planar, steady-state rolling in an ad hoc fashion. Tire-terrain interaction is modeled using a hybrid of empirical and semi-empirical models. A complete simulation environment can be constructed by integrating all the models, and mobility analysis of vehicles can then be performed on soft terrain. An example is presented to demonstrate the approach. Conclusions and future research directions are presented at the end of the paper.
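For reference, the classic Bekker pressure-sinkage relation at the heart of the Bekker-Wong approach is p = (k_c/b + k_phi) z^n. A minimal sketch, with illustrative (assumed) soil parameters:

```python
# Classic Bekker pressure-sinkage relation used in Bekker-Wong terramechanics:
#     p = (k_c / b + k_phi) * z**n
# where b is the smaller contact-patch dimension, z the sinkage, and
# k_c, k_phi, n soil parameters. The default values below are illustrative
# assumptions, not measured soil data.

def bekker_pressure(z, b, k_c=1000.0, k_phi=150000.0, n=1.1):
    """Normal ground pressure at sinkage z (m) for contact width b (m)."""
    return (k_c / b + k_phi) * z ** n

# Example: pressure under a 0.3 m wide tire at 0.05 m sinkage.
print(bekker_pressure(z=0.05, b=0.3))
```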
Conventional nondestructive evaluation (NDE) techniques include visual inspection, eddy current scanning, ultrasonics, and fluorescent dye penetration. These techniques are limited to local evaluation, often miss small buried defects, and are useful only on polished surfaces. Advanced NDE techniques include laser ultrasonics, holographic interferometry, structural integrity monitoring, shearography, and thermography. A variation of shearography, employing reflective shearographic interferometry, has been developed. This new shearographic interferometer is discussed, together with models to optimize its performance and experiments demonstrating its use in NDE.
A simple and practical approach to calibrating the extrinsic parameters of a camera mounted on an intelligent vehicle is presented. Because three parallel lines in the world map into the image plane as lines sharing the same vanishing point but having different slopes, the perspective projection principle implies an inherent relationship among the camera extrinsic parameters, the vanishing point, and the line slopes. An analytic expression for the camera extrinsic parameters can therefore be derived mathematically, without iterative or optimizing calculation. After locating the intersection point and slopes of the imaged lines, manually or automatically, the extrinsic parameters can be calculated directly. The experimental results show that the method can be applied in various circumstances.
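A minimal sketch of the closed-form idea, under an assumed pinhole model and an assumed rotation convention (pitch about x, then yaw about y); the paper's full derivation also uses the imaged line slopes (e.g., to recover roll), which is omitted here:

```python
import math

# Under a pinhole model with focal length f (pixels) and principal point
# (cx, cy), the vanishing point (u, v) of lines parallel to the road
# direction determines pan and tilt in closed form. The rotation convention
# here is an assumption for illustration.

def pan_tilt_from_vanishing_point(u, v, f, cx, cy):
    tilt = math.atan2(cy - v, f)                    # camera pitch (rad)
    pan = math.atan((u - cx) * math.cos(tilt) / f)  # camera yaw (rad)
    return pan, tilt
```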
The current threats to U.S. security, both military and civilian, have led to an increased interest in the development of technologies to safeguard national facilities such as military bases, federal buildings, nuclear power plants, and national laboratories. As a result, the Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at The University of Tennessee (UT) has established a research consortium, known as SAFER (Security Automation and Future Electromotive Robotics), to develop, test, and deploy sensing and imaging systems for unmanned ground vehicles (UGV). The targeted missions for these UGV systems include, but are not limited to, under-vehicle threat assessment, stand-off check-point inspections, scout surveillance, intruder detection, obstacle-breach situations, and render-safe scenarios. This paper presents a general overview of the SAFER project. Beyond this general overview, we focus on a specific problem in which we collect 3D range scans of under-vehicle carriages. These scans require appropriate segmentation and representation algorithms to facilitate the vehicle inspection process. We discuss the theory behind these algorithms and present results from applying them to actual vehicle scans.
In order to effectively navigate any environment, a robotic vehicle needs to understand the terrain and obstacles native to that environment. Knowledge of its own location and orientation, and knowledge of the region of operation, can greatly improve the robot’s performance. To this end, we have developed a mobile system for the fast digitization of large-scale environments to develop the a priori information needed for prediction and optimization of the robot’s performance. The system collects ground-level video and laser range information, fusing them together to develop accurate 3D models of the target environment. In addition, the system carries a differential Global Positioning System (GPS) as well as an Inertial Navigation System (INS) for determining the position and orientation of the various scanners as they acquire data. Issues involved in the fusion of these various data modalities include: Integration of the position and orientation (pose) sensors’ data at varying sampling rates and availability; Selection of "best" geometry in overlapping data cases; Efficient representation of large 3D datasets for real-time processing techniques. Once the models have been created, this data can be used to provide a priori information about negative obstacles, obstructed fields of view, navigation constraints, and focused feature detection.
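For the first of these issues, one common approach (an assumption here, not necessarily the authors' method) is to time-stamp all sensors and interpolate the GPS/INS pose stream to each laser scan time:

```python
import numpy as np

# Interpolate a pose stream sampled at one rate to the timestamps of laser
# scans acquired at another rate, so each scan can be placed in the world.

def interpolate_poses(pose_t, xyz, yaw, scan_t):
    """pose_t: (N,) pose timestamps; xyz: (N, 3) positions; yaw: (N,) heading
    in radians. scan_t: (M,) scan timestamps within the pose time span.
    Returns (M, 3) interpolated positions and (M,) interpolated headings."""
    xyz_i = np.column_stack(
        [np.interp(scan_t, pose_t, xyz[:, k]) for k in range(3)]
    )
    # Unwrap heading first so interpolation across the +/- pi boundary
    # does not average to a wrong direction.
    yaw_i = np.interp(scan_t, pose_t, np.unwrap(yaw))
    return xyz_i, yaw_i
```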
In this paper, a new real-time, intelligent target tracking system based on a pan/tilt-controlled stereo camera mounted on a UGV (unmanned ground vehicle) is proposed. In the proposed system, the face area of a moving person is first detected from a sequence of left images captured by the stereo camera, using a threshold in the YCbCr color model, and the stereo camera then tracks the moving target in real time by controlling the pan/tilt system. Second, a depth map of the target face is extracted using the relationship between the geometry of the stereo camera system and the disparity map of the rectified stereo image. Finally, from this extracted depth map, the distance from the camera system to the target and the coordinates of the target are calculated; these target coordinates are used to extract 3D information about the target for path planning and for constructing coordinate maps. In experiments using 1,280 frames of input stereo image pairs, the displacement of the face center position after tracking remained very low, averaging 0.6% over the 1,280 frames, and the error ratio between the measured and computed distances of the target from the camera system averaged a very low 0.5%. In addition, the proposed system achieved a speed of 0.04 sec/frame for target detection and 0.06 sec/frame for target tracking. These experimental results suggest the possibility of implementing a practical stereo-camera-based automatic target tracking system on a UGV with a high degree of accuracy and a very fast response time.
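The distance computation above rests on the standard rectified-stereo relation Z = fB/d. A minimal sketch, with illustrative (assumed) camera parameters:

```python
# Standard rectified-stereo range relation: for focal length f (pixels),
# baseline B (meters), and disparity d (pixels), depth is Z = f * B / d.
# The target's 3-D camera-frame coordinates follow from the pinhole model.
# The default parameter values are illustrative assumptions.

def target_xyz(u, v, disparity, f=700.0, baseline=0.12, cx=640.0, cy=360.0):
    """Back-project pixel (u, v) with the given disparity to camera coords."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    z = f * baseline / disparity   # range to the target (m)
    x = (u - cx) * z / f           # right of the optical axis (m)
    y = (v - cy) * z / f           # below the optical axis (m)
    return x, y, z
```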
There is increasing interest in developing new mobile robots because of their applications in a variety of areas. Mobile robots can reach places that are either inaccessible or unsafe for human beings. TACOM has developed a lab where new mobile robots can be tested; however, to save cost and time, it is advisable to test robots in a virtual environment before they are tested in a real lab. The objective of this paper is to explore techniques whereby mobile robots can be tested in a simulated environment. Different techniques have been studied for such simulations and testing in a virtual environment. In particular, Stateflow and Zed3D software, VRML, and fuzzy logic approaches have been exploited for this purpose. Different robots, obstacles, and terrains have been simulated. It is hoped that such work will prove useful in the study of the development and testing of mobile robots.
A brief survey of current autonomous vehicle (AV) projects is presented with the intent of finding common infrastructure or subsystems that can be configured from commercially available modular robotic components, thereby providing developers with greatly reduced timelines and costs and encouraging focus on the selected problem domain. The Modular Manipulator System (MMS), a robotic system based on single-degree-of-freedom rotary and linear modules, is introduced, and some approaches to autonomous vehicle configuration and deployment are examined. The modules may be configured to provide articulated suspensions for very rugged terrain and fall recovery, articulated sensors and tooling, plus a limited capacity for self-repair and self-reconfiguration. The MMS on-board visually programmed control software (Model Manager) supports experimentation with novel physical configurations and behavior algorithms via real-time 3D graphics for operations simulation, and provides useful subsystems for vision, learning, and planning to host intelligent behavior.
This work presents an approach to the sensing and control system of an autonomous vehicle intended for navigating and performing precise load/unload tasks in industrial environments. The control system is able to perform turns, line following, and following of arbitrary curves specified as splines. It is based on a multivariable design using the technique of pole placement in state space. The control system uses results from parameter estimation modules to adapt to the changing responses of the traction motors when loaded or unloaded; these estimators are Kalman filters that recover the vehicle motion parameters from measurements performed by the positioning sensor. Several steering configurations are possible, since the control system provides a radius of turn as output: differential drive, tricycle drive, or Ackerman steering can be accommodated by transforming this radius into motor orders, depending on the geometry of the vehicle. The only sensor the system relies on is a laser-based local positioning system consisting of a rotating laser and retro-reflectors. Robust algorithms for signal analysis and position/orientation estimation have been developed. The sensor is able to detect reflectors 25 meters away in daylight or in dusty industrial environments using a low-cost 1 mW laser. The system has been tested on two mobile bases, using differential drive and tricycle drive.
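For the differential-drive case, the radius-to-motor-orders transformation reduces to the standard kinematic relation below; the track width is an assumed illustrative value:

```python
import math

# Convert a commanded turn radius into left/right wheel speeds for a
# differential-drive base: with track width W and signed radius R measured
# at the vehicle center (positive = turning left),
#     v_left  = v * (1 - W / (2R)),   v_right = v * (1 + W / (2R)).

def diff_drive_speeds(v, radius, track_width=0.5):
    """v: forward speed (m/s); radius: signed turn radius (m, +left)."""
    if math.isinf(radius):  # infinite radius = drive straight
        return v, v
    v_left = v * (1.0 - track_width / (2.0 * radius))
    v_right = v * (1.0 + track_width / (2.0 * radius))
    return v_left, v_right
```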