Under the Urban Environment Exploration project, the Space and Naval Warfare Systems Center Pacific (SSC-PAC) is maturing technologies and sensor payloads that enable man-portable robots to operate autonomously within the challenging conditions of urban environments. Previously, SSC-PAC demonstrated robotic capabilities to navigate and localize without GPS and to map the ground floors of buildings of various sizes.1 SSC-PAC has since extended those capabilities to localize and map multiple multi-story buildings within a specified area. To facilitate these capabilities, SSC-PAC developed technologies that enable the robot to detect stairs and stairwells, maintain localization across multiple environments (e.g., in a 3D world, on stairs, with or without GPS), visualize data in 3D, plan paths between any two points within the specified area, and avoid 3D obstacles. These technologies have been developed as independent behaviors under the Autonomous Capabilities Suite, a behavior architecture, and demonstrated at a MOUT site at Camp Pendleton. This paper describes the perceptions and behaviors used to produce these capabilities, as well as an example demonstration scenario.
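The abstract does not name the planner behind the "plan paths between any two points" capability. Purely as a point of reference, a minimal grid-based A* sketch of such point-to-point planning might look like the following; the 2D occupancy grid, 4-connected motion model, and Manhattan heuristic are illustrative assumptions, not details from the paper.

```python
# Illustrative only: minimal A* search over a 2D occupancy grid
# (0 = free cell, 1 = occupied). Not the paper's actual planner.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(a, b):
        # Manhattan distance: admissible for 4-connected motion
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0, start)]   # entries are (f, g, node)
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:                      # walk parents back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        if g > g_cost[node]:                  # stale queue entry, skip
            continue
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols and grid[nbr[0]][nbr[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nbr, float("inf")):
                    g_cost[nbr] = ng
                    came_from[nbr] = node
                    heapq.heappush(open_set, (ng + h(nbr, goal), ng, nbr))
    return None   # goal unreachable
```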
The Space and Naval Warfare (SPAWAR) Systems Center Pacific (SSC Pacific) has a long and extensive history in
unmanned systems research and development, starting with undersea applications in the 1960s and expanding into
ground and air systems in the 1980s. In the ground domain, we are addressing force-protection scenarios using large
unmanned ground vehicles (UGVs) and fixed sensors, and simultaneously pursuing tactical and explosive ordnance
disposal (EOD) operations with small man-portable robots. Technology thrusts include improving robotic intelligence
and functionality, autonomous navigation and world modeling in urban environments, extended operational range of
small teleoperated UGVs, enhanced human-robot interaction, and incorporation of remotely operated weapon systems.
On the sea surface, we are pushing the envelope on dynamic obstacle avoidance while conforming to established
nautical rules-of-the-road. In the air, we are addressing cooperative behaviors between UGVs and small vertical-takeoff-
and-landing unmanned air vehicles (UAVs). Underwater applications involve very shallow water mine
countermeasures, ship hull inspection, oceanographic data collection, and deep ocean access. Specific technology thrusts
include fiber-optic communications, adaptive mission controllers, advanced navigation techniques, and concepts of
operations (CONOPs) development. This paper provides a review of recent accomplishments and current status of a
number of projects in these areas.
Under various collaborative efforts with other government labs, private industry, and academia, SPAWAR Systems
Center Pacific (SSC Pacific) is developing and testing advanced autonomous behaviors for navigation, mapping, and
exploration in various indoor and outdoor settings. As part of the Urban Environment Exploration project, SSC
Pacific is maturing those technologies and sensor payload configurations that enable man-portable robots to
effectively operate within the challenging conditions of urban environments. For example, additional means to
augment GPS are needed when operating in and around urban structures. A MOUT site at Camp Pendleton was
selected as the test bed because of its variety in building characteristics, paved/unpaved roads, and rough terrain.
Metrics are collected based on the overall system's ability to explore different coverage areas, as well as the
performance of the individual component behaviors such as localization and mapping. The behaviors have been
developed to be portable and independent of one another, and have been integrated under a generic behavior
architecture called the Autonomous Capability Suite. This paper describes the tested behaviors, sensors, and
behavior architecture, the variables of the test environment, and the performance results collected so far.
The fusion of multiple behavior commands and sensor data into intelligent and cohesive robotic movement
has been the focus of robot research for many years. Sequencing low-level behaviors to create high-level intelligence has also been researched extensively. Cohesive robotic movement also depends on other factors, such as the environment, user intent, and perception of the environment. This paper describes a method for managing the complexity that arises as sensors and perceptions multiply. Our system uses fuzzy logic and a state machine to fuse multiple behaviors into an optimal response based on the robot's current task. The resulting fused behavior is filtered through fuzzy-logic-based obstacle avoidance to create safe movement. The system also provides easy integration with any communications protocol,
plug-and-play devices, perceptions, and behaviors. Most behaviors and the obstacle avoidance parameters
are easily changed through configuration files. Combined with previous work in navigation and localization, the result is a robust autonomy suite.
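The internals of the Autonomous Capability Suite are not published here, but the weighted-vote idea the abstract describes can be sketched as follows. All function names, membership shapes, and thresholds are assumptions for illustration: each behavior votes a (heading, speed) command with a fuzzy activation weight, the votes are blended, and a fuzzy "near obstacle" membership scales speed toward zero.

```python
# Minimal sketch of fuzzy behavior fusion, assuming hypothetical behaviors
# that each vote a (heading, speed) command with a fuzzy activation weight.

def triangular(x, lo, peak, hi):
    """Triangular fuzzy membership on [lo, hi], peaking at `peak`."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

def fuse(commands):
    """commands: list of (heading_rad, speed, weight) votes.

    Simple weighted average; a real system would handle heading
    wraparound, which this sketch ignores for brevity.
    """
    total = sum(w for _, _, w in commands) or 1e-9
    heading = sum(h * w for h, _, w in commands) / total
    speed = sum(s * w for _, s, w in commands) / total
    return heading, speed

def avoid_filter(heading, speed, nearest_obstacle_m):
    """Fuzzy obstacle filter: 'near' membership scales speed toward zero."""
    near = triangular(nearest_obstacle_m, -0.1, 0.0, 2.0)  # 1 at contact, 0 past 2 m
    return heading, speed * (1.0 - near)

# Example: a waypoint behavior votes strongly, an exploration behavior weakly.
cmd = fuse([(0.3, 1.0, 0.8), (1.2, 0.5, 0.2)])
safe_cmd = avoid_filter(*cmd, nearest_obstacle_m=1.0)   # halves speed at 1 m
```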
Sensors commonly mounted on small unmanned ground vehicles (UGVs) include visible light and thermal cameras,
scanning LIDAR, and ranging sonar. Data from these sensors is vital to emerging autonomous robotic behaviors.
However, sensor data from any given sensor can become noisy or erroneous under a range of conditions, reducing the
reliability of autonomous operations. We seek to increase this reliability through data fusion. Data fusion includes
characterizing the strengths and weaknesses of each sensor modality and combining their data such that the fused result is more accurate than that of any single sensor. We describe data fusion efforts applied to
two autonomous behaviors: leader-follower and human presence detection. The behaviors are implemented and tested
in a variety of realistic conditions.
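The abstract does not detail the fusion math, but the claim that fused data can be more accurate than any single sensor has a standard minimal illustration: inverse-variance weighting of two measurements of the same quantity. The LIDAR/sonar numbers below are made up for the example.

```python
# Illustrative inverse-variance fusion of two range estimates. Not the
# paper's actual method; it shows the standard result that the fused
# variance is lower than either sensor's alone.

def fuse_estimates(z1, var1, z2, var2):
    """Combine two noisy measurements of the same quantity.

    Weights are inversely proportional to each sensor's variance, so the
    more trustworthy modality dominates. Returns (fused value, fused variance).
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)    # always <= min(var1, var2)
    return fused, fused_var

# Example: LIDAR reads 4.9 m (var 0.01), sonar reads 5.3 m (var 0.25).
value, var = fuse_estimates(4.9, 0.01, 5.3, 0.25)   # ~4.92 m, var ~0.0096
```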
Many envisioned applications of mobile robotic systems require the robot to navigate in complex urban environments. This need is particularly critical if the robot is to perform as part of a synergistic team with human forces in military operations. Historically, the development of autonomous navigation for mobile robots has targeted either outdoor or indoor scenarios, but not both, which is not how humans operate. This paper describes efforts to fuse component technologies into a complete navigation system, allowing a robot to seamlessly transition between outdoor and indoor environments. Under the Joint Robotics Program's Technology Transfer project, empirical evaluations of various localization approaches were conducted to assess their maturity levels and performance metrics in different exterior/interior settings. The methodologies compared include Markov localization, the global positioning system, Kalman filtering, and fuzzy logic. Characterization of these technologies highlighted their best features, which were then fused into an adaptive solution. The final integrated system is described, including its design, experimental results, and a formal demonstration to attendees of the Unmanned Systems Capabilities Conference II in San Diego in December 2005.
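Of the compared methods, Kalman filtering is the easiest to sketch compactly. The 1-D predict/update cycle below is illustrative only (the paper's filter states and noise models are not given here): odometry drives the prediction and a GPS fix drives the correction.

```python
# Sketch of a 1-D Kalman filter, one of the localization components the
# paper compares. All numeric values below are illustrative assumptions.

class Kalman1D:
    def __init__(self, x0, p0, q, r):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q, self.r = q, r     # process and measurement noise variances

    def predict(self, u):
        """Propagate the state by odometry input u; uncertainty grows by q."""
        self.x += u
        self.p += self.q

    def update(self, z):
        """Blend in a position fix z (e.g. GPS) weighted by the Kalman gain."""
        k = self.p / (self.p + self.r)   # gain: how much to trust the fix
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)

kf = Kalman1D(x0=0.0, p0=1.0, q=0.05, r=0.5)
kf.predict(u=1.0)   # moved ~1 m by odometry
kf.update(z=1.2)    # GPS fix at 1.2 m pulls the estimate partway over
```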
The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the AdaBoost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an AdaBoost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. As a step toward making this system robust enough for generic object recognition, this paper demonstrates an extension of the application: detecting soda cans in a cluttered indoor environment. The human presence detection system uses a data fusion algorithm that combines results from a scanning laser and a thermal imager, and is able to detect humans while both the humans and the robot are moving. Both algorithms were implemented on embedded hardware and optimized for real-time use. Test results are shown for a variety of environments.
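The UCSD training system itself is not reproduced here, but OpenCV's CascadeClassifier implements the same boosted-cascade detection idea (Viola-Jones style), so the runtime side can be sketched as follows; the model file and image path are placeholders.

```python
# Runtime sketch of boosted-cascade detection using OpenCV's built-in
# CascadeClassifier. "trained_cascade.xml" is a hypothetical model file
# standing in for whatever cascade was trained (e.g. on soda cans).
import cv2

cascade = cv2.CascadeClassifier("trained_cascade.xml")
frame = cv2.imread("scene.jpg")                    # placeholder test image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Slide the cascade over the image at multiple scales; early stages reject
# most windows cheaply, which is what makes cascade detection real-time.
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in detections:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```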
Weapon payloads are becoming increasingly important components of unmanned ground vehicles (UGVs). However, weapon payloads are extremely difficult to teleoperate. It is difficult for a soldier to perform the complex tasks of identifying and aiming at specific points on targets from a remote location. This paper explores the issues involved in automating several aspects of weapon payload operations, including target detection, acquisition, and tracking. Various approaches to these issues are discussed, and the development and results from two different working prototype systems developed at Space and Naval Warfare Systems Center, San Diego (SSC San Diego) are presented. One approach employs a motion-based scheme for target identification, while the second employs an appearance-based scheme. Target selection, arming, and firing remain teleoperated in both systems.
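The paper's motion-based identification scheme is not spelled out in the abstract; a generic frame-differencing sketch of the same idea follows, assuming a stationary camera: pixels that change between consecutive frames are thresholded and grouped into candidate moving targets.

```python
# Generic motion-based detection via frame differencing (OpenCV).
# Illustrative of the motion-based scheme's idea, not the paper's code.
import cv2

cap = cv2.VideoCapture(0)                    # placeholder video source
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                       # per-pixel motion
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)          # merge fragments
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                     # ignore small noise
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    prev = gray
```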
The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile systems in the Joint Robotics Program (JRP) Robotic Systems Pool (RSP). The approach is to harvest prior and ongoing developments that address the technology needs identified by emergent in-theatre requirements and users of the RSP. The component technologies are evaluated on a transition platform to identify the best features of the different approaches, which are then integrated and optimized to work in harmony in a complete solution. The result is an enabling mechanism that continuously capitalizes on state-of-the-art results from the research environment to create a standardized solution that can be easily transitioned to ongoing development programs. This paper focuses on particular research areas, specifically collision avoidance, simultaneous localization and mapping (SLAM), and target-following, and describes the results of their combined integration and optimization over the past 12 months.
In addition to the challenges of equipping a mobile robot with the appropriate sensors, actuators, and processing electronics necessary to perform some useful function, there coexists the equally important challenge of effectively controlling the system's desired actions. This need is particularly critical if the intent is to operate in conjunction with human forces in a military application, as any low-level distractions can seriously reduce a warfighter's chances of survival in hostile environments. Historically, there has been a definite trend toward making the robot smarter in order to reduce the control burden on the operator, and while much progress has been made in laboratory prototypes, all equipment deployed in theatre to date has been strictly teleoperated. There exists a definite tradeoff between the value added by the robot, in terms of how it contributes to the performance of the mission, and the loss of effectiveness associated with the operator control unit. From a command-and-control perspective, the ultimate goal would be to eliminate the need for a separate robot controller altogether, since it represents an unwanted burden and potential liability from the operator's perspective. This paper introduces the long-term concept of a supervised autonomous Warfighter's Associate, which employs a natural-language interface for communication with (and oversight by) its human counterpart. More realistic near-term solutions to achieve intermediate success are then presented, along with actual results to date. The primary application discussed is military, but the concept also applies to law enforcement, space exploration, and search-and-rescue scenarios.
In the area of logistics, there currently is a capability gap between the one-ton Army robotic Multifunction Utility/Logistics and Equipment (MULE) vehicle and a soldier’s backpack. The Unmanned Systems Branch at Space and Naval Warfare Systems Center (SPAWAR Systems Center, or SSC), San Diego, with the assistance of a group of interns from nearby High Tech High School, has demonstrated enabling technologies for a solution that fills this gap. A small robotic transport system has been developed based on the Segway Robotic Mobility Platform (RMP). We have demonstrated teleoperated control of this robotic transport system, and conducted two demonstrations of autonomous behaviors. Both demonstrations involved a robotic transporter following a human leader. In the first demonstration, the transporter used a vision system running a continuously adaptive mean-shift filter to track and follow a human. In the second demonstration, the separation between leader and follower was significantly increased using Global Positioning System (GPS) information. The track of the human leader, with a GPS unit in his backpack, was sent wirelessly to the transporter, also equipped with a GPS unit. The robotic transporter traced the path of the human leader by following these GPS breadcrumbs. We have additionally demonstrated a robotic medical patient transport capability by using the Segway RMP to power a mock-up of the Life Support for Trauma and Transport (LSTAT) patient care platform, on a standard NATO litter carrier. This paper describes the development of our demonstration robotic transport system and the various experiments conducted.
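The control loop behind the GPS-breadcrumb demonstration is not given in the abstract; one plausible minimal sketch is below. The capture radius, flat-earth distance approximation, and all function names are assumptions: leader fixes are queued as they arrive over the wireless link, and the follower steers toward the oldest fix until it comes within the capture radius.

```python
# Hypothetical breadcrumb follower; not the demonstration's actual code.
import math
from collections import deque

CAPTURE_RADIUS_M = 2.0        # assumed tolerance for "reached" a breadcrumb
breadcrumbs = deque()         # (lat, lon) fixes pushed by the wireless link

def distance_m(a, b):
    """Equirectangular approximation, adequate over tens of meters."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371000.0 * math.hypot(x, y)

def bearing_rad(a, b):
    """Heading from a to b in radians east of north, same approximation."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    return math.atan2((lon2 - lon1) * math.cos(lat1), lat2 - lat1)

def step(own_fix):
    """One control tick: return the desired heading, or None if caught up."""
    while breadcrumbs and distance_m(own_fix, breadcrumbs[0]) < CAPTURE_RADIUS_M:
        breadcrumbs.popleft()   # reached this breadcrumb; advance to the next
    if not breadcrumbs:
        return None             # caught up with the leader; hold position
    return bearing_rad(own_fix, breadcrumbs[0])
```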
Unmanned weapons remove humans from deadly situations. However, some systems, such as unmanned guns, are difficult to control remotely. It is difficult for a soldier to perform the complex tasks of identifying and aiming at specific points on targets from a remote location. This paper describes a computer vision and control system for providing autonomous control of unmanned guns developed at Space and Naval Warfare Systems Center, San Diego (SSC San Diego). The test platform, consisting of a non-lethal gun mounted on a pan-tilt mechanism, can be used as an unattended device or mounted on a robot for mobility. The system operates with a degree of autonomy determined by a remote user that ranges from teleoperated to fully autonomous. The teleoperated mode consists of remote joystick control over all aspects of the weapon, including aiming, arming, and firing. Visual feedback is provided by near-real-time video feeds from bore-sight and wide-angle cameras. The semi-autonomous mode provides the user with tracking information overlaid on the real-time video, showing all detected targets being tracked by the vision system. The user selects a target with a mouse, and the system automatically aims the gun at the target. Arming and firing are still performed by teleoperation. In fully autonomous mode, all aspects of gun control are performed by the vision system.
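The vision system's aiming loop is not described beyond "automatically aims the gun," but closed-loop aiming from a tracked pixel position can be sketched as a proportional controller; the field of view, image size, gain, and sign conventions below are all assumptions.

```python
# Hypothetical proportional aiming sketch: convert the tracked target's
# pixel offset from the bore-sight camera's center into pan/tilt rates.
FOV_X_DEG, FOV_Y_DEG = 60.0, 45.0    # assumed camera field of view
IMG_W, IMG_H = 640, 480              # assumed image resolution
KP = 0.5                             # proportional gain (1/s), assumed

def aim_rates(target_px):
    """Return (pan, tilt) rates in deg/s that drive the target to center."""
    tx, ty = target_px
    err_x_deg = (tx - IMG_W / 2) * (FOV_X_DEG / IMG_W)
    err_y_deg = (ty - IMG_H / 2) * (FOV_Y_DEG / IMG_H)
    return KP * err_x_deg, -KP * err_y_deg   # tilt negated: image y grows downward

# e.g. a target detected at pixel (400, 200) -> slew right and up until centered
pan_rate, tilt_rate = aim_rates((400, 200))
```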
Recent innovations in real-time machine vision, distributed computing, software architectures, high-speed communication, and mobile robotic systems are expanding the available technology for intelligent system development. These technologies allow the realization of intelligent systems that provide the capabilities for a user to experience events from remote locations and to interact with that location using an array of robotic devices. In this paper we describe research being done in the UCSD CVRR that will lead to the realization of a powerful and integrated traffic-incident detection, monitoring, and recovery system. Sensor clusters utilizing both rectilinear and omni-directional cameras will automate information gathering about the incident and provide a real-time televiewing interface to emergency response crews. Ultimately, this system will have a direct impact on reducing incident-related highway congestion by improving the quality of information to which emergency personnel have access.