Competitions provide a technique for building interest and collaboration in targeted research areas. This paper will present a new virtual competition that aims to increase collaboration among universities, automation end users, and automation manufacturers. The virtual nature of the competition reduces infrastructure requirements while maintaining realism in both the robotic equipment deployed and the scenarios. Details of the virtual environment, as well as the competition's objectives, rules, and scoring metrics, will be presented.
Robot navigation in complex, dynamic and unstructured environments demands robust mapping and localization
solutions. One of the most popular methods in recent years has been the use of scan-matching schemes where
temporally correlated sensor data sets are registered for obtaining a Simultaneous Localization and Mapping
(SLAM) navigation solution. The primary bottleneck of such scan-matching schemes is correspondence determination,
i.e., associating a feature (structure) in one dataset with its counterpart in the other. Outliers, occlusions,
and sensor noise complicate the determination of reliable correspondences. This paper describes testing scenarios
being developed at NIST to analyze the performance of scan-matching algorithms. This analysis is critical for the
development of practical SLAM algorithms in various application domains where sensor payload, wheel slippage,
and power constraints impose severe restrictions. We will present results using a high-fidelity simulation testbed,
the Unified System for Automation and Robot Simulation (USARSim).
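Correspondence determination of the kind described above is often implemented as a nearest-neighbor search with a distance gate, as in the inner loop of Iterative Closest Point (ICP). The sketch below is illustrative only, not the algorithm evaluated in the paper; the function name, the 2-D point arrays, and the `max_dist` rejection threshold are all assumptions:

```python
import numpy as np

def nearest_neighbor_correspondences(scan_a, scan_b, max_dist=0.5):
    """Associate each point in scan_a with its nearest point in scan_b.

    Pairs farther apart than max_dist are rejected as likely outliers
    or occlusions. Returns a list of index pairs (i, j).
    """
    pairs = []
    for i, p in enumerate(scan_a):
        d = np.linalg.norm(scan_b - p, axis=1)  # distances to every point in scan_b
        j = int(np.argmin(d))
        if d[j] <= max_dist:                    # gate out unreliable matches
            pairs.append((i, j))
    return pairs

# Two 2-D scans of the same wall segment, the second shifted by (0.1, 0).
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.1, 0.0])
print(nearest_neighbor_correspondences(a, b))  # → [(0, 0), (1, 1), (2, 2)]
```

The distance gate is what makes such schemes fragile under heavy noise: too tight and valid structure is discarded, too loose and outliers corrupt the registration, which is exactly the trade-off the testing scenarios are meant to probe.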
Simulation of robots in a virtual domain has multiple benefits. End users can use the simulation as a training tool to
increase their skill with the vehicle without risking damage to the robot or surrounding environment. Simulation allows
researchers and developers to benchmark robot performance in a range of scenarios without having the physical robot or
environment present. The simulation can also help guide and generate new design concepts. USARSim (Unified
System for Automation and Robot Simulation) is a tool that is being used to accomplish these goals, particularly within
the realm of search and rescue. It is based on the Unreal Tournament 2004 gaming engine, which approximates the
physics of how a robot interacts with its environment. A family of vehicles that can benefit from simulation in
USARSim are WhegsTM robots. Developed in the Biorobotics Laboratory at Case Western Reserve University,
WhegsTM robots are highly mobile ground vehicles that use abstracted biological principles to achieve a robust level of
locomotion, including passive gait adaptation and enhanced climbing abilities. This paper describes a WhegsTM robot
model that was constructed in USARSim. The model was configured with the same kinds of behavioral characteristics
found in real WhegsTM vehicles. Once these traits were implemented, a validation study was performed using identical
performance metrics measured on both the virtual and real vehicles to quantify vehicle performance and to ensure that
the virtual robot's performance matched that of the real robot.
This paper presents the motivation behind the new joint NIST/IEEE Virtual Manufacturing Automation Competition (VMAC). This competition strives to take requirements driven by the Automated Guided Vehicle (AGV) user community and turn them into a low-entry-barrier competition. The objectives, scoring, performance metrics, and operation of the competition are explained. In addition, the entry-barrier-lowering infrastructure that is provided to competitors is presented.
Research efforts in Urban Search And Rescue (USAR) robotics have grown substantially in recent years. A
virtual USAR robotic competition was established in 2006 under the RoboCup umbrella to foster collaboration
amongst institutions and to provide benchmark test environments for system evaluation. In this paper we
describe the physics based software simulation framework that is used in this competition and the rules and
performance metrics used to determine the league's winner. The framework allows for the realistic modeling of
robots, sensors, and actuators, as well as complex, unstructured, dynamic environments. Multiple heterogeneous
agents can be concurrently placed in the simulation environment thus allowing for team or group evaluations.
Urban Search and Rescue Simulation (USARSim) is an open source package that provides a high-resolution,
physics based simulation of robotic platforms. The package provides models of several common robotic platforms
and sensors as well as sample worlds and a socket interface into a commonly used commercial-off-the-shelf (COTS)
simulation package. Initially introduced to support the development of search and rescue robots, USARSim has proved to be a tool with a broader scope, spanning robot education, human-robot interfaces, multi-robot cooperation, and more. During RoboCup 2006, a new competition based on USARSim will be held in the context of the urban search and rescue competitions.
The Mobility Open Architecture Simulation and Tools (MOAST) is a framework that builds upon the 4-D Real-time Control System (4D/RCS) architecture to analyze the performance of autonomous vehicles and multiagent
systems. MOAST provides controlled environments that allow for the transparent transference of data
between a matrix of real and virtual components. This framework is glued together through well-defined interfaces
and communications protocols, and detailed specifications on individual subsystem input/output (IO). This
allows developers to freely swap components and analyze the effect on the overall system by means of comparison
to baseline systems with a limited set of functionality. When taken together, the combined USARSim/MOAST
system may be used to provide a comprehensive development and testing environment for complex robotic
systems.
This paper will provide an overview of each system and describe how the combined system may be used for
stand-alone simulated development and test, or hardware-in-the-loop development and testing of autonomous
mobile robot systems.
Tactical behaviors for autonomous ground and air vehicles are an area of high interest to the Army. They are critical for the inclusion of robots in the Future Combat System (FCS). Tactical behaviors can be defined at multiple levels: at the Company, Platoon, Section, and Vehicle echelons. They are currently being defined by the Army for the FCS Unit of Action. At all of these echelons, unmanned ground vehicles, unmanned air vehicles, and unattended ground sensors must collaborate with each other and with manned systems. Research being conducted at the National Institute of Standards and Technology (NIST) and sponsored by the Army Research Laboratory is focused on defining the Four-Dimensional Real-time Control System (4D/RCS) reference model architecture for intelligent systems and on developing a software engineering methodology for system design, integration, test, and evaluation. This methodology generates detailed design requirements for perception, knowledge representation, decision making, and behavior generation processes that enable complex military tactics to be planned and executed by unmanned ground and air vehicles working in collaboration with manned systems.
This paper will describe how the Mobility Open Architecture Simulation and Tools (MOAST) framework can facilitate performance evaluations of RCS-compliant multi-vehicle autonomous systems. This framework provides an environment that allows simulated and real architectural components to function seamlessly together. By providing repeatable environmental conditions, this framework allows for the development of individual components as well as component performance metrics. MOAST is composed of high-fidelity and low-fidelity simulation systems, a detailed model of real-world terrain, actual hardware components, a central knowledge repository, and architectural glue to tie all of the components together. This paper will describe the framework's components in detail and provide an example that illustrates how the framework can be utilized to develop and evaluate a single architectural component through the use of repeatable trials and experimentation that includes both virtual and real components functioning together.
This paper describes NIST’s efforts in evaluating what it will take to achieve autonomous human-level driving skills in terms of time and funding. NIST has approached this problem from several perspectives: considering the current state-of-the-art in autonomous navigation and extrapolating from there, decomposing the tasks identified by the Department of Transportation for on-road driving and comparing that with accomplishments to date, analyzing computing power requirements by comparison with the human brain, and conducting a Delphi Forecast using the expert researchers in the field of autonomous driving. A detailed description of each of these approaches is provided along with the major finding from each approach and an overall picture of what it will take to achieve human level driving skills in autonomous vehicles.
This paper presents a cost-based adaptive planning agent that is operating at the route-segment level of a deliberative hierarchical planning system for autonomous road driving. At this level, the planning agent is responsible for developing fundamental driving maneuvers that allow a vehicle to travel safely amongst moving and stationary objects. This is facilitated through the use of an incrementally expanded planning graph that provides the ability to implement a dynamic cost function. This cost function varies to comply with particular road, regional, or event driven situations, and when coupled with the incremental graph expansion allows for the agent to implement hard and soft system constraints.
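As an illustration of the idea, and not the paper's implementation, the sketch below runs a Dijkstra-style search in which successors are generated only when a node is expanded (incremental graph expansion) and the edge cost is a pluggable function, so soft penalties and hard constraints can vary with the driving situation. The grid, the lane-keeping penalty, and the blocked cell are all hypothetical:

```python
import heapq

def plan(start, goal, neighbors, edge_cost):
    """Dijkstra search over an incrementally expanded graph: successors come
    from neighbors() only when a node is popped, and edge_cost() is a
    situation-dependent cost function supplied by the caller."""
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt in neighbors(node):
            ng = g + edge_cost(node, nxt)
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical 4-connected grid; a "soft" lane-keeping penalty makes the
# planner prefer y == 0, while a "hard" constraint blocks cell (2, 0).
def neighbors(n):
    x, y = n
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx <= 4 and 0 <= y + dy <= 2 and (x + dx, y + dy) != (2, 0)]

def edge_cost(a, b):
    return 1.0 + (0.5 if b[1] != 0 else 0.0)  # soft penalty for leaving the lane

path, cost = plan((0, 0), (4, 0), neighbors, edge_cost)
print(path, cost)  # detours around (2, 0) with the cheapest off-lane excursion
```

Swapping in a different `edge_cost` (for example, one that penalizes proximity to moving objects) changes the maneuver the planner produces without touching the search itself, which is the flexibility a dynamic cost function buys.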
This paper describes a world model that combines a variety of sensed inputs and a priori information and is used to generate on-road and off-road autonomous driving behaviors. The system is designed in accordance with the principles of the 4D/RCS architecture. The world model is hierarchical, with the resolution and scope at each level designed to minimize computational resource requirements and to support planning functions for that level of the control hierarchy. The sensory processing system that populates the world model fuses inputs from multiple sensors and extracts feature information, such as terrain elevation, cover, road edges, and obstacles. Feature information from digital maps, such as road networks, elevation, and hydrology, is also incorporated into this rich world model. The various features are maintained in different layers that are registered together to provide maximum flexibility in generation of vehicle plans depending on mission requirements. The paper includes discussion of how the maps are built and how the objects and features of the world are represented. Functions for maintaining the world model are discussed. The world model described herein is being developed for the Army Research Laboratory's Demo III Autonomous Scout Vehicle experiment.
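The layered, registered map described above can be illustrated with a toy structure in which every feature layer shares one grid frame, so a single cell index retrieves all features at once. The class name and layers here are illustrative assumptions, not the Demo III implementation:

```python
import numpy as np

class LayeredWorldModel:
    """Toy registered-layer map: each feature layer shares one grid frame,
    so one (row, col) index retrieves every feature for that cell."""

    def __init__(self, shape):
        self.shape = shape
        self.layers = {}

    def add_layer(self, name, data):
        # Registration requirement: all layers live on the same grid.
        assert data.shape == self.shape, "layers must be registered to one grid"
        self.layers[name] = data

    def query(self, r, c):
        return {name: layer[r, c] for name, layer in self.layers.items()}

wm = LayeredWorldModel((3, 3))
wm.add_layer("elevation", np.arange(9.0).reshape(3, 3))  # e.g. from sensed terrain
wm.add_layer("obstacle", np.zeros((3, 3), dtype=bool))   # e.g. from a priori maps
print(wm.query(1, 2))  # all features registered at cell (1, 2)
```

Keeping features in separate registered layers, rather than one fused grid, is what lets a planner weight them differently per mission without rebuilding the map.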
In this paper, we will describe a value-driven graph search technique that is capable of generating a rich variety of single- and multiple-vehicle behaviors. The generation of behaviors depends on cost and benefit computations that may involve terrain characteristics, line of sight to enemy positions, and the cost, benefit, and risk of traveling on roads. Depending on mission priorities and cost values, real-time planners can autonomously build appropriate behaviors on the fly, including road following, cross-country movement, stealthy movement, formation keeping, and bounding overwatch. This system follows NIST's 4D/RCS architecture, and a discussion of the world model, value judgment, and behavior generation components is provided. In addition, techniques for collapsing a multidimensional model space into a cost space, along with planning graph constraints, are discussed. The work described in this paper has been performed under the Army Research Laboratory's Robotics Demo III program.
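Collapsing a multidimensional model space into a scalar cost space can be sketched as a weighted sum whose weights shift with mission priorities. The attribute names and weight values below are illustrative assumptions, not the paper's actual cost model:

```python
def edge_cost(cell, weights):
    """Collapse several cost dimensions into one scalar for graph search."""
    return (weights["terrain"] * cell["slope"]
            + weights["exposure"] * cell["line_of_sight"]  # risk from enemy positions
            + weights["road"] * (0.0 if cell["on_road"] else 1.0))

# One off-road cell with moderate slope and full exposure to enemy line of sight.
cell = {"slope": 0.2, "line_of_sight": 1.0, "on_road": False}

stealth = {"terrain": 1.0, "exposure": 10.0, "road": 0.5}  # avoid being seen
speed   = {"terrain": 1.0, "exposure": 0.1, "road": 5.0}   # hug the road network

print(edge_cost(cell, stealth))  # exposure term dominates
print(edge_cost(cell, speed))    # road-preference term dominates
```

Because only the weight vector changes between modes, the same planner can produce stealthy movement in one mission and road following in another, which is the behavior-on-the-fly property the abstract describes.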
This paper outlines the goals and work accomplished thus far for both the man-machine interface and mission planning elements of the Experimental Unmanned Vehicle (XUV) program. It is the goal of the XUV program to make available to the user an interface and tools that will allow for seamless transition between mission planning, rehearsal, and execution on multiple collaborating autonomous vehicles in a platoon group.