A method for the autonomous geolocation of ground vehicles in forest environments is discussed. The method provides an estimate of the global horizontal position of a vehicle based strictly on finding a geometric match between a map of observed tree stems, scanned in 3D by Light Detection and Ranging (LiDAR) sensors onboard the vehicle, and another stem map generated from the structure of tree crowns analyzed in high-resolution aerial orthoimagery of the forest canopy. Extraction of stems from the 3D data is achieved using Support Vector Machine (SVM) classifiers and height-above-ground filters that separate ground points from vertical stem features. Identification of stems in overhead imagery is achieved by finding the centroids of tree crowns extracted with a watershed segmentation algorithm. Matching of the two maps is performed with a robust Iterative Closest Point (ICP) algorithm that determines the rotation and translation vectors needed to align the datasets. The alignment is then used to calculate the absolute horizontal location of the vehicle. The method has been tested with real-world data and has estimated vehicle geoposition with an average error of less than 2 m. The algorithm's accuracy is currently limited by the accuracy and resolution of the aerial orthoimagery used. The method can be used in real time as a complement to the Global Positioning System (GPS) in areas where signal coverage is inadequate due to attenuation by the forest canopy or to intentional denial of access. The method has two significant properties: i) it does not require a priori knowledge of the area surrounding the robot, and ii) it uses the geometry of detected tree stems as the only input for determining horizontal geoposition.
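The core alignment step is standard rigid point-set registration. The following is a minimal sketch of a 2D ICP loop of the kind described, with hypothetical stem coordinates; the paper's robust variant additionally rejects outlier correspondences, which is omitted here.

```python
# Minimal 2D ICP sketch: align LiDAR-derived stem positions to
# orthoimagery-derived stem positions. Illustrative only; the paper's
# robust ICP also handles outlier correspondences.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(lidar_stems, ortho_stems, iters=50, tol=1e-6):
    tree = cKDTree(ortho_stems)
    src = lidar_stems.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src)                # nearest-neighbor matches
        R, t = best_rigid_transform(src, ortho_stems[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

Applying the recovered transform to the vehicle's origin in the local stem-map frame then yields its absolute horizontal location in the georeferenced frame of the orthoimagery.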
Teleoperated vehicles are playing an increasingly important role in a variety of military functions. While advantageous
in many respects over their manned counterparts, these vehicles also pose unique challenges when it comes to safely
avoiding obstacles. Not only must operators cope with difficulties inherent to the manned driving task, but they must
also perform many of the same functions with a restricted field of view, limited depth perception, potentially disorienting
camera viewpoints, and significant time delays. In this work, a constraint-based method for enhancing operator
performance by seamlessly coordinating human and controller commands is presented. This method uses onboard
LIDAR sensing to identify environmental hazards, designs a collision-free path homotopy traversing that environment,
and coordinates the control commands of a driver and an onboard controller to ensure that the vehicle trajectory remains
within a safe homotopy. This system's performance is demonstrated via off-road teleoperation of a Kawasaki Mule in an
open field among obstacles. In these tests, the system safely avoids collisions and maintains vehicle stability even in the
presence of "routine" operator error, loss of operator attention, and complete loss of communications.
It is widely recognized that simulation is pivotal to vehicle development, whether manned or unmanned. There are few
dedicated choices, however, for those wishing to perform realistic, end-to-end simulations of unmanned ground vehicles
(UGVs). The Virtual Autonomous Navigation Environment (VANE), under development by the US Army Engineer
Research and Development Center (ERDC), provides such capabilities but utilizes a High Performance Computing
(HPC) Computational Testbed (CTB) and is not intended for on-line, real-time performance. A product of the VANE
HPC research is a real-time desktop simulation application under development by the authors that provides a portal into
the HPC environment as well as interaction with wider-scope semi-automated force simulations (e.g. OneSAF). This
VANE desktop application, dubbed the Autonomous Navigation Virtual Environment Laboratory (ANVEL), enables
analysis and testing of autonomous vehicle dynamics and terrain/obstacle interaction in real-time with the capability to
interact within the HPC constructive geo-environmental CTB for high fidelity sensor evaluations. ANVEL leverages
rigorous physics-based vehicle and vehicle-terrain interaction models in conjunction with high-quality, multimedia
visualization techniques to form an intuitive, accurate engineering tool. The system provides an adaptable and
customizable simulation platform that allows developers a controlled, repeatable testbed for advanced simulations.
ANVEL leverages several key technologies not common to traditional engineering simulators, including techniques
from the commercial video-game industry. These enable ANVEL to run on inexpensive commercial, off-the-shelf
(COTS) hardware. In this paper, the authors describe key aspects of ANVEL and its development, as well as several
initial applications of the system.
An omnidirectional mobile robot is able, kinematically, to move in any direction regardless of current pose. To date,
nearly all designs and analyses of omnidirectional mobile robots have considered the case of motion on flat, smooth
terrain. In this paper, an investigation of the design and analysis of an omnidirectional mobile robot for use in rough
terrain is presented. Kinematic and geometric properties of the active split offset caster drive mechanism are
investigated along with system and subsystem design guidelines. An optimization method is implemented to explore the
design space. Use of this method results in a robot that has higher mobility than a robot designed using engineering
judgment. A point design generated by the optimization method is shown for a meter-scale mobile robot.
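The optimization described is a search over geometric design parameters. A minimal sketch of such a design-space search, with a placeholder mobility metric and hypothetical parameter bounds (the paper's kinematic analysis and parameterization differ), might look like:

```python
# Sketch of a design-space search for a caster-driven robot: maximize a
# mobility metric over geometric parameters. The metric below is a
# placeholder stand-in, not the paper's mobility analysis.
import numpy as np
from scipy.optimize import differential_evolution

def negative_mobility(params):
    """Placeholder: penalize small caster offsets and narrow track."""
    offset, track, wheel_radius = params
    slip_penalty = 1.0 / (offset + 0.01)        # small offsets slip more
    stability = track / (track + wheel_radius)  # wider track is stabler
    return slip_penalty - 5.0 * stability       # minimized by the optimizer

bounds = [(0.05, 0.30),   # caster offset (m)
          (0.40, 1.00),   # track width (m)
          (0.05, 0.20)]   # wheel radius (m)

result = differential_evolution(negative_mobility, bounds, seed=0)
print("optimized design parameters:", result.x)
```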
Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances
in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual
UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control
system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for
one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in
operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single
operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual
servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI)
techniques from the entertainment software industry are being used to develop video-game style interfaces that require
little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive
interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less
burdensome than many current generation systems.
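The navigation system described builds on monocular visual odometry. A minimal two-frame sketch with OpenCV conveys the basic geometry; it is not the project's algorithm, and the intrinsic matrix K is an assumed input.

```python
# Two-frame monocular visual odometry sketch with OpenCV: recover the
# relative camera rotation and (scale-free) translation between frames.
# K is an assumed pinhole intrinsic matrix; a fused navigation system
# would resolve scale and drift with additional sensing.
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t has unit norm: monocular scale is unobservable
```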
The ability of autonomous unmanned ground vehicles (UGVs) to rapidly and effectively predict terrain negotiability is a
critical requirement for their use on challenging terrain. Most methods for assessing traversability, however, assume
precise knowledge of vehicle and terrain properties. In practical applications, uncertainties are associated with the
estimation of the vehicle/terrain parameters, and these uncertainties must be considered while determining vehicular
mobility. Here a computationally inexpensive method for efficient mobility prediction based on the stochastic response
surface method (SRSM) is presented that considers imprecise knowledge of terrain and vehicle parameters while
analyzing various metrics associated with UGV mobility. A conventional Monte Carlo method and the proposed
response surface methodology have been applied to two simulated cases of mobility analysis, and it has been shown that
the SRSM method is an efficient tool as compared to conventional Monte Carlo methods for the analysis of vehicular
mobility in uncertain environments.
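The efficiency gain comes from replacing repeated expensive model runs with a cheap fitted surrogate. The following is a simplified illustration using a least-squares quadratic response surface with a stand-in mobility model and assumed parameter distributions, not the paper's specific SRSM formulation.

```python
# Simplified response-surface illustration: fit a quadratic surrogate to a
# few expensive mobility-model runs, then propagate parameter uncertainty
# through the cheap surrogate instead of the full model.
import numpy as np

rng = np.random.default_rng(0)

def expensive_mobility_model(friction, cohesion):
    """Stand-in for a full vehicle-terrain simulation run."""
    return 2.0 * friction + 0.5 * cohesion - friction * cohesion

def quad_features(x):
    f, c = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(f), f, c, f * c, f**2, c**2])

# 1) A small design of experiments over the uncertain parameters.
train = rng.normal([0.6, 1.0], [0.1, 0.2], size=(20, 2))
y = np.array([expensive_mobility_model(f, c) for f, c in train])

# 2) Fit the quadratic response surface by least squares.
coef, *_ = np.linalg.lstsq(quad_features(train), y, rcond=None)

# 3) Monte Carlo on the surrogate: 10^5 samples cost almost nothing,
#    versus 10^5 full simulation runs for conventional Monte Carlo.
samples = rng.normal([0.6, 1.0], [0.1, 0.2], size=(100_000, 2))
pred = quad_features(samples) @ coef
print(f"mobility metric: mean={pred.mean():.3f}, std={pred.std():.3f}")
```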
Mobile robots have important applications in high-speed, rough-terrain scenarios. In these scenarios, unexpected and hazardous situations can occur that require rapid hazard avoidance maneuvers. At high speeds, there is limited time to perform re-planning based on detailed vehicle and terrain models. Furthermore, detailed models often do not accurately predict the robot’s performance due to model parameter and sensor uncertainty. This paper presents a method for high-speed hazard avoidance. The method is based on the concept of the trajectory space, which is a compact model-based representation of a robot’s dynamic performance limits in uneven, natural terrain. A Monte Carlo method for analyzing system performance despite model parameter uncertainty is briefly presented, and its integration with the trajectory space is discussed. Simulation results for the hazard avoidance algorithm are presented and demonstrate the effectiveness of the method.
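The trajectory-space idea can be conveyed by tabulating, offline, which command pairs remain within performance limits, so that runtime hazard avoidance reduces to selecting from a precomputed feasible set. This toy sketch uses a friction-limited lateral-acceleration proxy; the paper's representation is derived from full vehicle dynamics on uneven terrain.

```python
# Toy trajectory-space lookup: precompute which (speed, curvature) command
# pairs satisfy safety limits. The limit model below is a placeholder, not
# the paper's dynamic performance analysis.
import numpy as np

speeds = np.linspace(1.0, 10.0, 19)        # m/s
curvatures = np.linspace(-0.5, 0.5, 21)    # 1/m

G, MU = 9.81, 0.6                          # gravity, assumed friction

def is_feasible(v, kappa):
    lateral_acc = v**2 * abs(kappa)
    return lateral_acc < MU * G            # no-slip / no-rollover proxy

feasible = np.array([[is_feasible(v, k) for k in curvatures]
                     for v in speeds])

# At runtime: given a hazard dead ahead, pick the tightest feasible turn
# at the current speed from the precomputed table.
safe_k = curvatures[feasible[10]]
print("tightest safe curvature at %.1f m/s: %.2f 1/m"
      % (speeds[10], safe_k.max()))
```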
High-speed unmanned ground vehicles have important potential applications, including reconnaissance, material transport, and planetary exploration. During high-speed operation, it is important for a vehicle to sense changing terrain conditions, and modify its control strategies to ensure aggressive, yet safe, operation. In this paper, a framework for terrain characterization and identification is briefly described, composed of 1) vision-based classification of upcoming terrain, 2) terrain parameter identification via wheel-terrain interaction analysis, and 3) terrain classification based on auditory wheel-terrain contact signatures. The parameter identification algorithm is presented in detail. The algorithm derives from simplified forms of classical terramechanics equations. An on-line estimator is developed to allow rapid identification of critical terrain parameters. Simulation and experimental results show that the terrain estimation algorithm can accurately and efficiently identify key terrain parameters for sand.
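The simplified terramechanics forms reduce to relations that are linear in the unknown terrain parameters. For instance, under the classical Mohr-Coulomb shear relation τ = c + σ tan φ, cohesion and internal friction angle can be recovered by linear least squares from wheel-soil stress estimates. This sketch uses that simplification with synthetic data; it is not the paper's full on-line estimator.

```python
# Linear least-squares terrain parameter identification sketch, using the
# Mohr-Coulomb relation tau = c + sigma * tan(phi). Stress values here are
# synthetic; onboard, they would be estimated from wheel torque and
# sinkage measurements.
import numpy as np

# Synthetic wheel-terrain stress data (Pa), loosely resembling dry sand:
true_c, true_phi = 1000.0, np.radians(30.0)
sigma = np.linspace(5e3, 40e3, 25)                    # normal stresses
rng = np.random.default_rng(1)
tau = true_c + sigma * np.tan(true_phi) + rng.normal(0, 300, sigma.size)

# Solve [1  sigma] @ [c, tan(phi)]^T = tau in the least-squares sense.
A = np.column_stack([np.ones_like(sigma), sigma])
(c_hat, tan_phi_hat), *_ = np.linalg.lstsq(A, tau, rcond=None)
print(f"cohesion ~ {c_hat:.0f} Pa, friction angle ~ "
      f"{np.degrees(np.arctan(tan_phi_hat)):.1f} deg")
```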
During the last decade, there has been significant progress toward a supervised autonomous robotic capability for remotely controlled scientific exploration of planetary surfaces. While planetary exploration potentially encompasses many elements ranging from orbital remote sensing to subsurface drilling, the surface robotics element is particularly important to advancing in situ science objectives. Surface activities include a direct characterization of geology, mineralogy, atmosphere and other descriptors of current and historical planetary processes and, ultimately, the return of pristine samples to Earth for detailed analysis. Toward these ends, we have conducted a broad program of research on robotic systems for scientific exploration of the Mars surface, with minimal remote intervention. The goal is to enable high-productivity semi-autonomous science operations where available mission time is concentrated on robotic operations, rather than up- and down-link delays. Results of our work include prototypes for landed manipulators, long-ranging science rovers, sampling/sample return mobility systems, and more recently, terrain-adaptive reconfigurable/modular robots and closely cooperating multiple rover systems. The last of these are intended to facilitate deployment of planetary robotic outposts for an eventual human-robot sustained scientific presence. We overview our progress in these related areas of planetary robotics R&D, spanning 1995 to the present.
While significant recent progress has been made in development of mobile robots for planetary surface exploration, there remain major challenges. These include increased autonomy of operation, traverse of challenging terrain, and fault tolerance under long, unattended periods of use. We have begun work which addresses some of these issues, with an initial focus on problems of high-risk access, that is, autonomous roving over highly variable, rough terrain. This is a dual problem of sensing those conditions which require rover adaptation, and controlling the rover actions so as to implement this adaptation in a well understood way (relative to metrics of rover stability, traction, power utilization, etc.). Our work progresses along several related technical lines: 1) development of a fused state estimator which robustly integrates internal rover state and externally sensed environmental information to provide accurate configuration information; 2) kinematic and dynamic stability analysis of such configurations so as to determine predictors of a needed change of control regime (e.g., traction control, active c.g. positioning, rover shoulder stance/pose); 3) definition and implementation of a behavior-based control architecture and action-selection strategy which autonomously sequences multi-level rover controls and reconfiguration. We report on these developments, covering both software simulations and hardware experimentation. Experiments include reconfigurable control of JPL's Sample Return Rover geometry and motion during its autonomous traverse over simulated Mars terrain.
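As a minimal illustration of the fusion principle behind such a state estimator, a complementary filter can blend a gyro-integrated attitude (accurate short-term, but drifting) with an accelerometer-derived attitude (noisy, but drift-free). The estimator described in the work fuses much richer internal and external sensing; this sketch conveys only the blending idea.

```python
# Complementary-filter sketch: fuse gyro pitch rate with accelerometer
# pitch to estimate a single attitude state. Illustrative of sensor
# fusion only, not the rover's actual fused state estimator.
import numpy as np

def complementary_filter(gyro_rates, accel_pitches, dt=0.01, alpha=0.98):
    """gyro_rates: pitch rates (rad/s); accel_pitches: pitch from gravity."""
    pitch = accel_pitches[0]
    estimates = []
    for rate, acc_pitch in zip(gyro_rates, accel_pitches):
        # Trust the integrated gyro mostly; correct slowly toward the
        # accelerometer to cancel long-term drift.
        pitch = alpha * (pitch + rate * dt) + (1.0 - alpha) * acc_pitch
        estimates.append(pitch)
    return np.array(estimates)
```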
Future planetary exploration missions will use rovers to perform tasks in rough terrain. Working in such conditions, a rover could become trapped due to loss of wheel traction, or even tip over. The Jet Propulsion Laboratory has developed a new rover with the ability to reconfigure its structure to improve tipover stability and ground traction. This paper presents a method to control this reconfigurability to enhance system tipover stability. A stability metric is defined and optimized online using a quasi-static model. Simulation and experimental results for the Sample Return Rover (SRR) are presented. The method is shown to be practical and yields significantly improved stability in rough terrain.
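One common quasi-static formulation treats each edge of the support polygon as a candidate tipover axis and takes the worst-case rotation needed to bring the center of mass over an axis as the margin. The sketch below implements that generic measure, which is not necessarily the paper's exact metric; an online reconfiguration scheme would adjust pose to maximize it.

```python
# Quasi-static tipover stability sketch: for each support-polygon edge
# (a candidate tipover axis), compute the rotation about that axis needed
# to bring the center of mass directly over it. Assumes the CoM currently
# lies inside the polygon; generic measure, not the paper's exact metric.
import numpy as np

def stability_margin(contacts, com):
    """contacts: Nx3 wheel contact points ordered around the polygon."""
    up = np.array([0.0, 0.0, 1.0])            # opposite to gravity
    margins = []
    n = len(contacts)
    for i in range(n):
        p1, p2 = contacts[i], contacts[(i + 1) % n]
        axis = (p2 - p1) / np.linalg.norm(p2 - p1)
        # Components of the CoM offset and 'up' perpendicular to the axis:
        r = com - p1
        r_perp = r - np.dot(r, axis) * axis
        u_perp = up - np.dot(up, axis) * axis
        cosang = np.dot(r_perp, u_perp) / (
            np.linalg.norm(r_perp) * np.linalg.norm(u_perp))
        margins.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return min(margins)   # radians to tip about the worst-case axis

# Example: a 1 m square wheelbase with the CoM shifted 0.2 m off center.
wheels = np.array([[0.5, 0.5, 0], [0.5, -0.5, 0],
                   [-0.5, -0.5, 0], [-0.5, 0.5, 0]], float)
print(stability_margin(wheels, np.array([0.2, 0.0, 0.4])))
```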
Generally, there are multiple sensor suites on existing rover platforms such as NASA's Sample Return Rover (SRR) and the Field Integrated Design and Operations (FIDO) rover at JPL. Traditionally, these sensor suites have been used in isolation for such tasks as planetary surface traversal. For example, although distant obstacle information is known from the narrow-FOV navigation camera (NAVCAM) suite on SRR or FIDO, it is not explicitly used at this time to augment the wide-FOV hazard camera (HAZCAM) information for obstacle avoidance. This paper describes the development of advanced rover navigation techniques. These techniques include an algorithm for the generation of range maps using the fusion of information from the NAVCAMs and HAZCAMs, and an algorithm for registering range maps to an a priori model-based range map for relative rover position and orientation determination. Experimental results for each of these techniques are documented in this paper.