Planning paths for omni-directional vehicles (ODVs) can be computationally infeasible because of the large space of possible paths. This paper presents an approach that avoids this problem through the use of abstraction: the possible maneuvers of the ODV are characterized as a grammar of parameterized mobility behaviors, and the terrain is described as a covering of object-oriented functional terrain features. Each terrain feature contains knowledge on how best to create mobility paths (sequences of mobility behaviors) through that feature. Given an approximate map of the environment, the approach constructs a graph of mobility paths linking the vehicle's location to the goals. The actual paths followed by the vehicle are determined by an A* search through the graph. The effectiveness of the strategy is demonstrated in field tests with a real robotic vehicle.
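As a rough illustration of the search step described above, the sketch below runs a generic A* over a graph whose edges stand in for candidate mobility paths. The graph layout, node names, and the zero heuristic are hypothetical placeholders, not the paper's behavior grammar.

```python
import heapq
import itertools

def a_star(graph, start, goal, heuristic):
    """Generic A* over a graph given as {node: [(neighbor, edge_cost), ...]}."""
    counter = itertools.count()          # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start, goal), next(counter), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for nbr, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nbr, goal),
                                          next(counter), new_cost, nbr, path + [nbr]))
    return None, float("inf")

# Hypothetical graph of mobility paths linking the vehicle's location to a goal.
graph = {"vehicle": [("gap", 4.0), ("slope", 6.0)],
         "gap": [("goal", 5.0)],
         "slope": [("goal", 2.0)]}
print(a_star(graph, "vehicle", "goal", heuristic=lambda a, b: 0.0))
```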
4D/RCS is the reference model architecture currently being developed for the Demo III Experimental Unmanned Vehicle program. 4D/RCS integrates the NIST (National Institute of Standards and Technology) RCS (Real-time Control System) with the German (Universität der Bundeswehr München) VaMoRs 4-D approach to dynamic machine vision. The 4D/RCS architecture consists of a hierarchy of computational nodes, each of which contains behavior generation (BG), world modeling (WM), sensory processing (SP), and value judgement (VJ) processes. Each node also contains a knowledge database (KD) and an operator interface. These computational nodes are arranged such that the BG processes represent organizational units within a command and control hierarchy.
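To make the node composition concrete, here is a minimal, purely illustrative sketch of one such computational node. The class name, method signatures, and placeholder logic are assumptions for illustration, not the NIST reference implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RcsNode:
    """One computational node in a 4D/RCS-style hierarchy (illustrative only)."""
    name: str
    superior: Optional["RcsNode"] = None
    subordinates: List["RcsNode"] = field(default_factory=list)
    knowledge_db: dict = field(default_factory=dict)

    def sensory_processing(self, observations: dict) -> dict:
        # SP: filter/aggregate observations to this node's resolution (placeholder)
        return observations

    def world_modeling(self, estimates: dict) -> None:
        # WM: update the node's knowledge database with current estimates
        self.knowledge_db.update(estimates)

    def value_judgement(self, plan: list) -> float:
        # VJ: score a candidate plan (placeholder cost = plan length)
        return float(len(plan))

    def behavior_generation(self, command: str) -> list:
        # BG: decompose a command into subcommands for subordinate nodes
        return [f"{command}/{sub.name}" for sub in self.subordinates]
```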
In this paper, the point stabilization of mobile robots via state-space exact feedback linearization is presented. State-space exact feedback linearization has not previously been possible for the point stabilization of mobile robots because of the restricted mobility caused by nonholonomic constraints. Under our proposed coordinates, however, the point stabilization problem can be exactly transformed into the problem of controlling a linear time-invariant system. Thus, the point stabilization of mobile robots can be easily formulated with Linear Quadratic (LQ) control theory. Because the linear system developed via state-space exact feedback linearization is perfectly decoupled into an aggregate of SISO systems, the controller design for a mobile robot decomposes into SISO LQ control. Using the designed LQ controllers, the mobile robot can move to the target point with or without satisfying a heading-angle constraint.
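Since the linearized system decouples into SISO channels, each channel can be handled by a standard LQ regulator. The sketch below solves one such channel, modeled here as a double integrator with assumed weights; it illustrates only the generic LQ step, not the paper's specific coordinate transformation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# One decoupled SISO channel modeled as a double integrator:
# x_dot = A x + B u, with LQ cost = integral of (x'Qx + u'Ru) dt.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weights (assumed for illustration)
R = np.array([[1.0]])      # control weight (assumed for illustration)

P = solve_continuous_are(A, B, Q, R)   # Riccati solution
K = np.linalg.inv(R) @ B.T @ P         # optimal gain, u = -K x
print("LQ gain:", K)
```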
This paper presents an approach to modeling unmanned ground vehicle (UGV) mobility performance and vehicle dynamics for evaluating the feasibility and cost of alternative motion plans. Feasibility constraints include power, traction, and roll stability limits. Sensor stabilization performance is considered in a system-level constraint requiring that the obstacle detection distance exceed the stopping distance. Mission time and power requirements are inputs to a multi-attribute cost function for planning under uncertainty. The modeling approach combines a theoretical first-principles mathematical model with an empirical knowledge-based model. The first-principles model predicts performance in an idealized deterministic environment. On-board vehicle dynamics control, for dynamic load balancing and traction management, legitimizes some of the simplifying assumptions. The knowledge-based model uses historical relationships to predict the mean and variance of total system performance, accounting for the contributions of unplanned reactive behaviors, local terrain variations, and vehicle response transients.
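The constraint that the obstacle detection distance exceed the stopping distance can be illustrated with a simple friction-limited braking model; the reaction time and friction coefficient below are assumed values for illustration only.

```python
def stopping_distance(speed, reaction_time=0.5, friction=0.6, g=9.81):
    """Distance (m) to stop from `speed` (m/s): reaction roll-out plus braking
    on a friction-limited surface. Parameter values are illustrative assumptions."""
    braking_decel = friction * g
    return speed * reaction_time + speed**2 / (2.0 * braking_decel)

def max_safe_speed(detection_range, reaction_time=0.5, friction=0.6, g=9.81):
    """Largest speed whose stopping distance stays within the obstacle
    detection range (simple bisection search)."""
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if stopping_distance(mid, reaction_time, friction, g) <= detection_range:
            lo = mid
        else:
            hi = mid
    return lo

# With a 20 m detection range and the assumed parameters, the planner would
# cap speed at roughly 12-13 m/s.
print(max_safe_speed(detection_range=20.0))
```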
NASA's next-generation Mars rovers are capable of capturing panoramic stereo imagery of their surroundings. Three-dimensional terrain maps can be derived from such imagery through stereo image correlation. While 3-D data is inherently valuable for planning a path through local terrain, obstacle detection is not fully reliable due to anomalies and noise in the range data. We present an obstacle-detection approach that first identifies potential obstacles based on color contrast in the monocular imagery and then uses the 3-D data to project all detected obstacles into a 2-D overhead-view obstacle map, where noise originating from the 3-D data is easily removed. We also developed a specialized version of the A* search algorithm that produces optimally efficient paths through the obstacle map. These paths are of similar quality to those generated by traditional A*, at a fraction of the computational cost. Performance gains of an order of magnitude are achieved by a two-stage approach that leverages the specificity of obstacle shape on Mars. The first stage uses depth-first A* to quickly generate a somewhat sub-optimal path through the obstacle map. The following refinement stage efficiently eliminates all extraneous waypoints. Our implementation of these algorithms is being integrated into NASA's award-winning Web Interface for Telescience.
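A minimal sketch of the refinement idea, assuming a 2-D grid obstacle map: waypoints are dropped greedily whenever a straight, obstacle-free segment can connect earlier and later points. The sampling-based line-of-sight test and helper names are illustrative assumptions, not the flight software.

```python
def clear_line(p, q, obstacles, step=0.25):
    """True if the straight segment p->q stays off obstacle cells (sampled check)."""
    (x0, y0), (x1, y1) = p, q
    dist = max(abs(x1 - x0), abs(y1 - y0))
    n = max(1, int(dist / step))
    for i in range(n + 1):
        t = i / n
        cell = (round(x0 + t * (x1 - x0)), round(y0 + t * (y1 - y0)))
        if cell in obstacles:
            return False
    return True

def refine(path, obstacles):
    """Greedy refinement: keep a waypoint only if skipping it breaks line of sight."""
    if len(path) < 3:
        return path
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not clear_line(path[i], path[j], obstacles):
            j -= 1
        out.append(path[j])
        i = j
    return out
```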
Recent attention has been given to the deployment of an adaptable sensor array realized by multi-robotic systems (or swarms). Our group has been studying the collective, autonomous behavior of such systems and their applications in the area of remote sensing and emerging threats. To accomplish such tasks, an interdisciplinary research effort at Sandia National Laboratories is conducting tests in the fields of sensor technology, robotics, and multi-agent architectures. Our goal is to coordinate a constellation of point sensors using unmanned robotic vehicles (e.g., RATLERs, Robotic All-Terrain Lunar Exploration Rover-class vehicles) to optimize spatial coverage and multivariate signal analysis. An overall design methodology evolves complex collective behaviors realized through local interaction (kinetic) physics and artificial intelligence. Learning objectives incorporate real-time operational responses to environmental changes. This paper focuses on our recent work on understanding the dynamics of many-body systems according to the physics-based hydrodynamic model of lattice gas automata. Three design features are investigated. One, for single-speed robots, a hexagonal nearest-neighbor interaction topology is necessary to preserve standard hydrodynamic flow. Two, adaptability, defined by the swarm's rate of deformation, can be controlled through the hydrodynamic viscosity term, which, in turn, is defined by the local robotic interaction rules. Three, due to the inherent nonlinearity of the dynamical equations describing large ensembles, stability criteria ensuring convergence to equilibrium states are developed by scaling information flow rates relative to the swarm's hydrodynamic flow rate. An initial test case simulates a swarm of twenty-five robots maneuvering past an obstacle while following a moving target. A genetic algorithm optimizes applied nearest-neighbor forces in each of five spatial regions distributed over the simulation domain. Armed with this knowledge, the swarm adapts by changing state in order to avoid the obstacle. Simulation results are qualitatively similar to a lattice gas.
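The sketch below is a generic continuous-space stand-in for local interaction rules of this kind: short-range repulsion plus velocity alignment with neighbors inside an interaction radius. It is not the lattice-gas update or the hexagonal topology used in the paper, and all constants are arbitrary.

```python
import numpy as np

def swarm_step(positions, velocities, neighbor_radius=2.0,
               k_rep=1.0, k_align=0.2, dt=0.1):
    """One update of a toy swarm: inverse-distance repulsion plus velocity
    alignment with neighbors inside `neighbor_radius` (illustrative constants)."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = positions[i] - positions[j]
            r = np.linalg.norm(d)
            if 0 < r < neighbor_radius:
                forces[i] += k_rep * d / r**2                           # repulsion
                forces[i] += k_align * (velocities[j] - velocities[i])  # alignment
    velocities = velocities + dt * forces
    return positions + dt * velocities, velocities
```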
The Center for Self-Organizing and Intelligent Systems has built several vehicles with ultra-maneuverable steering capability. Each drive wheel on the vehicle can be independently set at any angle with respect to the vehicle body and the vehicles can rotate or translate in any direction. The vehicles are expected to operate on a wide range of terrain surfaces and problems arise in effectively controlling changes in wheel steering angles as the vehicle transitions from one extreme running surface to another. Controllers developed for smooth surfaces may not perform well on rough or 'sticky' surfaces and vice versa. The approach presented involves the development of a model of the steering motor with the static and viscous friction of the steering motor load included. The model parameters are then identified through a series of environmental tests using a vehicle wheel assembly and the model thus obtained is used for control law development. Four different robust controllers were developed and evaluated through simulation and vehicle testing. The findings of this development will be presented.
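A toy version of such a model, assuming a rigid steering load with viscous plus static (Coulomb-like) friction and forward-Euler integration; all parameter values are placeholders rather than the identified values from the paper's environmental tests.

```python
import numpy as np

def simulate_steering_motor(torque_cmd, dt=0.001, steps=2000,
                            inertia=0.05, b_viscous=0.02, tau_static=0.3):
    """Forward-Euler simulation of a steering-motor load with viscous and
    static friction. Parameter values are illustrative placeholders."""
    omega, theta = 0.0, 0.0
    history = []
    for _ in range(steps):
        friction = b_viscous * omega + tau_static * np.sign(omega)
        if omega == 0.0 and abs(torque_cmd) <= tau_static:
            accel = 0.0                      # stiction: motor does not break away
        else:
            accel = (torque_cmd - friction) / inertia
        omega += accel * dt
        theta += omega * dt
        history.append((theta, omega))
    return history
```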
Autonomous off-road vehicles face the daunting challenge of successfully navigating through terrain in which unmapped obstacles present hazards to safe vehicle operation. These obstacles can be sparsely scattered or densely clustered. The obstacle avoidance (OA) system on board the autonomous vehicle must be capable of detecting all non-negotiable obstacles and planning paths around them in a sufficient computing interval to permit effective operation of the platform. To date, the reactive path planning function performed by OA systems has been essentially an exhaustive search through a set of preprogrammed swaths (linear trajectories projected through the on-board local obstacle map) to determine the best path for the vehicle to travel toward achieving a goal state. Historically, this function has been a large consumer of computational resources in an OA system. A novel reactive path planner is described that minimizes processing time through the use of pre-computed indices into an n over n + 1 tableau structure, with the lowest level in the tableau representing the traditional 'histogram' result. The tableau method differs significantly from other reactive planners in three ways: (1) the entire tableau is computed off-line and loaded on system startup, minimizing computational load; (2) the real-time computational load is directly proportional to the number of grid points searched and proportional to the square of the number of paths; and (3) the tableau is independent of grid resolution. Analytical and experimental comparisons of the tableau and histogram methods are presented, along with generalization into an autonomous mobility system incorporating multiple feature planes and path cost evaluation.
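A simplified stand-in for the off-line/on-line split described above: each candidate swath's grid-cell indices are precomputed once, and the run-time step only sums obstacle costs at those indices. The swath geometry and cost model are assumptions for illustration, not the paper's tableau structure.

```python
import numpy as np

def precompute_swath_cells(headings, length=30.0, width=3.0, resolution=1.0):
    """Offline step: for each candidate heading (rad), list the local-grid cells
    the swath sweeps (a simplified stand-in for the precomputed index tableau)."""
    swaths = {}
    for h in headings:
        cells = set()
        for s in np.arange(0.0, length, resolution):
            for w in np.arange(-width / 2, width / 2 + resolution, resolution):
                x = s * np.cos(h) - w * np.sin(h)
                y = s * np.sin(h) + w * np.cos(h)
                cells.add((int(round(x)), int(round(y))))
        swaths[h] = sorted(cells)
    return swaths

def best_swath(swaths, cost_map):
    """Online step: score each swath by summing obstacle costs at its
    precomputed cells and return the cheapest heading."""
    def score(h):
        return sum(cost_map.get(cell, 0.0) for cell in swaths[h])
    return min(swaths, key=score)
```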
We are developing dynamic positioning (DP) control and evaluation systems for a semi-submersible vessel system called the Mobile Offshore Base (MOB). In concept, the MOB is a self-propelled, prepositioned floating base consisting of three to five vessels, comprising a mile-long runway to accommodate C-17 take-off and landing operations and allow cargo transfer from container ships. Separate MOB barges would embark toward a preposition point about 100 km offshore, assemble along a line, then execute a military mission in a variety of sea states. Specific concepts call for them to be mechanically or electronically linked, while a concept refinement uses a hybrid approach, linking them mechanically during low sea states and electronically once the environmental disturbances increase. We discuss issues and approaches with MOB control, with a focus on the overarching control architecture. We frame our discussion, however, around microsimulation techniques derived from a discipline best described as simulation of dynamically reconfigurable multi-agent hybrid dynamic systems. Specifically, we describe the intended use of our microsimulation technique to evaluate various control concepts and, ultimately, to test the feasibility of employing DP on the MOB.
The goal of the Joint Robotics Program (JRP) is to develop and field a family of unmanned ground vehicle systems for a range of military applications in accordance with user requirements. The program's structure calls for fielding first-generation systems, maturing requisite technology, and upgrading capabilities in an evolutionary manner. In the near term, acquisition programs emphasize teleoperation for battlefield environments, more autonomous functioning for structured environments, and extensive opportunities for users to operate UGVs. Semi-autonomous mobility in unstructured environments is the main thrust of the JRP technology base. Recent successes with prototypical countermine systems in Bosnia, as well as soldiers' and Marines' experimentation with reconnaissance unmanned ground vehicles (UGVs), have led to an explosion of requirements in other mission areas. Users are developing requirements for UGVs that convoy with manned vehicles; carry and deliver supplies; carry and employ weapons; and can be carried in a backpack and reconnoiter inside multi-story buildings. The JRP has made considerable progress over its ten-year existence and is poised to provide our Armed Forces with a 'leap-ahead' capability in the 21st Century.
Robotics has been identified by numerous recent Department of Defense (DOD) studies as a key enabling technology for future military operational concepts. The Demo III Program is a multiyear effort encompassing technology development and demonstration on testbed platforms, together with modeling, simulation, and experimentation directed toward optimization of operational concepts to employ this technology. The primary program focus is the advancement of capabilities for autonomous mobility through unstructured environments, concentrating on both perception and intelligent control technology. The scout mission will provide the military operational context for demonstration of this technology, although a significant emphasis is being placed upon both hardware and software modularity to permit rapid extension to other military missions. The Experimental Unmanned Vehicle (XUV) is a small (approximately 1150 kg, V-22 transportable) technology testbed vehicle designed for experimentation with multiple military operational concepts. Currently under development, the XUV is scheduled for roll-out in Summer 1999, with an initial troop experimentation to be conducted in September 1999. Though small and relatively lightweight, the chassis has been shown through modeling to be capable of automotive mobility comparable to the current Army lightweight high-mobility multipurpose wheeled vehicle (HMMWV). The XUV design couples multisensor perception with intelligent control to permit autonomous cross-country navigation at speeds of up to 32 kph during daylight and 16 kph during hours of darkness. A small, lightweight, highly capable user interface will permit intuitive control of the XUV by troops from current-generation tactical vehicles. When it concludes in 2002, Demo III will provide the military with both the technology and the initial experience required to develop and field the first generation of semi-autonomous tactical ground vehicles for combat, combat support, and logistics applications.
This briefing provides an overview of Tank-Automotive Robotics. It contains program overviews, inter-relationships, and technology challenges of TARDEC-managed unmanned and robotic ground vehicle programs. Specific emphasis is placed on technology developments and approaches to achieve semi-autonomous operation and inherent chassis mobility features. Programs discussed include: Demo III Experimental Unmanned Vehicle (XUV), Tactical Mobile Robotics (TMR), Intelligent Mobility, Commanders Driver Testbed, Collision Avoidance, and the International Ground Robotics Competition (IGRC). Specifically, the paper discusses the unique exterior/outdoor challenges facing the IGRC competing teams and the synergy created between the IGRC and ongoing DoD semi-autonomous Unmanned Ground Vehicle and DoT Intelligent Transportation System programs. Sensor and chassis approaches to meet the IGRC challenges and obstacles are shown and discussed. Shortfalls in performance against the IGRC challenges are identified.
The TARDEC Intelligent Mobility program addresses several essential technologies necessary to support the Army After Next (AAN) concept. Ground forces in the AAN time frame will deploy robotic unmanned ground vehicles (UGVs) in high-risk missions to avoid exposing soldiers to both friendly and unfriendly fire. Prospective robotic systems include RSTA/scout vehicles, combat engineering/mine-clearing vehicles, and indirect-fire artillery and missile launch platforms. The AAN concept requires high on-road and off-road mobility, survivability, transportability/deployability, and a low logistics burden. TARDEC is developing a robotic vehicle systems integration laboratory (SIL) to evaluate technologies and their integration into future UGV systems. Example technologies include the following: in-hub electric drive, omni-directional wheel and steering configurations, off-road tires, adaptive tire inflation, articulated vehicles, active suspension, mine blast protection, detection avoidance, and evasive maneuver. This paper describes current developments in these areas relative to the TARDEC Intelligent Mobility program.
The Naval Research Laboratory (NRL) has spearheaded the development and application of Covariance Intersection (CI) for a variety of decentralized data fusion problems. Such problems include distributed control, onboard sensor fusion, and dynamic map building and localization. In this paper we describe NRL's development of a CI-based navigation system for the NASA Mars rover that stresses almost all aspects of decentralized data fusion. We also describe how this project relates to NRL's augmented reality, advanced visualization, and REBOT projects.
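For reference, the standard Covariance Intersection update fuses two estimates whose cross-correlation is unknown. The sketch below shows that generic update (not NRL's implementation), with the weight chosen by a simple trace-minimizing search over candidate values.

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, omega):
    """Fuse two estimates with unknown cross-correlation:
       P^-1 = w * Pa^-1 + (1-w) * Pb^-1
       x    = P (w * Pa^-1 x_a + (1-w) * Pb^-1 x_b),  with 0 <= w <= 1."""
    Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)
    P_inv = omega * Pa_inv + (1.0 - omega) * Pb_inv
    P = np.linalg.inv(P_inv)
    x = P @ (omega * Pa_inv @ x_a + (1.0 - omega) * Pb_inv @ x_b)
    return x, P

def fuse_min_trace(x_a, P_a, x_b, P_b, samples=101):
    """Choose omega by minimizing the trace of the fused covariance
    (one common, simple selection rule)."""
    candidates = [covariance_intersection(x_a, P_a, x_b, P_b, w)
                  for w in np.linspace(0.0, 1.0, samples)]
    return min(candidates, key=lambda xp: np.trace(xp[1]))
```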
An important need while using unmanned vehicles is the ability of the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotics field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the 'feel' of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off-the-shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.
The Center for Self-Organizing and Intelligent Systems (CSOIS) is engaged in developing autonomous ground vehicles. A significant problem for such vehicles is obstacle detection and avoidance. After studying various methods of detection, a scanning laser system was chosen that can detect objects at a distance of up to thirty feet while the vehicle travels between five and ten miles per hour. Once an object is detected, the vehicle must avoid it. The project employs a mission-level path planner that predetermines the path of a vehicle. One avoidance scheme is to inform the path planner of the obstacle and then let it re-plan the path. This is the global approach to the problem, which allows the use of existing software for maneuvering the vehicle. However, replanning is time-consuming and lacks knowledge of the entire obstacle. An alternative approach is to use local avoidance, whereby a vehicle determines how to get by an obstacle without help from the path planner. This approach offers faster response without requiring the computing resources of the path planner. The disadvantage is that during local avoidance the vehicle ignores the global map of known obstacles and does not know to turn control back to the path planner if mission efficiency is adversely affected. This paper describes a method for combining the current global path planner with a local obstacle avoidance technique to efficiently complete required tasks in a partially unknown environment.
Efforts are underway to develop the capability for small unmanned underwater vehicles to use the Earth's gravitational field for autonomous navigation. A main aspect of navigation is vehicle localization on an existing gravity map. We have developed machine-vision-like algorithms that match the onboard gravimeter measurements to the map values. Gravity maps typically contain a dearth of distinctive topographic features such as peaks, ridges, and ravines. Moreover, because the gravity field can only be measured in place, probing for such features is infeasible, as it would require extensive surveys. These factors make the commonly used feature-matching approach impractical. The localization algorithms we have developed are instead based on matching with contours of constant field value. These algorithms are tested on simulated data with encouraging results. Although these algorithms were developed for underwater navigation using gravity maps, they are equally applicable to other domains, for example vehicle localization on an existing terrain map.
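A minimal sketch of contour-based matching on a gridded map: each gravimeter reading selects the cells lying near one iso-value contour, and readings taken at known dead-reckoned offsets intersect those candidate sets. The grid units, tolerance, and function names are assumptions for illustration, not the paper's algorithms.

```python
import numpy as np

def candidate_cells(gravity_map, measurement, tol):
    """Cells of the gridded gravity map whose value lies within `tol` of the
    onboard gravimeter reading, i.e. near one iso-value contour."""
    return set(zip(*np.where(np.abs(gravity_map - measurement) <= tol)))

def localize(gravity_map, measurements, offsets, tol=0.5):
    """Intersect contour candidates from several readings taken at known
    relative offsets (dead-reckoned, in grid cells) to narrow the position."""
    candidates = None
    for meas, (di, dj) in zip(measurements, offsets):
        cells = {(i - di, j - dj) for (i, j) in candidate_cells(gravity_map, meas, tol)}
        candidates = cells if candidates is None else candidates & cells
    return candidates   # surviving hypotheses for the first measurement's cell
```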
In Carnegie Mellon University's CyberScout project, we are developing mobile robotic technologies that will extend the sphere of awareness and mobility of small military units while exploring issues of command and control, task decomposition, multi-agent collaboration, efficient perception algorithms, and sensor fusion. This paper describes our work on robotic all-terrain vehicles (ATVs), one of several platforms within CyberScout. We have retrofitted two Polaris ATVs as mobile robotic surveillance and reconnaissance platforms. We describe the computing, sensing, and actuation infrastructure of these platforms, their current capabilities, and future research and applications.
The High Velocity Tele-operated Rover (HVTR) is motivated by the goal of exceeding human physical speed with small ground vehicles for operations in urban environments. A typical small (man-packable) ground vehicle's speed tops out at 1-2 m/s (2-4 mph). The limited speed is attributed to real-time sensing and processing of the external environment. Low speed makes traversing multiple city blocks taxing on the patience of a human operator; traversing around a block may take 10-20 minutes. Even operator assistance using video does not significantly increase the speed, due to the low perspective of the camera view and camera vibration in outdoor settings. To increase the speed capability of a small rover, a paradigm shift is proposed. Before a mission in an urban environment, the rover system is equipped with available pre-mission data, including overhead images and roadmaps of the mission area. The features of this system include: high-speed rover operation; an intuitive operator interface for tele-operation; night-time operation; operation through obscurants (such as smoke screens); GPS useful but not necessary; low-bandwidth communication; and long-range reconnaissance in urban environments.
As part of a project for the Defense Advanced Research Projects Agency, Sandia National Laboratories is developing and testing the feasibility of using a cooperative team of robotic sentry vehicles to guard a perimeter and to perform surround and diversion tasks. This paper describes ongoing activities in the development of these robotic sentry vehicles. To date, we have developed a robotic perimeter detection system which consists of eight 'Roving All Terrain Lunar Explorer Rover' (RATLER™) vehicles, a laptop-based base station, and several Miniature Intrusion Detection Sensors (MIDS). A radio frequency receiver on each of the RATLER vehicles alerts the sentry vehicles of alarms from the hidden MIDS. When an alarm is received, each vehicle decides whether it should investigate the alarm based on the proximity of itself and the other vehicles to the alarm. As one vehicle attends to an alarm, the other vehicles adjust their positions around the perimeter to better prepare for another alarm. We have also demonstrated the ability to drive multiple vehicles in formation via tele-operation or by waypoint GPS navigation. This is currently being extended to include mission planning capabilities. At the base station, the operator can draw on an aerial map the goal regions to be surrounded and the repulsive regions to be avoided. A potential field path planner automatically generates a path from the vehicles' current positions to the goal regions while avoiding the repulsive regions and the other vehicles. This path is previewed to the operator before the regions are downloaded to the vehicles. The same potential field path planner resides on each vehicle, except that additional repulsive forces from on-board proximity sensors guide the vehicle away from unplanned obstacles.
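A toy version of such a potential field step, with attraction toward the nearest goal region and inverse-square-style repulsion from repulsive regions or neighboring vehicles; the gains and ranges are illustrative assumptions, not the fielded planner.

```python
import numpy as np

def potential_step(pos, goals, repulsors, step=0.5,
                   k_att=1.0, k_rep=4.0, rep_range=5.0):
    """One gradient step of a simple potential field: attraction toward the
    nearest goal point, repulsion from repulsive points (regions, vehicles)."""
    pos = np.asarray(pos, dtype=float)
    goal = min(goals, key=lambda g: np.linalg.norm(pos - g))
    force = k_att * (np.asarray(goal) - pos)
    for r in repulsors:
        d = pos - np.asarray(r)
        dist = np.linalg.norm(d)
        if 1e-6 < dist < rep_range:
            force += k_rep * d / dist**3     # inverse-square-style repulsion
    norm = np.linalg.norm(force)
    return pos + step * force / norm if norm > 1e-9 else pos
```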
This paper discusses ongoing research at the U.S. Army Research Laboratory that investigates the feasibility of developing a collaboration architecture between small physical agents and a mother ship. This includes the distribution of planning, perception, mobility, processing, and communications requirements between the mother ship and the agents. Small physical agents of the future will be virtually everywhere on the battlefield of the 21st century. A mother ship that is coupled to a team of small collaborating physical agents (conducting tasks such as Reconnaissance, Surveillance, and Target Acquisition (RSTA); logistics; sentry duty; and communications relay) will be used to build a completely effective and mission-capable intelligent system. The mother ship must have long-range mobility to deploy the small, highly maneuverable agents that will operate in urban environments and more localized areas, and it acts as a logistics base for the smaller agents. The mother ship also establishes a robust communications network between the agents and is the primary information disseminating and receiving point to the external world. Because of its global knowledge and processing power, the mother ship does the high-level control and planning for the collaborative physical agents. This high-level control and interaction between the mother ship and its agents (including inter-agent collaboration) will be based on a software agent architecture. The mother ship incorporates multi-resolution battlefield visualization and analysis technology, which aids in mission planning and sensor fusion.
The Idaho National Engineering and Environmental Laboratory (INEEL) and Utah State University's Center for Self-Organizing and Intelligent Systems have developed a team of autonomous robotic vehicles. This paper discusses the development of a strategy that uses a sophisticated, highly intelligent sensor platform to allow centralized coordination between smaller, inexpensive robots. The three components of the multi-agent cooperative scheme are small-scale robots, large-scale robots, and a central control station running mission- and path-planning software. The smaller robots are used for activities where the probability of loss increases, such as Unexploded Ordnance (UXO) or mine detonation. The research is aimed at building simple, inexpensive multi-agent vehicles and an intelligent navigation and multi-vehicle coordination system suitable for UXO, environmental remediation, or mine detection. These simplified robots are capable of conducting hunting missions using low-cost positioning sensors and intelligent algorithms. Additionally, a larger, sensor-rich intelligent system capable of transporting smaller units to outlying remote sites has been developed. The larger system interfaces to the central control station and provides navigation assistance to multiple low-cost vehicles. Finally, the mission- and path-planning software serves as the operator control unit, allowing central data collection, map creation and tracking, and an interface to the larger system as well as to each smaller unit. The power of this scheme is the ability to scale to the appropriate level for the complexity of the mission.
The mission and payload package acutely control the design of small robotic systems. Too often there is an emphasis on the vehicle technology with little thought to mission capability under field conditions. To meet a broad class of mission needs, Foster-Miller has developed a basic robotic family, referred to as Lemmings, which is highly scalable. The Lemmings family of portable robots can accommodate a wide variety of payloads and control systems, from pocket-sized systems to desk-sized workhorses; all are based on the same drive and control principles. This paper gives examples of some of the missions, ranging from underwater to dry-land applications, and discusses how the missions have affected overall system design.
Battlefield situation awareness is the most fundamental prerequisite for effective command and control. Information about the state of the battlefield must be both timely and accurate. Imagery data is of particular importance because it can be directly used to monitor the deployment of enemy forces in a given area of interest, the traversability of the terrain in that area, as well as many other variables that are critical for tactical and force level planning. In this paper we describe prototype REmote Battlefield Observer Technology (REBOT) that can be deployed at specified locations and subsequently tasked to transmit high resolution panoramic imagery of its surrounding area. Although first generation REBOTs will be stationary platforms, the next generation will be autonomous ground vehicles capable of transporting themselves to specified locations. We argue that REBOT fills a critical gap in present situation awareness technologies. We expect to provide results of REBOT tests to be conducted at the 1999 Marines Advanced Warfighting Demonstration.
This paper presents a path planning algorithm that is part of the STESCA control architecture for autonomous vehicles. The path planning algorithm models an autonomous vehicle's path as a series of line segments in Cartesian space and compares each line segment to a list of known obstacles and hazardous areas to determine if any collisions or hindrances exist. In the event of a detected collision, the algorithm selects a point outside the obstacle or hazardous area, generates two new path segments that avoid the obstruction, and recursively checks the new segments for other collisions. Once underway, if the autonomous vehicle encounters previously unknown obstacles or hazardous areas, the path planner operates in a run-time mode that decides whether to re-route the path around the obstacle or to abort. This paper describes the path planner along with examples of path planning in a two-dimensional environment with a wheeled land-based robotic vehicle.
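A minimal sketch of the recursive scheme described above, assuming circular obstacle models: when a segment intersects an obstacle, a detour point just outside it is inserted and both new segments are checked recursively. The names, the circle model, and the detour heuristic are illustrative assumptions, not the STESCA implementation.

```python
import math

def segment_hits(p, q, obstacle):
    """True if segment p->q passes within an obstacle's radius (circle model)."""
    (cx, cy, r) = obstacle
    (x0, y0), (x1, y1) = p, q
    dx, dy = x1 - x0, y1 - y0
    if dx == dy == 0:
        return math.hypot(x0 - cx, y0 - cy) <= r
    t = max(0.0, min(1.0, ((cx - x0) * dx + (cy - y0) * dy) / (dx * dx + dy * dy)))
    return math.hypot(x0 + t * dx - cx, y0 + t * dy - cy) <= r

def plan(p, q, obstacles, margin=1.0, depth=0):
    """Recursively split a straight path around the first obstacle it hits."""
    if depth > 20:
        return None                          # give up rather than recurse forever
    for obs in obstacles:
        if segment_hits(p, q, obs):
            cx, cy, r = obs
            # Detour point: perpendicular to the segment, just outside the obstacle.
            dx, dy = q[0] - p[0], q[1] - p[1]
            norm = math.hypot(dx, dy) or 1.0
            via = (cx - dy / norm * (r + margin), cy + dx / norm * (r + margin))
            first = plan(p, via, obstacles, margin, depth + 1)
            second = plan(via, q, obstacles, margin, depth + 1)
            return None if first is None or second is None else first[:-1] + second
    return [p, q]

# Toy example: one circular obstacle between start and goal.
print(plan((0, 0), (10, 0), [(5, 0, 1)]))
```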
The Strategic-Tactical-Execution Software Control Architecture (STESCA) is a tri-level approach to controlling autonomous vehicles. Using an object-oriented approach, STESCA has been developed as a generalization of the Rational Behavior Model (RBM). STESCA was initially implemented for the Phoenix Autonomous Underwater Vehicle (Naval Postgraduate School, Monterey, CA) and is currently being implemented for the Pioneer AT land-based wheeled vehicle. The goals of STESCA are twofold: first, to create a generic framework that simplifies the process of creating a software control architecture for autonomous vehicles of any type; and second, to allow mission specification by 'anyone' with minimal training to control the overall vehicle functionality. This paper describes the prototype implementation of STESCA for the Pioneer AT.
The control of an Autonomous Land Vehicle (ALV) is a typical nonlinear control problem. Because traffic and road conditions are complex, ALV control is complex and uncertain, and it becomes even more difficult when the ALV operates in normal traffic. A promising idea is to let the ALV control system imitate a human driver by incorporating a machine-learning system, so that while the control system drives the ALV along the road it can learn from the experience of a human driver according to the state of the traffic and the road. In other words, the control system of the ALV is able to become more and more 'clever.' This is, of course, a challenging and very difficult task. This paper analyzes the principles of a machine-learning system for an ALV and discusses an engineering method for developing such a system for the case of an ALV driving along a highway in normal traffic. The paper proposes the organizational levels of the machine-learning system and a method of fusing human intelligence into machine intelligence. It also introduces our preliminary research on a machine-learning system for the ALV, an online machine-learning system that operates without a teacher.