The U.S. Department of Defense has initiated plans to deploy autonomous robotic vehicles in various tactical military operations starting in about seven years. Most of these missions will require the vehicles to drive autonomously over open terrain and on roads that may contain traffic, obstacles, and military personnel as well as pedestrians. Unmanned Ground Vehicles (UGVs) must therefore be able to detect, recognize, and track objects and terrain features in very cluttered environments. Although several LADAR sensors exist today that have been successfully implemented and demonstrated to provide somewhat reliable obstacle detection and can be used for path planning and selection, they tend to be limited in performance, are affected by obscurants, and are quite large and expensive. In addition, even though considerable effort and funding has been provided by the DOD R&D community, nearly all of the development has been for target detection (ATR) and tracking from various flying platforms. Participation in the Army- and DARPA-sponsored UGV programs has helped NIST to identify requirement specifications for LADAR to be used for on- and off-road autonomous driving. This paper describes the expected requirements for a next-generation LADAR for driving UGVs and presents an overview of proposed LADAR design concepts, along with a status report on current developments in scannerless Focal Plane Array (FPA) LADAR and advanced scanning LADAR that may be able to achieve the stated requirements. Examples of real-time range images taken with existing LADAR prototypes will be presented.
In order for an unmanned aerial vehicle (UAV) to safely fly close to the ground, it must be capable of detecting and avoiding obstacles in its flight path. From a single camera on the UAV, the 3D structure of its surrounding environment, including any obstacles, can be estimated from motion parallax using a technique called structure from motion. Most structure from motion algorithms attempt to reconstruct the 3D structure of the environment from a single optical flow value at each feature point. However, due to the effects of image noise and the aperture problem, it may be impossible to accurately calculate a single optical flow value at each feature point; instead, we may only be able to calculate a set of likely optical flow values and their associated probabilities, i.e., an optical flow probability distribution. We present a novel, more robust method for calculating structure from motion that uses this probability distribution rather than requiring a precise optical flow calculation at each feature point. This method is being developed for use on a UAV to detect obstacles, but it can be used on any vehicle where obstacle avoidance is needed.
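The core idea, marginalizing depth over a flow distribution instead of committing to a single flow value, can be sketched as follows. This is an illustrative sketch, not the paper's algorithm: the function name, the pure-forward-translation assumption, and the simplified parallax relation Z = v*r/u (speed v, image distance r from the focus of expansion, flow magnitude u) are all introduced here for illustration.

```python
# Illustrative sketch: depth from a distribution of candidate optical flow
# values rather than a single flow estimate per feature (hypothetical).
# Assumes pure forward translation with known speed `v`; for a feature at
# image radius `r` from the focus of expansion with flow magnitude `u`,
# depth is taken as Z = v * r / u (simplified motion-parallax relation).

def expected_depth(candidates, v, r):
    """candidates: list of (flow_magnitude, probability) pairs."""
    total_p = sum(p for _, p in candidates)
    # Expectation of depth over the flow distribution, skipping zero flows.
    return sum((v * r / u) * (p / total_p) for u, p in candidates if u > 0)
```

With two equally likely flow hypotheses the estimate averages the depths they imply, rather than forcing an early commitment to one flow value.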
Recent advances in many multi-discipline technologies have made small, low-cost fixed-wing unmanned air vehicles (UAVs) and more complicated unmanned ground vehicles (UGVs) a feasible solution in many scientific, civil, and military applications. Cameras can be mounted on board the unmanned vehicles for scientific data gathering and surveillance for law enforcement and homeland security, as well as to provide the visual information needed to detect and avoid imminent collisions during autonomous navigation. However, most current computer vision algorithms are computationally complex and usually constitute the bottleneck of the guidance and control loop. In this paper, we present a novel computer vision algorithm for collision detection and time-to-impact calculation based on feature density distribution (FDD) analysis. It does not require accurate feature extraction, tracking, or estimation of the focus of expansion (FOE). Under a few reasonable assumptions, time-to-impact can be accurately estimated by calculating the expansion rate of the FDD in space. A sequence of monocular images is studied, and different features are used simultaneously in FDD analysis to show that our algorithm achieves fairly good accuracy in collision detection. We also discuss reactive path planning and trajectory generation techniques that can be accomplished without violating the velocity and heading-rate constraints of the UAV.
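The link between an expansion rate and time-to-impact can be illustrated with a minimal sketch, which is not the paper's FDD algorithm: if the apparent extent of a feature cluster grows from w1 to w2 over dt seconds at constant closing speed, then tau = dt / (w2/w1 - 1). The function below is a hypothetical illustration of that relation.

```python
# Illustrative sketch (hypothetical): time-to-impact from an expansion rate.
# If the apparent width of a feature cluster grows from w1 to w2 over dt
# seconds, the scale ratio gives tau = dt / (w2/w1 - 1), assuming a
# constant closing speed toward the obstacle.

def time_to_impact(w1, w2, dt):
    ratio = w2 / w1
    if ratio <= 1.0:
        return float("inf")  # not expanding: no imminent collision
    return dt / (ratio - 1.0)
```

A cluster that expands by 10% in 0.1 s implies impact in roughly one second; a shrinking or static cluster poses no imminent threat.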
To be completely successful, robots need reliable perceptual systems that are similar to human vision. It is hard to use geometric operations for processing natural images. Instead, the brain builds a relational network-symbolic structure of the visual scene, using different clues to set up the relational order of surfaces and objects with respect to the observer and to each other. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image is converted from a “raster” into a “vector” representation. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is the subject of recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views. Once built, the model of the visual scene changes more slowly than the local information in the visual buffer. This allows for disambiguating visual information and effective control of actions and navigation via incremental relational changes in the visual buffer. Network-Symbolic models can be seamlessly integrated into the NIST 4D/RCS architecture to better interpret images and video for situation awareness, target recognition, navigation, and actions.
More and more robotic applications are equipping robots with microphones to improve the sensory information available to them. However, in most applications the auditory task is very low-level, only processing data and providing auditory event information to higher-level navigation routines. If the robot, and therefore the microphone, ends up in a bad acoustic location, then the results from that sensor will remain noisy and potentially useless for accomplishing the required task. There are at least two possible solutions to this problem. The first is to provide bigger and more complex filters, which is the traditional signal processing approach. The alternative is to move the robot to a location that provides better audition. In this work, the second approach is followed by introducing noise maps as a tool for acoustically sensitive navigation. A noise map is a guide to noise in the environment, pinpointing locations that would most likely interfere with auditory sensing. A traditional noise map, in the acoustic sense, is a graphical display of the average sound pressure level at any given location. An area with a high sound pressure level corresponds to high ambient noise that could interfere with an auditory application. Such maps can be created either by hand or by allowing the robot to first explore the environment. Converted into a potential field, a noise map then becomes a useful tool for reducing the interference from ambient noise. Preliminary results with a real robot on the creation and use of noise maps are presented.
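A minimal sketch of how a noise map might be folded into navigation as a repulsive cost follows. The grid representation, the Manhattan-distance goal term, and the noise_weight parameter are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (hypothetical): greedy navigation over a noise map.
# `noise_map` is a 2-D grid of average sound pressure levels (dB); the
# robot steps to the neighboring cell with the lowest combined cost of
# distance-to-goal and ambient noise, so loud regions act as a repulsive
# potential.

def next_cell(noise_map, pos, goal, noise_weight=0.1):
    rows, cols = len(noise_map), len(noise_map[0])
    best, best_cost = pos, float("inf")
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = pos[0] + dr, pos[1] + dc
        if 0 <= r < rows and 0 <= c < cols:
            dist = abs(goal[0] - r) + abs(goal[1] - c)  # Manhattan distance
            cost = dist + noise_weight * noise_map[r][c]
            if cost < best_cost:
                best, best_cost = (r, c), cost
    return best
```

Given two equally short routes, the robot detours around the cell with the high sound pressure level, which is the behavior an acoustically sensitive planner wants.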
The Real-time Control System (RCS) Methodology has evolved over a number of years as a technique to capture task knowledge and organize it into a framework conducive to implementation in computer control systems. The fundamental premise of this methodology is that the present state of the task activities sets the context that identifies the requirements for all of the support processing. In particular, the task context at any time determines what is to be sensed in the world, what world model states are to be evaluated, which situations are to be analyzed, what plans should be invoked, and which behavior generation knowledge is to be accessed. This methodology concentrates on the task behaviors explored through scenario examples to define a task decomposition tree that clearly represents the branching of tasks into layers of simpler and simpler subtask activities. A named branching condition/situation is identified for every fork of this task tree. These become the input conditions of the if-then rules of the knowledge set that define how the task is to respond to input state changes. Detailed analysis of each branching condition/situation is used to identify antecedent world states, and these, in turn, are further analyzed to identify all of the entities, objects, and attributes that have to be sensed to determine whether any of these world states exist. This paper explores the use of this 4D/RCS methodology in some detail for the particular task of autonomous on-road driving, work that was funded under the Defense Advanced Research Projects Agency (DARPA) Mobile Autonomous Robot Software (MARS) effort (Doug Gage, Program Manager).
Sensory processing for real-time, complex, and intelligent control systems is costly, so it is important to perform only the sensory processing required by the task. In this paper, we describe a straightforward metric for precisely defining sensory processing requirements. We then apply that metric to a complex, real-world control problem, autonomous on-road driving. To determine these requirements the system designer must precisely and completely define 1) the system behaviors, 2) the world model situations that the system behaviors require, 3) the world model entities needed to generate all those situations, and 4) the resolutions, accuracy tolerances, detection timing, and detection distances required of all world model entities.
This paper presents a cost-based adaptive planning agent operating at the route-segment level of a deliberative hierarchical planning system for autonomous road driving. At this level, the planning agent is responsible for developing fundamental driving maneuvers that allow a vehicle to travel safely amongst moving and stationary objects. This is facilitated through the use of an incrementally expanded planning graph that provides the ability to implement a dynamic cost function. This cost function varies to comply with particular road, regional, or event-driven situations and, when coupled with the incremental graph expansion, allows the agent to implement hard and soft system constraints.
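How a dynamic cost function might combine hard and soft constraints can be sketched as follows. The maneuver and situation fields, and the weights, are hypothetical, chosen only to illustrate the idea of infinite cost for hard constraints and situation-dependent penalties for soft ones.

```python
# Illustrative sketch (hypothetical): a dynamic edge cost for a planning
# graph. A hard constraint makes a maneuver infeasible (infinite cost);
# soft constraints add penalties whose weights vary with the road,
# regional, or event-driven situation.

def edge_cost(maneuver, situation):
    # Hard constraint: never plan into a blocked lane segment.
    if maneuver["target_lane"] in situation["blocked_lanes"]:
        return float("inf")
    cost = maneuver["length"]
    # Soft constraints: situation-dependent penalties.
    if maneuver["lane_change"]:
        cost += situation.get("lane_change_penalty", 5.0)
    if situation.get("school_zone"):
        cost += 2.0 * maneuver["speed"]
    return cost
```

Because the situation dictionary is reevaluated as the graph is incrementally expanded, the same maneuver can carry different costs in different contexts, which is the adaptive behavior the abstract describes.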
In this paper, we present the PRIDE framework (Prediction In Dynamic Environments), a hierarchical multi-resolutional approach for moving object prediction that incorporates multiple prediction algorithms into a single, unifying framework. PRIDE is based upon the 4D/RCS (Real-time Control System) architecture and provides information to planners at the level of granularity that is appropriate for their planning horizon. The lower levels of the framework utilize estimation-theoretic short-term predictions based upon an extended Kalman filter that provides predictions and associated uncertainty measures. The upper levels utilize a probabilistic prediction approach based upon situation recognition with an underlying cost model that provides predictions incorporating environmental information and constraints. These predictions are made at lower frequencies and at a level of resolution more in line with the needs of higher-level planners. PRIDE is run in the system’s world model independently of the planner and the control system. The results of the prediction are made available to a planner to allow it to make accurate plans in dynamic environments. We have applied this approach to an on-road driving control hierarchy being developed as part of the DARPA Mobile Autonomous Robotic Systems (MARS) effort.
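The flavor of the lower-level prediction can be shown with a toy constant-velocity prediction step. This is a simplification of an extended Kalman filter's predict stage (a real EKF propagates a full covariance matrix as F P F^T + Q), and all names here are assumptions for illustration.

```python
# Illustrative sketch (hypothetical): short-term prediction of a moving
# object with state [x, y, vx, vy], propagated forward by dt seconds.
# A scalar stands in for the covariance to show uncertainty growing with
# the prediction horizon; PRIDE's actual EKF models are richer.

def predict(state, cov, dt, q=0.1):
    x, y, vx, vy = state
    new_state = (x + vx * dt, y + vy * dt, vx, vy)
    # Process noise q inflates uncertainty as the horizon lengthens.
    new_cov = cov + q * dt
    return new_state, new_cov
```

Repeated calls give predictions farther into the future with correspondingly larger uncertainty, which is what lets a planner weigh near-term predictions more heavily than distant ones.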
We describe a methodology for evaluating algorithms to provide quantitative information about how well road detection and road following algorithms perform. The approach relies on generating a set of standard data sets annotated with ground truth. We evaluate road detection algorithms by comparing their output with ground truth, which we obtain by having humans annotate the data sets used to test the algorithms. Ground truth annotations are acquired from more than one person to reduce systematic errors. Results are quantified by examining the false positive and false negative regions of the image sequences when compared with the ground truth. We describe the evaluation of a number of variants of a road detection system based on neural networks.
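The per-image scoring against ground truth can be sketched as follows. The mask representation and the rate definitions are illustrative assumptions, not necessarily the authors' exact metric.

```python
# Illustrative sketch (hypothetical): compare a detected road mask with a
# ground-truth mask and report false-positive / false-negative rates.

def score(detected, truth):
    """detected, truth: same-size 2-D boolean grids (True = road)."""
    fp = fn = road = non_road = 0
    for det_row, gt_row in zip(detected, truth):
        for d, g in zip(det_row, gt_row):
            if g:
                road += 1
                if not d:
                    fn += 1   # road pixel missed by the detector
            else:
                non_road += 1
                if d:
                    fp += 1   # non-road pixel flagged as road
    return {"false_pos_rate": fp / max(non_road, 1),
            "false_neg_rate": fn / max(road, 1)}
```

Averaging these rates over an image sequence, and over annotations from several people, gives the kind of quantitative comparison the methodology calls for.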
This paper describes NIST’s efforts to evaluate what it will take, in terms of time and funding, to achieve autonomous human-level driving skills. NIST has approached this problem from several perspectives: considering the current state of the art in autonomous navigation and extrapolating from there, decomposing the tasks identified by the Department of Transportation for on-road driving and comparing that with accomplishments to date, analyzing computing power requirements by comparison with the human brain, and conducting a Delphi Forecast using expert researchers in the field of autonomous driving. A detailed description of each of these approaches is provided, along with the major findings from each approach and an overall picture of what it will take to achieve human-level driving skills in autonomous vehicles.
In this paper we illustrate the benefits of a strongly typed software design framework, explore the difficulties in applying such a system in a distributed sensing setting, and describe the extended type system we have created and fielded in multi-robot sensing experiments. While the built-in type systems of modern object-oriented languages provide much of the functionality we desire, there is a level of synchronization required by both static and dynamic linking that limits the applicability of such a system to a scalable distributed sensing and computing platform. We show how the limitations of these robust strong type systems can be overcome, allowing one to bring their power to bear on distributed sensing. By adhering to a formal, well-supported type system, our framework offers a scalable approach to dynamic resource discovery and exploitation. A natural consequence of the platform’s design is a higher level design system for building multi-agent programs that itself enforces type safety when pairing data sources and sinks, both when the distributed task is being launched and as the task dynamically reconfigures itself to exploit new resources.
Fielded mobile robot systems will inevitably suffer hardware and software failures. Failures in a single subsystem can often disable the entire robot, especially if the controlling application does not consider such failures. Often simple measures, such as a software restart or the use of a secondary sensor, can solve the problem. However, these fixes must generally be applied by a human expert, who might not be present in the field. In this paper, we describe a recovery-oriented framework for mobile robot applications which addresses this problem in two ways. First, fault isolation automatically provides graceful degradation of the overall system as individual software and hardware components fail. In addition, subsystems are monitored for known failure modes or aberrant behavior. The framework responds to detected or imminent failures by restarting or replacing the suspect component in a manner transparent to the application programmer and the robot's operator.
We address the development of a local bus architecture for robot systems that facilitates modular development and increases the reliability of systems composed of heterogeneous sensors and actuators. The communications bus is based on the Controller Area Network (CAN) and supports distributed processing in physically separate nodes. Modular cabling and a modular software interface facilitate assembly and modification, and all bus communication is browsable for configuration and troubleshooting. We demonstrate two implementations of this system, and discuss its performance and capabilities compared to alternate communication architectures, with specific emphasis on mobile robots.
Intelligent behaviors allow a convoy of small indoor robots to perform high-level mission tasking. These behaviors include various implementations of map building, localization, obstacle avoidance, object recognition, and navigation. Several behaviors have been developed by SSC San Diego, with integration of other behaviors developed by open-source projects and a technology transfer effort funded by DARPA. The test system, developed by SSC San Diego, consists of ROBART III (a prototype security robot), serving as the master platform, and a convoy of four ActivMedia Pioneer 2-DX robots. Each robot, including ROBART III, is equipped with a SICK LMS 200 laser rangefinder. Using integrated wireless network repeaters, the Pioneer 2-DX robots maintain an ad hoc communication link between the operator and ROBART III. The Pioneer 2-DX robots can also act as rear guards to detect intruders in areas that ROBART III has previously explored. These intelligent behaviors allow a single operator to command the entire convoy of robots during a mission in an unknown environment.
This work is concerned with software and communications architectures that can facilitate the operation of several mobile robots. The vehicles can be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link carries control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an on-board camera. For autonomous driving, the link carries commands and data between the vehicles. For this purpose we have developed a hardware platform consisting of a powerful microprocessor, various sensors, a stereo camera, and a Wireless Local Area Network (WLAN) interface for communication. The adoption of the IEEE 802.11 standard for the physical and access layer protocols allows straightforward integration with the TCP/IP internet protocols. For inspection of the environment, the robots are equipped with a wide variety of sensors, such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. Stereo cameras make it feasible to detect obstacles, measure distances, and create a map of the room.
Air and ground vehicles exhibit complementary capabilities and characteristics as robotic sensor platforms. Fixed-wing aircraft offer a broad field of view and rapid coverage of search areas. However, minimum operating airspeed and altitude limits, combined with attitude uncertainty, place a lower limit on their ability to detect and localize ground features. Ground vehicles, on the other hand, offer high-resolution sensing over relatively short ranges, with the disadvantage of slow coverage. This paper presents a decentralized architecture and solution methodology for seamlessly realizing the collaborative potential of air and ground robotic sensor platforms. We provide a framework based on an established approach to the underlying sensor fusion problem, which provides transparent integration of information from heterogeneous sources. An information-theoretic utility measure captures the task objective and robot inter-dependencies. A simple distributed solution mechanism is employed to determine team member sensing trajectories subject to the constraints of individual vehicle and sensor sub-systems. The architecture is applied to a mission involving searching for and localizing an unknown number of targets in a user-specified search area. Results for a team of two fixed-wing UAVs and two all-terrain UGVs equipped with vision sensors are presented.
We describe a framework for multi-vehicle control that explicitly incorporates the state of the communication network and the constraints imposed by specifications on the quality of the communications links available to each robot. In a multi-robot ad hoc setting, guaranteed communications are essential for cooperative behavior. We propose a control methodology that ensures local connectivity in multi-robot navigation. Specifically, given an initial and final configuration of robots in which the quality of each communication link is above some specified threshold, we synthesize controllers that guarantee each robot reaches its goal destination while maintaining the quality of the communication links above the given threshold. For the sake of simplicity, we assume each robot has a pre-assigned "base unit" with which the robot tries to maintain connectivity while performing the assigned task. The proposed control methodology allows the robot's velocity to align with the tangent of a critical communication surface, so that the robot can move along the surface. No assumptions are made regarding the critical surface, which might be arbitrarily complex for cluttered urban environments. The stability of this technique is shown, and three-dimensional simulations with a small team of robots are presented. The paper demonstrates the performance of the control scheme in various three-dimensional settings, with proofs of guarantees in simple scenarios.
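The tangent-alignment idea can be sketched as a projection: near the critical surface, the component of the desired velocity that would carry the robot across the surface is removed, so the robot slides along it instead. The sign convention (normal pointing toward better connectivity) and the function shape are assumptions for illustration, not the paper's controller.

```python
# Illustrative sketch (hypothetical): velocity projection near the critical
# communication surface. `surface_normal` is a unit vector pointing toward
# improving link quality; when the desired velocity would degrade the link
# (negative component along the normal), that component is projected out so
# the robot moves tangent to the surface.

def constrained_velocity(v_desired, surface_normal, near_surface):
    if not near_surface:
        return v_desired
    dot = sum(a * b for a, b in zip(v_desired, surface_normal))
    if dot >= 0:           # moving toward better connectivity is allowed
        return v_desired
    # Remove the normal component: v - (v . n) n
    return tuple(v - dot * n for v, n in zip(v_desired, surface_normal))
```

Away from the surface the robot tracks its goal freely; at the surface, only the tangential part of the goal-directed velocity survives, which is what keeps the link quality above threshold.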
A long-held dream for robotics researchers is the creation of vehicles that can move to a goal without human supervision, adapting as required to changing circumstances. While today’s ground robots are still far from achieving such complete autonomy, substantial progress has been attained. In this paper we describe the state-of-the-art in autonomous ground vehicle navigation as observed in the recently completed DARPA PerceptOR program, and we suggest new research directions where we see opportunities for leaps in performance.
The Segway Robotic Mobility Platform (RMP) is a new mobile robotic platform based on the self-balancing Segway Human Transporter (HT). The Segway RMP is faster, cheaper, and more agile than existing comparable platforms. It is also rugged, has a small footprint and a zero turning radius, and yet can carry a greater payload. The new geometry of the platform presents researchers with an opportunity to examine novel topics, including people-height sensing and actuation modalities. This paper describes the history and development of the platform, its characteristics, and a summary of current research projects involving the platform at various institutions across the United States.
Existing maritime navigation and reconnaissance systems require man-in-the-loop situation awareness for obstacle avoidance, area survey analysis, threat assessment, and mission re-planning. We have developed a boat with fully autonomous navigation, surveillance, and reactive behaviors. Autonomous water navigation is achieved with no prior maps or other data: the water surface, riverbanks, obstacles, movers, and salient objects are discovered and mapped in real time using a circular array of cameras along with a self-directed pan-tilt camera. The autonomous boat has been tested in harbor and river domains. Results of the detection, tracking, mapping, and navigation will be presented.
Small unmanned aerial vehicles (UAVs) are hindered by their limited payload and duration. Consequently, UAVs spend little time in their area of operation, returning frequently to base for refueling. The effective payload and duration of small UAVs can be increased by moving the support base closer to the operating area; however, this increases risk to personnel. Performing the refueling operations autonomously allows the support base to be located closer to the operating area without increasing risk to personnel. Engineers at SPAWAR Systems Center San Diego (SSC San Diego) are working to develop technologies for automated launch, recovery, refueling, rearming, and re-launching of small UAVs. These technologies are intended to provide forward-refueling capabilities by teaming small UAVs with large unmanned ground vehicles (UGVs). The UGVs have larger payload capacities, so they can easily carry fuel for the UAVs in addition to their own fuel and mission payloads. This paper describes a prototype system that launched and recovered a remotely piloted UAV from a UGV and performed automated refueling of a UAV mockup.
In the area of logistics, there currently is a capability gap between the one-ton Army robotic Multifunction Utility/Logistics and Equipment (MULE) vehicle and a soldier’s backpack. The Unmanned Systems Branch at Space and Naval Warfare Systems Center (SPAWAR Systems Center, or SSC), San Diego, with the assistance of a group of interns from nearby High Tech High School, has demonstrated enabling technologies for a solution that fills this gap. A small robotic transport system has been developed based on the Segway Robotic Mobility Platform (RMP). We have demonstrated teleoperated control of this robotic transport system, and conducted two demonstrations of autonomous behaviors. Both demonstrations involved a robotic transporter following a human leader. In the first demonstration, the transporter used a vision system running a continuously adaptive mean-shift filter to track and follow a human. In the second demonstration, the separation between leader and follower was significantly increased using Global Positioning System (GPS) information. The track of the human leader, with a GPS unit in his backpack, was sent wirelessly to the transporter, also equipped with a GPS unit. The robotic transporter traced the path of the human leader by following these GPS breadcrumbs. We have additionally demonstrated a robotic medical patient transport capability by using the Segway RMP to power a mock-up of the Life Support for Trauma and Transport (LSTAT) patient care platform, on a standard NATO litter carrier. This paper describes the development of our demonstration robotic transport system and the various experiments conducted.
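The GPS "breadcrumb" following described above reduces to a simple loop: the follower keeps a queue of the leader's transmitted fixes, discards any crumb it has already reached, and steers toward the oldest remaining one. The sketch below illustrates that logic under stated assumptions; the function names, the 2 m "reached" threshold, and the equirectangular distance approximation are illustrative choices, not details from the SSC San Diego system.

```python
import math

def dist_bearing(lat1, lon1, lat2, lon2):
    """Equirectangular distance (m) and bearing (rad) between two GPS fixes.

    Accurate enough over the short leader-follower separations involved.
    """
    R = 6371000.0  # mean Earth radius, meters
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    return R * math.hypot(dlat, dlon), math.atan2(dlon, dlat)

def next_command(pos, crumbs, reach=2.0):
    """Drop breadcrumbs already reached, then steer toward the oldest one left.

    `pos` is the follower's (lat, lon); `crumbs` is the leader's trail,
    oldest first. Returns None once the trail is fully consumed.
    """
    while crumbs and dist_bearing(*pos, *crumbs[0])[0] < reach:
        crumbs.pop(0)
    if not crumbs:
        return None
    d, b = dist_bearing(*pos, *crumbs[0])
    return {"range_m": round(d, 1), "bearing_deg": round(math.degrees(b), 1)}
```

Following the trail rather than the leader's current position is what lets the separation grow: the follower retraces a path the leader has already proven traversable.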
One of the major problems with any robotic vehicle is inefficient use of available power. This research explores in detail the locomotion, power dynamics, and performance of a skid-steered robotic vehicle and develops techniques to derive efficient design parameters for the vehicle, achieving optimal performance by minimizing power losses and consumption. Three categories of design variables describe the vehicle and its dynamics: variables that describe the vehicle, variables that describe the surface on which it runs, and variables that describe the vehicle's motion. The two major components of the vehicle's power consumption are losses in skid-steer turning and losses in rolling. Our focus is on skid steering; we present a detailed analysis of skid steering for different turning modes: elastic-mode steering, half-slip steering, skid turns, low-radius turns, and zero-radius turns. Each of the power-loss components is modeled from physics in terms of the design variables. The effect of the design variables on total power consumption is then studied using simulated data for different types of surfaces, i.e., hard surfaces and muddy surfaces. Finally, we make suggestions about efficient vehicle design choices in terms of the design variables.
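The two loss components can be sketched with a deliberately simplified physics model, assuming a flat, uniform surface and uniform weight distribution; the `m*g*mu_lat*L/4` turning-resistance term is the textbook skid-steer approximation, not the paper's detailed per-mode analysis, and all parameter values below are illustrative.

```python
G = 9.81  # gravitational acceleration, m/s^2

def skid_steer_power(m, L, v, omega, mu_lat, f_roll):
    """Crude power budget (W) for a skid-steered vehicle.

    p_roll: rolling-resistance loss, proportional to forward speed v.
    p_skid: lateral-friction loss from dragging the wheels/tracks sideways
            while turning at yaw rate omega; mu_lat*m*G*L/4 is the classic
            turning-resistance approximation for a vehicle of length L.
    """
    p_roll = f_roll * m * G * abs(v)
    p_skid = mu_lat * m * G * (L / 4.0) * abs(omega)
    return p_roll + p_skid

# Zero-radius turn vs. straight-line rolling for a 50 kg vehicle on a hard surface:
turn = skid_steer_power(m=50, L=0.6, v=0.0, omega=1.0, mu_lat=0.7, f_roll=0.02)
roll = skid_steer_power(m=50, L=0.6, v=1.0, omega=0.0, mu_lat=0.7, f_roll=0.02)
```

Even this toy model reproduces the paper's motivation: the skid term dominates the rolling term by a wide margin, which is why the turning modes deserve the detailed treatment.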
In addition to the challenges of equipping a mobile robot with the appropriate sensors, actuators, and processing electronics necessary to perform some useful function, there is the equally important challenge of effectively controlling the system's desired actions. This need is particularly critical if the intent is to operate in conjunction with human forces in a military application, as any low-level distractions can seriously reduce a warfighter's chances of survival in hostile environments. Historically, there has been a definite trend toward making the robot smarter in order to reduce the control burden on the operator; yet while much progress has been made in laboratory prototypes, all equipment deployed in theater to date has been strictly teleoperated. There is a definite tradeoff between the value added by the robot, in terms of how it contributes to the performance of the mission, and the loss of effectiveness associated with the operator control unit. From a command-and-control perspective, the ultimate goal would be to eliminate the need for a separate robot controller altogether, since it represents an unwanted burden and potential liability from the operator's perspective. This paper introduces the long-term concept of a supervised autonomous Warfighter's Associate, which employs a natural-language interface for communication with (and oversight by) its human counterpart. More realistic near-term solutions to achieve intermediate success are then presented, along with actual results to date. The primary application discussed is military, but the concept also applies to law enforcement, space exploration, and search-and-rescue scenarios.
We used a technical readiness level assessment to obtain intervention time and the time to acquire situation awareness for different classifications of interventions. We analyzed these data to determine whether it is feasible for one operator to control multiple robots of this type in similar environments. We conclude that in both terrains analyzed (an arid terrain and a wooded terrain) it would be feasible for one operator to control two robots. While it is also possible for an operator to work on another task and control a robot as well, there is the issue of providing situation awareness about the robot, and there are constraints on the tasks that could be effectively accomplished.
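One common way to turn timing measurements like these into a multi-robot feasibility estimate is the fan-out metric (roughly, neglect time divided by interaction time, plus the robot being serviced). The sketch below uses that standard formula with made-up numbers; it is not the authors' analysis, and the times shown are purely illustrative.

```python
def fan_out(neglect_time_s, interaction_time_s):
    """Fan-out estimate: how many robots one operator can service.

    A robot runs unattended for the neglect time before needing the
    operator for the interaction time; while one robot is being
    serviced, the others coast on their remaining neglect time.
    """
    return neglect_time_s / interaction_time_s + 1

# Illustrative only: a robot that runs 120 s between interventions and
# needs 60 s of operator attention supports about three robots.
capacity = fan_out(neglect_time_s=120, interaction_time_s=60)
```

Note that fan-out counts only servicing time; the situation-awareness cost raised in the abstract effectively inflates the interaction time, which is why measured feasibility (two robots here) tends to fall below the naive estimate.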
Mobile robots are excellent examples of systems that need to show a high level of autonomy. Often robots are loosely supervised by humans who are not intimately familiar with the inner workings of the robot. We cannot generally predict in advance the exact environmental conditions in which the robot will operate, which means that the behavior must be adapted in the field. Untrained individuals cannot (and probably should not) program the robot to effect these changes. We need a system that will (a) allow re-tasking and (b) allow adaptation of the behavior to the specific conditions in the field. In this paper we concentrate on (b). We describe how to assemble controllers based on high-level descriptions of the behavior. We show how the behavior can be tuned by a human who does not know how the code is put together. We also show how this can be done automatically, using reinforcement learning, and point out the problems that must be overcome for this approach to work.
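The automatic-tuning idea can be illustrated with the simplest reinforcement-style loop: perturb the behavior's exposed parameters, run the behavior, and keep the change only if the observed reward improves. This stochastic hill climber is a stand-in for the paper's actual learning method, and the `clearance_gain` parameter and reward function are invented for the example.

```python
import random

def tune(params, reward_fn, iters=200, step=0.1, seed=0):
    """Stochastic hill climbing over a behavior's parameter dict.

    Each iteration proposes a Gaussian perturbation of the best-known
    parameters and accepts it only if the reward improves, so no
    knowledge of the controller's internals is needed.
    """
    rng = random.Random(seed)
    best, best_r = dict(params), reward_fn(params)
    for _ in range(iters):
        trial = {k: v + rng.gauss(0, step) for k, v in best.items()}
        r = reward_fn(trial)
        if r > best_r:
            best, best_r = trial, r
    return best, best_r

# Toy field-tuning task: reward peaks when the (hypothetical)
# obstacle-clearance gain is 0.7; the robot starts mis-tuned at 0.2.
reward = lambda p: -(p["clearance_gain"] - 0.7) ** 2
tuned, _ = tune({"clearance_gain": 0.2}, reward)
```

In the field the reward would come from observed performance (e.g., progress without collisions) rather than a closed-form function, which is exactly where the problems the paper points out, such as noisy and expensive reward evaluations, arise.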
We present two methods for a localization system, termed the "angle of arrival" scheme, which computes the position and heading of an autonomous vehicle system (AVS) by fusing odometry data with measurements of the relative azimuth angles of known landmarks (in this case, reflectors of a stabilized laser/reflector system). The first method combines a geometric transformation with a recursive least-squares approach with a forgetting factor. The second method is a direct approach using variants of the Unscented Kalman Filter. Both methods are examined in simulation and the results are presented.
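The geometric core of bearing-only localization is that each measured angle to a known landmark constrains the vehicle to a line, and two or more lines intersect at the position. The sketch below solves that triangulation by linear least squares, assuming heading is already known so that absolute bearings are available; the paper's methods additionally estimate heading and fuse odometry, which this simplification omits.

```python
import math

def fix_from_bearings(landmarks, bearings):
    """Least-squares position fix from absolute bearings to known landmarks.

    Each bearing theta_i to landmark (xi, yi) puts the vehicle on the line
        sin(theta_i)*x - cos(theta_i)*y = sin(theta_i)*xi - cos(theta_i)*yi.
    Stacking two or more such lines gives a 2x2 normal-equation solve.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), th in zip(landmarks, bearings):
        s, c = math.sin(th), math.cos(th)
        r = s * xi - c * yi          # right-hand side of this landmark's line
        a11 += s * s; a12 += -s * c; a22 += c * c
        b1 += s * r;  b2 += -c * r
    det = a11 * a22 - a12 * a12      # singular when all bearings are parallel
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With noisy angles arriving over time, this batch solve naturally becomes the recursive least-squares formulation (the forgetting factor down-weighting stale measurements), which is the first of the two methods the abstract describes.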
A head-aimed vision system greatly improves situational awareness and decision speed in the teleoperation of mobile robots. With head-aimed vision, the teleoperator wears a head-mounted display and a small three-axis head-position measuring device. Wherever the operator looks, the remote sensing system "looks." When the system is properly designed, the operator's occipital lobes are "fooled" into believing that the operator is actually on the remote robot. The result is at least a doubling of situational awareness, threat-identification speed, and target-tracking ability. Proper system design must account for precisely matched fields of view, optical gain, and latency below 100 milliseconds. When properly designed, a head-aimed system does not cause nausea, even with prolonged use.
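The design rules above lend themselves to a simple check. The sketch interprets optical gain as the ratio of displayed to captured field of view (so unity gain means angles in the display match angles in the world) and applies the stated 100 ms latency bound; this interpretation and the tolerance value are assumptions for illustration, not specifications from the paper.

```python
def head_aim_design_ok(cam_fov_deg, disp_fov_deg, latency_ms, tol=0.02):
    """Check two head-aimed-vision design rules: near-unity optical gain
    (display FOV matches camera FOV, so head motion maps 1:1 to scene
    motion) and end-to-end latency under 100 ms."""
    gain = disp_fov_deg / cam_fov_deg
    return abs(gain - 1.0) <= tol and latency_ms < 100

# A 40-degree camera shown on a 40-degree HMD with 60 ms of latency passes;
# mismatched FOVs (gain 1.5) or 150 ms of latency would not.
ok = head_aim_design_ok(cam_fov_deg=40, disp_fov_deg=40, latency_ms=60)
```

Mismatched gain or excess latency is precisely what breaks the "being there" illusion the abstract describes and induces the nausea a correct design avoids.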