The reliability of individual robots influences the success of multirobot missions. When one robot fails, others
must be retasked to complete the failed robot's tasks. This increases the failure likelihood for these other robots.
Existing multirobot task allocation systems consider robot failures only after the fact, via replanning. In this
paper, we show that the performance of multirobot missions can be improved by using knowledge of robot
failure rates to inform the initial task allocation.
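The idea of folding failure rates into the initial allocation can be illustrated with a minimal greedy sketch. The names (`allocate`, the `cost`/`failure_rate` fields) are hypothetical illustrations, not the paper's actual method: each robot's nominal task cost is inflated by its probability of failing, so a slightly slower but more reliable robot can win the assignment.

```python
def allocate(tasks, robots):
    """Greedy allocation weighting each robot's task cost by its reliability:
    expected cost = nominal cost / P(robot survives the task)."""
    assignment = {}
    for task in tasks:
        best = min(robots,
                   key=lambda r: r["cost"](task) / (1.0 - r["failure_rate"]))
        assignment[task] = best["name"]
    return assignment
```

For example, a robot with cost 10 but a 50% failure rate has an expected cost of 20, so a robot with cost 12 and a 5% failure rate (expected cost about 12.6) is preferred.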
In this paper, an offline learning mechanism based on a genetic algorithm is proposed for autonomous vehicles
to emulate human driver behaviors. The autonomous driving ability is implemented based on a Prediction- and
Cost function-Based algorithm (PCB). PCB is designed to emulate a human driver's decision process, which
is modeled as traffic scenario prediction and evaluation. This paper focuses on using a learning algorithm to
optimize PCB with very limited training data, so that PCB can have the ability to predict and evaluate traffic
scenarios similarly to human drivers. Eighty seconds of human driving data were collected in low-speed (< 30 miles/h)
car-following scenarios. In the low-speed car-following tests, PCB performed more human-like car-following
after learning. A more general 120-kilometer simulation showed that PCB performs robustly even
in scenarios that are not part of the training set.
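The learning step described above — tuning cost-function parameters with a genetic algorithm against recorded human driving — can be sketched minimally. Everything here is a hypothetical illustration (the two-weight cost model, `simulate_following`, the scenario fields), not the paper's actual PCB implementation:

```python
import random

def simulate_following(weights, scenario):
    # Hypothetical stand-in for PCB evaluation: score how closely the
    # parameters reproduce the recorded human speed profile (lower = better).
    predicted = [weights[0] * g + weights[1] * v
                 for g, v in zip(scenario["gap"], scenario["lead_speed"])]
    return sum((p - t) ** 2 for p, t in zip(predicted, scenario["human_speed"]))

def genetic_search(scenario, pop_size=20, generations=100, mut=0.1):
    # Elitist GA: keep the better half, breed children by averaging two
    # parents and adding Gaussian mutation noise.
    pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: simulate_following(w, scenario))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append([(x + y) / 2 + random.gauss(0, mut)
                             for x, y in zip(a, b)])
        pop = survivors + children
    return min(pop, key=lambda w: simulate_following(w, scenario))
```

The elitist selection matters for small training sets like the 80-second dataset: the best parameters found so far are never discarded, so fitness improves monotonically.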
Earth science research must bridge the gap between the atmosphere and the ocean to foster understanding of Earth's
climate and ecology. Typical ocean sensing is done with satellites or in situ buoys and research ships which are slow to
reposition. Cloud cover inhibits study of localized transient phenomena such as Harmful Algal Blooms (HAB). A fleet
of extended-deployment surface autonomous vehicles will enable in situ study of characteristics of HAB, coastal
pollutants, and related phenomena. We have developed a multiplatform telesupervision architecture that supports
adaptive reconfiguration based on environmental sensor inputs. Our system allows the autonomous repositioning of
smart sensors for HAB study by networking a fleet of NOAA OASIS (Ocean Atmosphere Sensor Integration System)
surface autonomous vehicles. In situ measurements intelligently modify the search for areas of high concentration.
Inference Grid and complementary information-theoretic techniques support sensor fusion and analysis. Telesupervision
supports sliding autonomy from high-level mission tasking, through vehicle and data monitoring, to teleoperation when
direct human interaction is appropriate. This paper reports on experimental results from multi-platform tests conducted
in the Chesapeake Bay and in Pittsburgh, Pennsylvania waters using OASIS platforms, autonomous kayaks, and multiple
simulated platforms to conduct cooperative sensing of chlorophyll-a and water quality.
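The adaptive repositioning described above — in situ measurements steering the fleet toward areas of high concentration — can be sketched as a simple sample-following rule. This is a hypothetical illustration (the `next_waypoint` name and sample fields are invented), not the Inference Grid or information-theoretic machinery the system actually uses:

```python
def next_waypoint(position, samples, step=0.5):
    """Steer toward the sample with the highest chlorophyll reading,
    advancing a fixed step along the unit direction vector."""
    best = max(samples, key=lambda s: s["chlorophyll"])
    dx, dy = best["x"] - position[0], best["y"] - position[1]
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0  # avoid division by zero in place
    return (position[0] + step * dx / norm, position[1] + step * dy / norm)
```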
We are developing a multi-robot science exploration architecture and system called the Telesupervised Adaptive Ocean
Sensor Fleet (TAOSF). TAOSF uses a group of robotic boats (the OASIS platforms) to enable in-situ study of ocean
surface and sub-surface phenomena. The OASIS boats are extended-deployment autonomous ocean surface vehicles,
whose development is funded separately by the National Oceanic and Atmospheric Administration (NOAA). The
TAOSF architecture provides an integrated approach to multi-vehicle coordination and sliding human-vehicle autonomy.
It allows multiple mobile sensing assets to function in a cooperative fashion, and the operating mode of the vessels to
range from autonomous control to teleoperated control. In this manner, TAOSF increases data-gathering effectiveness
and science return while reducing demands on scientists for tasking, control, and monitoring. It combines and extends
prior related work done by the authors and their institutions. The TAOSF architecture is applicable to other areas where
multiple sensing assets are needed, including ecological forecasting, water management, carbon management, disaster
management, coastal management, homeland security, and planetary exploration. The first field application chosen for
TAOSF is the characterization of Harmful Algal Blooms (HABs). Several components of the TAOSF system have been
tested, including the OASIS boats, the communications and control interfaces between the various hardware and
software subsystems, and an airborne sensor validation system. Field tests in support of future HAB characterization
were performed under controlled conditions, using rhodamine dye as a HAB simulant that was dispersed in a pond. In
this paper, we describe the overall TAOSF architecture and its components, discuss the initial tests conducted and
outline the next steps.
In Carnegie Mellon University's CyberScout project, we are developing a network of mobile and stationary sentries capable of autonomous reconnaissance and surveillance. In this paper, we describe the cooperative perception algorithms and mission planning necessary to achieve this task, including sensor-to-sensor target handoff methods and an efficient decentralized path-planning algorithm. These methods are applied to a typical law enforcement application, a building stakeout scenario.
In Carnegie Mellon University's CyberScout project, we are developing mobile and stationary sentries capable of autonomous reconnaissance and surveillance. In this paper, we describe recent advances in the areas of efficient perception algorithms (detection, classification, and correspondence) and mission planning. In detection, we have achieved improved rejection of camera jitter and environmental variations (e.g., lighting, moving foliage) through multi-modal filtering, and we have implemented panoramic backgrounding through pseudo-real-time mosaicing. In classification, we present methods for discriminating between individuals, groups of individuals, and vehicles, and between individuals with and without backpacks. In correspondence, we describe an accurate multi-hypothesis approach based on both motion and appearance. Finally, in mission planning, we describe map-building using multiple sensory cues and a computationally efficient decentralized planner for multiple platforms.
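The detection idea in the abstract above — rejecting lighting drift and moving foliage while flagging true targets — is commonly realized with a slowly adapting background model. A minimal sketch under that assumption (a running-average model on a flat list of pixel intensities; not the paper's multi-modal filter):

```python
def update_background(background, frame, alpha=0.05, threshold=25):
    """Running-average background model: pixels far from the model are
    flagged as foreground; the model adapts slowly elsewhere, so gradual
    lighting changes are absorbed without flagging them as motion."""
    mask = [abs(f - b) > threshold for f, b in zip(frame, background)]
    new_bg = [b if m else (1 - alpha) * b + alpha * f
              for f, b, m in zip(frame, background, mask)]
    return new_bg, mask
```

Freezing the model at foreground pixels keeps a stationary intruder from being absorbed into the background.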
In Carnegie Mellon University's CyberScout project, we are developing mobile robotic technologies that will extend the sphere of awareness and mobility of small military units while exploring issues of command and control, task decomposition, multi-agent collaboration, efficient perception algorithms, and sensor fusion. This paper describes our work on robotic all-terrain vehicles (ATVs), one of several platforms within CyberScout. We have retrofitted two Polaris ATVs as mobile robotic surveillance and reconnaissance platforms. We describe the computing, sensing, and actuation infrastructure of these platforms, their current capabilities, and future research and applications.
An important class of robotic applications potentially involves multiple, cooperating robots: security or military surveillance, rescue, mining, etc. One of the main challenges in this area is effective cooperative control: how does one determine and orchestrate individual robot behaviors which result in a desired group behavior? Cognitive (planning) approaches allow for explicit coordination between robots, but suffer from high computational demands and a need for a priori, detailed world models. Purely reactive approaches such as that of Brooks are efficient, but lack a mechanism for global control and learning. Neither approach by itself provides a formalism capable of a sufficiently rapid and rich range of cooperative behaviors. Although we accept the usefulness of the reactive paradigm in building up complex behaviors from simple ones, we seek to extend and modify it in several ways. First, rather than restricting primitive behaviors to fixed input-output relationships, we include memory and learning through feedback adaptation of behaviors. Second, rather than a fixed priority of behaviors, our priorities are implicit: they vary depending on environmental stimuli. Finally, we scale this modified reactive architecture to apply not only for an individual robot, but also at the level of multiple cooperating robots: at this level, individual robots are like individual behaviors which combine to achieve a desired aggregate behavior. In this paper, we describe our proposed architecture and its current implementation. The application of particular interest to us is the control of a team of mobile robots cooperating to perform area surveillance and target acquisition and tracking.
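The implicit, stimulus-dependent priorities described above can be sketched as a weighted blend of behavior outputs, where each behavior's weight (its salience) is a function of the current stimuli rather than a fixed rank. The names here (`combine_behaviors`, `salience`, `output`) are hypothetical illustrations of the idea, not the architecture's actual interfaces:

```python
def combine_behaviors(behaviors, stimuli):
    """Blend 2D velocity commands from several behaviors; each behavior's
    weight is computed from the current stimuli, so priorities shift with
    the environment instead of being fixed in advance."""
    total_w, vx, vy = 0.0, 0.0, 0.0
    for b in behaviors:
        w = b["salience"](stimuli)      # implicit, stimulus-dependent priority
        ox, oy = b["output"](stimuli)
        vx += w * ox
        vy += w * oy
        total_w += w
    return (vx / total_w, vy / total_w) if total_w else (0.0, 0.0)
```

For example, an avoid-obstacle behavior whose salience grows as an obstacle approaches will dominate a goal-seeking behavior only when it needs to, with no explicit arbitration step.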
High-quality transformer winding requires precise measurement and control of the gapping between adjacent wires. We take a vision-based approach to the measurement subtask of determining the gaps between copper wires wound onto an oval transformer. The oval core shape, which can have an eccentricity as high as 2-to-1, leads to significant variations in surface normal and viewing distance. We use special lighting, a secondary mandrel shape sensor, and the specular reflection off the wires to give us an accurate model of the experimental geometry. We further exploit the vertical symmetry of the viewed region to condense our 2D image to a simple 1D signal containing reflectance peaks. After applying pattern recognition and additional safety checks to separate the wire peaks from background noise, we perform a least-squares curve fit of the peaks to determine the subpixel maxima. The final algorithm is computationally fast and yields the desired wire gap in an absolute metric.
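The subpixel peak-fitting step in the abstract above can be sketched with the standard three-point parabolic fit (the least-squares solution through a peak sample and its two neighbours). This is an illustrative stand-in, not the paper's exact curve-fit procedure:

```python
def subpixel_peaks(signal, threshold):
    """Locate local maxima above `threshold` in a 1D reflectance signal and
    refine each to subpixel accuracy by fitting a parabola through the peak
    sample and its two neighbours; the vertex gives the refined position."""
    peaks = []
    for i in range(1, len(signal) - 1):
        y0, y1, y2 = signal[i - 1], signal[i], signal[i + 1]
        if y1 >= y0 and y1 >= y2 and y1 > threshold:
            denom = y0 - 2 * y1 + y2
            # Parabola vertex offset in [-0.5, 0.5] relative to sample i.
            offset = 0.5 * (y0 - y2) / denom if denom else 0.0
            peaks.append(i + offset)
    return peaks
```

Successive differences of the refined peak positions then give the wire gaps directly, which a pixel-to-millimetre calibration converts to an absolute metric.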