Payloads for small robotic platforms have historically been designed and implemented as platform- and task-specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering effort. To address this issue, we developed a modular countermine payload designed from the ground up to be platform agnostic. The payload consists of a multi-mission payload controller unit (PCU) coupled with configurable, mission-specific threat detection, navigation, and marking payloads. The multi-mission PCU contains all of the common electronics needed to control and interface with the payloads, as well as the embedded processor that runs the navigation and control software. The PCU has a highly flexible robot interface that can be configured for various robot platforms. The threat detection payload consists of a two-axis sweeping arm and the detector. The navigation payload consists of several perception sensors used for terrain mapping, obstacle detection, and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all of these payloads are packaged in a platform-agnostic way that allows deployment on multiple robotic platforms, including the Talon and Packbot.
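The platform-agnostic design described above could be expressed in software as an adapter pattern: the PCU talks only to a generic platform interface, and each robot supplies its own adapter. This is a minimal sketch under that assumption; the class and method names (`RobotPlatform`, `TalonAdapter`, `PayloadController`, `send_drive_command`) are illustrative, not the actual PCU API.

```python
from abc import ABC, abstractmethod


class RobotPlatform(ABC):
    """Generic platform abstraction; each robot supplies its own adapter."""

    @abstractmethod
    def send_drive_command(self, linear: float, angular: float) -> None: ...

    @abstractmethod
    def read_odometry(self) -> tuple:
        """Return (x, y, heading) in platform-independent units."""


class TalonAdapter(RobotPlatform):
    """Hypothetical adapter: would translate generic commands into the
    Talon's native protocol (stubbed here)."""

    def send_drive_command(self, linear: float, angular: float) -> None:
        pass  # serialize and send over the Talon link

    def read_odometry(self) -> tuple:
        return (0.0, 0.0, 0.0)  # stubbed odometry


class PayloadController:
    """The PCU depends only on RobotPlatform, never on a specific robot,
    so the threat detection, navigation, and marking payloads deploy
    unchanged across platforms."""

    def __init__(self, platform: RobotPlatform):
        self.platform = platform

    def drive(self, linear: float, angular: float) -> None:
        self.platform.send_drive_command(linear, angular)
```

Porting the payload suite to a new robot then reduces to writing one adapter class rather than re-engineering each payload.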
Mine detection is a dangerous and physically demanding task that is well suited to robotic applications. In the experiment described in this paper, we try to determine whether a remotely operated robotic mine detection system equipped with a hand-held mine detector can match the performance of a human equipped with the same detector. To achieve this objective, we developed the Robotic Mine Sweeper (RMS). The RMS platform is capable of accurately sweeping and mapping mine lanes using common detectors, such as the Minelab F3 Mine Detector or the AN/PSS-14. The RMS is fully remote controlled from a safe distance by a laptop via a redundant wireless link. Data collected from the mine detector and various sensors mounted on the robot are transmitted and logged in real time to the remote user interface and simultaneously displayed graphically. In addition, a stereo color camera mounted on top of the robot sends a live picture of the terrain. The system plays audio feedback from the detector to further enhance the user's situational awareness. The user is trained to drag and drop various icons onto the user interface map to locate mines and non-mine clutter objects. We ran experiments with the RMS to compare its detection and false alarm rates with those obtained when the user physically sweeps the detectors in the field. The results of two trials, one with the Minelab F3 and the other with the Cyterra AN/PSS-14, are presented here.
KEYWORDS: Sensors, Robotics, Land mines, Mining, Robotic systems, Control systems, Navigation systems, Computing systems, Data processing, Data communications
CMMAD is a risk reduction effort for the AMDS program. As part of CMMAD, multiple instances of semi-autonomous robotic mine detection systems were created. Each instance consists of a robotic vehicle equipped with sensors required for navigation and marking, countermine sensors, and a number of integrated software packages that provide real-time processing of the countermine sensor data as well as integrated control of the robotic vehicle, the sensor actuator, and the sensor. These systems were used to investigate critical interest functions (CIF) related to countermine robotic systems. To address the autonomy CIF, the INL-developed RIK was extended to allow for interaction with a mine sensor processing code (MSPC). In limited field testing, this system performed well in detecting, marking, and avoiding both AT and AP mines. Based on the results of the CMMAD investigation, we conclude that autonomous robotic mine detection is feasible. In addition, CMMAD contributed critical technical advances with regard to sensing, data processing, and sensor manipulation, which will advance the performance of future fieldable systems. As a result, no substantial technical barriers exist which, from an autonomous robotic perspective, preclude the rapid development and deployment of fieldable systems.
The Black Knight is a 12-ton, C-130 deployable Unmanned Ground Combat Vehicle (UGCV). It was developed to demonstrate how unmanned vehicles can be integrated into a mechanized military force to increase combat capability while protecting Soldiers in a full spectrum of battlefield scenarios. The Black Knight is used in military operational tests that allow Soldiers to develop the necessary techniques, tactics, and procedures to operate a large unmanned vehicle within a mechanized military force. It can be safely controlled by Soldiers from inside a manned fighting vehicle, such as the Bradley Fighting Vehicle. Black Knight control modes include path tracking, guarded teleoperation, and fully autonomous movement. Its state-of-the-art Autonomous Navigation Module (ANM) includes terrain-mapping sensors for route planning, terrain classification, and obstacle avoidance. In guarded teleoperation mode, the ANM data, together with automotive dials and gauges, are used to generate video overlays that assist the operator during both day and night driving. Remote operation of various sensors also allows Soldiers to perform effective target location and tracking. This document covers Black Knight's system architecture and includes implementation overviews of the various operation modes. We conclude with lessons learned and development goals for the Black Knight UGCV.
To support the development of advanced algorithms for hand-held detectors, it is desirable to collect data at a specific sweep rate, height, and spacing. In addition, it is important that the position of each data point produced by the detector is known. Since it is impossible for a human operator to precisely control these sweep parameters, we have developed a semi-autonomous robotic data collection system. It is designed as a portable robot with a 2-axis manipulator that can sweep any hand-held detector at a precise sweep rate, height, and spacing. It is also equipped with an interface to the hand-held detector, so it can log the output data during the sweeping motion, tagging each output sample with position data from the on-board positioning system. As a result, we can construct an accurate 2-D or 3-D grid of the detector's output as a function of the horizontal and vertical position of the detector. The manipulator is also equipped with force sensing capability that can be used to sense terrain height or collision. To increase deployment flexibility, all functions of the robot are controlled through a wireless communication link by a hand-held computer with an operating range of at least 100 m. Through the hand-held computer, the operator can move the robot and program its behavior using a script-based motion sequencer. The robot has been deployed successfully on several data acquisition activities and has produced high-resolution 2-D maps of the buried targets.
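The gridding step described above, accumulating position-tagged detector samples into a 2-D map, might look like the following sketch. The function name, cell size, and lane dimensions are illustrative assumptions, not the system's actual parameters; cells hit by multiple samples store the mean reading.

```python
import numpy as np


def grid_detector_samples(samples, cell_size=0.02, lane_length=2.0, lane_width=1.0):
    """Accumulate position-tagged detector readings into a 2-D grid.

    samples: iterable of (x, y, value) tuples, positions in metres,
             as logged by the on-board positioning system.
    Returns an array with the mean reading per cell (NaN where no
    sample landed).
    """
    nx = int(lane_length / cell_size)
    ny = int(lane_width / cell_size)
    total = np.zeros((nx, ny))
    count = np.zeros((nx, ny))
    for x, y, value in samples:
        # clamp to the lane boundary so edge samples are not dropped
        i = min(int(x / cell_size), nx - 1)
        j = min(int(y / cell_size), ny - 1)
        total[i, j] += value
        count[i, j] += 1
    # mean per cell; empty cells become NaN
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)
```

Because the manipulator sweeps at a precise, repeatable rate and spacing, each cell receives a predictable number of samples, which is what makes a clean grid like this possible in the first place.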
As landmines get harder to detect, the complexity of landmine detectors has also been increasing. To increase the probability of detection and decrease the false alarm rate for low-metallic landmines, many detectors employ multiple sensing modalities, including radar and metal detection. Unfortunately, the operator interface for these new detectors has remained largely the same as for the older detectors. Although the amount of information that the new detectors acquire has increased significantly, the interface has been limited to a simple audio output. We are currently developing a hybrid audiovisual interface for enhancing the overall performance of the detector. The hybrid audiovisual interface combines the simplicity of audio output with the rich spatial content of a video display. It is designed to optimally present the output of the detector and to give proper feedback to the operator. Instead of presenting all the data to the operator simultaneously, the interface allows the operator to access the information as needed. This capability is critical to avoid information overload, which can significantly reduce the performance of the operator. Audio is used as the primary notification signal, while video is used for further feedback, discrimination, localization, and sensor fusion. The idea is to let the operator get the feedback they need and look at the data in the most efficient way. We are also looking at a hybrid man-machine detection system which combines precise sweeping by the machine with powerful human cognitive ability. In such a hybrid system, the operator is free to concentrate on discrimination tasks, such as manually fusing the output of the different sensing modalities, instead of worrying about proper sweep technique. In developing this concept, we have been using the virtual mine lane to validate some of these ideas. We obtained some very encouraging results from our preliminary test. It clearly shows that with the proper feedback, the performance of the operator can be improved significantly in a very short time.
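The audio-primary, video-on-demand policy described above can be sketched as a small event handler: every fused sensor frame is cached, but only a high-confidence detection triggers the immediate audio cue, and the rich spatial view is rendered only when the operator pulls it. The class and threshold below are illustrative assumptions, not the actual interface design.

```python
class HybridInterface:
    """Audio alert as the primary notification; the visual map is only
    shown when the operator requests it, to avoid information overload."""

    def __init__(self, alarm_threshold=0.5):
        self.alarm_threshold = alarm_threshold
        self.latest_frame = None  # most recent fused sensor frame

    def on_sensor_frame(self, confidence, frame):
        """Handle one incoming detector frame.

        Always cache the frame; emit the low-bandwidth audio cue only
        when detection confidence crosses the alarm threshold.
        """
        self.latest_frame = frame
        if confidence >= self.alarm_threshold:
            return "audio_alert"
        return None

    def request_visual(self):
        """Operator pulls the rich spatial view on demand."""
        return self.latest_frame
```

Keeping the video channel pull-based rather than push-based is what prevents the operator from being flooded with data they did not ask for.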
Landmine detection is a complex and highly dangerous task. Most demining operations are done using hand-held detectors, which means that the operator is always at risk of serious injury or death. One of the most important factors determining the probability of detection is operator performance. Therefore, it is very important that we train operators well and are able to assess their performance accurately. To achieve these objectives, we have been developing two training tools: the 3D tracker for real-time feedback during training, and the virtual mine lane for interactive training. We have been using the 3D tracker successfully to assess the performance of an operator as a part of a successful training program.
The effectiveness and robustness of any landmine detector ultimately depend on its operator. This is especially true for hand-held landmine detectors, since the operator handles both the scanning motion and the interpretation of the data. Therefore, it is important that human-in-the-loop issues are addressed as an integral part of the detector design, not as an afterthought. Two critical issues that we have identified are the lack of position feedback for the operator and the lack of a 2D map of the detector output. Position feedback allows the operator to monitor the sweep rate, detector height, and orientation. The position feedback can also be integrated with the detector output to generate a 2D map for the operator. In addition, the 2D map enables 2D image processing techniques, which are more robust and effective than 1D signal processing techniques.
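The position feedback described above could be implemented as a simple check of consecutive detector-head positions against recommended sweep parameters. This is a minimal sketch; the function name, target values, and tolerances are illustrative assumptions, not figures from the paper.

```python
def sweep_feedback(positions, dt, target_rate=1.0, target_height=0.05,
                   rate_tol=0.2, height_tol=0.02):
    """Check tracked (x, y, z) detector-head positions, sampled every
    dt seconds, against the recommended sweep rate and height.

    Returns a list of warnings to feed back to the operator.
    """
    warnings = []
    for (x0, y0, _), (x1, y1, z1) in zip(positions, positions[1:]):
        # horizontal speed between consecutive samples
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        if abs(speed - target_rate) > rate_tol:
            warnings.append("sweep rate out of range: %.2f m/s" % speed)
        if abs(z1 - target_height) > height_tol:
            warnings.append("detector height out of range: %.3f m" % z1)
    return warnings
```

The same position stream, joined with the detector output, is what feeds the 2D map mentioned above.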
Using surface and subsurface sensing, we have developed a perception system for autonomous retrieval of buried objects. The subsurface sensing system uses Ground Penetrating Radar (GPR) to locate buried objects. A 2D laser rangefinder system generates an elevation map, and using this map a robotic arm positions the GPR antenna. This setup allows us to automate the GPR data collection. An image processing algorithm is used to locate the object of interest in the GPR data. After it is located, we use a sense-and-dig cycle to retrieve the object.
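The sense-and-dig cycle can be sketched as a simple loop: scan, locate the target in the GPR image, dig at the located position, and repeat until the target no longer appears. The function names and callback structure below are assumptions for illustration, not the system's actual software interface.

```python
def retrieve_buried_object(gpr_scan, locate, dig, max_cycles=5):
    """A simple sense-and-dig loop.

    gpr_scan():  returns the current GPR data (after the arm has
                 positioned the antenna using the elevation map)
    locate(img): image-processing step; returns (x, y) of the target
                 in the GPR data, or None if no target is found
    dig(x, y):   removes material at (x, y)

    Returns True once the target is no longer detected, False if it
    is still present after max_cycles dig attempts.
    """
    for _ in range(max_cycles):
        target = locate(gpr_scan())
        if target is None:
            return True  # object retrieved / no longer detected
        dig(*target)
    return False  # gave up after max_cycles
```

Re-scanning after every dig is what makes the cycle robust: each pass verifies progress with fresh subsurface data instead of trusting a single initial localization.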