Commercial Lidar systems often focus on reporting the range associated with the strongest, first, or last laser return pulse. This technique works well when observing discrete objects separated by a distance greater than the laser pulse length. However, reflections from more closely layered objects produce overlapping laser return pulses. Resolving the ranges of multi-layered objects in the resulting complex waveforms is the subject of this paper. A laboratory setup designed to investigate the laser return pulse produced by multi-layered objects is described, along with a comparison of a simulated laser return pulse and the corresponding digitized laser return pulse. Variations in the laboratory setup are used to assess different strategies for resolving multi-layered object ranges and how this additional information can be applied to detecting objects partially obscured in vegetation.
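A minimal sketch of the waveform-decomposition idea described above, assuming Gaussian-shaped return pulses and a synthetic two-layer target; the pulse width, layer spacing, and fitting approach (nonlinear least squares via SciPy) are illustrative assumptions, not the laboratory setup or method of the paper.

```python
# Minimal sketch (not the paper's method): decompose a digitized waveform
# containing two overlapping Gaussian-shaped returns and convert the
# recovered pulse centers to range. Pulse width and layer spacing are
# assumed values for illustration only.
import numpy as np
from scipy.optimize import curve_fit

C = 3.0e8                       # speed of light, m/s
t = np.linspace(0, 40e-9, 400)  # 40 ns record, assumed sample spacing

def two_pulses(t, a1, t1, a2, t2, sigma):
    """Sum of two Gaussian return pulses sharing one width."""
    return (a1 * np.exp(-0.5 * ((t - t1) / sigma) ** 2) +
            a2 * np.exp(-0.5 * ((t - t2) / sigma) ** 2))

# Synthetic waveform: layers at 2.0 m and 2.6 m, 2 ns pulse sigma
true_t1, true_t2 = 2 * 2.0 / C, 2 * 2.6 / C
waveform = two_pulses(t, 1.0, true_t1, 0.6, true_t2, 2e-9)
waveform += np.random.default_rng(0).normal(0, 0.02, t.size)

# Fit the two-pulse model; initial guesses come from the strongest sample.
p0 = [1.0, t[np.argmax(waveform)], 0.5, t[np.argmax(waveform)] + 3e-9, 2e-9]
params, _ = curve_fit(two_pulses, t, waveform, p0=p0)
ranges = sorted([C * params[1] / 2, C * params[3] / 2])
print("estimated layer ranges (m):", [round(r, 2) for r in ranges])
```

As the layer spacing drops below roughly half the pulse length, the fitted pulse centers become increasingly ambiguous.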
Previous research has presented work on sensor requirements, specifications, and testing to evaluate the feasibility of increasing autonomous vehicle system speeds. Discussions included the theoretical background for determining sensor requirements, as well as the basic test setup and evaluation criteria for comparing existing and prototype sensor designs. This paper will present and discuss the continuation of this work. In particular, this paper will focus on analyzing the problem via a real-world comparison of testing results from various sensor technologies, as opposed to previous work that utilized a more theoretical approach. LADAR/LIDAR, radar, visual, and infrared sensors are considered in this research. Results are evaluated against the theoretical, desired perception specifications. Conclusions on utilizing a suite of perception sensors to achieve the goal of doubling ground vehicle speeds are also discussed.
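As a hedged illustration of how speed drives the perception specifications discussed above, the short calculation below relates vehicle speed to the detection range needed to stop for a static obstacle; the reaction time and deceleration values are assumed for illustration and are not the study's requirements.

```python
# Illustrative calculation (assumed numbers, not the study's specification):
# the detection range a perception sensor must provide so the vehicle can
# stop for a static obstacle, as a function of speed.
def required_detection_range(speed_mph, reaction_s=0.5, decel_mps2=4.0):
    """Reaction distance plus braking distance, in meters."""
    v = speed_mph * 0.44704          # mph -> m/s
    return v * reaction_s + v ** 2 / (2.0 * decel_mps2)

for mph in (40, 60):
    print(f"{mph} mph -> ~{required_detection_range(mph):.0f} m detection range")
# Doubling speed roughly quadruples the braking term, which drives the
# longer-range sensor requirements discussed above.
```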
The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is executing a program
to assess the performance of a variety of sensor modalities for standoff detection of roadside explosive hazards. The
program objective is to identify an optimal sensor or combination of fused sensors to incorporate with autonomous
detection algorithms into a system of systems for use in future route clearance operations. This paper provides an overview
of the program, including a description of the sensors under consideration, sensor test events, and ongoing data analysis.
Commercial sensor technology has the potential to bring cost-effective sensors to a number of U.S. Army applications.
By using sensors built for widespread commercial applications, such as the automotive market, the Army can
decrease costs of future systems while increasing overall capabilities. Additional sensors operating in alternate and
orthogonal modalities can also be leveraged to gain a broader spectrum measurement of the environment. Leveraging
multiple phenomenologies can reduce false alarms and make detection algorithms more robust to varied concealment
materials. In this paper, this approach is applied to the detection of roadside hazards partially concealed by light-to-medium
vegetation. This paper will present advances in detection algorithms using a ground vehicle-based commercial
LADAR system. The benefits of augmenting a LADAR with millimeter-wave automotive radar and results from
relevant data sets are also discussed.
Lidar systems are well known for their ability to measure three-dimensional aspects of a scene. This attribute of Lidar has been widely exploited by the robotics community, among others. The problem of resolving ranges of layered objects (such as a tree canopy over the forest floor) has been studied from the perspective of airborne systems. However, little research exists in studying this problem from a ground vehicle system (e.g., a bush covering a rock or other hazard). This paper discusses the issues involved in solving this problem from a ground vehicle. This includes analysis of extracting multi-return data from Lidar and the various laser properties that impact the ability to resolve multiple returns, such as pulse length and beam size. The impacts of these properties are presented as they apply to three different Lidar imaging technologies: scanning pulse Lidar, Geiger-mode flash Lidar, and Time-of-Flight camera. Tradeoffs associated with these impacts are then discussed for a ground vehicle Lidar application.
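The sketch below works out the two back-of-the-envelope relations behind the pulse-length and beam-size discussion above; the 5 ns pulse and 3 mrad divergence are assumed example values, not parameters of any particular sensor.

```python
# Back-of-the-envelope relations discussed above (illustrative values only):
# two surfaces inside one beam are separable in a single return waveform
# roughly when their spacing exceeds c*tau/2, and the beam footprint sets
# how much foliage and target can share a single measurement.
C = 3.0e8  # speed of light, m/s

def min_layer_separation(pulse_width_s):
    """Approximate minimum resolvable layer spacing for pulse width tau (m)."""
    return C * pulse_width_s / 2.0

def beam_footprint(range_m, divergence_rad, aperture_m=0.0):
    """Approximate beam diameter at the given range (m)."""
    return aperture_m + range_m * divergence_rad

print(min_layer_separation(5e-9))   # 5 ns pulse  -> ~0.75 m
print(beam_footprint(30.0, 3e-3))   # 3 mrad beam at 30 m -> ~0.09 m
```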
As robotic ground systems advance in capabilities and begin to fulfill new roles in both civilian and military life, the limitation of slow operational speed has become a hindrance to the widespread adoption of these systems. For example, military convoys are reluctant to employ autonomous vehicles when these systems slow their movement from 60 miles per hour down to 40. However, these autonomous systems must operate at these lower speeds due to the limitations of the sensors they employ. Robotic Research, with its extensive experience in ground autonomy and its associated problems, in conjunction with CERDEC/Night Vision and Electronic Sensors Directorate (NVESD), has performed a study to specify system and detection requirements, determine how current autonomy sensors perform in various scenarios, and analyze how sensors should be employed to increase the operational speeds of ground vehicles. The sensors evaluated in this study include the state of the art in LADAR/LIDAR, Radar, Electro-Optical, and Infrared sensors, and have been analyzed at high speeds to study their effectiveness in detecting and accounting for obstacles and other perception challenges. By creating a common set of testing benchmarks, and by testing in a wide range of real-world conditions, Robotic Research has evaluated where sensors can be successfully employed today, where sensors fall short, and which technologies should be examined and developed further. This study is the first step toward the overarching goal of doubling ground vehicle speeds on any given terrain.
The capability to detect partially obscured objects is of interest to many communities, including ground vehicle robotics. The ability to find partially obscured objects can aid the automated navigation and planning algorithms used by robots. Two sensors often used for this task are Lidar and Radar. Lidar and Radar systems provide complementary data about the environment. Both are active sensing modalities and provide direct range measurements. However, they operate in very different portions of the electromagnetic spectrum. By exploiting properties associated with their different operating frequencies, the sensors are able to compensate for each other’s shortcomings. This makes them excellent candidates for sensor processing and data fusion systems. The benefits associated with Lidar and Radar sensor fusion for a ground vehicle application, using economical variants of these sensors, are presented. Special consideration is given to detecting objects partially obscured by light to medium vegetation.
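As a loose illustration of the complementary-sensor argument above, the toy sketch below fuses per-cell detection confidences from the two modalities with a simple noisy-OR rule; the confidence values, cell labels, and threshold are hypothetical, and this is not the fusion algorithm presented in the paper.

```python
# Toy per-cell fusion sketch (not the paper's algorithm): combine independent
# obstacle confidences from Lidar and Radar with a noisy-OR rule, so a return
# weakly seen by the Lidar (e.g., behind light vegetation) but confirmed by
# the Radar still crosses the detection threshold.
def fuse_confidences(p_lidar, p_radar):
    """Noisy-OR combination of two independent detection probabilities."""
    return 1.0 - (1.0 - p_lidar) * (1.0 - p_radar)

cells = {              # hypothetical grid cells: (lidar conf, radar conf)
    "open road":       (0.05, 0.02),
    "bush only":       (0.60, 0.05),
    "object in bush":  (0.60, 0.70),
}
THRESHOLD = 0.8        # assumed alarm threshold
for name, (pl, pr) in cells.items():
    p = fuse_confidences(pl, pr)
    print(f"{name}: fused={p:.2f} alarm={p >= THRESHOLD}")
```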
In recent years, the number of commercially available LADAR (also referred to as LIDAR) systems has grown with the increased interest in ground vehicle robotics and aided navigation/collision avoidance in various industries. With this increased demand, the cost of these systems has dropped and their capabilities have increased. As a result of this trend, LADAR systems are becoming a cost-effective sensor to use in a number of applications of interest to the US Army. One such application is the standoff detection of road-side hazards from ground vehicles. This paper will discuss the detection of road-side hazards partially concealed by light to medium vegetation. Current algorithms using commercially available LADAR systems for detecting these targets will be presented, along with results from relevant data sets. Additionally, optimization of commercial LADAR sensors and/or fusion with Radar will be discussed as ways of increasing detection ability.
Macroscopic and microscopic mixture models and algorithms for hyperspectral unmixing are presented. Unmixing algorithms are derived from an objective function. The objective function incorporates the linear mixture model for macroscopic unmixing and a nonlinear mixture model for microscopic unmixing. The nonlinear mixture model is derived from a bidirectional reflectance distribution function for microscopic mixtures. The algorithm is designed to unmix hyperspectral images composed of macroscopic or microscopic mixtures. The mixture types and abundances at each pixel can be estimated directly from the data without prior knowledge of mixture types. Endmembers can also be estimated. Results are presented using synthetic data sets of macroscopic and microscopic mixtures and using well-known, well-characterized laboratory data sets. The unmixing accuracy of this new physics-based algorithm is compared to linear methods and to results published for other nonlinear models. The proposed method achieves the best unmixing accuracy.
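For reference, the macroscopic (linear) half of the model is the familiar x = Ea + n, with nonnegative abundances that sum to one. The sketch below inverts that model with a generic constrained least-squares approach on synthetic data; it is only a baseline illustration and does not implement the paper's combined macroscopic/microscopic objective function.

```python
# Minimal sketch of the macroscopic (linear) mixture model, x = E a + n with
# a >= 0 and sum(a) = 1, inverted here with a generic constrained least-squares
# solver. Endmember spectra and abundances are synthetic; this is not the
# paper's algorithm, which also handles microscopic (nonlinear) mixing.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
bands, n_end = 50, 3
E = rng.uniform(0.1, 0.9, size=(bands, n_end))   # synthetic endmember spectra
a_true = np.array([0.5, 0.3, 0.2])
x = E @ a_true + rng.normal(0, 0.005, bands)     # observed mixed pixel

# Enforce sum-to-one by appending a heavily weighted constraint row,
# then solve the nonnegative least-squares problem.
delta = 100.0
E_aug = np.vstack([E, delta * np.ones((1, n_end))])
x_aug = np.concatenate([x, [delta]])
a_est, _ = nnls(E_aug, x_aug)
print("true:", a_true, "estimated:", np.round(a_est, 3))
```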
A method of incorporating the multi-mixture pixel model into hyperspectral endmember extraction is presented and
discussed. A vast majority of hyperspectral endmember extraction methods rely on the linear mixture model to describe
pixel spectra resulting from mixtures of endmembers. Methods exist to unmix hyperspectral pixels using nonlinear
models, but rely on severely limiting assumptions or estimations of the nonlinearity. This paper will present a
hyperspectral pixel endmember extraction method that utilizes the bidirectional reflectance distribution function to
model microscopic mixtures. Using this model, along with the linear mixture model to incorporate macroscopic
mixtures, this method is able to accurately unmix hyperspectral images composed of both macroscopic and microscopic
mixtures. The mixtures are estimated directly from the hyperspectral data without the need for a priori knowledge of the
mixture types. Results are presented using synthetic datasets of multi-mixture pixels to demonstrate the increased
accuracy in unmixing using this new physics-based method over linear methods. In addition, results are presented using
a well-known laboratory dataset.
A method of incorporating macroscopic and microscopic reflectance models into hyperspectral pixel unmixing is
presented and discussed. A vast majority of hyperspectral unmixing methods rely on the linear mixture model to
describe pixel spectra resulting from mixtures of endmembers. Methods exist to unmix hyperspectral pixels using
nonlinear models, but rely on severely limiting assumptions or estimations of the nonlinearity. This paper will present a
hyperspectral pixel unmixing method that utilizes the bidirectional reflectance distribution function to model
microscopic mixtures. Using this model, along with the linear mixture model to incorporate macroscopic mixtures, this
method is able to accurately unmix hyperspectral images composed of both macroscopic and microscopic mixtures. The
mixtures are estimated directly from the hyperspectral data without the need for a priori knowledge of the mixture types.
Results are presented using synthetic datasets of macroscopic and microscopic mixtures to demonstrate the increased
accuracy in unmixing using this new physics-based method over linear methods. In addition, results are presented using
a well-known laboratory dataset. Based on these results and other published results from this dataset, increased accuracy
in unmixing over other nonlinear methods is shown.
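The nonlinearity that separates microscopic from macroscopic mixing can be illustrated with a simplified Hapke-style reflectance model, under which intimate mixtures combine approximately linearly in single-scattering albedo rather than in reflectance. The sketch below is only that illustration: the geometry, albedos, and isotropic-scattering approximation are assumptions, not the bidirectional reflectance distribution function used in the paper.

```python
# Illustrative forward model for the microscopic-mixture nonlinearity: under a
# simplified Hapke-style bidirectional reflectance model with isotropic
# scatterers, intimate mixtures combine (approximately) linearly in
# single-scattering albedo rather than in reflectance. Geometry and albedos
# are assumed values for illustration.
import numpy as np

def H(x, w):
    """Approximate Chandrasekhar H-function for single-scattering albedo w."""
    g = np.sqrt(1.0 - w)
    return (1.0 + 2.0 * x) / (1.0 + 2.0 * x * g)

def reflectance(w, mu0=np.cos(np.radians(30)), mu=np.cos(np.radians(0))):
    """Simplified Hapke bidirectional reflectance for albedo w."""
    return (w / (4.0 * (mu0 + mu))) * H(mu0, w) * H(mu, w)

w1, w2 = 0.9, 0.3          # single-scattering albedos of two endmember materials
f = 0.5                    # fraction of material 1 in the mixture
w_mix = f * w1 + (1 - f) * w2                               # microscopic: linear in albedo
r_micro = reflectance(w_mix)
r_linear = f * reflectance(w1) + (1 - f) * reflectance(w2)  # macroscopic: linear in reflectance
print(f"intimate mixture: {r_micro:.3f}  checkerboard mixture: {r_linear:.3f}")
```

The intimate mixture comes out noticeably darker than the areal (checkerboard) mixture of the same two materials, which is why linear unmixing misestimates abundances for microscopic mixtures.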
KEYWORDS: Land mines, Data modeling, Neural networks, Explosives, Target detection, Process modeling, Neurons, Signal detection, Image processing, Classification systems
Typical classification models used for the detection of buried landmines estimate a single discriminative output. This
classification is based on a model or technique trained with a given set of training data available during system
development. Regardless of how well the technique performs when classifying objects that are 'similar' to the training
set, most models produce undesirable (and many times unpredictable) responses when presented with object classes
different from the training data. This can cause mines or other explosive objects to be misclassified as clutter, or false
alarms. Bayesian regression and classification models produce distributions as output, called the predictive distribution.
This paper will discuss predictive distributions and their application to characterizing uncertainty in the classification
decision, in the context of landmine detection. Specifically, experiments comparing the predictive variance produced
by relevance vector machines and Gaussian processes will be described. We demonstrate that predictive variance can be
used to determine the uncertainty of the model in classifying an object (i.e., the classifier will know when it's unable to
reliably classify an object). The experimental results suggest that degenerate covariance models (such as the relevance
vector machine) are not reliable in estimating the predictive variance. This necessitates the use of Gaussian processes
in creating the predictive distribution.
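A minimal numpy sketch of the property relied on above: a Gaussian process with a non-degenerate (RBF) covariance assigns growing predictive variance to inputs far from the training data. The one-dimensional data, kernel parameters, and noise level are assumed toy values, and this regression sketch is not the classification models compared in the experiments.

```python
# Minimal Gaussian-process regression sketch (numpy only, RBF kernel) showing
# the property used above: predictive variance grows for inputs far from the
# training data, so the model can flag samples it cannot reliably classify.
# Kernel parameters and data are assumed values for illustration.
import numpy as np

def rbf(A, B, length=1.0, var=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length ** 2)

# 1-D toy training set: features of "mine-like" (+1) and "clutter" (-1) samples
X = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
noise = 0.1

K = rbf(X, X) + noise * np.eye(len(X))
K_inv = np.linalg.inv(K)

def predict(x_star):
    """GP predictive mean and variance at a single test input."""
    k = rbf(np.atleast_1d(x_star), X)          # cross-covariance, shape (1, N)
    mean = k @ K_inv @ y
    var = rbf(np.atleast_1d(x_star), np.atleast_1d(x_star)) - k @ K_inv @ k.T
    return mean.item(), var.item()

for x in (0.0, 1.2, 6.0):   # near the boundary, near training data, far away
    m, v = predict(x)
    print(f"x={x:4.1f}  mean={m:+.2f}  predictive variance={v:.2f}")
```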
In this paper, the detection of buried objects by fusing airborne Multi-Spectral Imagery
(MSI) and ground-based Ground Penetrating Radar (GPR) data is investigated. The benefit of using the
airborne sensor to cue the GPR, which will then search the area indicated by the MSI, is investigated and
compared to results obtained via a purely ground-based system. State-of-the-art existing algorithms, such as
hidden Markov models, will be applied to the GPR data in both cued and non-cued modes. In addition,
the ability to measure disturbed earth with the GPR sensor will be investigated. Furthermore, state-of-the-art
algorithms for the MSI system will be described. These algorithms must achieve very high detection rates at
acceptable false alarm rates in order to serve as part of an acceptable system. Results will be presented on data
collected at outdoor testing and evaluation sites.
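For context, a hidden Markov model scores a down-track sequence of GPR-derived observations with the standard forward algorithm, sketched below; the states, symbols, and probabilities are made-up illustrative values, not the parameters of the referenced algorithms, and in practice such a score would be compared against a background model or threshold.

```python
# Minimal forward-algorithm sketch: the standard way an HMM scores a
# down-track sequence of discretized GPR observations against a target model.
# The states, transition, and emission probabilities here are made up for
# illustration and are not the parameters of the algorithms referenced above.
import numpy as np

# States: 0 = background, 1 = rising edge of a hyperbola, 2 = apex
pi = np.array([0.90, 0.08, 0.02])                 # initial state probabilities
A = np.array([[0.90, 0.09, 0.01],                 # state transition matrix
              [0.10, 0.70, 0.20],
              [0.05, 0.15, 0.80]])
# Observation symbols: 0 = low energy, 1 = medium, 2 = high
B = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])

def forward_log_likelihood(obs):
    """Log-likelihood of an observation sequence under the HMM (with scaling)."""
    alpha = pi * B[:, obs[0]]
    log_like = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_like += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_like

print(forward_log_likelihood([0, 1, 2, 2, 1, 0]))  # target-like sequence
print(forward_log_likelihood([0, 0, 0, 0, 0, 0]))  # background-like sequence
```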