Over the past five years, the potential utility of 3-D laser radar (LIDAR) for supporting urban warfare initiatives has grown rapidly. Every major U.S. defense agency supporting geo-spatial analysis and urban warfare is actively using, or investigating the potential of, high-resolution 3-D LIDAR terrain for future operations. Airborne LIDAR data are now commonly collected with an average spatial posting of 1 m to 5 cm.
The Joint Programs, Sustainment and Development Project Office (JPSD-PO) has contributed significantly to the rapid collection of high-resolution urban terrain using LIDAR sensor technology. JPSD has collected LIDAR data in support of Department of Defense activities throughout the world. Currently, the Urban Recon ACTD is creating the next generation LIDAR sensors for use in tactical environments. Advancements in flash LIDAR data collection and the integration of a complete LIDAR sensor under the Urban Recon ACTD will be examined.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Results from our fast, high-accuracy 3-D laser radar are given at distances up to 500 m. The system is based on gated viewing, with range accuracy below 1 mm under optimal circumstances. It consists of a high-sensitivity, fast, intensified CCD camera and a passively Q-switched Nd:YAG laser emitting green 532 nm pulses at 32.4 kHz. The CCD has 752×582 pixels. Camera shutter and delay are controlled in steps of 100 ps. Each laser pulse triggers the camera delay and shutter. A 3-D image is constructed from a sequence of 50-100 2-D reflectivity images, where each frame integrates ~700 laser pulses on the CCD. In 50 Hz video mode we record a 2-D sequence in a second and process a 3-D image in a few seconds. We compare 3-D images at short to long distances and quantify the degree of person identification (PID) in 3-D. Turbulence, vibrations, and system errors are found to limit successful PID to distances shorter than ~500 m for our prototype system.
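The reconstruction described above, a 3-D image built from a sequence of gated 2-D reflectivity images at stepped delays, can be sketched as an intensity-weighted centroid over the gate delays. This is a simplified illustration under our own assumptions, not the authors' actual processing chain; function and variable names are ours.

```python
import numpy as np

C = 2.998e8  # speed of light, m/s

def range_from_gated_stack(frames, delays_s):
    """Estimate a per-pixel range image from a stack of gated 2-D
    reflectivity images, one per gate delay, via an intensity-weighted
    centroid of the delays.

    frames   : (N, H, W) array of gated intensity images
    delays_s : (N,) gate delays in seconds
    Returns an (H, W) range image in metres (NaN where no return).
    """
    frames = np.asarray(frames, dtype=float)
    delays = np.asarray(delays_s, dtype=float)
    total = frames.sum(axis=0)
    total[total == 0] = np.nan                      # no return in this pixel
    t_mean = np.tensordot(delays, frames, axes=(0, 0)) / total
    return 0.5 * C * t_mean                         # two-way time -> range
```

With 100 ps delay steps, one step corresponds to about 15 mm of range; sub-step (here, sub-millimetre) accuracy comes from the centroid interpolating between gates.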
A novel CCD sensor and associated camera, capable of operating in a burst mode at 100 million frames per second (Mfps), has been developed. This camera, referred to as “the Zenith camera,” is combined with a fast-pulse laser illumination source to provide effective three-dimensional (3D) volumetric imaging of objects contained within a highly back-scattering medium. Each 16-frame burst establishes an image data-cube. The Zenith sensor is equipped with a custom micro-lens array to overcome sensor fill-factor limitations. Camera performance is evaluated to address and quantify a variety of parameters critical to the lidar application, including fundamental signal-to-noise performance as well as temporal-resolution effects such as image lag occurring between adjacent frames within a data-cube. Image processing methods aimed at overcoming any residual image lag or other performance limitations are presented.
3D ranging and imaging technology is generally divided into time-based (ladar) and position-based (triangulation) approaches. Traditionally ladar has been applied to long range, low precision applications and triangulation has been used for short range, high precision applications. Measurement speed and precision of both technologies have improved such that ladars are viable at shorter ranges and triangulation is viable at longer ranges. These improvements have produced an overlap of technologies for short to mid-range applications. This paper investigates the two sets of technologies to demonstrate their complementary nature particularly with respect to space and terrestrial applications such as vehicle inspection, navigation, collision avoidance, and rendezvous & docking.
An integrated ladar/EO imager has been developed that synchronizes and aligns CMOS digital camera readouts with the scan motion of a time-of-flight pulsed ladar. A prototype has been developed at the Utah State University Center for Advanced Imaging Ladar that reads out a 13 by 13 patch of RGB pixels within the subtended angle of a single ladar beam footprint. The readout location for the patch is slaved to the ladar and follows the ladar beam as it is scanned within the field-of-view. As the scanning occurs, the x-y-z position of each footprint and associated image patch is determined via the ladar. Multiple patches can then be mosaicked to build up a 3D image composed of 3D texture elements (texels) or 3D splats. Because of its ability to produce texels on-the-fly, the system is called a Texel Camera. The approach precludes mismatched occlusions and other ill effects when motion occurs in the scene. The existing prototype consists of a single-channel flying-spot ladar running at approximately 470 shots/second and a color imager running at approximately 160 times the shot rate. Other designs in development employ line-flash and array-flash ladar components that will run at pixel rates up to two orders of magnitude faster. The ability to create high-fidelity combined ladar/EO data sets in real time will be advantageous for time-critical applications such as cruise missile automatic target recognition. The design has potential applications in space rendezvous and docking, airborne automatic target recognition, surveillance from a tripod, and others that benefit from real-time 3D imagery creation.
A compact dual-mode seeker is under development at Diehl BGT Defence (DBD) addressing autonomous guidance, target detection, and classification/identification for extended air defence (EAD) and ballistic missile defence (BMD). The dual-mode sensor consists of an imaging infrared sensor and an imaging LADAR sensor, both operating in snapshot mode. This paper presents the concept of the dual-mode sensor and shows the current development status. Critical components such as a compact laser source, a fiber array for image-plane sampling, and a wavelength-selective infrared beam splitter are presented in detail. Single-spot and 3D LADAR measurements were performed with a seeker lab setup to demonstrate the system.
The Powered Low Cost Autonomous Attack System (PLOCAAS) is an Air Force Research Laboratory Munitions Directorate Advanced Technology Demonstration program. The PLOCAAS objective is to demonstrate a suite of technologies in an affordable miniature munition to autonomously search, detect, identify, track, attack and destroy ground mobile targets of military interest.
PLOCAAS incorporates a solid state LADAR seeker and Autonomous Target Acquisition (ATA) algorithms, a miniature turbojet engine, a multi-mode warhead, and an integrated INS/GPS into a 36” high lift-to-drag airframe. Together, these technologies provide standoff beyond terminal defenses, wide-area search capability, a high probability of target report, a low false-target attack rate, and high load-outs. Four LADAR seeker captive flight tests provided the sequestered data for robust Air Force ATA algorithm performance assessment and non-sequestered data for algorithm development. PLOCAAS has had three successful free-flight tests in which the LADAR seeker and ATA algorithms have detected, acquired, identified, tracked, and engaged ground mobile targets.
In addition to summarizing program activities to date, this paper will present requirements and capabilities to be demonstrated in the next phase of PLOCAAS development. This phase’s objective is to demonstrate the military utility of a two-way data-link. The data-link allows Operator-In-The-Loop monitoring and control of miniature, cooperative, wide-area-search munitions and enables them to serve as non-traditional Intelligence, Surveillance, and Reconnaissance (ISR) assets in a network-centric environment.
Raytheon has developed a new tactical form-factored, imaging LADAR (LAser Detection And Ranging) seeker. In a joint activity with AMRDEC, the seeker was used in a tower test data collection at the Russell Measurement Facility at Redstone Arsenal, Alabama. The seeker collected 3D imagery of fixed structures and vehicles embedded in various clutter backgrounds for use in analysis of computer vision and automatic target recognition techniques. This paper presents a high-level overview of the seeker, a description of the test activities, representative LADAR range and intensity imagery collected during the test, and 3D rendered scenes constructed from the imagery.
The Urban Recon Advanced Concepts Technology Demonstration (ACTD) has integrated a high rate 3-D laser scanner with an Inertial Navigation System (INS). The moving vehicle LIDAR sensors capture the entire street-level scene in 3-D including people, cars, building windows, doorways, and the entire gamut of urban clutter. The vehicle can travel at speeds of 20-30 mph collecting 5cm - 10cm resolution 3-D point cloud data of the street-level scene. The system collects below-the-roofline data as a vehicle moves down the street at a rate of 250,000 points per second.
Data must not only be collected at a high rate of speed, but the data must rapidly be processed and ready for visualization and analysis to be useful. Software has been created, extended or integrated to support the collection, processing and visualization of these data. The hardware and software which Urban Recon has assembled and integrated will be examined.
The ability to estimate the mean frequency, peak frequency, and frequency spread of angularly unresolved hard targets is examined using coherent and direct detection ladar simultaneously. It has been proposed that direct detection of the return speckle intensity can be used to enhance the coherent-detection ladar spectral estimates and signal processing algorithms. The direct detection ladar uses the speckle E-field return orthogonally polarized with respect to the local-oscillator laser, and does not affect the coherent detection sequence. We are concerned with obtaining precise frequency information with only coarse range requirements and, therefore, consider Q-switched laser pulses whose spectral width is much narrower than the target’s spectral bandwidth. We use spinning diffuse cones as examples. The direct-detection speckle intensity spectrum is computed and is shown to be corrupted by a strong DC component and by interference between the positive and negative frequencies, which causes additional frequency spread of nearly twice the target spectral width. However, useful target spectral-width information can be obtained by direct detection to aid the coherent-detection signal processing. Three algorithms are described, each shown to be within a factor of about 2 of the Cramér-Rao lower bound on mean and peak frequency precision. Surprisingly robust performance of the autocorrelation-function first-lag algorithm (“pulse-pair”) is demonstrated for these targets.
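The pulse-pair (autocorrelation first-lag) estimator mentioned above is a standard Doppler technique: the mean frequency is recovered from the phase of the first autocorrelation lag of the complex coherent-detection samples. A minimal sketch, with illustrative names:

```python
import numpy as np

def pulse_pair_mean_freq(z, fs):
    """Pulse-pair mean-frequency estimate from complex coherent-detection
    samples z at sample rate fs (Hz).

    The first-lag autocorrelation R(1) = E[z*(n) z(n+1)] has phase
    2*pi*f_mean/fs, so f_mean = angle(R(1)) * fs / (2*pi).
    Unambiguous only for |f_mean| < fs/2.
    """
    r1 = np.vdot(z[:-1], z[1:]) / (len(z) - 1)  # first-lag autocorrelation
    return np.angle(r1) * fs / (2 * np.pi)
```

Its robustness comes from using only the phase of one lag, which is insensitive to the overall spectral shape; the spectral width can likewise be estimated from the magnitude ratio of the zeroth and first lags.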
We introduce a new approach to coherent LIDAR remote sensing by utilizing a quantum-optical, parallel sensor based on spatial-spectral holography (SSH) in a cryogenically cooled inhomogeneously-broadened absorber (IBA) crystal that is used to sense the LIDAR returns and perform the front-end range-correlation signal processing. This SSH sensor increases the LIDAR system sensitivity through range-correlation gain before detection. This approach permits the use of high-power, noisy, CW lasers as ranging waveforms in LIDAR systems instead of the highly stabilized, injection seeded and amplified pulsed laser sources required by most coherent LIDAR systems. The capabilities of the IBA media for many 10s of GHz bandwidth and sub-MHz resolution, while using either a coded waveform or just a high-power, noisy laser with a broad linewidth (e.g. a random noise LIDAR) may enable a new generation of improved LIDAR sensors and processors. Preliminary experimental demonstrations of LIDAR range detection and signal processing for random noise and chirped transmitted waveforms are presented.
With the advance of linear CCD arrays and high precision galvanometer design in recent years, triangulation based 3D laser cameras have found wide applications from human contour digitization to object tracking and imaging on the International Space Station. [1] In most applications, a beam size of 1mm or larger is used to minimize the beam divergence over the entire range.
With a beam diameter of 1 mm, the position resolution (X, Y direction) is normally on the order of one millimeter. In the triangulation method, the distance (Z direction) information is extracted from the position of a Gaussian-shaped peak on a detector array. There are two major sources of error: edge effects and speckle noise, both caused by a large spot size. Edge effects are produced when parts of the same beam spot fall on surfaces at different distances. This causes the peak shape of the imaged spot on the array to deviate from Gaussian and produces errors in the distance measurement at the edge of an object.
In this paper, modeling of edge effects and speckle noise in an auto-synchronized 3D laser camera, in terms of beam size, laser wavelength, optical aperture, and the geometrical parameters of the triangulation arrangement, is discussed. Methods to mitigate errors from edge effects and speckle noise are presented, along with results showing high resolution in both lateral position and distance on a 3D object.
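As an illustration of the triangulation principle discussed above, the peak position on the detector array can be located to sub-pixel precision with a three-point parabolic fit, and range then recovered from the spot displacement. The right-angle geometry and the names below are our simplifying assumptions; the paper's auto-synchronized geometry is more involved.

```python
import numpy as np

def subpixel_peak(intensity):
    """Sub-pixel peak location on a linear detector array via a
    three-point parabolic fit around the brightest pixel."""
    i = int(np.argmax(intensity))
    if i == 0 or i == len(intensity) - 1:
        return float(i)                      # peak at array edge: no fit
    y0, y1, y2 = intensity[i - 1], intensity[i], intensity[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)

def triangulation_range(x_m, f_m, b_m):
    """Range for a simple right-angle triangulation geometry:
    baseline b, lens focal length f, spot displacement x on the array."""
    return f_m * b_m / x_m
```

Edge effects enter exactly here: a spot straddling a depth discontinuity produces a non-Gaussian, possibly bimodal peak, and the parabolic (or centroid) fit then returns a biased position.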
FireLidar, an active optical imaging system, is being developed for use as an aid to search and rescue in smoke and flame environments. The system is intended to augment currently available passive thermal imaging technology by imaging in the presence of a thermal bloom, heavy smoke conditions, or species which strongly absorb thermal radiation, such as water. We present experimental verification of a theoretical model for FireLidar. Lidar range equations for compartment fire scenarios are derived and compared to measurements taken in a controlled smoke chamber. Extinction measurements of near-infrared light through soot particulate provide information about optical properties of fire environments necessary to predict Lidar returns. Measured extinction values are compared to a single-scattering approximation, based on the Rayleigh-Debye-Gans scattering theory for fractal aggregates. Component specifications for a FireLidar prototype system are discussed, including laser power, filter bandwidth, and camera integration times. A man-portable prototype system using specified components is scheduled for completion by the end of 2005, with a handheld device following soon thereafter.
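The lidar range equations derived in the paper are not reproduced here; as a hedged sketch of the general form, the return power from a hard target through an extinguishing medium falls off with the square of range and with two-way exponential extinction. The specific expression below assumes a diffuse (Lambertian) target filling the beam and is our simplification, not FireLidar's actual model.

```python
import numpy as np

def lidar_return_power(P_t, R, rho, A_rx, sigma_ext, eta=1.0):
    """Hard-target lidar range equation with two-way extinction:

        P_r = eta * P_t * rho * A_rx / (pi * R^2) * exp(-2 * sigma_ext * R)

    P_t       : transmitted power (W)
    R         : range to target (m)
    rho       : target Lambertian reflectivity
    A_rx      : receiver aperture area (m^2)
    sigma_ext : extinction coefficient of the medium, e.g. smoke (1/m)
    eta       : system optical efficiency
    """
    return eta * P_t * rho * A_rx / (np.pi * R**2) * np.exp(-2 * sigma_ext * R)
```

The exponential term is where the soot-extinction measurements enter: once sigma_ext for a given smoke condition is known, the usable imaging range follows from the detector's noise floor.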
Situation awareness and accurate Target Identification (TID) are critical requirements for successful battle management. Ground vehicles can be detected, tracked, and in some cases imaged using airborne or space-borne microwave radar. Obscurants such as camouflage nets and/or tree canopy foliage can degrade the performance of such radars. Foliage can be penetrated with long-wavelength microwave radar, but generally at the expense of imaging resolution. The goals of the DARPA Jigsaw program include the development and demonstration of high-resolution 3-D imaging laser radar (ladar) sensor technology and systems that can be used from airborne platforms to image and identify military ground vehicles that may be hiding under camouflage or foliage such as tree canopy. With DARPA support, MIT Lincoln Laboratory has developed a rugged and compact 3-D imaging ladar system that has successfully demonstrated the feasibility and utility of this application. The sensor system has been integrated into a UH-1 helicopter for winter and summer flight campaigns. The sensor operates day or night and produces high-resolution 3-D spatial images using short laser pulses and a focal plane array of Geiger-mode avalanche photo-diode (APD) detectors with independent digital time-of-flight counting circuits at each pixel. The sensor technology includes Lincoln Laboratory developments of the microchip laser and novel focal plane arrays. The microchip laser is a passively Q-switched, frequency-doubled solid-state Nd:YAG laser transmitting short pulses (300 ps FWHM) at a 16 kHz pulse rate and a 532 nm wavelength. The single-photon detection efficiency has been measured to be > 20% using these 32×32 silicon Geiger-mode APDs at room temperature. The APD saturates while providing a gain of typically > 10⁶. The pulse out of the detector is used to stop a 500 MHz digital clock register integrated within the focal plane array at each pixel.
Using the detector in this binary response mode simplifies the signal processing by eliminating the need for analog-to-digital converters and non-linearity corrections. With appropriate optics, the 32×32 array of digital time values represents a 3-D spatial image frame of the scene. Successive image frames illuminated with the multi-kilohertz pulse-repetition-rate laser are accumulated into range histograms to provide 3-D volume and intensity information. In this article, we describe the Jigsaw program goals, our demonstration sensor system, and the data collection campaigns, and show examples of 3-D imaging with foliage and camouflage penetration. Other applications for this 3-D imaging direct-detection ladar technology include robotic vision, navigation of autonomous vehicles, manufacturing quality control, industrial security, and topography.
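The per-pixel processing described above, binary Geiger-mode detections from successive laser pulses accumulated into a range histogram, can be sketched as follows. The 500 MHz clock rate follows the text; the no-detection convention (-1) and names are illustrative.

```python
import numpy as np

CLOCK_HZ = 500e6                  # 500 MHz time-of-flight clock (from text)
C = 2.998e8                       # speed of light, m/s
BIN_M = 0.5 * C / CLOCK_HZ        # one clock tick ~ 0.3 m of range

def range_from_histogram(counts, n_bins):
    """Accumulate Geiger-mode APD time-of-flight values for one pixel
    (one integer clock count per laser pulse, or -1 for no detection)
    into a range histogram and return the range of the fullest bin.

    Because each pulse yields at most one binary detection, signal
    photons pile up in the target's bin over many pulses while dark
    counts and background spread uniformly across bins.
    """
    hits = counts[counts >= 0]                 # drop no-detection pulses
    hist = np.bincount(hits, minlength=n_bins)
    return np.argmax(hist) * BIN_M
```

With the 16 kHz laser, a few milliseconds of dwell already gives tens of pulses per pixel, which is why the histogram peak stands well clear of uniformly distributed noise counts.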
The spatial resolution of a conventional imaging LADAR system is constrained by the diffraction limit of the telescope aperture. The purpose of this work is to investigate Synthetic Aperture Imaging LADAR (SAIL), which employs aperture synthesis with coherent laser radar to overcome the diffraction limit and achieve fine-resolution, long range, two-dimensional imaging with modest aperture diameters. This paper details our laboratory-scale SAIL testbed, digital signal processing techniques, and image results. A number of fine-resolution, well-focused SAIL images are shown including both retro-reflecting and diffuse scattering targets. A general digital signal processing solution to the laser waveform instability problem is described and demonstrated, involving both new algorithms and hardware elements. These algorithms are primarily data-driven, without a priori knowledge of waveform and sensor position, representing a crucial step in developing a robust imaging system. These techniques perform well on waveform errors, but not on external phase errors such as turbulence or vibration. As a first step towards mitigating phase errors of this type, we have developed a balanced, quadrature phase, laser vibrometer to work in conjunction with our SAIL system to measure and compensate for relative line of sight motion between the target and transceiver. We describe this system and present a comparison of the vibrometer-measured phase error with the phase error inferred from the SAIL data.
We report the development of an eyesafe YAG laser for coherent laser radar wind-sensing applications. The upper-state-pumped 1.6 μm Er:YAG laser produces high pulse energies with diffraction-limited beam quality.
Precision geo-location (absolute accuracy of 20 cm or less) is required of high-resolution lidar data (1 m or less post spacing) for general surveying needs and high-quality visualization. Current open-loop airborne-hardware pointing-technology supports precision geo-location at short range (less than a few km). Precision geo-location at longer range can be achieved with more complex pointing-hardware but at substantial cost. This paper introduces the concept of multi-look lidar that has the potential for achieving long-range precision geo-location but at substantially reduced cost. In this concept, lidar data are collected at multiple look-angles that are consistent with Position-Dilution-Of-Precision (PDOP) requirements. The data are registered, triangulated, and block-adjusted with a dense set of self-generated control points. An analytic model is presented that shows that the error performance is independent of range. A flight test is described that validates the multi-look-lidar concept. Potential system-implementations are also described that can have minimal impact on hardware or conventional flight operations.
The ongoing technical developments in airborne laser scanner systems (shorter pulses, increased operating altitudes, focal plane array detectors, full-waveform digitization and recording, etc.) provide new opportunities for the expansion and growth of military as well as civilian applications. However, for the continuing development of systems and applications, one crucial issue is the research and development of new and efficient laser data processing methods for analysis and visualization.
In this paper we will present some recent developments on visualization and analysis of full-waveform data. We will discuss visualization of waveform data by inserting the waveform samples into a 3D volume consisting of small 3D cells referred to as voxels. We will also present an approach for extracting additional 3D point data from the waveforms. The long-term goal of this research is to develop methods for automated extraction of natural as well as man-made objects. The aim is to support the construction of high-fidelity 3D virtual environment models and the detection and identification of man-made objects.
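Inserting waveform samples into a voxel volume, as described above, can be sketched as follows. The sum-accumulation rule and all names are our assumptions; the paper may accumulate differently (e.g. maximum amplitude per voxel).

```python
import numpy as np

def voxelize_waveforms(points, amplitudes, origin, voxel_m, shape):
    """Insert full-waveform samples into a 3-D voxel volume.

    points     : (N, 3) sample positions (x, y, z) in metres, obtained by
                 placing each waveform sample along its laser pulse vector
    amplitudes : (N,) waveform sample amplitudes
    origin     : (3,) world position of voxel (0, 0, 0)
    voxel_m    : voxel edge length in metres
    shape      : (nx, ny, nz) volume dimensions

    Amplitudes of samples falling in the same voxel are summed;
    out-of-volume samples are dropped.
    """
    points = np.asarray(points, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    vol = np.zeros(shape)
    idx = np.floor((points - origin) / voxel_m).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    np.add.at(vol, tuple(idx[ok].T), amplitudes[ok])  # unbuffered accumulate
    return vol
```

`np.add.at` is used instead of fancy-index assignment so that multiple samples landing in the same voxel all contribute, rather than the last one overwriting the others.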
Three-dimensional flash LIDAR coupled with a 2D RGB camera on an aerial platform is an efficient data collection method for mapping wide-area terrain and urban sites with imagery draped over a 3D model. In order to assemble a seamless and geographically accurate mosaic product despite GPS/INS errors, frames of imagery require data-driven registration. In the approach described in this paper, all spatially overlapping frame pairs are registered, be they adjacent in time, within the same flight line, or across flight lines, and the alignment model accounts for parallax due to 3D structure. All pairwise registration constraints, along with GPS/INS measurements, are combined by least squares adjustment to estimate the pose of each frame. Registered LIDAR frames are then combined and regridded to a uniformly sampled DEM, which is then used to orthorectify and mosaic the RGB frames. Furthermore, in order to process and store hours of data efficiently, a control strategy partitions the entire terrain into moderate size tiles, within which the pairwise registration, least squares adjustment, and resampling are performed. In a flash LIDAR system designed to map 360 sq. km per hour at 1m resolution, the software will achieve near real-time throughput on a commercial PC.
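The combination of pairwise registration constraints with GPS/INS measurements by least-squares adjustment can be illustrated with a 1-D toy version: per-frame along-track positions only, whereas the real adjustment estimates full 6-DOF poses. Weights and names are illustrative assumptions.

```python
import numpy as np

def adjust_poses(gps, pairs, w_gps=1.0, w_reg=10.0):
    """Toy 1-D least-squares adjustment of per-frame positions x_i.

    gps   : list of absolute GPS/INS position observations, x_i ~ gps[i]
    pairs : list of (i, j, d) pairwise registration constraints, x_j - x_i ~ d
    Solves the weighted normal equations (A^T W A) x = A^T W b.
    """
    n = len(gps)
    rows, rhs, w = [], [], []
    for i, g in enumerate(gps):            # absolute (GPS/INS) observations
        r = np.zeros(n); r[i] = 1.0
        rows.append(r); rhs.append(g); w.append(w_gps)
    for i, j, d in pairs:                  # relative (registration) observations
        r = np.zeros(n); r[i] = -1.0; r[j] = 1.0
        rows.append(r); rhs.append(d); w.append(w_reg)
    A = np.array(rows); b = np.array(rhs); W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

The design choice the abstract describes falls out of the weights: tight registration constraints enforce internal (frame-to-frame) consistency of the mosaic, while the weaker GPS/INS observations anchor the whole block in absolute coordinates.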
The navigation of an autonomous robotic vehicle is a difficult task. Accurate measurement of robotic vehicle motion is a problem in certain environments. In desert and other terrains, wheel slip affects the accuracy of odometry sensors. Poorly-lit underground environments present problems for passive vision systems. As well, for slow-moving vehicles, the effects of INS drift errors can be large even over short distances. An active triangulation scanning laser camera sensor, which can provide accurate 3D images at distances less than 10m, has the potential to alleviate the problems mentioned above by improving the accuracy of integrated navigation systems for robotic vehicles operating in such environments. Knowledge of the relative position measurement accuracy for scanning laser cameras in various environments will allow navigation system designers to determine whether incorporating these sensors will help to meet their system accuracy requirements. This paper presents an experimental method for determining relative position measurement accuracy of an auto-synchronous triangulation scanning laser camera. 3D images were taken of a simulated desert terrain environment from multiple camera positions and orientations. Registration of overlapping images using an Iterative Closest Point (ICP) based algorithm was performed to determine an estimate of the position and orientation change of the laser camera. Truth data for the position and orientation of the laser camera at each location was determined by using theodolites to measure the location of survey targets mounted on the laser camera. The relative position estimates were then compared to the truth data. In this paper, the experiment design and implementation are detailed, and preliminary experimental results are presented.
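A minimal point-to-point ICP of the kind used for registration above can be sketched as follows: brute-force nearest-neighbour matching alternated with a closed-form (Kabsch/SVD) rigid-transform solve. The paper's ICP-based algorithm may differ in details such as outlier rejection and point-to-plane error metrics; names here are ours.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping matched points P
    onto Q (Kabsch algorithm, point-to-point)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(P, Q, iters=20):
    """Minimal point-to-point ICP: match each point of P to its nearest
    neighbour in Q (brute force), solve the rigid transform, apply it,
    and repeat. Returns the accumulated (R, t) mapping P onto Q."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        matched = Q[np.argmin(d, axis=1)]
        R, t = best_rigid_transform(P, matched)
        P = P @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

The recovered (R, t) between overlapping images is exactly the relative position and orientation change that the experiment compares against the theodolite-surveyed truth data.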
Model-based object recognition in range imagery typically involves matching the image data to the expected model data for each feasible model and pose hypothesis. Since the matching procedure is computationally expensive, the key to efficient object recognition is the reduction of the set of feasible hypotheses. This is particularly important for military vehicles, which may consist of several large moving parts such as the hull, turret, and gun of a tank, and hence require an eight or higher dimensional pose space to be searched.
This paper outlines techniques for reducing the set of feasible hypotheses based on estimates of target dimensions and orientation. Furthermore, the presence of a turret and a main gun, and their orientations, are determined. The vehicle part dimensions and their error estimates restrict the number of model hypotheses, whereas the position and orientation estimates and their error bounds reduce the number of pose hypotheses that need to be verified.
The techniques are applied to several hundred laser radar images of eight different military vehicles with various part classifications and orientations. On-target resolution in azimuth, elevation, and range is about 30 cm. The range images contain up to 20% dropouts due to atmospheric absorption, and some target retro-reflectors produce outliers due to signal crosstalk. The presented algorithms are extremely robust with respect to these and other error sources. The hypothesis space for hull orientation is reduced to about 5 degrees, as are the errors for turret rotation and gun elevation, provided the main gun is visible.
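A minimal sketch of dimension-based hypothesis pruning of the kind described above; the model database, estimate, and error bounds below are invented for illustration and do not come from the paper.

```python
# Hypothetical model database (illustrative dimensions in metres).
MODELS = {
    "tank_A": {"length": 7.0, "width": 3.5},
    "tank_B": {"length": 9.5, "width": 3.6},
    "apc_C":  {"length": 6.5, "width": 2.8},
}

def prune_hypotheses(models, estimate, error):
    """Keep only models whose every dimension lies within the
    measured estimate plus/minus its error bound."""
    return [name for name, dims in models.items()
            if all(abs(dims[k] - estimate[k]) <= error[k] for k in estimate)]
```

Only the surviving models proceed to the computationally expensive matching stage; pose hypotheses are restricted analogously using the orientation estimates and their error bounds.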
3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in 3-dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined, is improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system’s ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework utilizes only those terminologies proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new, Range-Based, Slanted Knife-Edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enable improved system standardization.
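As background for the Rayleigh-style criterion extended into the range dimension, the conventional two-point range resolution for a pulse of width τ is ΔR = cτ/2. A minimal sketch of that relation (not tied to any specific system in the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_resolution(pulse_width_s):
    """Two-point range resolution: surfaces separated in depth by less
    than c*tau/2 produce overlapping returns that simple peak
    detection cannot separate."""
    return C * pulse_width_s / 2.0
```

For example, a 2 ns pulse gives roughly 0.3 m of depth separation before two returns merge.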
The loss of Space Shuttle Columbia and her crew led to the creation of the Columbia Accident Investigation Board (CAIB), which concluded that a piece of external fuel tank insulating foam impacted the Shuttle's wing leading edge. The foam created a hole in the reinforced carbon/carbon (RCC) insulating material, gravely compromising the Shuttle's thermal protection system (TPS). In response to the CAIB recommendation, on the upcoming Return to Flight Shuttle mission (STS-114) NASA will include a Shuttle-deployed sensor suite which, among other sensors, will include two laser sensing systems, Sandia National Lab's Laser Dynamic Range Imager (LDRI) and Neptec's Laser Camera System (LCS), to collect 3-D imagery of the Shuttle's exterior. Herein is described a ground-based statistical testing procedure that NASA will use to assess the performance of each of the two laser radar systems in detecting and identifying impact damage to the Shuttle. A statistical framework based on binomial and Bayesian statistics is used to describe the probability of detection and the associated statistical confidence. A mock-up of a section of Shuttle wing RCC with interchangeable panels includes a random pattern of 1/4" and 1" diameter holes on the simulated RCC panels and is cataloged prior to double-blind testing. A team of ladar sensor operators will acquire laser radar imagery of the wing mock-up, using a robotic platform in a laboratory at Johnson Space Center to execute linear image scans. The test matrix will vary robotic platform motion to simulate boom wobble and will alter lighting and background conditions at the 6.5-foot and 10-foot sensor-wing stand-off distances to be used on orbit. A separate team of image analysts will process and review the data and will characterize and record the damage that is found.
A suite of software programs has been developed to support hole location definition, damage disposition recording, statistical data analysis and results presentation. The result of the statistical analysis will provide a quantitative indication of the laboratory performance of the ladar systems in the role of through hole damage detection.
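A sketch of the binomial machinery behind such probability-of-detection statements (illustrative only; this is not NASA's analysis software, and the trial counts below are examples):

```python
def pod_lower_bound_all_detected(n, alpha=0.05):
    """One-sided (1 - alpha) lower confidence bound on the probability
    of detection when all n independent trials were detections: the
    binomial tail gives p_lower ** n = alpha."""
    return alpha ** (1.0 / n)

def pod_posterior_mean(k, n):
    """Bayesian point estimate for k detections in n trials with a
    uniform Beta(1, 1) prior: posterior mean (k + 1) / (n + 2),
    Laplace's rule of succession."""
    return (k + 1) / (n + 2)
```

For instance, 59 detections with no misses demonstrate a probability of detection of at least 0.95 at 95% confidence, the classic sizing argument for test matrices of this kind.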
This paper presents a discussion of standards requirements for LADAR. Two specific questions are addressed: (1) is there a need for such standards and (2) what types of standards are required? LADAR standards development issues and current standardization efforts are also summarized.
Remote sensing systems, such as LIDAR, have benefited greatly from nonlinear sources capable of generating tunable mid-infrared wavelengths (3-5 microns). Much work has focused on improving the energy output of these sources so as to improve the system's range. We present a different approach to improving the range by focusing on improving the receiver of a LADAR system employing nonlinear optical techniques. In this paper, we will present results of a receiver system based on frequency converting mid-infrared wavelengths to the 1.5 μm region using Periodically-Poled Lithium Niobate (PPLN). By doing so, optical amplifiers and avalanche photodetectors (APDs) developed for the fiber optics communications industry can be used, thus providing very high detection sensitivity and high speed without the need for cryogenically cooled optical detectors. We will present results of laboratory experiments with 3 μm, 2.5 ns FWHM LADAR pulses that have been converted to 1.5 μm. Detection sensitivities as low as 1.5 x 10^-13 Joules have been demonstrated. The performance of the Peltier-cooled 1.5 μm InGaAs APD quasi photon-counting receiver will be described.
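The quoted sensitivity can be restated in photon terms via E = N·h·c/λ; a quick back-of-envelope calculation (our own arithmetic, not from the paper):

```python
H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def photons_per_pulse(energy_j, wavelength_m):
    """Number of photons carrying a given pulse energy:
    N = E / (h*c/lambda) = E * lambda / (h*c)."""
    return energy_j * wavelength_m / (H * C)
```

At the converted wavelength of 1.5 μm, the demonstrated 1.5 x 10^-13 J sensitivity corresponds to roughly 1.1 million photons per pulse.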
An alternative, class AB configuration of a proven class A readout cell for active/passive imaging systems is presented. Comparison of the two approaches shows that the class AB circuit lowers power consumption and reduces noise by a factor of 3 while using nearly equal chip area; on the other hand, it has lower bandwidth because it operates at lower bias currents. A 0.5 μm CMOS test chip that includes both types of readout circuits has been designed, fabricated, and is currently being tested. Simulation results for the readout circuits on this test chip are used to compare the two configurations.
Geiger-mode avalanche photodiodes (APDs) can convert the arrival of a single photon into a digital logic pulse. Arrays of APDs can be directly interfaced to arrays of per-pixel digital electronics fabricated in silicon CMOS, providing the capability to time the arrival of photons in each pixel. These arrays are of interest for "flash" LADAR systems, where multiple target pixels are simultaneously illuminated by the laser during a single laser pulse, and the imaging array is used to measure range to each of the illuminated pixels. Since many laser radar systems use Nd:YAG lasers operating at 1.06 um, we have extended our earlier work with silicon-based APDs by developing arrays of InGaAsP/InP APDs, which are efficient detectors for near-IR radiation. 32x32 pixel arrays, with 100-um pixel pitches, are currently being successfully used in demonstration systems.
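The per-pixel photon timing these arrays provide maps to range through r = c·t/2; a minimal sketch of that conversion (illustrative, not the demonstration systems' actual processing):

```python
C = 2.99792458e8  # speed of light, m/s

def range_from_time(t_s):
    """One-way range from a round-trip photon time of flight."""
    return C * t_s / 2.0

def range_image(timestamps):
    """Convert a per-pixel timestamp grid (seconds) to ranges (metres),
    passing through None for pixels that registered no photon."""
    return [[None if t is None else range_from_time(t) for t in row]
            for row in timestamps]
```

A 1 μs round trip, for example, corresponds to about 150 m of range.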
This paper reviews the progress of Advanced Scientific Concepts, Inc (ASC) large format flash ladar 3-D imaging systems for longer-range applications. Single-laser-pulse images are taken from a manned flight test at 1000 - 2000 ft demonstrating not only the 3-D mapping potential of the system but also its use in object identification. Gated images on the ground exemplify vehicle identification applications. Use of signal amplitude information in enhancing the 3-D image is also illustrated.
Targets, Backgrounds, and Environmental Monitoring
This paper describes measurements of snow reflection using laser radar. Publications on snow reflection in the laser radar context appear to be rather scarce, which is why we decided to investigate snow reflection in more detail, including the reflection from different kinds of snow as well as the angular reflection properties.
We discuss reflectance information obtained with two commercial scanning laser radars operating at wavelengths of 0.9 μm and 1.5 μm. Data are mainly presented at the eye-safe wavelength of 1.5 μm, but some measurements were also performed at 0.9 μm. We measured snow reflection over part of a winter season, which gave us the opportunity to investigate different types of snow under different meteorological conditions.
The reflection values tend to decrease during the first couple of hours after a snowfall. The snow structure seems to be more important for the reflection than the snow age. In general, snow reflection at 1.5 μm is rather low; reflectivity values vary between 0.5% and 10% for oblique incidence depending on snow structure, which in turn depends on age, air temperature, humidity, etc. Snow reflectivity at the 0.9 μm laser wavelength is much higher, more than 50% for fresh snow. Images of snow-covered scenes are shown together with reflection data, including BRDFs.
The increasing demand for accurate monitoring of various pollutants, especially oil in seawater and in ice, requires advanced remote sensing techniques. These techniques must operate in real time and be suitable for multiple tasks. One very promising oil remote sensing technique is hyperspectral analysis of laser light reflected by an oil film, and development work in this area is ongoing worldwide. This activity focuses on a modern lidar system known as a multi-wavelength hyperspectral lidar. Our paper provides an overview of the current state of the art in hyperspectral lidar technology.
The delay between the time a natural disaster occurs (for example, an oil accident in coastal waters) and the time environmental protection actions start (for example, water and shoreline clean-up) is of significant importance. Remote sensing techniques are mostly considered (near) real-time and suitable for multiple tasks. These techniques, combined with rapid environmental assessment methodologies, form a multi-tier environmental assessment model that allows (near) real-time datasets to be created and sampling scenarios to be optimized. This paper presents the idea of a three-tier environmental assessment model. All three tiers are briefly described to show the linkages between them, with a particular focus on the first tier. Furthermore, we describe how large-scale environmental assessment can be improved using an airborne 3-D scanning FLS-AM series hyperspectral lidar. This new aircraft-based sensor is typically applied to mapping oil on the sea or ground surface and extracting optical features of objects. In general, a sampling network based on the three-tier environmental assessment model can include ships and aircraft. The airborne 3-D scanning FLS-AM series hyperspectral lidar significantly speeds up the assessment of a disaster area because it is a real-time remote sensing instrument. For instance, it can deliver the georeferenced oil spill position in WGS-84, the estimated size of the whole spill, and the estimated amount of oil in seawater or on the ground. All information is produced in digital form and can therefore be transferred directly into a customer's GIS (Geographical Information System).
Unattended lidars operating in the mid-visible region for clouds and aerosols are currently deployed at tens of locations in the U.S. and in other countries. The micro-pulse lidar known as MPL is a very successful instrument in terms of numbers deployed, and it is also very sophisticated. In order to operate during daytime, micro-pulse lidars must have an extremely narrow field of view (FOV) and a very small optical bandpass. They are consequently not inexpensive, they tend to suffer from mechanical instability, and they are not field-serviceable when certain types of failures occur. In order to establish the optimum wavelength region for an unattended aerosol lidar, the spectral dependencies of eye safety standards, sky radiance, laser availability, detector performance, atmospheric optical properties, and optical materials are presented. In particular, eye safety standards allow a fluence of 1 J/cm^2 at 1.5 microns, which is 10^7 times the fluence allowed in the mid-visible. Pulse energies on the order of 10 mJ are then sufficient to make daytime operation easy and low-cost. A conventional bistatic lidar configuration can be used with a field of view on the order of milliradians, which eliminates the problem of mechanical instability, and the optical bandpass can be limited with an inexpensive interference filter. In addition, the InGaAs detectors used at 1.5 microns are much less susceptible to optical damage than the Geiger-mode silicon avalanche photodiodes (APDs) used in visible-light lidars.
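The fluence argument translates directly into beam-area requirements; a simple illustration of the arithmetic (the exposure limits are the ones quoted in the abstract, and the pulse energy is an example):

```python
# Exposure limits quoted in the abstract, in J/cm^2.
MPE_1500NM = 1.0
MPE_VISIBLE = 1.0e-7  # 10^7 times smaller in the mid-visible

def min_beam_area_cm2(pulse_energy_j, fluence_limit_j_per_cm2):
    """Smallest beam cross-section at the eye for which a pulse of the
    given energy stays within the fluence limit."""
    return pulse_energy_j / fluence_limit_j_per_cm2
```

A 10 mJ pulse needs only 0.01 cm^2 of beam area at 1.5 μm but about 10^5 cm^2 in the mid-visible, which is why eye-safe daytime operation is so much easier at 1.5 μm.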
Shipboard infrared search and track (IRST) systems can detect sea-skimming, anti-ship missiles at long ranges. Since IRST systems cannot measure range and line-of-sight (LOS) velocity, they have difficulty distinguishing missiles from false targets and clutter. In a joint Army-Navy program, the Army Research Laboratory (ARL) is developing a ladar based on the chirped amplitude modulation (AM) technique to provide range and velocity measurements of potential targets handed over by the distributed aperture system - IRST (DAS-IRST) being developed by the Naval Research Laboratory (NRL) and sponsored by the Office of Naval Research (ONR). Using the ladar's range and velocity data, false alarms and clutter will be eliminated, and valid missile targets' tracks will be updated. By using an array receiver, ARL's ladar will also provide 3D imagery of potential threats for force protection/situational awareness. The concept of operation, the Phase I breadboard ladar design and performance model results, and the Phase I breadboard ladar development program were presented in paper 5413-16 at last year's symposium. This paper will present updated design and performance model results, as well as recent laboratory and field test results for the Phase I breadboard ladar. Implications of the Phase I program results for the design, development, and testing of the Phase II brassboard ladar will also be discussed.
The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model has been developed at the Rochester Institute of Technology (RIT) for over a decade. The model is an established, first-principles based scene simulation tool that has been focused on passive multi- and hyper-spectral sensing from the visible to long wave infrared (0.4 to 14 μm). Leveraging photon mapping techniques utilized by the computer graphics community, a first-principles based elastic Light Detection and Ranging (LIDAR) model was incorporated into the passive radiometry framework so that the model calculates arbitrary, time-gated radiances reaching the sensor for both the atmospheric and topographic returns. The active LIDAR module handles a wide variety of complicated scene geometries, a diverse set of surface and participating media optical characteristics, multiple bounce and multiple scattering effects, and a flexible suite of sensor models. This paper will present the numerical approaches employed to predict sensor reaching radiances and comparisons with analytically predicted results. Representative data sets generated by the DIRSIG model for a topographical LIDAR will be shown. Additionally, the results from phenomenological case studies including standard terrain topography, forest canopy penetration, and camouflaged hard targets will be presented.
Constantly improving ladar sensor technology has pushed simulation capabilities required for hardware-in-the-loop sensor testing and algorithm development beyond the capabilities of standard desktop PCs. Robust ladar computations require transport and manipulation of large, complex, multi-dimensional datasets containing range, irradiance, micro-Doppler, polarization, speckle decorrelation, and other information. Coherent Technologies, Inc. (CTI) is developing a portable, scalable software architecture for implementing ladar imaging simulation calculations on large cluster-based supercomputers. This architecture takes advantage of both line-of-sight and transverse modes of parallelization for the various stages of computation encountered in typical ladar calculations. In order to assure portability of software, this effort has followed ANSI coding standards for C/C++ and parallel data control is implemented using the Message Passing Interface (MPI). Using this rather general coding framework, CTI researchers have realized parallel efficiencies in excess of 50%, or fixed problem speedups of up to 19x on 32 processors. As increased fidelity is incorporated into the simulator, parallel efficiency is expected to improve even further.
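The scaling figures quoted above follow the standard strong-scaling definitions; a small sketch of the arithmetic (illustrative, not CTI's benchmarking code):

```python
def speedup(t_serial, t_parallel):
    """Fixed-problem speedup: serial runtime over parallel runtime."""
    return t_serial / t_parallel

def parallel_efficiency(speedup_x, n_procs):
    """Strong-scaling parallel efficiency: speedup per processor."""
    return speedup_x / n_procs
```

A 19x speedup on 32 processors is an efficiency of 19/32, about 59%, consistent with the "in excess of 50%" figure above.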
The emergence of tripod-mounted lidar sensors as a viable method of 3D data collection has provided users with the ability to interrogate structures using high-resolution, metrically accurate 3D measurements. As with any measurement device, the accuracy of the collected data is of paramount importance. Angular accuracy is a crucial parameter in the overall performance of a 3D range sensor. This is particularly true in long-range applications, where angular errors are amplified in proportion to the target range; consequently, angular accuracy is the determining factor in the accuracy of a long-range tripod-mounted laser scanner. Recent advances in laser scanning technologies have enabled significant increases in the addressable field of view (FOV) of 3D scanners. The most common embodiment of such systems incorporates a two-axis rotation mechanism: typically, a rapidly oscillating mirror sweeps a laser beam into a "sheet" of light covering a vertical plane, and this plane is in turn rotated around a vertical axis to provide nearly omnidirectional coverage of the scene.
A 3D measurement system's angular accuracy depends on two angular characteristics: angular resolution and angular repeatability. The systems described above can suffer on both counts. The angular resolution is determined by the available angular position sensors, which yield angular increments too coarse for long-range 3D measurements. Similarly, the angular repeatability of the available actuators suffers from non-linearities and other mechanical instabilities. The combined result is a data set that is less accurate than what is achievable in small-FOV systems of similar configuration.
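The range amplification of angular error is simply r·tan(θ); a minimal sketch (the example numbers in the note below are ours, not from the paper):

```python
import math

def lateral_error_m(range_m, angular_error_rad):
    """Cross-range position error produced by an angular error at a
    given range; for small angles this is essentially r * theta."""
    return range_m * math.tan(angular_error_rad)
```

A 100 μrad error, respectable by encoder standards, already displaces a point by 5 cm at 500 m, which is why angular resolution and repeatability dominate long-range accuracy.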
We present a high-resolution, direct-detection, commercially available scanning LIDAR that digitizes the echo signal and provides these waveforms for subsequent full waveform analysis to extract target characteristics. The scanning LIDAR is intended primarily for airborne applications, where it provides significant advantages in forestry, but it also shows high potential for demanding applications requiring high penetration, multiple targets per laser pulse, and high multi-pulse resolution. We present test data demonstrating the capabilities of the echo-digitizing scanning LIDAR.
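As a toy illustration of what full waveform analysis starts from (real systems typically fit a parametric model, e.g. a sum of Gaussians, rather than simple peak picking; the waveform in the note below is invented):

```python
def detect_returns(waveform, threshold):
    """Indices of local maxima above a threshold in a digitised echo
    waveform: a crude stand-in for full waveform decomposition that
    still recovers multiple targets per laser pulse."""
    return [i for i in range(1, len(waveform) - 1)
            if waveform[i] >= threshold
            and waveform[i - 1] < waveform[i] >= waveform[i + 1]]
```

On a digitised echo such as [0, 1, 5, 2, 1, 3, 8, 3, 1, 0] with a threshold of 4, the two surviving peaks correspond to two distinct targets along the pulse path, e.g. a canopy return followed by the ground.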