Current pulsed laser radar systems for ranging are based on time-of-flight techniques. Nowadays both first-pulse and last-pulse exploitation are used for different applications, e.g., urban planning and forestry surveying. Beyond this time-measurement technique, the complete signal waveform over time may be of interest, because it contains the backscattering characteristic of the illuminated field. This characteristic can be used to estimate the aspect angle of a plane with a specific surface property, or to estimate the surface property of a plane with a specific aspect angle. In this paper a monostatic, bi-directional experimental system with a fast digitizing receiver is described. The spatio-temporal beam propagation, the spatial reflectance of the surface, and the receiver properties are modeled. A time-dependent description of the received signal power is derived, and our particular surface property is considered. The spatial distribution of the laser beam used was measured and displayed as a beam profile. For a plane surface under various aspect angles, the transversal distributions of the beam were simulated and measured. For these angles the corresponding temporal beam distributions were measured and their pulse widths compared. The pulse spread is used to estimate the aspect angle of the illuminated object. Statistics for the different angles were calculated. Different approaches for detecting a characteristic time value were compared and evaluated. Considering the signal form allows a more precise determination of the time-of-flight. A 3-D visualization of equi-irradiance surfaces makes the spatio-temporal shape of the pulses accessible.
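The time-of-flight ranging and pulse-width evaluation described above can be sketched as follows. This is a minimal illustration, not the authors' actual processing chain: the Gaussian return pulse, the sample grid, and the choice of the centroid and FWHM as the characteristic time value and pulse-spread measure are assumptions made here for demonstration.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def range_from_tof(t_flight):
    """Monostatic range from round-trip time-of-flight."""
    return C * t_flight / 2.0

def pulse_metrics(t, p):
    """Characteristic time (centroid) and FWHM pulse width of a
    sampled return waveform.

    t : sample times in seconds
    p : received power samples
    """
    centroid = np.sum(t * p) / np.sum(p)   # one possible characteristic time
    half = p.max() / 2.0
    above = np.where(p >= half)[0]         # samples above half maximum
    fwhm = t[above[-1]] - t[above[0]]      # full width at half maximum
    return centroid, fwhm

# Synthetic Gaussian return pulse (illustration only)
t = np.linspace(0.0, 200e-9, 2001)        # 200 ns window, 0.1 ns steps
t0, sigma = 100e-9, 3e-9                  # arrival time and pulse sigma
p = np.exp(-0.5 * ((t - t0) / sigma) ** 2)

tc, w = pulse_metrics(t, p)               # characteristic time, pulse width
r = range_from_tof(tc)                    # estimated range in meters
```

A broadened FWHM relative to the emitted pulse width would then indicate pulse spread caused by the tilted surface, which is the quantity used above to estimate the aspect angle.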
Surveillance systems against missile attacks require the automatic detection of targets with a low false alarm rate (FAR). Infrared Search and Track (IRST) systems offer passive detection of threats at long ranges. To maximize reaction time and allow countermeasures to be arranged, objects must be declared as early as possible. For this purpose the detection and tracking algorithms have to deal with point objects. Conventional object features such as shape, size, and texture are usually unreliable for small objects. More reliable features of point objects are the three-dimensional spatial position and velocity. At least two sensors observing the same scene are required for multi-ocular stereo vision. Three steps are essential for successful stereo image processing. First, precise camera calibration (estimating the intrinsic and extrinsic parameters) is necessary to satisfy the demand for high accuracy, especially for long-range targets. Second, the correspondence problem for the detected objects must be solved. Third, the three-dimensional location of the potential target has to be determined by projective transformation. For evaluation, a measurement campaign to capture image data was carried out with real targets using two identical IR cameras; additionally, synthetic IR image sequences were generated and processed. In this paper a straightforward solution for stereo analysis based on stationary binocular sensors is presented, current results are shown, and suggestions for future work are given.
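The third step, recovering the 3-D target position from corresponding point detections in two calibrated cameras, can be illustrated with a midpoint triangulation sketch. The camera centers and viewing directions are hypothetical inputs assumed to come from the intrinsic/extrinsic calibration; the paper's actual projective transformation may differ.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of closest approach between two viewing rays.

    c1, c2 : camera centers (from the extrinsic calibration)
    d1, d2 : viewing directions of the corresponding detections
             (back-projected via the intrinsic calibration)
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    # Least-squares ray parameters s, t minimizing |(c1 + s*d1) - (c2 + t*d2)|
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a11 * a22 - a12 * a12          # zero only for parallel rays
    s = (a22 * (d1 @ b) - a12 * (d2 @ b)) / denom
    t = (a12 * (d1 @ b) - a11 * (d2 @ b)) / denom
    # Midpoint between the two closest points on the rays
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Hypothetical binocular setup: cameras 2 m apart, target 10 m ahead
c1 = np.array([-1.0, 0.0, 0.0])
c2 = np.array([ 1.0, 0.0, 0.0])
d1 = np.array([ 1.0, 0.0, 10.0])           # ray from camera 1 toward target
d2 = np.array([-1.0, 0.0, 10.0])           # ray from camera 2 toward target
target = triangulate_midpoint(c1, d1, c2, d2)
```

With noisy detections the two rays do not intersect exactly, which is why the midpoint of the closest-approach segment is used; its length can also serve as a plausibility check when solving the correspondence problem.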