This paper describes a novel method for extracting planar surfaces from a stream of 3D images in near real time. The
method currently operates on 3D images acquired from a MESA SwissRanger SR-3000 infrared time-of-flight camera,
which operates in a manner similar to flash-ladar sensors; the camera provides the user with range and intensity values for
each pixel in the 176 by 144 image frame. After application of the camera calibration, the range measurement associated
with each pixel can be converted to Cartesian coordinates.
First, the proposed method splits the focal image plane into sub-images or sub-windows. The method then operates in
the 3D parameter space to find an estimate of the planar equation best describing the point cloud associated with the
window pixels and to compute a metric that defines how well the sub-window points fit to the planar estimate. The best
fit sub-window is then used as an initialization to one of two investigated methods: a parameter based search technique
and cluster validation using histogram thresholding to extract the entire plane from the 3D image frame. Once a plane is
extracted, a feature vector describing that plane, along with its associated statistics, can be generated. These feature
vectors can then be used to enable feature-based navigation.
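The per-window plane estimate and fit metric described above can be sketched as a total-least-squares fit via the SVD; this is one standard way to fit a plane to a point cloud and score the fit, not necessarily the paper's exact formulation:

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane fit to an (N, 3) sub-window point cloud.

    Returns (normal, d, rms) such that normal . p + d ~ 0 for points p
    on the plane. The RMS orthogonal distance serves as the fit-quality
    metric: the sub-window with the smallest rms would seed the
    plane-growing step.
    """
    centroid = points.mean(axis=0)
    # Right singular vector for the smallest singular value of the
    # centered points is the direction of least variance: the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    rms = np.sqrt(np.mean((points @ normal + d) ** 2))
    return normal, d, rms
```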
The paper will fully describe the feature extraction method and will provide results of applying this method to extract
features from indoor 3D video data obtained with the MESA SwissRanger SR-3000. A brief overview of the generation
of feature statistics and their importance is also provided.
KEYWORDS: LIDAR, Image sensors, Sensors, Cameras, Feature extraction, Environmental sensing, 3D image processing, Stereoscopy, 3D metrology, Global Positioning System
This paper discusses the integration of inertial measurements with measurements from a three-dimensional (3D)
imaging sensor for position and attitude determination of unmanned aerial vehicles (UAV) and autonomous ground
vehicles (AGV) in urban or indoor environments. To enable operation of UAVs and AGVs at any time in any
environment, a Precision Navigation, Attitude, and Time (PNAT) capability is required that is robust and not solely
dependent on the Global Positioning System (GPS). In urban and indoor environments, a GPS position capability may
be unavailable not only due to shadowing, significant signal attenuation, or multipath, but also due to intentional denial
or deception. Although deep integration of GPS and Inertial Measurement Unit (IMU) data may prove to be a viable
solution, an alternative method is discussed in this paper.
The alternative solution is based on 3D imaging sensor technologies such as Flash Ladar (Laser Radar). Flash Ladar
technology consists of a modulated laser emitter coupled with a focal plane array detector and the required optics. Like
a conventional camera, this sensor creates an "image" of the environment; but rather than producing a 2D image where
each pixel has an associated intensity value, the flash Ladar generates an image where each pixel has an associated
range and intensity value. Integration of flash Ladar data with the attitude from the IMU allows creation of a 3D scene. Current low-cost Flash
Ladar technology is capable of greater than 100 x 100 pixel resolution with 5 mm depth resolution at a 30 Hz frame
rate.
The proposed algorithm first converts the 3D imaging sensor measurements to a 3D point cloud; next, significant
environmental features such as planar features (walls), line features, or point features (corners) are extracted and
associated from one 3D imaging sensor frame to the next. Finally, characteristics of these features, such as their normal
or direction vectors, are used to compute the platform position and attitude changes. These "delta" positions and attitudes
are then used to calibrate the IMU. Note that the IMU is not only required to form the point cloud of the environment
expressed in the navigation frame, but also to perform association of the features from one flash Ladar frame to the
next.
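The attitude-change portion of this step can be sketched as follows, using the SVD (Kabsch) solution to Wahba's problem on plane normals that have been associated across two consecutive frames; this is one standard estimator for the rotation between frames, offered as a sketch rather than the paper's exact algorithm (recovering the position change additionally requires the plane offsets and is not shown):

```python
import numpy as np

def attitude_change(normals_a, normals_b):
    """Estimate the rotation R with normals_b ~ normals_a @ R.T.

    normals_a, normals_b: (N, 3) unit plane normals from frame A and
    frame B, matched row-for-row by the feature association step.
    Solves Wahba's problem via the SVD (Kabsch method); at least three
    non-coplanar normals are needed for a unique solution.
    """
    B = normals_b.T @ normals_a          # 3x3 attitude profile matrix
    u, _, vt = np.linalg.svd(B)
    # Enforce a proper rotation (determinant +1, no reflection)
    s = np.diag([1.0, 1.0, np.linalg.det(u @ vt)])
    return u @ s @ vt
```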
This paper will discuss the performance of the proposed 3D imaging sensor feature extraction, position change
estimator and attitude change estimator using both simulator data and data collected from a moving platform in an
indoor environment. The former consists of data from a simulated IMU and flash Ladar installed on an aerial vehicle for
various trajectories through an urban environment. The latter consists of measurements from a CSEM Swissranger 3D
imaging sensor and a MicroStrain low-cost IMU. Data was collected on a manually operated aerial vehicle inside the
Ohio University School of Electrical Engineering and Computer Science building.