KEYWORDS: Cameras, Clouds, 3D modeling, 3D image processing, Reconstruction algorithms, Airborne remote sensing, Free space, 3D image reconstruction, Atomic force microscopy, Ray tracing
With recent technological advances, three-dimensional (3D) point clouds reconstructed from multi-view aerial imagery are readily obtainable. However, the fidelity of these point clouds has not been well studied, and voids often exist within them. Voids occur in texturally flat areas that fail to generate features during the initial stages of reconstruction, as well as in areas where multiple views were not obtained during collection or where a constant occlusion existed due to collection angles or overlapping scene geometry. A method is presented for identifying the type of void present using a voxel-based approach to partition the 3D space. Using the collection geometry and information derived from the point cloud, unsampled voxels can be detected so that voids can be identified. A similar line-of-sight analysis can then pinpoint locations at aircraft altitude from which the voids in the point cloud could theoretically be imaged, so that new images can be included in the 3D reconstruction, with the goal of reducing the voids that result from a lack of coverage. This method has been tested on high-frame-rate oblique aerial imagery captured over Rochester, NY.
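The line-of-sight analysis described above can be sketched as a simple occlusion test: march a ray from a candidate aircraft position toward a void voxel and check whether any occupied voxel blocks it. This is a minimal illustration only; the grid layout, fixed step size, and function names are assumptions, not the authors' implementation.

```python
# Hedged sketch: test whether a void voxel is visible from a candidate
# aircraft position by stepping along the ray and checking for occupied
# voxels. Grid, step size, and names are illustrative assumptions.
import numpy as np

def has_line_of_sight(occupied, start, target, voxel_size=1.0, step=0.25):
    """Return True if no occupied voxel blocks the ray from start to target.

    occupied : 3D boolean array, True where the voxel contains points
    start    : (x, y, z) candidate aircraft position, in grid units
    target   : (x, y, z) centre of the void voxel to be imaged
    """
    start, target = np.asarray(start, float), np.asarray(target, float)
    direction = target - start
    length = np.linalg.norm(direction)
    direction /= length
    t = step
    while t < length - step:          # stop just short of the target voxel
        p = start + t * direction
        idx = tuple((p // voxel_size).astype(int))
        if all(0 <= i < n for i, n in zip(idx, occupied.shape)) and occupied[idx]:
            return False              # ray is blocked by an occupied voxel
        t += step
    return True

# Toy grid: a single occluding voxel between one viewpoint and the target
grid = np.zeros((10, 10, 10), dtype=bool)
grid[5, 5, 5] = True
print(has_line_of_sight(grid, (0.5, 0.5, 0.5), (9.5, 9.5, 9.5)))  # False (blocked)
print(has_line_of_sight(grid, (0.5, 9.5, 0.5), (9.5, 9.5, 9.5)))  # True (clear)
```

In practice a proper voxel traversal (rather than fixed-step marching) would be used to avoid skipping thin occluders, and candidate positions would be constrained to the aircraft's achievable altitudes and headings.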
Advances in LIDAR technology have made sub-meter resolutions possible from airborne instruments, enabling the rapid capture of fine 3D detail over large areas. During collection, occluding objects may prevent a laser pulse from reaching regions where overlapping geometry is present, such as under tree canopies. This is particularly true given the near-nadir angles typically used by airborne LIDAR, since the limited number of unique viewing angles does not ensure that every surface can be sensed. These missed surface detections degrade the overall quality of a dataset, but they are not normally quantified because ground truth is lacking. Using normally discarded information about the LIDAR instrument position, we show how these unsampled regions can be identified by tracing the path of each laser pulse. A voxel representation provides the framework for computing the necessary statistics and also allows overlapping geometry in complex environments to be represented correctly. From this unsampled-region information, we show how the fractions of surfaces sensed and not sensed by the LIDAR can be estimated, giving a measure of how completely all surfaces are sampled. Results are demonstrated on a real-world dataset, including the effects of voxel resolution and data density on the sampling-completeness metric.
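The pulse-tracing idea above can be sketched as follows: march each pulse from the sensor position to its return point, labelling traversed voxels as observed empty and the terminal voxel as occupied; voxels never touched by any pulse remain unobserved. The labels, the simple ratio used as a completeness proxy, and the fixed-step march are all illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of lidar pulse tracing through a voxel grid. Labels and
# the crude completeness proxy are illustrative assumptions.
import numpy as np

UNOBSERVED, EMPTY, OCCUPIED = 0, 1, 2

def trace_pulses(shape, sensor, returns, voxel_size=1.0, step=0.25):
    labels = np.zeros(shape, dtype=np.uint8)       # all voxels start unobserved
    sensor = np.asarray(sensor, float)
    for r in returns:
        r = np.asarray(r, float)
        d = r - sensor
        length = np.linalg.norm(d)
        d /= length
        t = 0.0
        while t < length:                          # march sensor -> return
            idx = tuple(((sensor + t * d) // voxel_size).astype(int))
            if all(0 <= i < n for i, n in zip(idx, shape)):
                if labels[idx] != OCCUPIED:        # occupied labels are sticky
                    labels[idx] = EMPTY
            t += step
        idx = tuple((r // voxel_size).astype(int))
        if all(0 <= i < n for i, n in zip(idx, shape)):
            labels[idx] = OCCUPIED                 # pulse terminated here
    return labels

# Two near-nadir pulses into a small grid
labels = trace_pulses((4, 4, 4), sensor=(0.5, 0.5, 3.5),
                      returns=[(0.5, 0.5, 0.5), (1.5, 0.5, 0.5)])
occ = int((labels == OCCUPIED).sum())
unobs = int((labels == UNOBSERVED).sum())
print(occ, unobs, occ / (occ + unobs))             # a rough completeness proxy
```

A production version would use an exact voxel-traversal algorithm and the instrument's recorded trajectory per pulse; the point here is only that per-pulse ray paths let unobserved voxels be separated from voxels known to be empty.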
KEYWORDS: Clouds, Cameras, 3D image processing, 3D modeling, 3D image reconstruction, Reconstruction algorithms, Digital imaging, Free space, RGB color model, Airborne remote sensing
In the construction of three-dimensional (3D) point clouds from multi-view aerial imagery, voids often exist where multiple views of an area were not obtained during collection. A method is presented for identifying these voids. In this work, point clouds are derived from oblique aerial imagery using multi-view techniques from the photogrammetry and computer vision communities. A voxel-based approach partitions the 3D space, and each voxel is classified as containing or not containing derived points. Using the imagery and the camera positions, it is possible to analyze what the cameras can and cannot see, making it possible to label voxels as occupied, free, or non-classified space. Voids in the data manifest themselves in the non-classified voxels. The method has been tested on high-frame-rate oblique aerial imagery captured over Rochester, NY, as well as on synthetic data sets. Also presented is a unique synthetic dataset for 3D reconstruction. The dataset, created with the Rochester Institute of Technology's Digital Imaging and Remote Sensing Image Generation (DIRSIG) software, provides high-fidelity radiometric data along with known 3D locations and surface normals for each pixel in an image scene. This dataset is available to the community for use in related research.
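The occupied/free/non-classified labelling scheme described above can be sketched as follows: a voxel holding reconstructed points is occupied, a voxel that some camera ray crossed on its way to a reconstructed point is free, and everything else is non-classified (a potential void). The names and the simple fixed-step ray march are illustrative assumptions.

```python
# Hedged sketch of the three-way voxel labelling described in the abstract.
import numpy as np

NON_CLASSIFIED, FREE, OCCUPIED = 0, 1, 2

def label_voxels(shape, points, cameras, voxel_size=1.0, step=0.25):
    labels = np.full(shape, NON_CLASSIFIED, dtype=np.uint8)
    # 1. Occupied: any voxel containing a derived 3D point.
    for p in points:
        labels[tuple((np.asarray(p) // voxel_size).astype(int))] = OCCUPIED
    # 2. Free: voxels crossed by a camera-to-point ray before it terminates.
    for cam in cameras:
        cam = np.asarray(cam, float)
        for p in points:
            d = np.asarray(p, float) - cam
            length = np.linalg.norm(d)
            d /= length
            t = 0.0
            while t < length - step:   # stop before the occupied end voxel
                idx = tuple(((cam + t * d) // voxel_size).astype(int))
                if all(0 <= i < n for i, n in zip(idx, shape)) \
                        and labels[idx] == NON_CLASSIFIED:
                    labels[idx] = FREE
                t += step
    return labels                      # remaining voxels are potential voids

# One camera looking straight down at one reconstructed point
labels = label_voxels((4, 4, 4),
                      points=[(1.5, 1.5, 0.5)],
                      cameras=[(1.5, 1.5, 3.5)])
print(labels[1, 1, 0], labels[1, 1, 2], labels[3, 3, 3])  # 2 1 0
```

In a real pipeline the rays would come from every camera that actually observed each point (known from the multi-view correspondences), so non-classified voxels genuinely indicate regions no camera could see.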