Regular Articles

Review of measurement quality metrics for range imaging

Author Affiliations
David MacKinnon

Carleton University, Department of Systems and Computer Engineering, Ottawa, ON K1S 5B6, Canada

Victor Aitken

Carleton University, Department of Systems and Computer Engineering, Ottawa, ON K1S 5B6, Canada

François Blais

National Research Council of Canada, Institute for Information Technology, Ottawa, ON K1A 0R6, Canada

J. Electron. Imaging. 17(3), 033003 (July 25, 2008). doi:10.1117/1.2955245
History: Received October 31, 2007; Revised February 27, 2008; Accepted March 16, 2008; Published July 25, 2008

Open Access

Quality metrics, within the field of laser range imaging, are used to quantify by how much some aspect of a measurement deviates from a predefined standard. Measurement quality evaluations are becoming increasingly important in laser range imaging for range image registration, merging measurements, and planning the next best view. Spatial uncertainty and resolution are the primary metrics of image quality; however, spatial uncertainty is affected by a variety of environmental factors. A review of how contemporary researchers have attempted to quantify these environmental factors, along with spatial uncertainty and resolution, is presented, resulting in a wide range of quality metrics.


All range images begin with a series of range measurements, and the quality of the range image depends on the quality of each of those measurements. The quality of a range measurement depends on measurement uncertainty and measurement resolution; however, spatial uncertainty is also strongly affected by environmental factors, such as return signal intensity and relationship to a measurement’s immediate neighbors. One or more of these factors can be expressed as a metric representing the deviation of some quality attribute associated with the measurement from a predefined standard. Spatial measurement quality represents the degree of confidence one can place in how accurately a measurement represents the position of a real surface in the environment. Laser range scanners can also provide intensity information that may be used in representing the surface so quality attributes relating to return signal intensity are useful. In this paper, contemporary approaches to evaluating measurement quality attributes are reviewed, including measurement uncertainty, return signal intensity, range, sampling density, and relationship to neighboring points. This review focuses particularly on measurement quality metrics for ground-based laser range scanners that can be adapted for automated systems. Within this context, measurement quality metrics provide a way to direct and terminate automated scanning procedures.

Perceptual quality metrics can be either objectively or subjectively defined;1 however, only objective quality metrics are useful for automating data acquisition. For this reason, only objective quality metrics are considered here. Objective quality metrics can be further classified as referenced or unreferenced.1 Unreferenced quality metrics use no benchmark; thus, automated processes can only evaluate the change in a quality attribute in response to some action. Referenced quality metrics can be evaluated in the same manner as unreferenced quality metrics, but the size of the deviation of an attribute from a reference can be used to evaluate whether the measurement should be either retained or ignored. For this reason, referenced quality metrics are preferred for automated systems in which thousands, or even millions, of measurements may be obtained.

Referenced range measurement quality metrics quantify the relationship of a quality attribute to some previously established benchmark or reference. These metrics can then be used to either compare methods or systems, or they can be used in an iterative process to maximize some qualitative attribute of a range image.2 Quality metrics appear most often in the guise of a weighting parameter when merging measurements or data sets. Two important components of a referenced quality metric are a clearly defined quality benchmark against which to compare the current state of the range image, and a quality scale to indicate the degree to which the range measurement quality attribute deviates from the benchmark.

In this paper, metrics for quantifying the quality of measurements are reviewed. For purposes of discussion, these metrics have been classified as measurement uncertainty based, signal intensity based, range based, and neighborhood based. As will be demonstrated, considerable work remains to ensure that the quality of measurements and points used to construct virtual models is effectively and comprehensively defined.

The spatial resolution of a laser range scanner measurement is dependent on the size of the laser spot that illuminates the surface at the point the measurement is obtained. For pulsed laser systems, the spatial resolution is also dependent on the pulse length of the system. The spatial resolution can be divided into range resolution and angular resolution. Angular resolution is the minimum angular distance between features such that they can be resolved as separate features. Range resolution is the minimum distance between angularly resolved features such that they can be distinguished as separate features.3 The angular resolution of a laser range scanner is defined by the Rayleigh criterion4 and represents the size of the smallest feature that can be angularly resolved.3,5
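As an illustration, both resolution limits can be computed directly. The sketch below uses the standard Rayleigh factor of 1.22λ/D for a circular aperture and the common pulsed-TOF separation limit of cT/2; the numerical scanner parameters are hypothetical and not taken from this review.

```python
def rayleigh_angular_resolution(wavelength_m: float, aperture_m: float) -> float:
    """Smallest angular separation (rad) resolvable by a circular aperture,
    per the Rayleigh criterion: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

def range_resolution_tof(pulse_length_s: float) -> float:
    """Two surfaces along the line of sight of a pulsed TOF system can be
    separated only if they are farther apart than c * T_pulse / 2."""
    c = 299_792_458.0  # speed of light, m/s
    return c * pulse_length_s / 2.0

# Hypothetical example: 650-nm laser through a 10-mm aperture, 5-ns pulse
theta = rayleigh_angular_resolution(650e-9, 10e-3)  # ~7.9e-5 rad
dr = range_resolution_tof(5e-9)                     # ~0.75 m
```

Note how strongly the range resolution of a pulsed system depends on pulse length: even a 5-ns pulse spans roughly three-quarters of a metre of range.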

The laser projects a spot onto the surface being scanned, and the region in which the surface intersects the laser spot is referred to as the beam footprint. Features within the footprint contribute to the return signal intensity, which is used to obtain the spatial measurement that approximates the position of a portion of the surface.6 The area covered by the beam footprint is generally not measured by laser range scanners; thus, it is approximated by a model of the area of the laser spot that illuminates the surface. Ideally, the laser spot area should be the same as the beam footprint area; however, environmental factors, such as spatial discontinuities7 or dense fog,8 can result in the beam footprint deviating from that predicted by the laser spot model. Moreover, if the surface normal is assumed to be oriented along the line of sight in the laser spot model, then surface angulation can result in a discrepancy between actual and predicted beam footprint areas. Quality metrics provide a way to predict by how much the beam footprint of a measurement might deviate from that predicted by the laser spot model.

The spatial resolution of a measurement can be represented by the instantaneous resolution, which assumes the footprint is stationary at the time the measurement is acquired, or the effective resolution, which takes into account the procession of the footprint over the surface during the acquisition process. When the term resolution is used in this paper, unless otherwise stated, it refers to the instantaneous resolution. The terms footprint and laser spot are also used interchangeably in this paper, although they are strictly equivalent only when the surface is continuous within the laser spot.

Measurement uncertainty is the most common attribute used to assess measurement quality. Range measurement uncertainty is generally modeled as an independent zero-mean Gaussian process added to the quantity returned by the range sensor; that is,

$$x = \hat{x} + e, \tag{1}$$

where x is the ground truth position or surface characteristic, x̂ is the quantity returned by the sensor, and e ~ N(0, Σ) is the additive zero-mean Gaussian noise process with measurement covariance Σ. This may not always be a valid assumption; environmental effects and nonlinear bias in the sensors may cause the observed measurement distribution to become distinctly non-Gaussian. In practice, Gaussian models provide the benefit of simplifying mathematical analyses and result in an approximation of how a system should behave under a broad range of circumstances. Non-Gaussian models are highly situation dependent and are therefore rarely used for predicting measurement uncertainty.
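The additive model of Eq. 1 can be exercised numerically. The sketch below (with hypothetical values for the ground truth and covariance) draws repeated measurements from a zero-mean Gaussian process and confirms that the sample mean and covariance recover x and Σ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth 3D surface point
x_true = np.array([1.0, 2.0, 5.0])

# Measurement covariance: radial (z) uncertainty larger than the lateral axes
Sigma = np.diag([1e-4, 1e-4, 4e-4])

# Eq. 1: each measurement is the truth plus zero-mean Gaussian noise
n = 10_000
e = rng.multivariate_normal(np.zeros(3), Sigma, size=n)
x_hat = x_true + e

# The sample mean converges to the truth, the sample covariance to Sigma
print(x_hat.mean(axis=0))        # ~ [1.0, 2.0, 5.0]
print(np.cov(x_hat.T).round(5))  # ~ diag(1e-4, 1e-4, 4e-4)
```

This is also why the Gaussian model is convenient: its two parameters are directly estimable from repeated measurements, whereas a situation-dependent non-Gaussian model has no comparably simple summary.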

The uncertainty associated with the range sensor is referred to here as the radial error and is one attribute that can be used to evaluate measurement quality. Errors in rotational or translational position are referred to here generically as positional error and represent two more attributes that can be employed to evaluate the quality of a measurement. Figure 1 shows one example of a triangulation laser range scanner system in which the angular position of the laser spot on a surface in the environment is controlled by two rotating mirrors. Similar dual-axis optical scanning configurations are used in time-of-flight (TOF) systems and other laser range systems by combining orthogonal galvanometers, rotating mirrors, or motors. As a result, the geometrical model and measurement uncertainties can be generalized to a variety of laser range scanning systems.

Fig. 1: Example of a fixed-viewpoint laser range scanner employing dual-axis galvanometer-controlled rotating mirrors [modified from Fig. 6(b) of Ref. 10].

Measurement Uncertainty

Measurement uncertainty is represented by a covariance matrix, generally based on a model of the root-mean-square (rms) sensor error along each axis of motion employed by the scanner and on a model of the error associated with the range sensor. Sensor variance is often based on a model of the sensor error, rather than on the spread of repeated measurements acquired in situ, because it is often not practical to obtain a large enough set of repeated measurements to derive a situation-specific variance profile. These models are generally obtained under ideal conditions for specific materials and surface orientations. As a result, there can be a significant discrepancy between the model sensor variance and what would be observed using a repeated measures approach in the field. For example, if the variance model of a system was based on white cardboard, then the model variance would significantly underestimate the variance resulting from black felt.9 This can be a significant issue where the type of material being scanned cannot be known a priori or where the object being scanned may consist of multiple types of material. In general, measurement uncertainty cannot be considered a sufficient quality metric on its own because it depends heavily on a variety of other attributes. In the following sections, various attributes that can result in true measurement uncertainty deviating from model-based measurement uncertainty are identified.

Positional Uncertainty

Assuming a fixed-viewpoint scanner, such as the one shown in Fig. 1, the positional uncertainty is a function of the mechanisms used to control the orientation of the laser and the photosensor.10 These mechanisms are typically precision galvanometers or rotating motors, and the positional uncertainty reflects the variation in real laser/sensor orientations when the galvanometer or motor indicates that it has achieved a given angular position. In the case of fixed pattern projection systems, positioning error is often governed by the stability of the optomechanical system. The acquisition of range and angular position measurements are generally synchronized, but synchronization errors, or jitter, can result in the true angular position differing from the angular position at the instant the range measurement is acquired.11

Although the laser is often modeled as originating either from the scanner viewpoint or from a fixed point near the viewpoint, its true origin may vary depending on the scanner geometry.12 Well-calibrated laser range scanner systems account for this complexity; however, the transformation between sensor data and spherical or Cartesian coordinates can introduce errors.13,14 As a result, rotational uncertainty may not be constant, as is often assumed. A similar situation arises for laser range scanner systems using motor-controlled rotating bases. Thermal effects, wobble and jitter, and mirror nonplanarity can also cause the final reflection point position and output orientation to deviate from a Gaussian distribution.

Radial Uncertainty

Range measurement uncertainty depends on how the interaction of the laser with the surface is measured. In TOF systems, the range is determined by the time between the pulse being generated and being detected. In triangulation systems, the range measurement depends on the position of the signal peak on a photodetector array. In both cases, a significant portion of the range measurement uncertainty is the ambiguity of the location of the signal peak.

Range uncertainty is typically assumed constant for TOF scanners, as shown in Fig. 2. Specifically,

$$\sigma_R = \frac{c}{2}\,\sigma_\tau, \tag{2}$$

where σ_R is the range measurement error, c is the speed of light, and σ_τ is the time measurement error. The last term represents the uncertainty in the temporal location of the signal peak and is found by

$$\sigma_\tau = \frac{T_r}{2\,\mathrm{SNR}}, \tag{3}$$

where T_r is the pulse rise time and SNR is the signal-to-noise ratio.15,16 The range measurement error is determined by the signal bandwidth,15 the amplitude of the return signal,17 thermal drift,17,18 crosstalk between the transmitter and receiver,18 timing jitter,19 and nonuniformities and changes in the shape of the returning signal.15,18,19 For example, different surface materials can change the shape of the return signal, resulting in significantly different error distributions.10 Moreover, feedback within the sensor can result in a measurement being affected by the previous measurement, violating the assumption that there is no correlation among range measurements.
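Combining the two relations gives a direct estimate of pulsed-TOF range noise. The sketch below reads Eq. 3 as σ_τ = T_r/(2·SNR); the rise time and SNR values are hypothetical.

```python
def tof_range_uncertainty(rise_time_s: float, snr: float) -> float:
    """Range standard deviation of a pulsed TOF measurement, combining
    Eq. 3, sigma_tau = T_r / (2 * SNR), with Eq. 2, sigma_R = (c / 2) * sigma_tau."""
    c = 299_792_458.0  # speed of light, m/s
    sigma_tau = rise_time_s / (2.0 * snr)
    return (c / 2.0) * sigma_tau

# Hypothetical example: 1-ns rise time at an SNR of 100
sigma_r = tof_range_uncertainty(1e-9, 100.0)  # ~0.75 mm of range noise
```

The expression makes the trade-off explicit: halving the rise time or doubling the SNR each halves the range uncertainty, which is why Eq. 3 couples range quality so tightly to return signal strength.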

Fig. 2: Common range uncertainties for amplitude-modulation continuous-wave (AM), frequency-modulation continuous-wave (FMCW), time-of-flight (TOF), and triangulation scanners up to 100-m effective range (referred to in this figure as volume). The range measurement uncertainties of all but the triangulation scanners are considered constant with respect to range [modified from Fig. 2(b) of Ref. 16].

Laser motion while the signal is being emitted is negligible for pulsed TOF systems because the pulse duration is so short, but it can affect continuously modulated laser systems. This motion can distort the return signal and introduce ambiguity into the true measurement. Consider, as well, that the range measurement equation is given by

$$R = \frac{c\,\tau}{2}, \tag{4}$$

where τ is the propagation delay.16,20,21 This assumes that the TOF between the laser and the surface is equal to the TOF between the surface and the sensor. This may not always be the case, especially if the true origin of the laser pulse varies as a function of the mirror angles.

The peak uncertainty of a triangulation scanner is typically dominated by speckle noise.22 This can be modeled as

$$\sigma_a = \frac{\lambda f}{\sqrt{2\pi}\,D\cos(\beta)}, \tag{5}$$

where λ is the laser wavelength, f is the focal length of the receiving lens, D is the diameter of the collecting lens, and β is the Scheimpflug angle of the photodetector array.23 Speckle noise arises when speckle elements on the surface illuminated by the laser spot are large compared to the wavelength of the laser light.22,24 Under this assumption, each speckle element becomes a point emitter with respect to the photodetector array. Interference patterns are generated when each speckle element reflects light from the laser onto the photodetector array,22,25 as shown in Fig. 3, where they constructively and destructively interact to form a speckle image.22,24,25
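Equation 5 is easy to evaluate for a candidate optical design. The sketch below reads the formula as σ_a = λf/(√(2π)·D·cos β); the wavelength, lens, and Scheimpflug-angle values are hypothetical.

```python
import math

def speckle_peak_uncertainty(wavelength_m: float, focal_length_m: float,
                             lens_diameter_m: float, beta_rad: float) -> float:
    """Speckle-limited uncertainty of the peak position on the photodetector
    array, reading Eq. 5 as sigma_a = lambda * f / (sqrt(2*pi) * D * cos(beta))."""
    return (wavelength_m * focal_length_m) / (
        math.sqrt(2.0 * math.pi) * lens_diameter_m * math.cos(beta_rad))

# Hypothetical example: 820-nm laser, 50-mm lens with a 25-mm aperture,
# 30-deg Scheimpflug angle
sigma_a = speckle_peak_uncertainty(820e-9, 50e-3, 25e-3, math.radians(30.0))
```

Note that a larger collecting aperture D reduces the speckle-limited uncertainty, while tilting the detector (larger β) increases it.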

Fig. 3: Speckle noise arises from the interference of a series of diffraction patterns, each generated by a speckle element (modified from Fig. 2.11 of Ref. 25).

Speckle noise is generally countered by integrating a single measurement over several intensity samples as the laser spot is moved over the surface being scanned.26 Figure 4 illustrates the reduction in speckle after integration. This is complicated by the need to minimize aliasing by ensuring that the measurements are, where possible, separated by a distance less than the radius of the laser spot.7 Similarly, the range uncertainty in an amplitude-modulation continuous-wave scan can be decreased by increasing the sampling rate.27
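The benefit of integrating over several samples can be demonstrated with an idealized simulation. The sketch below treats the per-sample speckle jitter as independent zero-mean noise (an assumption; real speckle samples are only partially decorrelated), in which case averaging n samples shrinks the standard deviation by roughly 1/√n.

```python
import numpy as np

rng = np.random.default_rng(1)

true_peak = 100.0     # true peak position on the detector (hypothetical units)
sigma_speckle = 2.0   # per-sample speckle-induced jitter (hypothetical)
n_samples = 16        # intensity samples integrated per measurement

# 5000 measurements, each the mean of n_samples noisy intensity samples
samples = true_peak + sigma_speckle * rng.standard_normal((5000, n_samples))
integrated = samples.mean(axis=1)

# Averaging 16 independent samples shrinks the jitter by ~1/sqrt(16) = 1/4
print(samples.std())     # ~2.0
print(integrated.std())  # ~0.5
```

The 1/√n gain is the motivation for moving the spot and integrating, but it only holds while the samples remain statistically independent, which is part of why sample spacing matters.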

Fig. 4: Speckle noise is reduced by integrating the measurement over several sampling intervals (modified from Fig. 3 of Ref. 23).

Environmental Effects

The mechanical effects described in Sec. 3c can be included in a model of expected range and rotational uncertainty; however, many environmental factors, summarized in Table 1, can cause the true measurement uncertainty to deviate from the model. For example, measurement uncertainty can increase with increasing incidence angle,28–31 a reduction in surface reflectivity,10,32 and an increase in ambient lighting.6,33

Table 1: Environmental factors affecting measurement uncertainty.

Equation 5 assumes that the size of the spot projected onto the photodetector array has not been distorted by occlusion, surface orientation, or other environmental effects. Figure 5 shows the effect of laser spot distortion arising from a surface discontinuity.7 In this case, the discontinuity occludes part of the laser spot so that the spot centroid no longer coincides with the signal peak. This introduces an error into the horizontal location of the signal peak, denoted here as Δx. This results in a range error Δz, which is compounded by the surface orientation with respect to the direction of the laser. The deviation of the surface normal from the laser path is denoted here as γ. Sudden changes in surface height are not uncommon and represent a reduction in measurement quality that is not captured by model-based measurement uncertainty.
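The sensitivity of range to a centroid shift Δx can be illustrated with a simplified model that is not taken from this paper: in an idealized pinhole triangulation geometry with z = fb/x (focal length f, baseline b), first-order error propagation gives |Δz| ≈ z²·|Δx|/(fb).

```python
def triangulation_range_error(z_m: float, focal_length_m: float,
                              baseline_m: float, delta_x_m: float) -> float:
    """First-order range error from a shift delta_x of the detected peak,
    using the idealized pinhole triangulation model z = f * b / x, so that
    |dz| ~= z**2 / (f * b) * |dx|.  Illustrative model only."""
    return (z_m ** 2) / (focal_length_m * baseline_m) * delta_x_m

# Hypothetical example: a 5-micron centroid shift at 1-m range,
# with a 50-mm lens and a 100-mm baseline
dz = triangulation_range_error(1.0, 50e-3, 100e-3, 5e-6)  # 1 mm of range error
```

The quadratic dependence on z in this model also explains why the same centroid shift is far more damaging at long range than at short range.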

Fig. 5: A range discontinuity results in a shift (Δx) in the position of the centroid in a triangulation laser range scanner, producing a range error Δz [modified from Fig. 7(a) of Ref. 7].

Different surface materials can also affect the accuracy of range measurements. Figure 6 demonstrates the effect of a partially translucent material, such as marble, in which the laser may penetrate part way into the surface before sufficient light is reflected to estimate the distance to the surface.7 In this case, the range measurement does not represent the surface of the material, and the actual range measurement obtained depends on the reflective and refractive qualities of the material. According to Beraldin et al.,34 translucent surfaces like marble change the shape of the laser spot on the photodetector array of a triangulation scanner, resulting in the range estimate being in error. As well, the nonhomogeneity of the material increases the range measurement uncertainty.16 Translucent nonhomogeneous materials can also feature a greater measurement uncertainty as well as a bias that increases with the distance between the scanner and the surface.35

Fig. 6: Range errors can result from the laser penetrating the surface of the material being scanned (modified from Fig. 6 of Ref. 7).

Surface complexity is not limited to variations in the height and frequency of surface structures; transitions between areas of different surface reflectivity can affect the accuracy of a range measurement,7,36 as illustrated in Fig. 7. Different materials with different reflectivity properties can also generate very different range measurement uncertainties.10 The change in reflectivity for different portions of the laser spot results in a shift in the signal peak that introduces an error into both the range measurement and return signal intensity, a topic discussed in Sec. 4. Moreover, a reduction in surface reflectivity can result in an increase in range measurement uncertainty.33

Fig. 7: Transitions between regions of different surface reflectivity can affect the accuracy of the range measurement (Fig. 1 of Ref. 36).

Increasing the surface orientation with respect to the line of sight of the scanner can result in an elongation of the laser spot, which increases peak detection uncertainty.31 This problem is most pronounced when the length of the baseline is significant with respect to the distance to the surface, as is the case with triangulation laser range scanners, even when operating in the far field. Moreover, increased surface orientation with respect to the line of projection of the laser increases the spot size on the surface, resulting in more speckle elements contributing to the spot projected onto the photodetector array. Because the range uncertainty of triangulation laser range scanners is dependent on the surface orientation, model-based range uncertainty is not sufficient to represent the quality of a range measurement.

Measurement Uncertainty as a Quality Metric

Measurement spatial uncertainty has often been used as a way to quantify the quality of the measurement. For example, Sequeira et al.37,38 and Sequeira and Goncalves39 used range sensor uncertainty as part of a reliability metric generated from the weighted sum of measurement attributes. They recognized that spatial uncertainty is not a sufficient metric and therefore combined it with other measurement quality metrics. The combining of quality metrics to generate a more holistic view of measurement quality is discussed in Sec. 7. Some range sensors, such as the triangulation scanner shown in Fig. 2, have range measurement variance that increases with the square of the distance between the scanner and the surface.16,20,23,40,41 In this case, using range sensor uncertainty as a quality metric means that measurements closer to the scanner are considered to be of higher quality.

If the measurements are being merged using a modified Kalman minimum variance estimator (MKMV) approach,42,43 then the measurement variance becomes a function of the number of measurements that are merged to form a point in a virtual model. Moreover, the merged measurements could be obtained from different viewpoints; thus, range measurement uncertainty alone is insufficient as a quality metric. To counter this problem, the covariance matrix may be used as a multidimensional quality metric. For example, using the MKMV approach, two measurements x̂_i and x̂_j are merged to form a point x in the virtual model. The point is generated using the weighted sum

$$x = W_i \hat{x}_i + W_j \hat{x}_j, \tag{6}$$

where

$$W_i = \Sigma_j\,(\Sigma_i + \Sigma_j)^{-1} \tag{7}$$

and

$$W_j = \Sigma_i\,(\Sigma_i + \Sigma_j)^{-1} \tag{8}$$

are the weighting factors. As a result, the position of x is closest to the measurement with the smallest covariance. In effect, W_i, for example, becomes a quality metric for measurement x̂_i; the location of the point represents the integration of multiple measurements that maximizes the quality of the point from the perspective of measurement uncertainty.
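The merge of Eqs. 6–8, together with the fused covariance of Eq. 9, can be sketched directly; the measurement values and covariances below are hypothetical.

```python
import numpy as np

def mkmv_merge(x_i, Sigma_i, x_j, Sigma_j):
    """Merge two measurements with the minimum-variance weights of
    Eqs. 6-8; the fused covariance follows Eq. 9."""
    S = np.linalg.inv(Sigma_i + Sigma_j)
    W_i = Sigma_j @ S                      # Eq. 7
    W_j = Sigma_i @ S                      # Eq. 8
    x = W_i @ x_i + W_j @ x_j              # Eq. 6
    Sigma = np.linalg.inv(np.linalg.inv(Sigma_i) + np.linalg.inv(Sigma_j))  # Eq. 9
    return x, Sigma

x_i = np.array([1.00, 2.00, 5.00])
x_j = np.array([1.02, 1.98, 5.10])
Sigma_i = np.diag([1e-4, 1e-4, 1e-4])  # tighter measurement
Sigma_j = np.diag([4e-4, 4e-4, 4e-4])  # noisier measurement

x, Sigma = mkmv_merge(x_i, Sigma_i, x_j, Sigma_j)
# The merged point lies closer to the tighter measurement x_i, and the
# fused covariance is smaller than either input covariance.
```

With these values the weights are 0.8 and 0.2, so the merged point is pulled strongly toward x̂_i, illustrating how W_i acts as a per-measurement quality metric.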

One drawback of Sequeira’s weighting method is that it was only applied to radial uncertainty. Table 1 illustrates the reasoning behind considering only radial uncertainty: it is the attribute that is generally affected by environmental factors. In Sequeira’s case, the metric was only applied to range images and not to the merged data; thus, this approach was sufficient for the purpose for which it was designed. Rotational uncertainty could be assumed constant and, thus, ignored. The method, however, is not generalizable to data merged using the MKMV method. Consider that the covariance of x is found by

$$\Sigma^{-1} = \Sigma_i^{-1} + \Sigma_j^{-1}; \tag{9}$$

thus, the radial and rotational uncertainties of Σ are less than the radial and rotational uncertainties of either Σ_i or Σ_j. If only the radial uncertainty is considered, then the reduction in rotational uncertainty is never taken into account. Similar issues arise when combining data from multiple types of scanners, each of which may have different radial and rotational uncertainties.

The MKMV weighting factors, although effective quality metrics for measurement merger, are less effective for representing the quality of the measurement from the perspective of spatial measurement uncertainty. Ideally, an uncertainty metric should represent the uncertainty of a measurement as a scalar value so that the relative quality of measurements can be compared along a single axis rather than within a multidimensional space. On the other hand, reducing a multidimensional parameter to a single dimension risks losing potentially important information; therefore, the unidimensional representation must be chosen carefully.

Although the covariance matrix approach addresses the issue of ignoring potentially valuable information in the position uncertainty attribute, it does not address the issue of surface complexity and orientation increasing the effective measurement uncertainty above the level predicted by the model. Range uncertainty and even measurement covariance are useful quality metrics but are not sufficient by themselves. In particular, metrics evaluating surface spatial complexity, surface orientation, and changes in surface reflectivity need to be examined to augment measurement spatial uncertainty as a quality metric.

It was noted in Sec. 3d that a decrease in surface reflectivity can result in an increase in measurement uncertainty. Surface reflectivity can be assessed by examining how the intensity of the received signal varies from what would be expected for a surface of known reflectivity; however, signal intensity measurements can vary significantly as a result of such factors as range,32 high incidence angles,28,31,32 low reflectivity,10,18 atmospheric attenuation,44 sharp discontinuities,16,45 and translucency of the material being scanned.16,40 For example, when the transmitted signal power remains constant, the return signal intensity decreases with an increase in angle of incidence and with an increase in distance between the scanner and the surface.32 As a result, quality metrics provide a way to predict the extent to which the actual reflectivity of a surface might deviate from that predicted from the return signal intensity.
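The qualitative dependence of return intensity on range and incidence angle can be sketched with a hypothetical Lambertian-style model; this is an illustrative assumption, not a calibrated scanner model from this review.

```python
import math

def expected_return_intensity(rho: float, incidence_rad: float,
                              range_m: float, i0: float = 1.0) -> float:
    """Hypothetical Lambertian-style return-intensity model: intensity
    scales with surface reflectivity rho and cos(incidence angle), and
    falls off with the square of the range.  Sketch only; a real scanner
    requires a calibrated intensity model."""
    return i0 * rho * math.cos(incidence_rad) / (range_m ** 2)

# Doubling the range quarters the expected return; tilting the surface
# to 60 deg incidence halves it.
near = expected_return_intensity(0.8, 0.0, 2.0)                  # 0.2
far = expected_return_intensity(0.8, 0.0, 4.0)                   # 0.05
tilted = expected_return_intensity(0.8, math.radians(60.0), 2.0) # 0.1
```

A model of this shape is what makes a reference-based intensity metric possible: the observed intensity is compared against the value such a model predicts for the measured range and orientation.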

Figure 5 illustrated that surface discontinuities can result in range errors; however, a change in the shape of the signal intensity profile in a triangulation laser range scanner can also result in a reduction in return signal intensity. When the shape of the peak is sufficiently distorted, as is the case with mixed measurements, it becomes difficult to locate its centroid. Laser spots that cross edges can result in smeared or multiple return signals and, hence, in ambiguous range measurements, a phenomenon referred to as mixed measurement error.32,46 Mixed measurements are a result of receiving reflected energy from two surfaces within the laser spot and are often interpreted as a range measurement somewhere between the two surfaces.6,33,47 Hebert and Krotkov6 referred to the interdependence of measured range with signal intensity as range/intensity crosstalk. TOF systems calculate range by comparing the return signal to the transmitted signal and are thus more sensitive to signal intensity changes. Figure 8 shows that a discontinuity in surface reflectivity can also reduce the return signal intensity.7 As a result, quality metrics provide a way to predict the extent to which the spatial position of the measurement might be in error as a result of the return signal intensity deviating from that predicted using a model of the laser range scanner optics.

Fig. 8: A discontinuity in surface reflectivity results in a shift (Δx) in the position of the centroid in a triangulation laser range scanner, producing a change in return signal intensity [modified from Fig. 7(b) of Ref. 7].

Some surfaces may be difficult, if not impossible, to scan because the return signal is diffusely scattered, a phenomenon referred to as volumetric scattering.46,48 Surfaces that exhibit this property include glass, hair,46 and grass.48 Figure 6 illustrates that translucent materials can also reduce the strength of the return signal.7 Other surfaces are so absorbent that the signal is of insufficient intensity to obtain a range measurement, while still others are so highly reflective that the photodetector saturates.46 The absence of a return signal, referred to as a nonreturn measurement, can be a valuable piece of information but is almost always discarded.

Given a reference material, the change in return signal intensity can be modeled as a function of range. A shift in the return signal intensity from the model value can then be used as a metric of the quality of a measurement. Measurement spatial uncertainty is also affected by return signal intensity; thus, both variables are important in assessing measurement quality, and neither is sufficient by itself. Moreover, signal intensity shifts can indicate the presence of mixed pixels and surface material transitions, either of which may introduce errors into the range measurement. The challenge is in determining the cause of the intensity shift, given that only the spatial position and the deviation in signal intensity from a model value are known. Deviations from model return intensity can arise from several different environmental conditions; therefore, return intensity, even when combined with spatial position and model spatial uncertainty, is not sufficient to completely represent the quality of a measurement. Table 2 summarizes the factors that affect return signal intensity.

Table 2: Environmental factors affecting return signal intensity.
Intensity as a Quality Attribute

Signal intensity is rarely used as a quality metric; it is more often used as a weighting factor for combining measurements. For example, Godin et al.49 used the compatibility of signal intensities between correspondence pairs of measurements prior to iterative closest point (ICP) registration. Given two intensity measurements h_i and h_j, the compatibility C(h_i, h_j) is found by

$$C(h_i, h_j) = \exp\!\left(-\frac{(h_i - h_j)^2}{\sigma_c^2}\right), \tag{10}$$
where σ_c² is an estimate of the reliability of the intensity measurements. In Eq. 10, h_i and h_j are quality attributes associated with measurements x̂_i and x̂_j, respectively; however, this metric only assessed the quality of association between two measurements, not the quality of each measurement. Fiocco et al.50 defined a reflectivity quality metric for each measurement. It took the form

$$\rho = \begin{cases} 1 & \rho_{\min} \leq \rho_i \leq \rho_{\max} \\ 0 & \text{otherwise}, \end{cases} \tag{11}$$

where ρ_min and ρ_max defined the minimum and maximum acceptable reflectivity of the surface, and ρ_i was the observed surface reflectivity. Sequeira et al.37,38 simply applied a weighting factor to the detected signal intensity.
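Both metrics are short functions. The sketch below reads the exponent of Eq. 10 as −(h_i − h_j)²/σ_c², matching its role as a compatibility score that decays as the intensities diverge; the numerical values are hypothetical.

```python
import math

def intensity_compatibility(h_i: float, h_j: float, sigma_c2: float) -> float:
    """Eq. 10: compatibility of two intensity measurements, reading the
    exponent as -(h_i - h_j)**2 / sigma_c2."""
    return math.exp(-((h_i - h_j) ** 2) / sigma_c2)

def reflectivity_quality(rho_i: float, rho_min: float, rho_max: float) -> int:
    """Eq. 11: binary quality gate on the observed reflectivity."""
    return 1 if rho_min <= rho_i <= rho_max else 0

# Identical intensities are fully compatible; widely differing ones are not
c_same = intensity_compatibility(0.5, 0.5, 0.01)  # 1.0
c_diff = intensity_compatibility(0.5, 0.8, 0.01)  # exp(-9), ~1.2e-4
q = reflectivity_quality(0.6, 0.2, 0.9)           # 1
```

The contrast between the two forms is visible in the code: Eq. 10 yields a sliding scale, whereas Eq. 11 is a hard accept/reject gate, which is the drawback discussed next.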

One drawback of Fiocco et al.’s method is that it employs a binary scale, which, while useful for the application for which it was designed, lacks the generalizability of a sliding scale. Sequeira’s approach of using a weighting factor avoids this problem but does not address the issue of the ideal reflectivity changing with an increase in range. As with Fiocco’s method, the weighted intensity approach used by Sequeira was sufficient for the application for which it was designed but is not applicable to medium-range scanning without modifications to take into account the relationship between range and return signal intensity. Fiocco et al. avoid this problem by using reflectivity, which is independent of range.

It was noted in Sec. 3c that measurement spatial uncertainty generally increases with increased range and, in Sec. 4, that return signal intensity generally decreases with increased range. The range measurement itself can therefore be used to represent the quality of a measurement. For example, Sequeira et al.37,38 and Fiocco et al.50 each used the range portion of the measurement as part of their reliability metrics. Figure 9 graphically demonstrates how the quality of a measurement decreases as the distance between the scanner and the surface that generated the measurement increases.

Fig. 9: Assuming the scanner position is limited to within a metre or two of ground level, the lateral measurement uncertainty increases as the height of the structure increases due to the increase in the distance between the scanner and the surface.

In general, the farther a surface is from the scanner, the larger the area encompassed within the laser spot. The size of the spot projected onto a surface is represented by the beam width at the point of intersection. The beam width depends on the distribution of irradiance, which is often assumed to follow a Gaussian distribution. Specifically,

I(r, ζ) = I_c exp[−2r²/w(ζ)²],   (12)

where I_c is the irradiance of the beam along the central axis, r is the radial distance perpendicular to the central axis, and w(ζ) is the spot radius a distance ζ from the beam waist.51-52 Figure 10 shows the irradiance profile centered on the central axis and the spot size w(ζ) as a function of distance from the beam waist.

Fig. 10: Laser beam 1/e² boundary. The far field is the region in which ζ ≫ ζ_0.

The surface formed by w(ζ) represents the distance r from the central axis at which the beam irradiance falls to 1/e² ≈ 0.135 of its on-axis value. As a result, the volume bounded by w(ζ) represents the region within which 86.5% of the beam irradiance is contained.51,53 The laser spot defined in this way represents the portion of the surface being scanned from which most of the laser irradiance is being reflected. As a result, the laser spot represents the smallest region that can be resolved by the laser range scanner.

The boundary of w(ζ) can be approximated by the hyperbolic equation,

[w(ζ)/w(0)]² − (ζ/ζ_0)² = 1,   (13)

where w(0) is the radius of the beam waist and ζ_0 is the depth of focus of the beam. The depth of focus, illustrated in Fig. 10, is defined by

ζ_0 = πw²(0)/λ,   (14)

where λ is the laser wavelength. Meanwhile, the beam waist for an aberration-free optical system using a circular lens can be approximated by the Rayleigh diffraction equation,

w(0) ≈ 1.22λf/D,   (15)

where D is the lens diameter and f is the focal length of the lens.52 The focal length also represents the distance from the lens to the beam waist.
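Equations (13)-(15) together let one estimate the spot radius at any distance from the waist. A small sketch, with Eq. (13) solved for w(ζ) and hypothetical laser parameters of our own choosing:

```python
import math

def beam_waist(wavelength, focal_length, lens_diameter):
    # Eq. (15): Rayleigh diffraction approximation of the waist radius.
    return 1.22 * wavelength * focal_length / lens_diameter

def depth_of_focus(w0, wavelength):
    # Eq. (14): depth of focus zeta_0 of the beam.
    return math.pi * w0 ** 2 / wavelength

def spot_radius(zeta, w0, zeta0):
    # Eq. (13) rearranged: w(zeta) = w(0) * sqrt(1 + (zeta/zeta0)^2).
    return w0 * math.sqrt(1.0 + (zeta / zeta0) ** 2)

# Hypothetical example: 633-nm laser, 100-mm focal length, 10-mm lens.
w0 = beam_waist(633e-9, 0.100, 0.010)
z0 = depth_of_focus(w0, 633e-9)
```

At the waist (ζ = 0) the spot radius equals w(0); one depth of focus away it has grown by a factor of √2, which is why far-field spot size, and hence resolvable feature size, grows with range.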

Range can act as a proxy for the resolution of a measurement under the assumption that the focal length remains fixed and the surface is farther from the scanner than the beam waist. Under these conditions, measurements closer to the scanner can be considered to be of higher quality than those farther from the scanner. Although range is generally not referred to as an indicator of the quality of a measurement, this relationship is implied when the more distant of a pair of measurements is dropped as part of the registration process.

Fiocco et al. 50 defined a distance quality metric based on the minimum and maximum range limits. In practice, a scanner is bounded by the minimum and maximum effective range, defined by a variety of factors, including the laser power, beam spread, and photodetector sensitivity. Sequeira et al. 37,38 simply applied a weighting factor to the range measurement to obtain a quality metric.

Only long-range scans (those for which ζ ≫ ζ_0, referred to as far-field measurements51) are guaranteed to have measurement resolution decrease with range. Medium-range scanners may be used for surfaces that are at, or even less than, the distance to the beam waist. Surfaces that are closer than the beam waist have an inverse relationship between resolution and range, as shown in Fig. 10; in this case, measurement quality decreases with distance from the beam waist. As a result, Fiocco et al.'s and Sequeira et al.'s methods are only applicable to the situation for which they were designed: laser range scanners in which the surface is farther from the scanner than the beam waist. For medium-range scanning, the surface may be placed such that it coincides as much as possible with the beam waist. A more general-purpose resolution-based quality metric should be applicable to both long- and medium-range scanner data, as well as data from scanners with multiple focal lengths. The use of laser spot size in assessing measurement quality will be addressed in Sec. 6.

Attributes such as surface orientation, or spatial or reflectivity discontinuities, cannot be determined from single measurements; they can only be inferred from groups of measurements located in close spatial proximity to each other. Spatially related measurements are referred to here as a neighborhood and are used to model a small portion of the surface being scanned to predict some aspect of that surface, such as its orientation. The class of neighborhood-based metrics encompasses all quality metrics defined by the neighborhood of a measurement.

Neighborhood-based quality metrics attempt to infer some aspect of a measurement from its relationship to its immediate neighbors. For purposes of discussion, a neighborhood is defined as a point p̂ and the set of all points P = {p̂_0, …, p̂_K} considered to be the immediate neighbors of p̂ by some commonly accepted criterion. It is assumed that this criterion is either the Euclidean or the rotational distance, although the discussion could apply to other distance metrics.

Two neighborhood-based quality metrics are considered: those based on interpoint distance and those based on vertex orientation with respect to the line of sight. The former is a measure of the density of the measurements in a neighborhood, which, in turn, indicates how finely the surface has been sampled. The latter is used to estimate the orientation of the surface at a spatial location of the measurement and is the most commonly used quality metric after measurement uncertainty.

Surface complexity can also be evaluated using edge-detection techniques. Specifically, spatial (illustrated in Fig. 5) and intensity (illustrated in Figs. 7-8) discontinuities result in range measurement errors, so measurements corresponding to discontinuities are of lower quality than measurements arising from surfaces without discontinuities. Edge detection, applied to either spatial data, intensity data, or both, can be used to detect the presence of discontinuities, which are one type of surface complexity. A complete review of edge-detection techniques is, however, beyond the scope of this paper. For surveys on edge-detection techniques, see Argyle,54 Davis,55 Peli and Malah,56 Ziou and Tabone,57 Trichili et al.,58 Xiao et al.,59 and Basu.60

Distance Metrics

Distance metrics are typically used to evaluate two attributes: distance to neighboring points and the density of points in the neighborhoods. The latter is referred to as sampling density, which is the number of measurements per unit area of the surface being modeled. Densely sampled surfaces have the greatest possibility of detecting important surface features that might be missed by more sparse sampling methods. On the other hand, dense scanning techniques generate a large number of points, many of which may be redundant if the surface being scanned lacks significant surface features. With respect to quality, densely sampled surfaces, to within certain limits, have the greatest probability of generating high-quality models; thus, sampling density is a measure of the potential quality of the final model.

According to Shannon sampling theory, given a band-limited signal, the sampled signal will contain all the information in the band-limited signal only if the sampling frequency is more than twice the signal bandwidth.61 This is also known as the Shannon-Nyquist sampling theorem62 or simply the Nyquist sampling theorem.63 This means that the distance between samples must be less than half the smallest feature size resolvable by the scanner;64 that is,

d < Δx/2,   (16)

where d is the distance between samples and Δx is the smallest resolvable feature size. The signal bandwidth is referred to as the Nyquist frequency, and the Nyquist rate, equal to twice the Nyquist frequency, defines the frequency that must be exceeded by the sampling frequency. If the sampling frequency is less than or equal to the Nyquist rate, then aliasing, or aliasing distortion, occurs.63 On the other hand, measurement quality does not improve in proportion to the amount by which the sampling rate exceeds the Nyquist rate;65 thus, the sampling rate is often defined to be only slightly higher than the Nyquist rate. The Nyquist rate, therefore, represents a quality breakpoint.

Shannon sampling requires a band-limited signal, and diffraction in the optical system ensures this by imposing a limit on the size of features that can be resolved. The Rayleigh criterion represents the resolution limit of the scanning system even if measurement noise were negligible.34 Even in a perfectly focused, diffraction-limited optical system, laser physics still imposes a limit on the size of feature that can be resolved, given by the Rayleigh criterion. If Δd represents the minimum distance between beam footprint peaks at which they can be separately resolved, then the Nyquist rate is given by

f_R = 1/Δd,   (17)

and the Nyquist frequency becomes f_N = f_R/2. The smallest feature that can be resolved is given by the beam width 2Δd.

If 2Δd is large with respect to d, then fine details are blurred;3 however, if d is too large, then fine details are missed. It is convenient, in the absence of other information about the system, to choose a sampling density slightly less than the smallest angular beamwidth such that d < min{w(ζ)} within the volume of interest. The goal of scanning a surface is to achieve an intersample surface distance Δx, given that the laser scanner is, under ideal conditions, unable to resolve features smaller than 2Δd. Sampling density and intersample distance, therefore, are useful in assessing model quality.
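The sampling constraints of Eqs. (16) and (17) reduce to a pair of one-line checks. A minimal sketch, with names of our own choosing:

```python
def sampling_ok(d, delta_x):
    # Eq. (16): the intersample distance must be less than half the
    # smallest resolvable feature size to satisfy Shannon sampling.
    return d < delta_x / 2.0

def nyquist_rate(delta_d):
    # Eq. (17): f_R = 1/delta_d, where delta_d is the minimum
    # separation at which two beam footprints can be resolved.
    return 1.0 / delta_d

def nyquist_frequency(delta_d):
    # f_N = f_R / 2.
    return nyquist_rate(delta_d) / 2.0
```

For example, a 2-mm Rayleigh separation implies a Nyquist rate of 500 samples per metre, and sampling at or below that rate risks aliasing.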

Klein and Sequeira66 and Klein and Zachmann67 compared the actual sampling density β(x) to the expected sampling density F(x,V,m). The expected sampling density was found using

F(x,V,m) = m n·(x − V)/(R³ A_patch),   (18)

where V is the position of the scanner in the world, m is the resolution, A_patch is the solid angle of the patch covered by a single pixel, n is the normal of the surface at x, x is a point in the global coordinate system, and R is the distance from V to x. If x is part of an unscanned surface, then F(x,V,m) = 0. These quality metrics were then used to perform a cost-benefit analysis of potential viewpoints. Specifically, they calculated the resolution quality of point x on the surface as seen from viewpoint V using

B(x,V,m) = min[β_max(x), F(x,V,m)] − min[β_max(x), β(x)],   (19)

where β_max(x) is the maximum sampling density of x, and β(x) is the observed sampling density of x. In this case, the benchmark is β_max(x). The benefit of this approach is that it combined measurement resolution, surface orientation, and sampling density into a quality metric for each point on a surface.
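The benefit computation of Eq. (19) clamps both densities at β_max before differencing. A minimal sketch, with names of our own choosing:

```python
def resolution_benefit(beta_max, expected, observed):
    # Eq. (19): gain in clamped sampling density at a point if it is
    # rescanned from a new viewpoint; density beyond beta_max adds
    # nothing, so oversampled regions yield zero benefit.
    return min(beta_max, expected) - min(beta_max, observed)
```

A point already sampled at or above β_max yields zero benefit regardless of the expected density, which is what steers the cost-benefit view planning away from well-covered surfaces.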

Fiocco et al. 50 used a less complicated method for defining the density of a set of measurements than that proposed by Klein and Sequeira66 and Klein and Zachmann.67 They defined the density quality metric as

s = { (s_max − s_i)/s_max if s_i ≤ s_max; 0 otherwise },   (20)

where s_i is the distance to the closest neighbor and s_max is the maximum acceptable distance. Meanwhile, Sequeira et al. 37,38 used the weighted average distance between neighboring points as a quality metric.
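Equation (20) is a simple linear ramp. A minimal sketch, with names of our own choosing:

```python
def density_quality(s_i, s_max):
    # Eq. (20): 1 when the nearest neighbour coincides with the point,
    # falling linearly to 0 at the maximum acceptable distance s_max,
    # and 0 beyond it.
    if s_i > s_max:
        return 0.0
    return (s_max - s_i) / s_max
```

Unlike the binary reflectivity metric of Eq. (11), this metric is a sliding scale, so it grades sparsely sampled neighborhoods rather than simply rejecting them.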

One drawback of the quality metrics employed by Refs. 37-38, 50, and 66-67 is that they ignore measurement spatial uncertainty, which also affects the resolution of the system.3,68 In particular, spatial uncertainty makes it difficult to know, precisely, the extent of the region covered by each laser spot. Another drawback of these metrics is that they do not make clear whether quality is being assessed relative to the desired resolution Δx or the attainable resolution 2Δd. The former is generally constant, while the latter depends on surface orientation, the presence of spatial or reflectivity discontinuities, and the size of the laser spot illuminating the surface. In some cases, Δx may not even be attainable for certain combinations of range and surface orientation.

Orientation Metrics

A typical approach to generating the orientation of a measurement is to obtain a mesh model of the surface and use the normals of each of the mesh elements to estimate the normal of the surface at the measurement.22,42,69-71 Orientation is often represented by the surface normal, which is generally found by taking the average of the normals of all Delaunay facets that have the measurement as a vertex.42,70 The exception is Hoppe et al.,72 who preferred to use the normal of a plane fit to the neighborhood of the measurement. The benchmark for the grazing-angle attribute is the angle that generates the most accurate range measurement; that is, when the surface normal is oriented along the line between the surface and the scanner. Assuming the maximum grazing angle is one in which the surface normal is perpendicular to the line between the surface and the scanner, the scale of the grazing-angle attribute is from 0 (best quality) to π/2 (worst quality) radians. This is often represented as the cosine of the grazing angle,30,70 which has a range of 1 (best quality) to 0 (worst quality).

Often the deviation of the return signal intensity from the ideal Lambertian model is represented by the surface normal25,42,69-70 or grazing angle.73 The reasoning is that the signal intensity decreases with increasing surface orientation; thus, surface orientation can be used as a proxy for signal intensity. However, return signal intensity is affected by all the factors summarized in Table 2. Therefore, this assumption is true only in the absence of other factors, such as surface spatial complexity and changes in surface reflectivity. Surface orientation also affects the uncertainty of range measurements,74 particularly for triangulation laser range scanners; thus, surface orientation as a metric can affect quality metrics for both spatial uncertainty and return signal intensity.

Fiocco et al. 50 used the deviation of the line of sight to the scanner from the surface normal as a quality metric. This metric took the form

φ = 1 − φ_i/90,   (21)

where φ_i is the surface orientation deviation in units of degrees. Turk and Levoy70 used the cosine of the grazing angle to weight measurements prior to ICP registration. Soucy and Laurendeau30 showed that the squared cosine of the grazing angle corresponds to the relative illuminance received by the photodetector. They used this metric to perform a weighted merge of measurements from different viewpoints such that

x = Σ_{i=1}^{N} W_i x̂_i,   (22)

where

W_i = cos²(γ_i) / Σ_{j=1}^{N} cos²(γ_j)   (23)

is the weighting factor associated with measurement x̂_i. The cosine of the grazing angle can be found using

cos(γ_i) = n_i^T x̂_i / R̂_i,   (24)

where x̂_i is a measurement located R̂_i units from the viewpoint, and n_i is the normal to the surface at x̂_i. In this case, the coordinate system is assumed to be centered on the scanner viewpoint. Curless22 employed a similar approach to merge measurements that co-occupied the same voxel. Soucy and Laurendeau30 demonstrated that the reflectivity of the surface was directly proportional to the square of Eq. (24). Because measurement quality was expected to be directly proportional to the amount of light returned to the sensor, cos²(γ) would better represent measurement quality than cos(γ); however, this was based on the assumption that the reflectivity change was primarily caused by high surface orientation. The relationship is less clear when the surface reflectivity is more complex.
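The weighted merge of Eqs. (22)-(24) can be sketched as follows. Plain tuples stand in for the measurement vectors, names are of our own choosing, and the normals are assumed oriented so the cosines come out positive:

```python
import math

def cos_grazing(normal, point):
    # Eq. (24): cos(gamma_i) = n_i . x_i / R_i, with the coordinate
    # frame centred on the scanner viewpoint so R_i = ||x_i||.
    r = math.sqrt(sum(p * p for p in point))
    return sum(n * p for n, p in zip(normal, point)) / r

def merged_measurement(points, normals):
    # Eqs. (22)-(23): merge corresponding measurements using
    # normalized squared-cosine weights, after Soucy and Laurendeau.
    w = [cos_grazing(n, p) ** 2 for n, p in zip(normals, points)]
    total = sum(w)
    return tuple(
        sum((wi / total) * p[k] for wi, p in zip(w, points))
        for k in range(3)
    )
```

Because the weights are normalized by Eq. (23), measurements seen at shallow grazing angles are not discarded outright; they simply contribute less to the merged position.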

Scott et al. 73 suggested that basing quality solely on the grazing angle of a measurement ignores the objective effects of high grazing angle in favor of a more subjective metric. Surface orientation, in particular, ignores factors that affect the shape and peak height of the intensity profile, such as surface reflectivity changes. Moreover, the surface normal is the average of the orientations along each Delaunay edge extending from a point. As a result, it is possible to have a wide range of vertex normals but a surface normal oriented along the line of sight. Finally, for systems in which the baseline is not insignificant with respect to range, the line of sight could be defined with respect to the photodetector, the laser, or the scanner origin, each yielding a different result. As a result, surface orientation is important but insufficient as a quality metric.

An alternative to the grazing angle for representing surface orientation of a range image obtained using a raster scan pattern is the facet edge length ratio. In this case, the ratio of the longest to shortest edge of a Delaunay facet is used to assess the quality of the facet and, by extension, its measurements. Sequeira et al. 37 used this approach to discard a facet if the ratio was too large. Consider the image on the left in Fig. 11, which represents a two-dimensional Delaunay triangulation of a range image; when seen in three dimensions, facets on a discontinuity are elongated with respect to their neighbors. The ratio between the longest and shortest edge should ideally be 1:1; that is, the triangles should be equilateral. As the surface orientation increases with respect to the line of sight from the scanner, the ratio between the longest and shortest edges increases. Specifically, given a facet F_i with edges E_i = {e_i,1, e_i,2, e_i,3}, the facet edge ratio w_i can be found by

w_i = min{E_i}/max{E_i} ∈ (0, 1].   (25)

The weighting factor w_i decreases toward zero as the disparity between the longest and shortest edges increases.
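Equation (25) is a one-liner. A minimal sketch, with a name of our own choosing:

```python
def facet_edge_ratio(edges):
    # Eq. (25): shortest edge over longest edge of a Delaunay facet,
    # a value in (0, 1]; near 1 means a near-equilateral facet,
    # near 0 an elongated facet straddling a discontinuity.
    return min(edges) / max(edges)
```

An equilateral facet scores exactly 1, while a facet stretched across a range discontinuity scores close to 0.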

Fig. 11: Range discontinuities result in elongation of Delaunay facets when viewed in three dimensions (modified from Fig. 4 of Ref. 38).

The facet ratio represents a quality metric in which the neighborhood is limited to the three measurements bounding the Delaunay facet. High-quality measurements would be those in which the facet ratio was close to 1, whereas those in which w_i was very small would be considered low-quality measurements. Low-quality measurements would have elongated facets indicating steep surface slopes. A drawback of this method is that it is specifically designed to assess the quality of facets and can only be applied to measurements as a side benefit. Moreover, it is specifically designed to work with regularly spaced raster patterns. Nonraster patterns can feature large edge ratios even if the surface is relatively flat, as illustrated in Fig. 12; the arrangements in Fig. 12 contain facets with large facet ratios regardless of the range values associated with them. Although well suited to the purpose for which it was designed, the facet ratio is not easily adapted for use as a general-purpose quality metric representing surface orientation. Fiocco's method, as well as the more popular grazing-angle metric described by Eq. (24), is better suited as a general-purpose surface orientation quality metric.

Fig. 12: Facet ratio is most effective for regularly spaced data, such as (a) and (b). As the measurement distribution becomes less regular, facets with large facet ratios can emerge regardless of their relative range measurements.

Quality metrics are generally combined to generate an overall measure of quality, referred to here as a total quality metric. Scott et al. 2 cited two common examples of how quality metrics could be combined: weighted summation and composite binary pass/fail. The weighted summation approach takes the form

Q_i = Σ_{j=1}^{N_C} w_j C_i,j,   (26)

where C_i,j ∈ [0, 1] represents the j'th quality metric. An example of this approach is the weighted average model used by Sequeira et al. 37 to determine the total quality of each measurement in a range image. To ensure that Q_i ∈ [0, 1], the weight values can be restricted such that Σ_{j=1}^{N_C} w_j = 1. Meanwhile, the binary product approach has the form

Q_i = Π_{j=1}^{N_C} (C_i,j ≥ C_T,j),   (27)

where C_T,j is a threshold quality limit for the j'th quality metric. In this case, (C_i,j ≥ C_T,j) = 1 when the quality metric equals or exceeds the threshold value, and (C_i,j ≥ C_T,j) = 0 otherwise.

The choice of how quality metrics are combined depends on the application and the relative weight placed on each of the quality metrics. The weighted summation approach allows the researcher to tailor the contribution of each of the quality metrics to the overall measurement quality without any one metric dominating the result. For example, Fiocco et al. 50 experimentally derived the weights for each sensor used in the experiment. They also standardized the weighting factors such that each sensor technology could be represented by a single weighting factor that modified each of the metric weights. Sequeira et al. 37,38 also used the weighted-sum approach but did not indicate how the weights were derived. The binary product approach is effective if the goal is to simply exceed some preset quality level.
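Both combination schemes from Eqs. (26) and (27) can be sketched compactly, with names of our own choosing:

```python
def weighted_sum_quality(metrics, weights):
    # Eq. (26): weighted summation; with weights summing to 1 and each
    # metric in [0, 1], the total quality Q_i stays in [0, 1].
    return sum(w * c for w, c in zip(weights, metrics))

def binary_product_quality(metrics, thresholds):
    # Eq. (27): composite binary pass/fail; the product is 1 only if
    # every metric meets or exceeds its threshold.
    q = 1
    for c, t in zip(metrics, thresholds):
        q *= 1 if c >= t else 0
    return q
```

The contrast is visible immediately: the weighted sum degrades gracefully as individual metrics fall, whereas a single sub-threshold metric drives the binary product to zero.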

Several quality attributes are notably absent from contemporary, and even emerging, quality metrics. In particular, no quality metric has been developed to address the motion of the laser spot during the acquisition process. This is of particular interest in triangulation scanners where multiple sample intervals may be integrated to combat speckle noise. No quality metric has been defined to quantify the effect of measurement resolution. Even using range as a quality metric only addresses measurement resolution by proxy. In fact, neighborhood-based metrics do not consider the issue of measurement density or proximity that is less than the measurement resolution of the system. No metric has addressed the problem of measurement repeatability, most likely because it requires multiple range images of the same surface, which substantially increases scanning time. Finally, surface complexity is only imperfectly evaluated using surface orientation.

Measurement quality metrics are rarely combined into a total quality metric. As a result, operations such as measurement merge, range image registration, and deciding whether or not to delete a measurement are often based on inadequate information. For example, although a maximum likelihood merge of two measurements is statistically valid, the covariance matrix only partially describes the quality of the measurement. In fact, a measurement with relatively large covariance may be of substantially lower quality than a measurement with relatively small covariance when other factors, such as distortion of the signal peak and surface orientation, are taken into account. A more comprehensive approach to applying measurement quality to manipulating measurements is required.

Finally, nonreturn measurements are generally treated as having no qualitative value, thus are often ignored during data collection. This means that information about regions of the environment that cannot be scanned is lost. Future research should examine what can be learned about the environment being scanned from the absence of a return signal.

Quality metrics have featured significantly in contemporary research; however, most quality metrics have been designed for specific applications or specific algorithms, and are often used independently. Measurement uncertainty has been used extensively to represent measurement quality, but many environmental factors affect measurement uncertainty, making it insufficient as an independent quality metric.

The relationship of range and resolution to measurement quality depends on the beam width. Additional work is required to better define the relationship between measurement quality and resolution for midfield measurements, where parallax must be taken into account. Sampling density has also been featured in various forms as a quality metric, although most approaches are highly application specific. Absent from the literature is a more detailed analysis of how sampling density is related to measurement quality and how to quantify sampling density as a quality metric in a generalized fashion. Surface orientation has also been used extensively as a quality metric, although it is also insufficient as an independent quality metric. Reflectivity is affected by surface materials, orientation, and surface complexity; thus, this factor has been used to represent measurement quality. Given, however, that reflectivity is affected by multiple factors, it, too, is insufficient as an independent quality metric.

The current state of the art in quality metrics performs adequately in assessing the quality of measurements within the context of specific applications, but it is often not readily generalizable. Few researchers combine quality metrics so that the strengths of one may offset the weaknesses of another. This paper was a first step in assessing the relationship among the various quality metrics currently in use. More work is needed to develop a more comprehensive approach to measurement quality assessment.

We thank the National Research Council of Canada for providing funding for this research through the Graduate Student Scholarship Supplement, as well as for providing facilities and equipment.

Rohani B. and Zepernick H.-J., "Application of a perceptual speech quality metric for link adaptation in wireless systems," in Proc. 1st Int. Symposium on Wireless Communication Systems, pp. 260–264, IEEE, Piscataway, NJ (2004).
Scott W., Roth G., and Rivest J., "Performance-oriented view planning for model acquisition," in Proc. Int. Symp. on Robotics, pp. 212–219 (2000).
Lichti D. D. and Jamtsho S., "Angular resolution of terrestrial laser scanners," Photogramm. Rec. 21, 141–160 (2006).
Walker J. G., "Optical imaging with resolution exceeding the Rayleigh criterion," Opt. Acta 30(9), 1197–1202 (1983).
Blais F. and Beraldin J.-A., "Recent developments in 3D multi-modal laser imaging applied to cultural heritage," Mach. Vision Appl. 17(6), 395–409 (2006).
Hebert M. and Krotkov E., "3-D measurements from imaging laser radars: How good are they?," in Proc. IEEE/RSJ Int. Workshop on Intelligent Robots and Systems, Vol. 1, pp. 359–364 (1991).
Blais F., Taylor J., Cournoyer L., Picard M., Borgeat L., Dicaire L., Rioux M., Beraldin J.-A., Godin G., Lahanier C., and Aitken G., "High resolution imaging at 50 μm using a portable XYZ-RGB color laser scanner," in Proc. Int. Workshop on Recording, Modeling and Visualization of Cultural Heritage, NRC, Ottawa (2005).
Ryan J. S. and Carswell A. I., "Laser beam broadening and depolarization in dense fogs," J. Opt. Soc. Am. 68, 900–908 (1978).
Adams M., "Lidar design, use, and calibration concepts for correct environmental detection," IEEE Trans. Rob. Autom. 16, 753–761 (2000).
Beraldin J.-A., El-Hakim S., and Cournoyer L., "Practical range camera calibration," Proc. SPIE 2067, 21–31 (1993).
Green D. and Blais F., "A multiple DSP-based 3D laser range sensor and its application to real-time motion detection," Tech. Report No. NRC/ERB-1095, National Research Council of Canada, Ottawa (2002).
MacKinnon D., Blais F., and Aitken V., "Object location using edge-bounded planar surfaces from sparse range data," in Proc. Canadian Conference on Electrical and Computer Engineering, pp. 403–408, IEEE (2003).
Longbin M., Ziaoquan S., Yiyu Z., and Chang S. Z., "Unbiased converted measurements for tracking," IEEE Trans. Aerosp. Electron. Syst. 34, 1023–1027 (1998).
Suchomski P., "Explicit expressions for debiased statistics of 3D converted measurements," IEEE Trans. Aerosp. Electron. Syst. 35, 368–370 (1999).
Beraldin J.-A., Latouche C., El-Hakim S., and Filiatrault A., "Applications of photogrammetric and computer vision techniques in shake table testing," in Proc. 13th World Conf. on Earthquake Engineering, paper no. 3458 (2004).
Beraldin J.-A., Picard M., El-Hakim S., Godin G., Borgeat L., Blais F., Paquet E., Rioux M., Valzano V., and Bandiera A., "Virtual reconstruction of heritage sites: Opportunities and challenges created by 3D technologies," in Proc. Int. Workshop on Recording, Modeling and Visualization of Cultural Heritage, pp. 141–156 (2005).
Blais F., "Review of 20 years of range sensor development," J. Electron. Imaging 13, 231–243 (2004).
Adams M., "Coaxial range measurement—Current trends for mobile robotic applications," IEEE Sens. J. 2, 2–13 (2002).
Amann M.-C., Bosch T., Lescure M., Myllyla R., and Rioux M., "Laser ranging: a critical review of usual techniques for distance measurement," Opt. Eng. 40, 10–19 (2001).
Blais F., Beraldin J.-A., and El-Hakim S. F., "Range error analysis of an integrated time-of-flight, triangulation, and photogrammetry 3D laser scanning system," Proc. SPIE 4035, 236–247 (2000).
Garcia E. and Lamela H., "Low-cost three-dimensional vision system based on a low-power semiconductor laser rangefinder and a single scanning mirror," Opt. Eng. 40, 61–66 (2001).
Baribeau R. and Rioux M., "Influence of speckle on laser range finders," Appl. Opt. 30, 2873–2978 (1991).
Beraldin J.-A., Blais F., Rioux M., Cournoyer L., Laurin D., and MacLean S., "Eye-safe digital 3-D sensing for space applications," Opt. Eng. 39, 196–211 (2000).
Goodman J. W., "Some fundamental properties of speckle," J. Opt. Soc. Am. 66, 1145–1150 (1976).
Curless B. L., "New methods for surface reconstruction from range images," Ph.D. Dissertation, Stanford Univ. (1997).
Baribeau R., Rioux M., and Godin G., "Color reflectance modeling using a polychromatic laser range sensor," IEEE Trans. Pattern Anal. Mach. Intell. 14, 263–269 (1991).
Hancock J. A., "Laser intensity-based obstacle detection and tracking," Ph.D. Dissertation, Carnegie Mellon Univ. (1999).
Prieto F., Boulanger P., Lepage R., and Redarce T., "Automated inspection system using range data," in Proc. IEEE Int. Conf. on Robotics and Automation, Vol. 3, pp. 2557–2562 (2002).
Lang J. and Pai D., "Bayesian estimation of distance and surface normal with a time-of-flight laser rangefinder," in Proc. 2nd Int. Conf. on 3-D Digital Imaging and Modeling, pp. 109–117, IEEE (1999).
Soucy M. and Laurendeau D., "A general surface approach to the integration of a set of range views," IEEE Trans. Pattern Anal. Mach. Intell. 17, 344–358 (1995).
Johnson A., Hoffman R., Osborn J., and Hebert M., "A system for semi-automatic modeling of complex environments," in Proc. Int. Conf. on Recent Advances in 3-D Digital Imaging and Modeling, pp. 213–220, IEEE (1997).
Hancock J., Hebert M., and Thorpe C., "Laser intensity-based obstacle detection," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Vol. 3, pp. 1541–1546 (1998).
Hancock J., Langer D., Hebert M., Sullivan R., Ingimarson D., Hoffman E., Mettenleiter M., and Froehlich C., "Active laser radar for high-performance measurements," in Proc. IEEE Int. Conf. on Robotics and Automation, Vol. 2, pp. 1465–1470 (1998).
Beraldin J.-A., Blais F., Rioux M., Domey J., Gonzo L., Nisi F. D., Comper F., Stoppa D., Gottardi M., and Simoni A., "Optimized position sensors for flying-spot active triangulation systems," in Proc. 4th Int. Conf. on 3-D Digital Imaging and Modeling, pp. 29–36, IEEE (2003).
Godin G., Beraldin J.-A., Rioux M., Levoy M., and Cournoyer L., "An assessment of laser range measurement of marble surfaces," in Proc. 5th Conf. on Optical 3-D Measurement Techniques, pp. 49–56 (2001).
El-Hakim S. and Beraldin J.-A., "Configuration design for sensor integration," Proc. SPIE 2598, 274–285 (1995).
Sequeira V., Ng K., Wolfart E., Goncalves J. G., and Hogg D., "Automated 3D reconstruction of interiors with multiple scan-views," Proc. SPIE 3641, 106–117 (1998).
Sequeira V., Ng K., Wolfart E., Goncalves J. G. M., and Hogg D., "Automated reconstruction of 3D models from real environments," ISPRS J. Photogramm. Remote Sens. 54, 1–22 (1999).
Sequeira V. and Goncalves J., "3D reality modelling: Photo-realistic 3D models of real world scenes," in Proc. 1st Int. Symp. on 3D Data Processing Visualization and Transmission, pp. 776–783, IEEE (2002).
Beraldin J.-A., "Integration of laser scanning and close-range photogrammetry—The last decade and beyond," in Proc. XXth Int. Soc. for Photogrammetry and Remote Sensing (ISPRS) Congress, Commission VII, pp. 972–983 (2004).
DeNisi F., Comper F., Gonzo L., Gottardi M., Stoppa D., Simoni A., and Beraldin J.-A., "A CMOS sensor optimized for laser spot-position detection," IEEE Sens. J. 5, 1296–1304 (2005).
Rutishauser M., Stricker M., and Trobina M., "Merging range images of arbitrarily shaped objects," in Proc. IEEE Comput. Soc. Conf. on Computer Vision and Pattern Recognition, pp. 573–580, IEEE (1994).
Zhang Z. and Faugeras O., "A 3D world model builder with a mobile robot," Int. J. Robot. Res. 11, 269–285 (1992).
Carmer D. and Peterson L., "Laser radar in robotics," Proc. IEEE 84, 299–320 (1996).
El-Hakim S. and Beraldin J.-A., "On the integration of range and intensity data to improve vision-based three-dimensional measurements," Proc. SPIE 2350, 306–321 (1994).
Scott  W. R., , Roth  G., , and Rivest  J.-F., “ View planning for automated three-dimensional object reconstruction and inspection. ,” ACM Comput. Surv..  0360-0300 35, , 64–96  ((2003)).
Tuley  J., , Vandapel  N., , and Hebert  M., “ Analysis and removal of artifacts in 3-D LADAR data. ,” in  Proc. IEEE Int. Conf. on Robotics and Automation. , pp. 2203–2210  ((2005)).
Slob  S., , Hack  H., , and Turner  A., “ An approach to automate discontinuity measurements of rock faces using laser scanning techniques. ,” in  Proc. ISRM Int. Symp. on Rock Engineering for Mountainous Regions. , pp. 87–94 ,  Sociedade Portuguesa de Geotecnica  ((2002)).
Godin  G., , Rioux  M., , and Baribeau  R., “ Three-dimensional registration using range and intensity information. ,” in Proc. SPIE.  0277-786X , 2350, , 279–290  ((1994)).
Fiocco  M., , Boström  G., , Gonçalves  J., , and Sequeira  V., “ Multisensor fusion for volumetric reconstruction of large outdoor areas. ,” in  Proc. 5th Int. Conf. on 3-D Digital Imaging and Modeling. , pp. 47–54 ,  IEEE  ((2005)).
Williams  D.,  Optical Methods in Engineering Metrology. , 1st ed., pp. 11–16 ,  Chapman & Hall , London ((1993)).
Chu  B.,  Laser Light Scattering Basic Principles and Practice. , 2nd ed., pp. 156–160 ,  Academic Press ,  New York , ((1991)).
Jacobs  G., “ Understanding spot size for laser scanning. ,” Professional Surv. Mag.. , 26, (10 ), 48–50  ((2006)).
Argyle  E., “ Techniques for edge detection. ,” Proc. IEEE.  0018-9219 , 59, , 285–287  ((1971)).
Davis  L., “ A survey of edge detection techniques. ,” Comput. Graph. Image Process..  0146-664X , 4, , 248–270  ((1975)).
Peli  T., and Malah  D., “ A study of edge detection algorithms. ,” Comput. Graph. Image Process..  0146-664X , 20, , 1–21  ((1982)).
Ziou  D., and Tabbone  S., “ Edge Detection Techniques -An Overview. ,” Int. J. Pattern Recog. and Image Anal.. , 8, , 537–559  ((1998)).
Trichili  H., , Bouhlel  M.-S., , Derbel  N., , and Kamoun  L., “ A survey and evaluation of edge detection operators application to medical images. ,” in  Proc. IEEE Int. Conf. on Systems, Man and Cybernetics. , vol. 4, pp. 706–709 ((2002)).
Xiao  Z., , Yu  M., , Guo  C., , and Tang  H., “ Analysis and comparison on image feature detectors. ,” in  Proc. 3rd Int. Symp. on Electromagnetic Compatibility. , pp. 651–656 ,  IEEE  ((2002)).
Basu  M., “ Gaussian-based edge-detection methods-a survey. ,” IEEE Trans. Syst. Man Cybern..  0018-9472 , 32, (3 ), 252–260  ((2002)).
Jerri  A. J., “ The shannon sampling theorem—its various extensions and applications: A tutorial review. ,” Proc. IEEE.  0018-9219 65, , 1565–1596  ((1977)).
Lee  C.-H., “ Image surface approximation with irregular samples. ,” IEEE Trans. Pattern Anal. Mach. Intell..  0162-8828 11, , 206–212  ((1989)).
Oppenheim  A. V., and Schafer  R. W.,  Discrete Signal Processing. , 2nd ed., pp. 142–147 ,  Prentice-Hall Signal Processing ,  Prentice-Hall, Englewood Cliffs, NJ  ((1999)).
Guidi  G., , Frischer  B., , Russo  M., , Spinetti  A., , Carosso  L., , and Micoli  L. L., “ Three-dimensional acquisition of large and detailed cultural heritage objects. ,” Mach. Vision Appl..  0932-8092 17, , 349–360  ((2006)).
Fiete  R. D., and Tantalo  T. A., “ Image quality of increased along-scan sampling for remote sensing systems. ,” Opt. Eng..  0091-3286 , 38, , 815–820  ((1999)).
Klein  K., and Sequeira  V., “ The view-cube: an efficient method of view planning for 3D modelling from range data. ,” in  Proc. 5th IEEE Workshop on Applications of Computer Vision. , pp. 186–191  ((2000)).
Klein  J., and Zachmann  G., “ Proximity graphs for defining surfaces over point clouds. ,” in  Proc. Eurographics Symp. on Point-Based Graphics. ,  ETH , Zurich ((2004)).
den Dekker  A. J., and van den Bos  A., “ Resolution: A survey. ,” J. Opt. Soc. Am. A.  0740-3232 , 14, , 547–557  ((1997)).
Soucy  M., , Croteau  A., , and Laurendeau  D., “ A multi-resolution surface model for compact representation of range images. ,” in  Proc. IEEE Int. Conf. on Robotics and Automation. , 2, , 1701–1706  ((1992)).
Turk  G., and Levoy  M., “ Zippered polygon meshes from range images. ,” in  SIGGraph-94. , pp. 311–318 ,  ACM  ((1994)).
Massios  N. A., and Fisher  R. B., “ A best next view selection algorithm incorporating a quality criterion. ,” in  9th British Machine Vision Conf.. , pp. 780–789  ((1998)).
Hoppe  H., , DeRose  T., , Duchamp  T., , McDonald  J., , and Stuetzle  W., “ Surface reconstruction from unorganized points. ,” in  Proc. SIGRAPH. , 26, , pp. 71–78  ((1992)).
Scott  W., , Roth  G., , and Rivest  J.-F., “ View planning for multi-stage object reconstruction. ,” in  Proc. Vision Interface. , pp. 64–71  ((2001)).
Johnson  A., and Kang  S. B., “ Registration and integration of textured 3-D data. ,” in  Proc. Int. Conf. on Recent Advances in 3-D Digital Imaging and Modeling. , pp. 234–241  ((1997)).


David MacKinnon holds a BSc (1990) in mathematics from the University of Prince Edward Island (PEI), a BSc (2001) in electrical and computer engineering from the University of New Brunswick, and both an MASc (2003) and PhD (2008) in electrical engineering from Carleton University. He is currently a research associate at the National Research Council Canada’s Institute for Information Technology, working in the area of measurement standards in 3D metrology. Between 1991 and 1998, he worked as a statistician, first with the PEI Food Technology Centre, then with the UPEI Clinical Research Centre. He is currently an Engineer-in-Training with the Association of Professional Engineers and Geoscientists of New Brunswick.


Victor Aitken holds a BSc (1987) in electrical engineering and mathematics from the University of British Columbia, and the MEng (1991) and PhD (1995) degrees in electrical engineering from Carleton University, Ottawa. He is currently an associate professor and chair of the Department of Systems and Computer Engineering at Carleton University, Ottawa, and is a member of the Professional Engineers of Ontario. His research interests include control systems, state estimation, data and information fusion, redundancy, sliding mode systems, nonlinear systems, vision, and mapping and localization for navigation and guidance of unmanned vehicle systems with applications in underground mining, landmine detection, and exploration.


François Blais is principal research officer and group leader of visual information technology at the National Research Council Canada’s Institute for Information Technology. He received his BSc and MSc in electrical engineering from Laval University, Quebec City. Since 1984, his research has resulted in the development of a number of innovative 3D sensing technologies licensed to various industries and applications, including space and the in-orbit 3D laser inspection of NASA’s Space Shuttles. He has led numerous R&D initiatives in 3D imaging and in the scanning and modeling of important archaeological sites and objects of art, including Leonardo da Vinci’s masterpiece, the Mona Lisa. He has received several awards of excellence for his work and is very active internationally through scientific committees, more than 150 publications and patents, invited presentations, and tutorials.

© 2008 SPIE and IS&T

Citation

David MacKinnon, Victor Aitken, and François Blais,
"Review of measurement quality metrics for range imaging," J. Electron. Imaging 17(3), 033003 (July 25, 2008). http://dx.doi.org/10.1117/1.2955245


Figures

Fig. 1:

Example of a fixed-viewpoint laser range scanner employing dual-axis galvanometer-controlled rotating mirrors [modified from Fig. 6(b) of Ref. 10].

Fig. 2:

Common range uncertainties for Amplitude-Modulation Continuous Wave (AM), Frequency-Modulation Continuous Wave (FMCW), Time-of-Flight (TOF), and triangulation scanners up to 100-m effective range (referred to in this figure as volume). The range measurement uncertainties of all but the triangulation scanners are considered constant with respect to range [modified from Fig. 2(b) of Ref. 16].

Fig. 3:

Speckle noise arises from the interference of a series of diffraction patterns, each generated by a speckle element. (modified from Fig. 2.11 of Ref. 25).

Fig. 4:

Speckle noise is reduced by integrating the measurement over several sampling intervals. (modified from Fig. 3 of Ref. 23).

Fig. 5:

A range discontinuity results in a shift (Δx) in the position of the centroid in a triangulation laser range scanner. This results in a range error Δz. [modified from Fig. 7(a) of Ref. 7].
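The caption's Δx-to-Δz relationship can be made concrete with the standard first-order triangulation error model: for a scanner with baseline b, effective focal length f, and range z, a centroid shift Δx on the position detector produces a range error of roughly

```latex
\Delta z \approx \frac{z^{2}}{f\,b}\,\Delta x
```

so range error grows quadratically with distance. This is the generic triangulation approximation, not necessarily the exact expression derived in Ref. 7.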

Fig. 6:

Range errors can result from the laser penetrating the surface of the material being scanned. (modified from Fig. 6 of Ref. 7).

Fig. 7:

Transitions between regions of different surface reflectivity can affect the accuracy of the range measurement (Fig. 1 of Ref. 36).

Fig. 8:

Discontinuity in surface reflectivity results in a shift (Δx) in the position of the centroid in a triangulation laser range scanner. This results in a change in return signal intensity. [modified from Fig. 7(b) of Ref. 7].

Fig. 9:

Assuming the scanner is limited to within a metre or two of ground level, lateral measurement uncertainty increases with the height of the structure because the distance between the scanner and the surface increases.

Fig. 10:

Laser beam 1/e² boundary. The far field is the region in which ζ ≫ ζ0.
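Assuming the beam follows the usual Gaussian-beam model, with ζ0 the Rayleigh range, the 1/e² beam radius evolves with propagation distance ζ as

```latex
w(\zeta) = w_{0}\sqrt{1+\left(\frac{\zeta}{\zeta_{0}}\right)^{2}},
\qquad
\zeta_{0} = \frac{\pi w_{0}^{2}}{\lambda},
```

so for ζ ≫ ζ0 the radius grows nearly linearly with distance, which is what defines the far field in the figure.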

Fig. 11:

Range discontinuities result in elongation of Delaunay facets when viewed in three dimensions (modified from Fig. 4 of Ref. 38).

Fig. 12:

Facet ratio is most effective for regularly spaced data, such as (a) and (b). As the measurement distribution becomes less regular, facets with large facet ratios can emerge regardless of their relative range measurements.
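To illustrate why elongated facets flag range discontinuities, the sketch below computes one plausible facet-ratio measure (longest edge over shortest edge of a 3-D triangle); the exact definition used in the cited work may differ:

```python
import math

def facet_ratio(a, b, c):
    """Ratio of longest to shortest edge of a 3-D triangle.

    One plausible elongation measure: ~1 for a near-equilateral facet
    on a smooth surface, large for a facet spanning a range discontinuity.
    """
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    edges = [dist(a, b), dist(b, c), dist(c, a)]
    return max(edges) / min(edges)

# A facet on a smooth surface versus one spanning a 10-unit range jump:
flat = facet_ratio((0, 0, 0), (1, 0, 0), (0.5, 0.9, 0))     # close to 1
jump = facet_ratio((0, 0, 0), (1, 0, 0), (0.5, 0.1, 10.0))  # strongly elongated
```

Thresholding such a ratio is one simple way to cull facets that bridge depth discontinuities before meshing.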

Tables

Table 1:
Environmental factors affecting measurement uncertainty.
Table 2:
Environmental factors affecting return signal intensity.

References

Rohani  B., and Zepernick  H.-J., “ Application of a perceptual speech quality metric for link adaptation in wireless systems. ,”  Proc. 1st Int. Symposium on Wireless Communication Systems. , pp. 260–264 ,  IEEE , Piscataway, NJ ((2004)).
Scott  W., , Roth  G., , and Rivest  J., “ Performance-Oriented View Planning for Model Acquisition. ,” in  Proc. Int. Symp. on Robotics. , pp. 212–219  ((2000)).
Lichti  D. D., and Jamtsho  S., “ Angular resolution of terrestrial laser scanners. ,” Photogramm. Rec..  0031-868X 21, , 141–160  ((2006)).
Walker  J. G., “ Optical imaging with resolution exceeding the Rayleigh criterion. ,” Opt. Acta.  0030-3909 30, (9 ), 1197–1202  ((1983)).
Blais  F., and Beraldin  J.-A., “ Recent developments in 3D multi-modal laser imaging applied to cultural heritage. ,” Mach. Vision Appl..  0932-8092 17, (6 ), 395–409  ((2006)).
Hebert  M., and Krotkov  E., “ 3-D measurements from imaging laser radars: How good are they?. ,” in  Proc. IEEE/RSJ Int. Workshop on Intelligent Robots and Systems. , Vol. 1, , pp. 359–364  ((1991)).
Blais  F., , Taylor  J., , Cournoyer  L., , Picard  M., , Borgeat  L., , Dicaire  L., , Rioux  M., , Beraldin  J.-A., , Godin  G., , Lahanier  C., , and Aitken  G., “ High resolution imaging at 50 μm using a portable XYZ-RGB color laser scanner. ,” in  Proc. Int. Workshop on Recording, Modeling and Visualization of Cultural Heritage. ,  NRC , Ottawa ((2005)).
Ryan  J. S., and Carswell  A. I., “ Laser beam broadening and depolarization in dense fogs. ,” J. Opt. Soc. Am..  0030-3941 68, , 900–908  ((1978)).
Adams  M., “ Lidar design, use, and calibration concepts for correct environmental detection. ,” IEEE Trans. Rob. Autom..  1042-296X 16, , 753–761  ((2000)).
Beraldin  J.-A., , El-Hakim  S., , and Cournoyer  L., “ Practical range camera calibration. ,” Proc. SPIE.  0277-786X 2067, , 21–31  ((1993)).
Green  D., and Blais  F., “ A Multiple DSP-based 3D Laser Range Sensor and its Application to Real-time Motion Detection. ,” Tech. Report No. NRC/ERB-1095, National Research Council of Canada, Ottawa ((2002)).
MacKinnon  D., , Blais  F., , and Aitken  V., “ Object location using edge-bounded planar surfaces from sparse range data. ,” in  Proc. Canadian Conference on Electrical and Computer Engineering. , pp. 403–408, IEEE ((2003)).
Longbin  M., , Xiaoquan  S., , Yiyu  Z., , and Chang  S. Z., “ Unbiased converted measurements for tracking. ,” IEEE Trans. Aerosp. Electron. Syst..  0018-9251 34, , 1023–1027  ((1998)).
Suchomski  P., “ Explicit expressions for debiased statistics of 3D converted measurements. ,” IEEE Trans. Aerosp. Electron. Syst..  0018-9251 35, , 368–370  ((1999)).
Beraldin  J.-A., , Latouche  C., , El-Hakim  S., , and Filiatrault  A., “ Applications of photogrammetric and computer vision techniques in shake table testing. ,” in  Proc. 13th World Conf. on Earthquake Engineering. , paper no. 3458  ((2004)).
Beraldin  J.-A., , Picard  M., , El-Hakim  S., , Godin  G., , Borgeat  L., , Blais  F., , Paquet  E., , Rioux  M., , Valzano  V., , and Bandiera  A., “ Virtual reconstruction of heritage sites: Opportunities and challenges created by 3D technologies. ,” in  Proc. Int. Workshop on Recording, Modeling and Visualization of Cultural Heritage. , pp. 141–156  ((2005)).
Blais  F., “ Review of 20 years of range sensor development. ,” J. Electron. Imaging.  1017-9909 13, , 231–243  ((2004)).
Adams  M., “ Coaxial range measurement—Current trends for mobile robotic applications. ,” IEEE Sens. J..  1530-437X 2, , 2–13  ((2002)).
Amann  M.-C., , Bosch  T., , Lescure  M., , Myllyla  R., , and Rioux  M., “ Laser ranging: a critical review of usual techniques for distance measurement. ,” Opt. Eng..  0091-3286 40, , 10–19  ((2001)).
Blais  F., , Beraldin  J. A., , and El-Hakim  S. F., “ Range error analysis of an integrated time-of-flight, triangulation, and photogrammetry 3D laser scanning system. ,” Proc. SPIE.  0277-786X 4035, , 236–247  ((2000)).
Garcia  E., and Lamela  H., “ Low-cost three-dimensional vision system based on a low-power semiconductor laser rangefinder and a single scanning mirror. ,” Opt. Eng..  0091-3286 40, , 61–66  ((2001)).
Baribeau  R., and Rioux  M., “ Influence of speckle on laser range finders. ,” Appl. Opt..  0003-6935 30, , 2873–2978  ((1991)).
Beraldin  J.-A., , Blais  F., , Rioux  M., , Cournoyer  L., , Laurin  D., , and MacLean  S., “ Eye-safe digital 3-D sensing for space applications. ,” Opt. Eng..  0091-3286 39, , 196–211  ((2000)).
Goodman  J. W., “ Some fundamental properties of speckle. ,” J. Opt. Soc. Am..  0030-3941 66, , 1145–1150  ((1976)).
Curless  B. L., “ New methods for surface reconstruction from range images. ,” Ph.D. Dissertation, Stanford Univ. ((1997)).
Baribeau  R., , Rioux  M., , and Godin  G., “ Color reflectance modeling using a polychromatic laser range sensor. ,” IEEE Trans. Pattern Anal. Mach. Intell..  0162-8828 14, , 263–269  ((1991)).
Hancock  J. A., “ Laser intensity-based obstacle detection and tracking. ,” Ph.D. Dissertation, Carnegie Mellon Univ. ((1999)).
Prieto  F., , Boulanger  P., , Lepage  R., , and Redarce  T., “ Automated inspection system using range data. ,” in  Proc. IEEE Int. Conf. on Robotics and Automation. , 3, , pp. 2557–2562  ((2002)).
Lang  J., and Pai  D., “ Bayesian estimation of distance and surface normal with a time-of-flight laser rangefinder. ,” in  Proc. 2nd Int. Conf. on 3-D Digital Imaging and Modeling. , pp. 109–117 ,  IEEE  ((1999)).
Soucy  M., and Laurendeau  D., “ A general surface approach to the integration of a set of range views. ,” IEEE Trans. Pattern Anal. Mach. Intell..  0162-8828 17, , 344–358  ((1995)).
Johnson  A., , Hoffman  R., , Osborn  J., , and Hebert  M., “ A system for semi-automatic modeling of complex environments. ,” in  Proc. Int. Conf. on Recent Advances in 3-D Digital Imaging and Modeling. , pp. 213–220 ,  IEEE  ((1997)).
Hancock  J., , Hebert  M., , and Thorpe  C., “ Laser intensity-based obstacle detection. ,” in  Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems. , 3, , pp. 1541–1546  ((1998)).
Hancock  J., , Langer  D., , Hebert  M., , Sullivan  R., , Ingimarson  D., , Hoffman  E., , Mettenleiter  M., , and Froehlich  C., “ Active laser radar for high-performance measurements. ,” in  Proc. IEEE Int. Conf. on Robotics and Automation. , 2, , pp. 1465–1470  ((1998)).
Beraldin  J.-A., , Blais  F., , Rioux  M., , Domey  J., , Gonzo  L., , Nisi  F. D., , Comper  F., , Stoppa  D., , Gottardi  M., , and Simoni  A., “ Optimized position sensors for flying-spot active triangulation systems. ,” in  Proc. 4th Int. Conf. on 3-D Digital Imaging and Modeling. , pp. 29–36 ,  IEEE  ((2003)).
Godin  G., , Beraldin  J.-A., , Rioux  M., , Levoy  M., , and Cournoyer  L., “ An assessment of laser range measurement of marble surfaces. ,” in  Proc. 5th Conf. on Optical 3-D Measurement Techniques. , pp. 49–56  ((2001)).
El-Hakim  S., and Beraldin  J.-A., “ Configuration design for sensor integration. ,” Proc. SPIE.  0277-786X 2598, , 274–285  ((1995)).
Sequeira  V., , Ng  K., , Wolfart  E., , Goncalves  J. G., , and Hogg  D., “ Automated 3D reconstruction of interiors with multiple scan-views. ,” Proc. SPIE.  0277-786X 3641, , 106–117  ((1998)).
Sequeira  V., , Ng  K., , Wolfart  E., , Goncalves  J. G. M., , and Hogg  D., “ Automated reconstruction of 3D models from real environments. ,” ISPRS J. Photogramm. Remote Sens..  0924-2716 54, , 1–22  ((1999)).
Sequeira  V., and Goncalves  J., “ 3D reality modelling: Photo-realistic 3D models of real world scenes. ,” in  Proc. 1st Int. Symp. on 3D Data Processing Visualization and Transmission. , pp. 776–783 ,  IEEE  ((2002)).
Beraldin  J.-A., “ Integration of laser scanning and close-range photogrammetry—The last decade and beyond. ,” in  Proc. XXth Int. Soc. for Photogrammetry and Remote Sensing (ISPRS) Congress, Commission VII. , pp. 972–983  ((2004)).
DeNisi  F., , Comper  F., , Gonzo  L., , Gottardi  M., , Stoppa  D., , Simoni  A., , and Beraldin  J.-A., “ A CMOS sensor optimized for laser spot-position detection. ,” IEEE Sens. J..  1530-437X 5, , 1296–1304  ((2005)).
Rutishauser  M., , Stricker  M., , and Trobina  M., “ Merging range images of arbitrarily shaped objects. ,” in  Proc. IEEE Comput. Soc. Conf. on Computer Vision and Pattern Recognition. , pp. 573–580 ,  IEEE  ((1994)).
Zhang  Z., and Faugeras  O., “ A 3D world model builder with a mobile robot. ,” Int. J. Robot. Res..  0278-3649 11, , 269–285  ((1992)).
Carmer  D., and Peterson  L., “ Laser radar in robotics. ,” Proc. IEEE.  0018-9219 84, , 299–320  ((1996)).
El-Hakim  S., and Beraldin  J.-A., “ On the integration of range and intensity data to improve vision-based three-dimensional measurements. ,” Proc. SPIE.  0277-786X 2350, , 306–321  ((1994)).
Scott  W. R., , Roth  G., , and Rivest  J.-F., “ View planning for automated three-dimensional object reconstruction and inspection. ,” ACM Comput. Surv..  0360-0300 35, , 64–96  ((2003)).
Tuley  J., , Vandapel  N., , and Hebert  M., “ Analysis and removal of artifacts in 3-D LADAR data. ,” in  Proc. IEEE Int. Conf. on Robotics and Automation. , pp. 2203–2210  ((2005)).
Slob  S., , Hack  H., , and Turner  A., “ An approach to automate discontinuity measurements of rock faces using laser scanning techniques. ,” in  Proc. ISRM Int. Symp. on Rock Engineering for Mountainous Regions. , pp. 87–94 ,  Sociedade Portuguesa de Geotecnica  ((2002)).
Godin  G., , Rioux  M., , and Baribeau  R., “ Three-dimensional registration using range and intensity information. ,” in Proc. SPIE.  0277-786X , 2350, , 279–290  ((1994)).
Fiocco  M., , Boström  G., , Gonçalves  J., , and Sequeira  V., “ Multisensor fusion for volumetric reconstruction of large outdoor areas. ,” in  Proc. 5th Int. Conf. on 3-D Digital Imaging and Modeling. , pp. 47–54 ,  IEEE  ((2005)).
Williams  D.,  Optical Methods in Engineering Metrology. , 1st ed., pp. 11–16 ,  Chapman & Hall , London ((1993)).
Chu  B.,  Laser Light Scattering Basic Principles and Practice. , 2nd ed., pp. 156–160 ,  Academic Press ,  New York , ((1991)).
Jacobs  G., “ Understanding spot size for laser scanning. ,” Professional Surv. Mag.. , 26, (10 ), 48–50  ((2006)).
Argyle  E., “ Techniques for edge detection. ,” Proc. IEEE.  0018-9219 , 59, , 285–287  ((1971)).
Davis  L., “ A survey of edge detection techniques. ,” Comput. Graph. Image Process..  0146-664X , 4, , 248–270  ((1975)).
Peli  T., and Malah  D., “ A study of edge detection algorithms. ,” Comput. Graph. Image Process..  0146-664X , 20, , 1–21  ((1982)).
Ziou  D., and Tabbone  S., “ Edge detection techniques: An overview. ,” Int. J. Pattern Recog. and Image Anal.. , 8, , 537–559  ((1998)).
Trichili  H., , Bouhlel  M.-S., , Derbel  N., , and Kamoun  L., “ A survey and evaluation of edge detection operators application to medical images. ,” in  Proc. IEEE Int. Conf. on Systems, Man and Cybernetics. , vol. 4, pp. 706–709 ((2002)).
Xiao  Z., , Yu  M., , Guo  C., , and Tang  H., “ Analysis and comparison on image feature detectors. ,” in  Proc. 3rd Int. Symp. on Electromagnetic Compatibility. , pp. 651–656 ,  IEEE  ((2002)).
Basu  M., “ Gaussian-based edge-detection methods-a survey. ,” IEEE Trans. Syst. Man Cybern..  0018-9472 , 32, (3 ), 252–260  ((2002)).
Jerri  A. J., “ The Shannon sampling theorem—its various extensions and applications: A tutorial review. ,” Proc. IEEE.  0018-9219 65, , 1565–1596  ((1977)).
Lee  C.-H., “ Image surface approximation with irregular samples. ,” IEEE Trans. Pattern Anal. Mach. Intell..  0162-8828 11, , 206–212  ((1989)).
Oppenheim  A. V., and Schafer  R. W.,  Discrete-Time Signal Processing. , 2nd ed., pp. 142–147 ,  Prentice-Hall Signal Processing Series ,  Prentice-Hall, Englewood Cliffs, NJ  ((1999)).
Guidi  G., , Frischer  B., , Russo  M., , Spinetti  A., , Carosso  L., , and Micoli  L. L., “ Three-dimensional acquisition of large and detailed cultural heritage objects. ,” Mach. Vision Appl..  0932-8092 17, , 349–360  ((2006)).
Fiete  R. D., and Tantalo  T. A., “ Image quality of increased along-scan sampling for remote sensing systems. ,” Opt. Eng..  0091-3286 , 38, , 815–820  ((1999)).
Klein  K., and Sequeira  V., “ The view-cube: an efficient method of view planning for 3D modelling from range data. ,” in  Proc. 5th IEEE Workshop on Applications of Computer Vision. , pp. 186–191  ((2000)).
Klein  J., and Zachmann  G., “ Proximity graphs for defining surfaces over point clouds. ,” in  Proc. Eurographics Symp. on Point-Based Graphics. ,  ETH , Zurich ((2004)).
den Dekker  A. J., and van den Bos  A., “ Resolution: A survey. ,” J. Opt. Soc. Am. A.  0740-3232 , 14, , 547–557  ((1997)).
Soucy  M., , Croteau  A., , and Laurendeau  D., “ A multi-resolution surface model for compact representation of range images. ,” in  Proc. IEEE Int. Conf. on Robotics and Automation. , 2, , 1701–1706  ((1992)).
Turk  G., and Levoy  M., “ Zippered polygon meshes from range images. ,” in  Proc. SIGGRAPH 94. , pp. 311–318 ,  ACM  ((1994)).
Massios  N. A., and Fisher  R. B., “ A best next view selection algorithm incorporating a quality criterion. ,” in  9th British Machine Vision Conf.. , pp. 780–789  ((1998)).
Hoppe  H., , DeRose  T., , Duchamp  T., , McDonald  J., , and Stuetzle  W., “ Surface reconstruction from unorganized points. ,” in  Proc. SIGGRAPH. , 26, , pp. 71–78  ((1992)).
Scott  W., , Roth  G., , and Rivest  J.-F., “ View planning for multi-stage object reconstruction. ,” in  Proc. Vision Interface. , pp. 64–71  ((2001)).
Johnson  A., and Kang  S. B., “ Registration and integration of textured 3-D data. ,” in  Proc. Int. Conf. on Recent Advances in 3-D Digital Imaging and Modeling. , pp. 234–241  ((1997)).
