Range imagers provide useful information for part inspection, robot control, or human safety applications in industrial environments. However, some applications may require more information than range data from a single viewpoint. Therefore, multiple range images must be combined to create a three-dimensional representation of the
scene. Although simple in principle, this operation is not straightforward to implement in industrial systems, since each range image is affected by noise. In this paper, we present two specific applications where merging of range images must be performed. We use the same processing pipeline for both applications: conversion from
range images to point clouds, elimination of the degrees of freedom between the different clouds, and validation of the merged results. Nevertheless, each step in this pipeline requires dedicated algorithms for our example applications. The first application is high-resolution inspection of large parts, where many range images are acquired sequentially and merged in a post-processing step, allowing the creation of a virtual model of the observed part, typically larger than the instrument's field of view. The key requirement in this application is high accuracy in the merging of multiple point clouds. The second application discussed is human safety in a human/robot environment: range images are used to ensure that no human is present in the robot's zone of operation, and can trigger the robot's emergency shutdown when needed. In this case, range image merging is required to avoid uncertainties due to occlusions. The key requirement here is real-time operation, i.e., the merging operation should not introduce significant latency in the data processing pipeline. For both application cases, the improvements brought by
merging multiple range images are clearly illustrated.
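The first step of this pipeline, conversion from a range image to a point cloud, amounts to back-projecting each valid pixel through the sensor model. A minimal sketch is given below, assuming a pinhole model with known intrinsics (fx, fy, cx, cy) and NaN-coded invalid pixels; these conventions are illustrative assumptions, not details from the paper.

```python
import numpy as np

def range_image_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a range image into a 3D point cloud.

    depth: (H, W) array of range values, NaN for invalid pixels.
    fx, fy, cx, cy: assumed pinhole intrinsics of the range imager.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[~np.isnan(points).any(axis=1)]    # drop invalid pixels
```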
Industrial inspection of micro-devices is often a very challenging task, especially when those devices are produced
in large quantities using micro-fabrication techniques. In the case of microlenses, millions of lenses are produced
on the same substrate, thus forming a dense array. In this article, we investigate a possible automation of
the microlens array inspection process. First, two image processing methods are considered and compared:
reference subtraction and blob analysis. The criteria chosen to compare them are the reliability of the defect
detection, the processing time required, as well as the sensitivity to image acquisition conditions, such as varying
illumination and focus. Tests performed on a real-world database of microlens array images led to the selection of the blob analysis method. Based on the selected method, an automated inspection software module was then successfully implemented. Its good performance dramatically reduces the inspection time as well as the human intervention in the inspection process.
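The abstract does not spell out the blob analysis itself; the sketch below shows one plausible form of such a check with OpenCV, flagging lens sites whose blob area falls outside a nominal range. The Otsu binarization and the area bounds are assumptions made for illustration.

```python
import cv2

def detect_defective_lenses(image, min_area, max_area):
    """Flag microlens sites whose blob area is outside the nominal range.

    image: grayscale (uint8) image of the microlens array.
    min_area, max_area: assumed area bounds for a good lens blob.
    Returns the centroids of blobs classified as defective.
    """
    # Binarize with Otsu's threshold; lenses are assumed to appear as
    # bright blobs against a darker background.
    _, binary = cv2.threshold(image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    defects = []
    for i in range(1, n):                        # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if not (min_area <= area <= max_area):
            defects.append(tuple(centroids[i]))
    return defects
```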
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic
visual attention, and specifically compares computer models designed with the motion component expressed
either as the speed magnitude or as the speed vector. Several computer models, including static features (color,
intensity and orientation) and motion features (magnitude and vector) are considered. Qualitative and quantitative
evaluations are performed by comparing the computer model output with human saliency maps obtained
experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving cameras), showing the advantages and drawbacks of each method as well as its preferred domain of application.
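The two motion representations under comparison can both be derived from dense optical flow. The sketch below uses OpenCV's Farneback flow as a stand-in motion estimator (the paper's own motion computation is not specified in the abstract): the magnitude feature is the scalar speed map, while a vector-based feature can contrast each pixel's motion against the dominant, e.g. camera-induced, motion.

```python
import cv2
import numpy as np

def motion_features(prev_gray, gray):
    """Compute the two motion representations compared in the paper,
    speed magnitude and a vector-based feature, from dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)     # scalar speed map
    # Vector feature: deviation from the dominant motion, estimated here
    # as the median flow (an illustrative stand-in for camera motion).
    dominant = np.median(flow.reshape(-1, 2), axis=0)
    vector_contrast = np.linalg.norm(flow - dominant, axis=2)
    return magnitude, vector_contrast
```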
Recent time-of-flight (TOF) cameras allow for real-time acquisition of range maps with good performance.
However, the accuracy of the measured range map may be limited by secondary light reflections. Specifically,
the range measurement is affected by scattering, which consists of parasitic signals caused by multiple reflections inside the camera device. Scattering, which is particularly strong in scenes with large aspect ratios, must be detected and its errors compensated. This paper considers reducing scattering errors by means of image processing methods applied to the output image of the time-of-flight camera. It shows that scattering reduction can be expressed as a deconvolution problem on a complex, two-dimensional signal. The paper investigates several solutions. First, a comparison of image-domain and Fourier-domain processing for scattering compensation is provided. One key element in the comparison is the computational load and the requirement to perform scattering compensation in real time. Then, the paper discusses strategies for improved scattering reduction. More specifically, it treats the problem of optimizing the description of the inverse filter for best scattering compensation results. Finally, the validity of the proposed scattering reduction method is verified on various examples of indoor scenes.
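Expressed as deconvolution of the complex TOF signal, a Fourier-domain compensation can be sketched as below. The scattering point-spread function is assumed known, and a plain, lightly regularized inverse filter is used; the paper's optimized inverse filter description is not reproduced here.

```python
import numpy as np

def compensate_scattering(amplitude, phase, psf):
    """Fourier-domain scattering compensation as deconvolution of the
    complex TOF signal amplitude * exp(j*phase).

    psf: assumed scattering point-spread function, image-sized.
    """
    z = amplitude * np.exp(1j * phase)           # complex TOF signal
    Z = np.fft.fft2(z)
    H = np.fft.fft2(np.fft.ifftshift(psf))       # scattering transfer function
    z_corr = np.fft.ifft2(Z / (H + 1e-6))        # lightly regularized inverse filter
    return np.abs(z_corr), np.angle(z_corr)      # corrected amplitude and phase
```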
Visual attention models mimic the ability of a visual system to detect potentially relevant parts of a scene. This
process of attentional selection is a prerequisite for higher level tasks such as object recognition. Given the high
relevance of temporal aspects in human visual attention, dynamic information as well as static information must
be considered in computer models of visual attention. While some models have been proposed for extending the classical static model to motion, a comparison of the performances of models integrating motion in different
manners is still not available. In this article, we present a comparative study of various visual attention models
combining both static and dynamic features. The considered models are compared by measuring their respective
performance with respect to the eye movement patterns of human subjects. Simple synthetic video sequences,
containing static and moving objects, are used to assess the model suitability. Qualitative and quantitative
results provide a ranking of the different models.
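As one example of such a quantitative comparison, a simple score is the Pearson correlation between a model saliency map and a human saliency map built from fixation recordings. This particular metric is an illustrative choice, not necessarily the one used in the paper.

```python
import numpy as np

def saliency_correlation(model_map, human_map):
    """Pearson correlation between a model saliency map and a human
    saliency map (e.g., a fixation density map from eye tracking).
    Assumes both maps have the same size and are non-constant."""
    m = (model_map - model_map.mean()) / model_map.std()
    h = (human_map - human_map.mean()) / human_map.std()
    return float((m * h).mean())
```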
KEYWORDS: 3D vision, 3D image processing, Sensors, Three dimensional sensing, 3D acquisition, Range imaging, Interferometry, Video, Cameras, Imaging systems
The paper provides considerations relative to the application of 3D vision methods and presents some lessons learnt in this respect by describing four 3D vision tasks and discussing the selection of vision sensing devices meant to solve each task. After a short reminder of 3D vision methods of interest for optical range imaging in microvision and macrovision applications, the paper enumerates and comments on some aspects which contribute to finding a good solution. Then, it presents and discusses the following four tasks: 3D sensing for people surveillance, measurement of stamping burrs, sorting burred stamping parts and, finally, a hole-filling algorithm.
Machine vision plays an important role in automated assembly. However, present vision systems are not adequate for robot control in an assembly environment where individual components have sizes in the range of 1 to 100 micrometers: current systems do not provide sufficient resolution over the whole workspace when they are fixed, and they are too bulky to be brought close enough to the components. A small-size 3D vision system is expected to provide two decisive advantages: high accuracy and high flexibility. The presented work aims to develop a 3D vision sensor that is easily embedded in a micro-assembly robot. The paper starts with a screening of 3D sensing methods, performed in order to identify the best candidates for miniaturization, which results in the selection of the multifocus principle (which elegantly avoids the depth-of-field problem encountered, for example, in stereo vision). Here, depth is measured by determining sharpness maxima in a stack of images acquired at different elevations. The paper then presents a preliminary system configuration which delivers images of a 1300×1000 micrometer field of view with lateral resolution better than 5 micrometers and vertical resolution better than 20 micrometers. Finally, future steps in the development of a real-time embedded multifocus sensor are presented, with a discussion of the most critical tradeoffs.
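The multifocus principle named above reduces to a per-pixel search for the sharpest slice in the image stack. A compact sketch follows, using Laplacian energy as the sharpness measure; the actual sharpness operator of the sensor is not specified in the abstract, so this choice is an assumption.

```python
import numpy as np
from scipy import ndimage

def depth_from_focus(stack, z_positions):
    """Estimate a depth map with the multifocus principle: for each
    pixel, depth is the elevation at which local sharpness peaks.

    stack: (N, H, W) grayscale images acquired at elevations z_positions.
    """
    z = np.asarray(z_positions)
    sharpness = np.empty(stack.shape, dtype=float)
    for i, img in enumerate(stack):
        lap = ndimage.laplace(img.astype(float))               # high-pass response
        sharpness[i] = ndimage.uniform_filter(lap**2, size=9)  # local energy
    best = np.argmax(sharpness, axis=0)          # index of the sharpest slice
    return z[best], sharpness.max(axis=0)        # depth map + confidence map
```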
A large number of 3D cameras suffer from so-called holes in the data, i.e. the measurement lattice is affected by invalid measurements and the range image has undefined values. Conventional image filters used for removing the holes do not perform well in the presence of holes of widely varying sizes. The novel hole-filling method presented in this paper operates on reliability-attributed range images featuring unwanted holes of widely varying sizes. The method follows a multiresolution scheme in which the image resolution is decreased while the range reliability is successively increased, until sufficient confidence is reached. It builds on three main components. First, the described process performs a weighted local neighbourhood filter where the contribution of each pixel corresponds to its reliability. Second, the filtering combines filters with different kernel sizes and thereby implements the multiresolution scheme. Third, the processing requires a complete traversal from the highest resolution down to the resolution of satisfactory confidence and back again to the highest resolution. The algorithm for the described method was implemented in an efficient way and was widely applied to the hole filling of range images from a depth-from-focus process, where reliability is obtained non-linearly from the local sharpness measurement. The method is valid in a very general way for all range imagers providing reliability information. It therefore seems well suited to depth cameras such as time-of-flight, stereo and other similar rangers.
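A simplified sketch of such a reliability-driven multiresolution scheme is given below: descending the pyramid, reliability-weighted averaging accumulates confidence; ascending, pixels whose accumulated reliability is still below a threshold are replaced by the coarser estimate. Power-of-two image dimensions and zero reliability at holes are simplifying assumptions, not conditions from the paper.

```python
import numpy as np

def fill_holes(depth, reliability, levels=4, confidence=0.5):
    """Reliability-driven multiresolution hole filling (simplified sketch).

    depth: range image; reliability: per-pixel weight in [0, 1], 0 at holes.
    Assumes image dimensions divisible by 2**levels.
    """
    d = np.where(reliability > 0, depth, 0.0)    # ignore depth at holes
    pyramid = [(d * reliability, reliability.astype(float))]
    for _ in range(levels):                      # descend: accumulate confidence
        wd, w = pyramid[-1]
        wd = wd[::2, ::2] + wd[1::2, ::2] + wd[::2, 1::2] + wd[1::2, 1::2]
        w = w[::2, ::2] + w[1::2, ::2] + w[::2, 1::2] + w[1::2, 1::2]
        pyramid.append((wd, w))
    wd, w = pyramid[-1]
    estimate = wd / np.maximum(w, 1e-9)          # coarsest, most confident level
    for wd, w in reversed(pyramid[:-1]):         # ascend: fill low-confidence pixels
        estimate = np.kron(estimate, np.ones((2, 2)))
        fine = wd / np.maximum(w, 1e-9)
        estimate = np.where(w >= confidence, fine, estimate)
    return estimate
```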
The depth from focus measurement principle relies on the detection of the optimal focusing distance for measuring the depth map of an object and finding its 3D shape. The principle is most effective at microscopic ranges, where it is usually implemented around a z-controlled microscope and sometimes named multifocus 3D microscopy. As such, the method competes with many other 3D measurement methods, showing both advantages and disadvantages. Multifocus 3D microscopy is presented and compared to chromatic aberration sensing, confocal microscopy and white-light interferometry. Then, this paper discusses two applications of multifocus 3D microscopy for measuring wood and metallic parts, respectively, in the sub-millimeter range. The first application aims at measuring the topography of wood samples for surface quality control. The wood sample surface topography is evaluated with data obtained from both confocal microscopy and multifocus 3D microscopy. The profiles and a standard roughness factor are compared. The second application concerns the measurement of burrs on metallic parts. Possibilities and limits of multifocus 3D microscopy are presented and discussed.
KEYWORDS: 3D modeling, Particles, Atomic force microscopy, 3D metrology, Quartz, Data modeling, Calibration, Data acquisition, Data processing, Image registration
There exist many techniques for the measurement of micro and nano surfaces, and also several conventional ways to represent the resulting data, such as pseudo-color or isometric 3D. This paper addresses the problem of building complete 3D micro-object models from measurements in the submicrometric range. More specifically, it considers measurements provided by an atomic force microscope and investigates their possible use for the modeling of small 3D objects. The general approach for building complete virtual models requires measuring and merging several data sets representing the considered object observed under different orientations, or views.
A specific configuration for liquid flow metrology consists of a flow of falling drops coupled with a preferred measuring method that derives the flow directly from the drop count. Given the inaccuracy of this counting method, alternative methods have been proposed that measure the volume of each falling drop. The principle consists of deriving the volume from geometric measurements obtained by vision, and the basic problem can be described as the estimation of the volume of a drop from its projection. This paper reviews previously used methods and provides an analysis of qualitative and quantitative aspects of drop volume estimation for flow metrology. Three drop shape models and the related volume estimation methods are defined in a first part. A second part is devoted to an experimental analysis of drop shape variations. In a final experimental part, the presented methods are compared and the good performance of a volume measurement method is experimentally demonstrated, with an rms error of 1% in normal measurement conditions. These figures speak for the value of measurement by vision and represent a good basis for predicting the suitability of the method in various applications.
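For illustration, if the drop is modeled as a solid of revolution about its vertical axis (one plausible member of the family of shape models the paper defines), its volume follows from the silhouette as V = π Σ_z r(z)² Δz:

```python
import numpy as np

def drop_volume(silhouette, pixel_size):
    """Volume of a drop from its binary silhouette, assuming rotational
    symmetry about the vertical axis: V = pi * sum_z r(z)^2 * dz.

    silhouette: (H, W) boolean image; pixel_size: meters per pixel.
    """
    widths = silhouette.sum(axis=1)              # drop width in each image row
    radii = widths * pixel_size / 2.0
    return float(np.pi * np.sum(radii**2) * pixel_size)
```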
Nowadays, vision-based inspection systems are present in many stages of the industrial manufacturing process. Their versatility, which permits accommodating a broad range of inspection requirements, is however limited by the time-consuming system setup performed at each production change. This work aims at providing a configuration assistant that helps speed up this system setup, considering the peculiarities of industrial vision systems. The pursued principle, which is to maximize the discriminating power of the features involved in the inspection decision, leads to an optimization problem based on a high-dimensional objective function. Several objective functions based on various metrics are proposed, their optimization being performed with the help of search heuristics such as genetic algorithms and simulated annealing. The experimental results obtained with an industrial inspection system are presented, considering the particular case of the visual inspection of markings found on top of molded integrated circuits. These results show the effectiveness of the presented objective functions and search methods, and validate the configuration assistant as well.
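A generic simulated annealing maximizer of the kind usable for such a setup assistant is sketched below; the objective and neighborhood functions are application-specific placeholders, not taken from the paper.

```python
import math
import random

def simulated_annealing(objective, initial, neighbor,
                        t0=1.0, cooling=0.995, steps=10000):
    """Generic simulated annealing maximizer. `objective` scores a
    configuration (e.g., a discriminability metric); `neighbor` returns
    a random local modification of a configuration.
    """
    current = best = initial
    f_cur = f_best = objective(initial)
    t = t0
    for _ in range(steps):
        cand = neighbor(current)                 # random local move
        f_cand = objective(cand)
        # Accept improvements always, degradations with Boltzmann probability.
        if f_cand > f_cur or random.random() < math.exp((f_cand - f_cur) / t):
            current, f_cur = cand, f_cand
            if f_cur > f_best:
                best, f_best = current, f_cur
        t *= cooling                             # geometric cooling schedule
    return best, f_best
```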
This paper deals with the problem of capturing the color information of physical 3D objects using a class of digitizers that provide color and range data, such as range finders based on structured lighting. The problem typically appears in a modeling procedure that aims at building a realistic virtual 3D model. The color data delivered by such scanners basically express the reflected color intensity of the object and not its intrinsic color. A consequence is the existence, on the reconstructed model, of strong color discontinuities, which result from acquisitions made under different illumination conditions. The paper considers three approaches to remove these discontinuities and obtain the desired intrinsic color data. The first converts the reflected color intensity into the intrinsic color by computation, using a reflectance model and known acquisition parameters. The use of simple reflectance models is considered: Lambert and Phong, for perfectly diffuse and for mixed diffuse and specular reflection, respectively. The second approach is a hardware solution: it aims at using a nearly constant, diffuse and omnidirectional illumination over the visible parts of the object. A third method combines the first, computational approach with the use of several known illumination sources. An experimental comparison of these three approaches is finally presented.
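For the first, computational approach, the Lambertian case reduces to dividing the measured color by the irradiance term. A minimal sketch, assuming the surface normal, light direction and light color are known acquisition parameters:

```python
import numpy as np

def lambert_intrinsic_color(measured_rgb, normal, light_dir, light_rgb):
    """Recover intrinsic color (albedo) under the Lambertian model
    I = albedo * L * max(n . l, 0), with known acquisition parameters.
    normal and light_dir are assumed to be unit vectors.
    """
    cos_theta = max(float(np.dot(normal, light_dir)), 1e-3)  # avoid division by zero
    return np.asarray(measured_rgb) / (np.asarray(light_rgb) * cos_theta)
```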
This paper proposes range imaging as a means to improve object registration in an augmented reality environment. The addressed problem deals with virtual world construction from complex scenes using object models. During reconstruction, the scene view is augmented by superimposing virtual object representations from a model database. The main difficulty lies in the precise registration of a virtual object and its counterpart in the real scene. The presented approach solves this problem by matching geometric shapes obtained from range imaging. This geometric matching snaps the roughly placed object model onto its real-world counterpart and permits the user to update the virtual world with the recognized model. We present a virtual world construction system, currently under development, that allows the registration of objects present in a scene by the combined use of user interaction and automatic geometric matching based on range images. Potential applications are teleoperation of complex assembly tasks and world construction for mobile robotics.
KEYWORDS: 3D vision, Virtual reality, 3D modeling, Databases, Robotics, Image segmentation, Range imaging, Data modeling, Visual process modeling, 3D image processing
Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrate hybrid 3D vision and VRR, and reports on experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR, and the contribution of a knowledge database, robust control and the combination of intensity and range imaging to build such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated in a VRR system as a hypothesis-generation, knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images, which checks object pose and texture. Finally, we show how this system has been implemented and how it operates in a practical VRR environment used for an assembly task.
This paper investigates the recognition performance of a geometric matching approach to the recognition of free-form objects obtained from range images. The heart of this approach is a closest point matching algorithm which, starting from an initial configuration of two rigid objects, iteratively finds their best correspondence. While the effective performance of this algorithm is known to depend largely on the chosen set of initial configurations, this paper investigates the quantitative nature of this dependence. In essence, we experimentally measure the range of successful configurations for a set of test objects and derive quantitative rules for the recognition strategy. These results show the conditions under which the closest point matching algorithm can be successfully applied to free-form 3D object recognition and help to design a reliable and cost-effective recognition system.
This paper deals with the problem of segmenting a 3D scene obtained by range imaging. It assumes scenes of arbitrary complexity in which the objects to be recognized are newly added or removed, and investigates how the methods of change detection and image difference used in classical image processing can be applied to range imaging. In a first step, we consider the case of ideal range images and conduct an analysis of segmentation by range image difference that shows the direct applicability of this principle. In a second step, we consider the wide class of range sensors that suffer from shadowing effects, which lead to missing data in the range image. An interpretation of this ambiguity in the difference calculation, and means to remove it, are given. Additional rules for the practical segmentation of 3D scenes by range image change detection are described. The presented methods make it possible to segment a scene by isolating newly added or removed objects. They are tested using range images from two distinct range imagers of the light-striping type. Results indicate the success of this approach and the practical possibility of using it in the frame of an assembly task.
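A minimal sketch of segmentation by range image difference is given below, with invalid (shadowed) pixels excluded as ambiguous; treating all invalid pixels as ambiguous is a simplification of the disambiguation rules discussed in the paper.

```python
import numpy as np

def range_change_masks(before, after, threshold):
    """Detect added/removed objects by range image difference.

    before, after: range images with NaN at invalid (e.g. shadowed) pixels.
    Returns boolean masks of added (closer) and removed (farther) regions.
    """
    valid = ~np.isnan(before) & ~np.isnan(after)
    diff = np.zeros(before.shape, dtype=bool)
    diff[valid] = np.abs(before[valid] - after[valid]) > threshold
    closer = np.zeros(before.shape, dtype=bool)
    closer[valid] = after[valid] < before[valid]  # new surface nearer the sensor
    return diff & closer, diff & ~closer          # added, removed
```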
This paper investigates a new approach to the recognition of 3D objects of arbitrary shape. The proposed solution follows the principle of model-based recognition using geometric 3D models and geometric matching. It is an alternative to the classical segmentation and primitive extraction approach and offers a way to escape some of that approach's difficulties in dealing with free-form shapes. The heart of this new approach is a recently published iterative closest point matching algorithm, which is applied to a number of initial configurations. We examine methods to obtain successful matching. Our investigations refer to a recognition system used for the pose estimation of 3D industrial objects in automatic assembly, with objects obtained from range data. The recognition algorithm works directly on the 3D coordinates of the object's surface as measured by a range finder. This makes our system independent of assumptions on the object's geometry. In essence, we propose a set of rules to choose promising initial configurations for the iterative closest point matching; an appropriate quality measure which permits reliable decisions; and a method to represent the object surface in a way that improves computing time and matching quality. Examples demonstrate the feasibility of this approach to free-form recognition.
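The core iteration, closest point matching followed by a closed-form rigid alignment, can be sketched compactly. The KD-tree correspondence search and the SVD-based (Kabsch) transform below are standard ingredients, assumed rather than taken from the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=50):
    """Minimal iterative closest point sketch: from a given initial
    configuration, alternately match closest points and solve the
    optimal rigid transform in closed form (SVD).
    source, target: (N, 3) and (M, 3) point sets.
    """
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        dist, idx = tree.query(src)            # closest-point correspondences
        matched = target[idx]
        # Closed-form rigid alignment of the matched pairs (Kabsch/SVD).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
    # Mean closest-point distance of the last matching step, usable as a
    # simple match-quality measure.
    return src, float(dist.mean())
```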
KEYWORDS: 3D modeling, Visual process modeling, Image segmentation, 3D vision, Model-based design, Object recognition, 3D image processing, Cameras, Image processing, Systems modeling
We present a model-based 3D object recognition architecture that combines pose estimation derived from range images with hypothesis verification derived from intensity images. The architecture takes advantage of the geometrical nature of range images to generate a number of hypothetical object poses. Pose and object models are then used to reconstruct a synthetic view of the scene, which is compared to the real intensity image for verification. A system has been implemented according to this architecture, and successful experiments have been performed with boxes of different shapes and textures. Recognition with our approach is precise and robust. In particular, verification can detect false poses resulting from wrong groupings. In addition, the system offers the interesting ability to recognize the true pose of shape-symmetrical objects, and to recognize objects that are ambiguous from their shape alone.
A classical approach formulates surface reconstruction as a variational problem using two-dimensional surfaces defined by generalized spline functions. We present such an approach for the case of range image segmentation. The distinction of our approach lies in the way discontinuities are detected. The spline is constrained to stay within a certain maximal distance of the discrete measured data, but is free as long as that maximum distance is not reached. Discontinuities emerge at points where the maximum distance constrains the spline. This method leads to a relaxation algorithm that solves the segmentation iteratively, by locally applying a relation that, in the case of the membrane spline, is close to the diffusion equation. Being iterative and local, the algorithm is well suited to parallelism. We applied the method to range data from laser scanners using two different surface models: the membrane spline (more adequate for polyhedral objects) and the thin-plate spline (more adequate for curved objects). The results illustrate the practical performance of this method, which is simple, parallel, and controlled by few parameters.
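For the membrane spline, the described relaxation can be sketched as a diffusion-like local averaging clamped to the data band; periodic boundaries are used here purely for brevity.

```python
import numpy as np

def membrane_relaxation(data, max_dist, iterations=500):
    """Relaxation for the membrane-spline case: diffusion-like local
    averaging, clamped so the spline stays within max_dist of the data.
    Points where the clamp is active mark discontinuities.
    """
    z = data.astype(float).copy()
    for _ in range(iterations):
        # 4-neighbourhood average (discrete membrane / diffusion step);
        # np.roll gives periodic boundaries, used here only for brevity.
        avg = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                      np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z = np.clip(avg, data - max_dist, data + max_dist)
    discontinuities = np.isclose(np.abs(z - data), max_dist)
    return z, discontinuities
```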
KEYWORDS: Object recognition, Data acquisition, Detection and tracking algorithms, Model-based design, Visual process modeling, 3D acquisition, Data modeling, 3D modeling, Machine vision, Plutonium
This paper describes a powerful inexact matching algorithm which has been applied with success to high-level 3D object representations in a 3D object recognition system. The algorithm combines, in a promising way, several approaches proposed in the past couple of years: an extension of the backtrack strategies for inexact matching of attributed relational subgraphs, error-correcting isomorphism, determination of local attribute similarity, and global transformation fitting, features which are efficiently used for search-tree pruning. The algorithm was tested successfully in a series of experiments involving scenes with single and multiple objects.
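A stripped-down backtracking matcher with cost-based search-tree pruning is sketched below; it keeps strict relational consistency and penalizes only attribute dissimilarity, so the error-correcting aspects of the full algorithm are omitted, and all names are illustrative.

```python
def match_graphs(nodes_a, nodes_b, edges_a, edges_b, cost_fn, max_cost):
    """Backtracking sketch of attributed-graph matching: assign model
    nodes (nodes_a) to scene nodes (nodes_b), pruning branches whose
    accumulated attribute dissimilarity already exceeds the best found.
    edges_a, edges_b: sets of node-pair tuples; cost_fn(a, b) scores a
    node pairing.
    """
    best = {"cost": max_cost, "map": None}

    def extend(mapping, cost):
        if cost >= best["cost"]:
            return                               # search-tree pruning
        if len(mapping) == len(nodes_a):
            best["cost"], best["map"] = cost, dict(mapping)
            return
        a = nodes_a[len(mapping)]
        for b in nodes_b:
            if b in mapping.values():
                continue
            # Relational consistency: already-mapped neighbours of a must
            # be adjacent to b in the scene graph.
            consistent = True
            for (a1, a2) in edges_a:
                if a1 == a and a2 in mapping:
                    pair = (b, mapping[a2])
                elif a2 == a and a1 in mapping:
                    pair = (mapping[a1], b)
                else:
                    continue
                if pair not in edges_b and pair[::-1] not in edges_b:
                    consistent = False
                    break
            if consistent:
                mapping[a] = b
                extend(mapping, cost + cost_fn(a, b))
                del mapping[a]

    extend({}, 0.0)
    return best["map"], best["cost"]
```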
This paper considers the segmentation of range image measurements into surface patches which are either planar or curved and which are described formally by a function. After a formal description of the segmentation, we present and compare three methods suited for planar and curved patch segmentation, and show the results of experiments conducted to test their practical behaviour. The first two methods use the classical approach of region growing, whereas the third is based on a relaxation process. This last, original method exhibits simplicity and low computational complexity. Thanks to its parallel nature, it can be considered a good candidate for range image segmentation in real-time applications.
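A minimal region-growing sketch for range images follows, using a simple depth-continuity predicate in place of the planar or curved surface fits used by the paper's first two methods:

```python
import numpy as np
from collections import deque

def region_growing(depth, max_step):
    """Flood-fill neighbouring pixels into the same patch while their
    range values differ by less than max_step (a depth-continuity
    predicate standing in for a full surface fit)."""
    labels = np.zeros(depth.shape, dtype=int)
    h, w = depth.shape
    next_label = 0
    for seed in zip(*np.nonzero(labels == 0)):   # every pixel is a candidate seed
        if labels[seed]:
            continue                             # already part of a patch
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                        and abs(depth[ny, nx] - depth[y, x]) < max_step):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
    return labels
```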