A key problem in deploying automatic visual surface inspection in industry is training and tuning the system to perform as desired. This can take anywhere from minutes to a year after installation and can be a major cost. Based on our experience, training issues need to be taken into account from the very beginning of system design. In this presentation we consider approaches to visual surface inspection and system training, and we advocate an unsupervised-learning-based visual training method.
The Joint Photographic Experts Group (JPEG) standard is one of the most widely used tools for image compression. A key factor influencing JPEG performance is the quantization table, which simultaneously determines both the bit rate and the decoded quality; its design therefore has a decisive influence on overall compression performance. The goal of this paper is to find better sets of quantization parameters, that is, tables that achieve a lower bit rate while preserving higher decoded quality. We employ a Genetic Algorithm (GA) to search for such quantization tables for medical images. Simulations were carried out on different kinds of medical images, such as sonograms, angiograms, and X-rays. The experimental results demonstrate that the GA-based search yields better performance than the standard JPEG table.
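The GA search over quantization tables can be sketched in miniature. Everything below is an illustrative assumption, not the paper's actual configuration: the coefficient statistics stand in for a real image's DCT coefficients, the fitness is a crude distortion-plus-bit-cost proxy, and the table is shortened to 8 entries for brevity.

```python
import random

random.seed(0)

# Toy DCT coefficient statistics standing in for a medical training image
coeffs = [[random.gauss(0, 50.0 / (1 + i)) for i in range(8)] for _ in range(64)]

def fitness(qtable):
    """Lower is better: quantization error plus a crude bit-cost proxy
    (the magnitude of the quantized indices)."""
    err = bits = 0.0
    for block in coeffs:
        for c, q in zip(block, qtable):
            idx = round(c / q)
            err += (c - idx * q) ** 2
            bits += abs(idx)
    return err + 10.0 * bits

def mutate(q):
    # Nudge each quantizer step, keeping it at least 1
    return [max(1, v + random.choice((-2, -1, 0, 1, 2))) for v in q]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(1, 64) for _ in range(8)] for _ in range(20)]
init_best = min(fitness(p) for p in pop)
for gen in range(40):
    pop.sort(key=fitness)
    elite = pop[:6]                      # elitism: keep the best tables
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(14)]
best = min(pop, key=fitness)
```

With elitism, the best fitness in the population is monotone non-increasing, so the evolved table is never worse than the best random initial one.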
This paper addresses the problem of robust shape recognition in the presence of shape deformation as well as changes in part position, orientation, and scale. Point Distribution Models (PDMs) are deformable templates with interesting features for industrial inspection tasks: they are built by statistical analysis of a training set and define a prototype shape as well as a set of possible, acceptable deformations. To further improve their classification capabilities, these deformable templates are extended by adding a constraint on the amount of deformation. A constrained optimization procedure is proposed and successfully tested on an industrial inspection task.
The object of this study was to analyze how the morphology of a plant evolves under water-stress conditions. Four types of parameters - Fourier descriptors, invariant moments, fractal dimension, and skeleton parameters - are measured on two sets of plants: stressed plants and control plants. An analysis of variance allows us to determine which parameter best discriminates a control plant from a stressed plant.
Image Processing, Segmentation, and Feature Analysis II
This paper extends our previous work on the segmentation of electronic structures on patterned wafers to improve the defect detection process on optical inspection tools. Die-to-die wafer inspection is based upon comparing the same area on two neighboring dies; dissimilarities between the images indicate defects in that area of one of the dies. The noise level can vary from one structure to another within the same image, so segmentation is needed to create a mask and apply an optimal threshold in each region. Contrast variations in the texture can affect the response of the parameters used for segmentation. This paper shows a method to anticipate these variations with a limited number of training samples and to modify the classifier accordingly to improve the segmentation results.
We present the color segmentation methods that were used to detect appearance defects on the three-dimensional surface of fresh hams. Color histograms turned out to be an efficient way to characterize healthy skin, but special care must be taken in choosing the color components because of the hams' three-dimensional shape.
Design of experiments has been used for several years in many domains, yet it is often ignored in image processing. In this article, we show that it has its place in this area, where it is common to have parameters that must be adjusted for the images to be processed and that should remain valid for a whole family of images of the same type. These parameters are often numerous and frequently interact with each other. The use of an active contour, for example, requires several parameters that are rather delicate to adjust. The experimental research methodology allows the factors to be considered to be listed and then, from these, identifies the most influential ones so that they can be optimized.
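Ranking factor influence with a full-factorial design can be illustrated with a toy sketch. The factor names, levels, and the stand-in response function below are invented for the illustration; a real study would run the active contour on the image family and score each segmentation against a reference.

```python
from itertools import product

# Hypothetical active-contour factors with two levels each (invented)
factors = {"alpha": (0.1, 0.5), "beta": (0.1, 0.5), "iterations": (50, 200)}

def quality(alpha, beta, iterations):
    """Stand-in response; a real study would score actual segmentations."""
    return (1.0 - 2.0 * abs(alpha - 0.45) - 0.5 * abs(beta - 0.45)
            + 0.0005 * iterations)

names = list(factors)
# Full-factorial design: one run per combination of factor levels
runs = [(dict(zip(names, levels)), quality(*levels))
        for levels in product(*factors.values())]

# Main effect of a factor: mean response at its high level minus low level
effects = {}
for name in names:
    low, high = factors[name]
    hi = [r for cfg, r in runs if cfg[name] == high]
    lo = [r for cfg, r in runs if cfg[name] == low]
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)

# Factors ranked by absolute main effect: the most influential first
ranked = sorted(effects, key=lambda n: abs(effects[n]), reverse=True)
```

Sorting by absolute main effect singles out the factors that matter most, which is exactly the screening step the methodology provides before optimization.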
Image Processing, Segmentation, and Feature Analysis III
In this paper, a set of scalable discrepancy measures is applied in the context of computer vision. These measures allow edge detectors to be tuned and segmentations to be evaluated when a reference is known. Thanks to a scale parameter defining an adjustable area, the proposed measures can weight the importance of over-detection as well as under-detection. They give both the intensity of the discrepancy and its relative position.
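The idea of a scale-weighted over/under-detection measure can be illustrated with a small sketch. The Chebyshev matching window and the two weights here are our own simplification, not the authors' exact definition.

```python
def discrepancy(reference, detected, scale=1, w_over=1.0, w_under=1.0):
    """Scale-aware discrepancy between two sets of (row, col) edge pixels.
    A pixel is matched if the other set has a pixel within `scale`
    (Chebyshev distance); unmatched pixels count as over- or
    under-detections, weighted separately."""
    def near(p, pts):
        return any(max(abs(p[0] - q[0]), abs(p[1] - q[1])) <= scale
                   for q in pts)
    over = sum(1 for p in detected if not near(p, reference))
    under = sum(1 for p in reference if not near(p, detected))
    return w_over * over + w_under * under

ref = {(2, c) for c in range(8)}             # horizontal reference edge
det = {(3, c) for c in range(8)} | {(6, 6)}  # shifted edge plus one spurious pixel
```

At `scale=1` the one-pixel shift is tolerated and only the spurious pixel is penalized; at `scale=0` every shifted pixel counts as both an over- and an under-detection, which is how the scale parameter trades off tolerance against strictness.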
To assess the safety of concrete constructions, visual evaluation of cracks is one of the important inspection items, and cracks come in many kinds. In this paper, a method to extract cracks from noisy images is developed. A system to evaluate concrete cracks is then introduced, and finally a method to classify the cracks is considered.
Fractal encoding is the first step in fractal-based image compression techniques, but it can also be useful outside the image compression field. This paper discusses a fractal encoding technique and some of its variations adapted to segmenting anomalous regions within an image. The primary goal of this paper is to provide background information on fractal encoding and to show application examples, equipping the researcher with enough knowledge to apply this technique to other image segmentation applications. After a brief overview of the algorithm, important parameters for successful implementation of fractal encoding are discussed, including the impact of image characteristics on various parameters and algorithm implementation choices in the context of two successfully implemented applications.
This paper reports on the Pacific Northwest National Laboratory (PNNL) DOE Initiative in Image Science and Technology (ISAT) research, which is developing algorithms and software tool sets for remote sensing and biological applications. In particular, the PNNL ISAT work applies these research results to the automated analysis of real-time cellular biology imagery, assisting the biologist in determining the correct data collection region for the current state of a conglomerate of living cells in three-dimensional motion. The real-time computation on the typical 120 MB/s multi-spectral data sets is executed in Field Programmable Gate Array (FPGA) technology, which achieves very high processing rates through large-scale parallelism. The outcome of this artificial-vision work will allow the biologist to work with the imagery as a credible set of dye-tagged chemistry measurements, in formats supporting individual cell tracking through regional feature extraction and animation visualization through individual object isolation and characterization of the microscopy imagery.
This paper presents a prototype system for monitoring a hot glowing wire during the rolling process with respect to quality-relevant aspects. To this end, a measurement system based on machine vision and a communication framework integrating distributed measurement nodes is introduced. Machine vision is used to evaluate the wire quality parameters, and an image processing algorithm is formulated, based on dual Grassmannian coordinates and fitting parallel lines by singular value decomposition. Furthermore, a communication framework is presented that implements anonymous tuple-space communication, a private TCP/IP-based network, and a consistent Java implementation of all components. Additionally, industrial requirements are outlined, such as real-time communication to IEC-61131-conformant digital I/Os (Modbus TCP/IP protocol), the implementation of a watchdog pattern, and the integration of multiple operating systems (Linux, QNX, and Windows). The deployment of this framework to the real-world problem of the wire rolling mill is presented.
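The parallel-line fit can be illustrated with a simplified total-least-squares version: pooling the centered scatter of both wire edges yields a shared direction. This sketch uses the closed-form eigenvector of a 2x2 matrix rather than a full SVD, and omits the paper's dual Grassmannian formulation entirely; the edge points are synthetic.

```python
import math

def parallel_fit(edge_groups):
    """Fit one common direction to several point sets (wire edges assumed
    parallel) by pooling their centered scatter; the direction is the
    dominant eigenvector of the pooled 2x2 matrix."""
    sxx = sxy = syy = 0.0
    means = []
    for pts in edge_groups:
        mx = sum(p[0] for p in pts) / len(pts)
        my = sum(p[1] for p in pts) / len(pts)
        means.append((mx, my))
        for x, y in pts:
            sxx += (x - mx) ** 2
            sxy += (x - mx) * (y - my)
            syy += (y - my) ** 2
    # Dominant eigenvector angle of [[sxx, sxy], [sxy, syy]]
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    d = (math.cos(theta), math.sin(theta))
    # Offset of each edge centroid along the shared normal direction
    offsets = [my * d[0] - mx * d[1] for mx, my in means]
    return d, offsets

# Two synthetic wire edges, 2 units apart, slope 0.02
top = [(float(x), 10.0 + 0.02 * x) for x in range(50)]
bot = [(float(x), 12.0 + 0.02 * x) for x in range(50)]
direction, offs = parallel_fit([top, bot])
width = abs(offs[1] - offs[0])    # apparent wire width
```

The difference of the two normal offsets directly gives the wire width, which is the quality parameter such a system would monitor.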
Automatic tracking is essential for 24-hour intruder detection and, more generally, for surveillance systems. This paper presents an adaptive background generation technique and the corresponding moving-region detection for a pan-tilt-zoom (PTZ) camera, using a geometric-transform-based mosaicing method. A complete system including adaptive background generation, moving-region extraction, and tracking is evaluated with realistic experiments. More specifically, the experimental results include generated background images, extracted moving regions, and input video with bounding boxes around the moving objects. These experiments show that the proposed system can monitor moving targets in wide open areas by automatically panning and tilting in real time.
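A minimal running-average sketch of the background/moving-region idea follows. The mosaicing and PTZ geometry of the paper are omitted, and the update rate `alpha` and the threshold are illustrative values.

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average update: bg <- (1 - alpha) * bg + alpha * frame."""
    return [[(1.0 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def moving_mask(bg, frame, thresh=20.0):
    """Pixels deviating from the background by more than thresh are moving."""
    return [[abs(f - b) > thresh for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def bounding_box(mask):
    """Tight box (r0, c0, r1, c1) around the moving pixels, or None."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, m in enumerate(row) if m]
    return (min(rows), min(cols), max(rows), max(cols)) if rows else None

bg = [[50.0] * 8 for _ in range(8)]          # flat synthetic background
frame = [row[:] for row in bg]
for r in range(2, 5):
    for c in range(3, 6):
        frame[r][c] = 200.0                  # a bright moving object enters
box = bounding_box(moving_mask(bg, frame))
bg = update_background(bg, frame)            # background slowly absorbs change
```

The small `alpha` makes the background adapt to slow illumination changes while a fast-moving object still stands out in the difference image.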
We have designed and fabricated a programmable retina capable of recognizing patterns stored in memory in real time. Each pixel of the retina is composed of a photodiode and an electronic device used during the programming phase to digitize the image of the pattern to be recognized into a binary image stored in latches. The array of pixels is thus partitioned into two complementary, disjoint sub-sets, with all the photodiodes of the same sub-set connected together in order to obtain the sum of their currents. During the analysis phase, an optical correlation is performed between the projected image and the binary reference image memorized in the circuit. The result is read out as two voltages representing two currents: a "white" current proportional to the luminous flux falling on the photodiodes of the "white" part of the binary reference image, and a "black" current corresponding to the black part. By comparing these two voltages to expected values, a shift of the pattern or a difference between the observed and programmed patterns can be detected. The retina has been fabricated in a standard 0.6 μm CMOS technology with three metal layers from Austria Micro Systems and consists of a 100×100-pixel image sensor. We present here an application of this sensor in an industrial positioning system.
We present a CMOS image sensor for speed determination of fast-moving luminous objects. The circuit furnishes a 16-gray-level image that contains both spatial and temporal information on the fast-moving object under observation: the spatial information is given by the coordinates of the illuminated pixels, and the temporal information is coded in the gray level of the pixels. By applying simple image processing algorithms to the image, the trajectory, direction of motion, and speed of the moving object can be determined. The circuit is designed and fabricated in a standard 0.6 μm CMOS process from Austria MicroSystems (AMS). Its core is an array of 64×64 pixels based on an original Digital Pixel Sensor (DPS) architecture; each pixel is composed of a photodiode as the light-sensing element, a comparator, a pulse generator, and a 4-bit static memory for storing the gray value of the pixel. The working principle of the circuit, its design, and some quantitative experimental results are presented in the paper.
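Decoding such a gray-level time coding takes only a few lines. The slot period and pixel pitch below are invented numbers for illustration, and the decoding rule (gray level = firing slot index) is our reading of the scheme, not the chip's exact protocol.

```python
def estimate_speed(gray_img, slot_period, pixel_pitch):
    """Each nonzero gray level encodes the time slot in which the pixel
    fired; speed follows from the spatial spread of the firing times."""
    events = sorted((g, r, c) for r, row in enumerate(gray_img)
                    for c, g in enumerate(row) if g > 0)
    (g0, r0, c0), (g1, r1, c1) = events[0], events[-1]
    dist = pixel_pitch * ((r1 - r0) ** 2 + (c1 - c0) ** 2) ** 0.5
    return dist / ((g1 - g0) * slot_period)

# A spot crossing one pixel per time slot along a row (gray level = slot)
img = [[0] * 10 for _ in range(4)]
for t in range(1, 6):
    img[2][t] = t
speed = estimate_speed(img, slot_period=1e-3, pixel_pitch=10e-6)  # m/s
```

Sorting the events by gray level also recovers the trajectory and direction of motion, the other two quantities the abstract mentions.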
High-resolution single photon emission computed tomography (SPECT) and X-ray computed tomography (CT) imaging have proven to be useful techniques for non-invasively monitoring mutations and disease progression in small animals. A need to perform in vivo studies of non-anesthetized animals has led to the development of a small-animal imaging system that integrates SPECT imaging equipment with a pose-tracking system. The pose of the animal is monitored and recorded during the SPECT scan using either laser-generated surfaces or infrared-reflective markers affixed to the animal. The reflective marker method measures motion by stereoscopically imaging an arrangement of illuminated markers. The laser-based method is proposed as a possible alternative to the reflector method with the advantage that it is a non-contact system. A three-step technique is described for calibrating the surface acquisition system so that quantitative surface measurements can be obtained. The acquired surfaces can then be registered to a reference surface using the iterative closest point (ICP) algorithm to determine the relative pose of the live animal and correct for any movement during the scan. High accuracy measurement results have been obtained from both methods.
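The ICP registration step can be sketched in 2D. The real system registers 3D surfaces; this toy uses brute-force nearest neighbours and the closed-form 2D rigid fit, with synthetic points standing in for acquired surfaces.

```python
import math

def icp_2d(src, dst, iters=20):
    """Minimal 2D ICP: nearest-neighbour matching plus a closed-form
    rigid (rotation + translation) least-squares fit each iteration."""
    pts = list(src)
    for _ in range(iters):
        pairs = [(p, min(dst, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2))
                 for p in pts]
        n = len(pairs)
        mx = sum(p[0] for p, _ in pairs) / n
        my = sum(p[1] for p, _ in pairs) / n
        qx = sum(q[0] for _, q in pairs) / n
        qy = sum(q[1] for _, q in pairs) / n
        # Cross-covariance of the centred point pairs
        sxx = sum((p[0]-mx)*(q[0]-qx) for p, q in pairs)
        sxy = sum((p[0]-mx)*(q[1]-qy) for p, q in pairs)
        syx = sum((p[1]-my)*(q[0]-qx) for p, q in pairs)
        syy = sum((p[1]-my)*(q[1]-qy) for p, q in pairs)
        theta = math.atan2(sxy - syx, sxx + syy)
        c, s = math.cos(theta), math.sin(theta)
        tx, ty = qx - (c*mx - s*my), qy - (s*mx + c*my)
        pts = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in pts]
    return pts

ref = [(float(x), float(x % 3)) for x in range(10)]       # reference surface
a = math.radians(5.0)                                     # simulated motion
moved = [(math.cos(a)*x - math.sin(a)*y + 0.3,
          math.sin(a)*x + math.cos(a)*y - 0.2) for x, y in ref]
aligned = icp_2d(moved, ref)
err = max(abs(p[0]-q[0]) + abs(p[1]-q[1]) for p, q in zip(aligned, ref))
```

Because the simulated pose change is small, the first nearest-neighbour matching is already correct and the rigid fit recovers the motion exactly, which mirrors how frame-to-frame pose corrections stay small during a scan.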
A fabric's tendency to wrinkle is vitally important to the textile industry, as it impacts the visual appeal of apparel. Current methods of grading this characteristic, called fabric smoothness, are very subjective and inadequate, so a quantitative method for assessing fabric smoothness is of the utmost importance to the textile community. To that end, we have proposed a laser-based surface profiling system that uses a smart camera to sense the 3-D topography of fabric specimens. The system incorporates methods based on anisotropic diffusion and the facet model for characterizing edge information that ultimately relates to a specimen's degree of wrinkling. In this paper, we detail the initial steps in a large-scale validation of this system. Using histograms of the extracted features, we compare the output of the system among 78 swatches of various colors, types, and textures. The results show consistency among repeated scans of the same swatch as well as among different swatches taken from the same fabric sample. Since swatches taken from the same piece of fabric typically wrinkle similarly, this supports the feasibility of the system: it adequately identifies and measures appropriate features of the wrinkles found on a sample.
Interferometers with low-coherence illumination allow non-contact evaluation of random tissues by locating the visibility maxima of interference fringes. One problem is light scattering by the tissue, which often distorts the interference fringes; another is the need to process the large amounts of data produced by optical coherence tomography (OCT) imaging systems. We propose a stochastic fringe model and a Kalman filtering method for processing noisy low-coherence fringes. The fringe signal value at the next discretization step is predicted using all information available before that step, and the prediction error is used for dynamic correction of the fringe envelope and phase. The advantages of the Kalman filtering method are its noise immunity, high-speed data processing, and optimal estimation of the fringe parameters. Several specially fabricated wood-fiber tissues have been measured with a low-coherence interferometer. The data obtained on the internal structure of the tissue are evaluated using a dynamic stochastic fringe processing algorithm applied to series of fringe signal samples. A statistical approach for characterizing wood-fiber tissues of different kinds is proposed.
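The predict/correct idea can be sketched with a scalar Kalman filter on a synthetic fringe. The paper's model is richer, tracking envelope and phase jointly; here a plain random-walk state model with assumed variances `q` and `r` merely illustrates how the prediction error drives the correction.

```python
import math
import random

random.seed(1)

# Synthetic low-coherence fringe: cosine carrier under a Gaussian envelope
true = [math.exp(-((k - 100) / 40.0) ** 2) * math.cos(0.1 * k)
        for k in range(200)]
noisy = [s + random.gauss(0, 0.2) for s in true]

def kalman_smooth(z, q=0.01, r=0.04):
    """Scalar Kalman filter with a random-walk state model.
    q is the process variance, r the measurement variance."""
    x, p = z[0], 1.0
    out = []
    for zk in z:
        p += q                # predict: state carried over, uncertainty grows
        k = p / (p + r)       # Kalman gain
        x += k * (zk - x)     # correct using the prediction error
        p *= (1.0 - k)
        out.append(x)
    return out

filt = kalman_smooth(noisy)

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
```

The filtered trace tracks the fringe while suppressing most of the additive noise, which is the property the abstract credits for noise immunity.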
The study of rough textured surfaces such as road coverings is generally made on grey-level images. This supposes that the variations in grey level are representative of the local variations of the relief. This assumption, justified for uniformly colored surfaces, reaches its limit when the surfaces present variations in color or aspect: the corresponding image then presents grey-level variations that can be related to the color variations, to the relief variations, or to both. In this case it becomes difficult to work out roughness criteria based on image analysis, so before any study of roughness it is necessary to estimate the luminance map linked to the color variations. To do that, we relate the grey-level value both to the height variations, obtained with a laser sensor, and to the image grey level. The suggested method allows us to compute the distribution of the luminance map, which we characterize by statistical parameters of its histogram. We tested the effectiveness of our approach by comparing the evolution of the roughness criteria on road surfaces with and without taking the luminance distribution into account. The results show that the developed approach leads to good discrimination by the roughness criteria in the case of colored surfaces.
Classifying texture images means differentiating between them in a parameter space, and selecting pertinent parameters for the classification is a very delicate procedure. We present in this paper a new approach to texture image classification based on a cascade system: a genetic algorithm followed by a multi-layer neural network. We start by using a genetic approach to optimize the choice of parameters by minimizing a cost function; we then build a supervised classifier based on a multi-layer neural network, with the pertinent parameters obtained by the genetic algorithm used as its inputs. This approach is validated on several texture images. The proposed algorithm converges rapidly to the optimal solution with a low misclassification rate.
Dimensionality reduction methods for visualization map the original high-dimensional data, typically into two dimensions. The mapping should preserve the important information in the data and, to be useful, fulfil the needs of a human observer.
We have proposed a self-organizing map (SOM)-based approach for visual surface inspection. The method provides the advantages of unsupervised learning and an intuitive user interface that allows class boundaries to be set and tuned very easily based on the visualization, for example to adapt to changing conditions or material. There are, however, some problems with a SOM: it does not preserve the true distances between data points, and it tends to ignore rare samples in the training set at the expense of a more accurate representation of common samples. In this paper, some alternatives to a SOM are evaluated. These methods - PCA, MDS, LLE, ISOMAP, and GTM - are used to reduce dimensionality in order to visualize the data. Their principal differences are discussed and their performance is quantitatively evaluated in a few special classification cases, such as wood inspection using centile features.
For the test material experimented with, SOM and GTM outperform the others when classification performance is considered. For data-mining kinds of applications, ISOMAP and LLE appear to be more promising methods.
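A minimal SOM, the baseline of the comparison, can be sketched as follows. The 2-D toy features, node count, and decay schedules are our own choices, and a 1-D node chain stands in for the usual 2-D map.

```python
import math
import random

random.seed(2)

# Two surface classes in a toy 2-D feature space (invented values)
data = ([(random.gauss(0.2, 0.05), random.gauss(0.2, 0.05)) for _ in range(50)] +
        [(random.gauss(0.8, 0.05), random.gauss(0.8, 0.05)) for _ in range(50)])

# A 1-D SOM with 10 nodes and a Gaussian neighbourhood along the chain
nodes = [(random.random(), random.random()) for _ in range(10)]
steps = 2000
for t in range(steps):
    lr = 0.5 * (1.0 - t / steps)              # decaying learning rate
    radius = max(1.0, 3.0 * (1.0 - t / steps))
    x = random.choice(data)
    bmu = min(range(10), key=lambda i: (nodes[i][0] - x[0]) ** 2 +
                                       (nodes[i][1] - x[1]) ** 2)
    for i in range(10):
        # Pull each node toward the sample, weighted by chain distance to BMU
        h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
        nodes[i] = (nodes[i][0] + lr * h * (x[0] - nodes[i][0]),
                    nodes[i][1] + lr * h * (x[1] - nodes[i][1]))

# Quantization error: mean distance from each sample to its nearest node
qe = sum(min(((n[0] - x[0]) ** 2 + (n[1] - x[1]) ** 2) ** 0.5 for n in nodes)
         for x in data) / len(data)
```

The neighbourhood update is what orders the nodes along the chain, and it is also the source of the distortions the paper compares against MDS, LLE, ISOMAP, and GTM.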
This paper presents a complete methodology, based on a multidisciplinary approach, that combines the extraction of low-level features describing images with a high-level concept or formalism dedicated to computer-aided categorization of ornamental stones (granite, marble). The problem is addressed with a content-based image retrieval scheme in which each image of the ornamental-stone database is represented by a feature vector. This vector is composed, on the one hand, of a color feature corresponding to a novel characterization of the color histogram and, on the other hand, of a texture feature corresponding to a color-based co-occurrence matrix from which several feature representations are extracted. The two color-texture descriptors are combined through a stage of expert know-how extraction, with the know-how represented by weighting factors and confidence degrees. Fusing all of this data improves the categorization performance.
In this paper a prototype system is described for the management and content-based retrieval of defect images in huge image databases. This is a real problem in surface inspection applications, since modern inspection systems may produce up to thousands of defect images per day. We use a noncommercial, generic content-based image retrieval (CBIR) system called PicSOM, modified to fit the special requirements of our application. The system is tested with a small pre-classified database of surface defect images using MPEG-7 features, and its scalability is examined on a larger database. Results indicate that the system works with a high level of success.
We present modifications to a feature-based, image-retrieval approach for estimating semiconductor sidewall (cross-section) shapes using top-down images. The top-down images are acquired by a critical dimension scanning electron microscope (CD-SEM). The proposed system is based upon earlier work with several modifications. First, we use only line-edge, as opposed to full-line, sub-images from the top-down images. Secondly, Gabor filter features are introduced to replace some of the previously computed features. Finally, a new dimensionality reduction algorithm - direct, weighted linear discriminant analysis (DW-LDA) - is developed to replace the previous two-step principal component analysis plus LDA method. Results of the modified system are presented for data collected across several line widths, line spacings, and CD-SEM tools.
The fuzzy C-means algorithm is an unsupervised classification algorithm. It suffers, however, from two difficulties: the initialization phase and local optima. We present in this paper some improvements to this algorithm, based on evolutionary strategies, to get around these two difficulties. We have designed a new evolutionary fuzzy C-means algorithm and proposed a new mutation operator that lets the algorithm avoid local solutions and converge to the global solution in a low computational time. This approach is validated on several simulated examples. The experimental results confirm the rapid convergence and good performance of the proposed algorithm.
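The baseline fuzzy C-means iteration that the evolutionary version improves upon can be sketched in 1-D. The deterministic min/max initialization below is our own shortcut around the very initialization sensitivity the paper addresses, and the data are synthetic.

```python
import random

random.seed(3)

def fcm(data, c=2, m=2.0, iters=50):
    """Plain fuzzy C-means on 1-D data: alternate membership and
    centre updates for a fixed iteration budget."""
    centers = [min(data), max(data)] if c == 2 else random.sample(data, c)
    for _ in range(iters):
        # Membership of each point in each cluster (standard FCM formula)
        u = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]   # avoid divide-by-zero
            u.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c)) for i in range(c)])
        # Centres: membership-weighted means
        centers = [sum((u[j][i] ** m) * data[j] for j in range(len(data))) /
                   sum(u[j][i] ** m for j in range(len(data)))
                   for i in range(c)]
    return sorted(centers)

data = ([random.gauss(0.0, 0.3) for _ in range(60)] +
        [random.gauss(5.0, 0.3) for _ in range(60)])
centers = fcm(data)
```

With random initialization or overlapping clusters, this alternating scheme can stall in a local optimum, which is exactly the weakness the proposed mutation operator is meant to escape.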
This paper presents a search-and-score approach for determining the network structure of Bayesian network classifiers. A selective unrestricted Bayesian network classifier is used which, in combination with the search algorithm, allows simultaneous feature selection and determination of the classifier structure.
The introduced search algorithm enables conditional exclusion of previously added attributes and/or arcs from the network classifier. Hence, the algorithm is able to correct the network structure by removing attributes and/or arcs between nodes if they become superfluous at a later stage of the search. Classification results of selective unrestricted Bayesian network classifiers are compared to naive Bayes classifiers and tree-augmented naive Bayes classifiers. Experiments on different data sets show that selective unrestricted Bayesian network classifiers achieve a better classification accuracy estimate in two domains compared to tree-augmented naive Bayes classifiers, while in the remaining domains the performance is similar. Moreover, the resulting network structure of selective unrestricted Bayesian network classifiers is simpler and computationally more efficient.
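The search idea — greedy addition with conditional exclusion of attributes that become superfluous — can be sketched on a toy discrete dataset. This simplified version scores feature subsets with a naive Bayes (binary attribute values assumed), not the full unrestricted network of the paper:

```python
from collections import Counter

def nb_accuracy(rows, labels, feats):
    """Training accuracy of a discrete naive Bayes restricted to the feature
    subset `feats` (Laplace smoothing; binary attribute values assumed)."""
    if not feats:
        maj = Counter(labels).most_common(1)[0][0]
        return sum(y == maj for y in labels) / len(labels)
    classes = sorted(set(labels))
    prior = Counter(labels)
    cond = {c: [Counter() for _ in feats] for c in classes}
    for x, y in zip(rows, labels):
        for j, f in enumerate(feats):
            cond[y][j][x[f]] += 1
    def predict(x):
        def score(c):
            s = prior[c] / len(labels)
            for j, f in enumerate(feats):
                s *= (cond[c][j][x[f]] + 1) / (prior[c] + 2)
            return s
        return max(classes, key=score)
    return sum(predict(x) == y for x, y in zip(rows, labels)) / len(labels)

def search(rows, labels, n_feats):
    """Greedy search with conditional exclusion of superfluous attributes."""
    chosen, best = [], nb_accuracy(rows, labels, [])
    improved = True
    while improved:
        improved = False
        for f in range(n_feats):                  # try adding an attribute
            if f not in chosen:
                acc = nb_accuracy(rows, labels, chosen + [f])
                if acc > best:
                    chosen, best, improved = chosen + [f], acc, True
        for f in list(chosen):                    # conditional exclusion
            rest = [g for g in chosen if g != f]
            acc = nb_accuracy(rows, labels, rest)
            if acc >= best:
                chosen, best, improved = rest, acc, True
    return chosen, best

# toy data: attribute 0 determines the class, 1 is noise, 2 duplicates 0
rows = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]
labels = [0, 0, 1, 1]
chosen, best = search(rows, labels, 3)
```

On this toy data the search keeps only attribute 0: the noisy and redundant attributes never strictly improve the score, which mirrors how the paper's algorithm prunes superfluous attributes and arcs.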
We present a new superquadric-based object representation strategy for automotive parts. Starting from a 3D watertight surface model, a part decomposition step first segments the original multi-part object into its constituent single parts. Each single part is then represented by a superquadric. The originality of this approach is twofold: first, it can represent complicated shapes, e.g., multi-part objects, by using part decomposition as a preprocessing step; second, superquadrics recovered with our approach have high confidence and accuracy owing to the 3D watertight surfaces used. A novel, generic 3D part decomposition algorithm based on curvature analysis is also proposed. The algorithm is generic and flexible owing to the popularity of triangle meshes in the 3D computer graphics community. The proposed algorithms were tested on a large set of 3D data, and experimental results are presented. They demonstrate that the part decomposition algorithm can efficiently segment complicated shapes, in our case automotive parts, into meaningful single parts, and that the superquadric representation strategy can then successfully represent each part (where possible) of the complicated objects.
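The superquadric inside-outside function that such a recovery fits to surface points can be sketched as follows; the crude grid search over the shape exponent is only a toy stand-in for an actual least-squares fitting procedure:

```python
def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
    """Inside-outside function in canonical pose: F < 1 inside,
    F = 1 on the surface, F > 1 outside."""
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)

# an ellipsoid (e1 = e2 = 1) with semi-axes 2, 1, 1
on_surface = superquadric_F(2.0, 0.0, 0.0, 2, 1, 1, 1, 1)
inside = superquadric_F(0.5, 0.0, 0.0, 2, 1, 1, 1, 1)
outside = superquadric_F(4.0, 0.0, 0.0, 2, 1, 1, 1, 1)

# toy "fit": pick the shape exponent that best explains points on a unit cube
points = [(1.0, 0.2, 0.5), (0.3, 1.0, 0.7), (0.4, 0.6, 1.0)]
def sq_error(e):
    return sum((superquadric_F(px, py, pz, 1, 1, 1, e, e) - 1.0) ** 2
               for px, py, pz in points)
best_e = min([0.1, 0.5, 1.0, 2.0], key=sq_error)
```

A small exponent (box-like shape) wins for the cube samples, which is the behaviour a real superquadric recovery exploits when choosing shape parameters.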
This paper deals with the analysis of ancient wooden stamps. The aim is to extract a binary image from each stamp that is as close as possible to the image produced by inking the stamp and using a printing press. A range-image-based method is proposed to extract a stamped image from the stamps. The range image acquisition with a 3D laser scanner is presented, and pre-filtering for range image enhancement is detailed. The range image binarization method is based on adaptive thresholding. A few simple processing steps applied to the range image yield the final binarized image. The proposed method provides a very efficient way to perform "virtual" stampings with ancient wooden stamps.
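Adaptive thresholding of a range image can be sketched as comparing each depth value with the mean of its local neighbourhood; the window size and offset here are arbitrary, not the paper's settings:

```python
def adaptive_binarize(img, win=1, offset=0.0):
    """Mark a pixel as raised (inked in a virtual stamping) when its depth
    exceeds the mean of its (2*win+1)^2 neighbourhood by `offset`."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - win), min(h, i + win + 1))
                    for jj in range(max(0, j - win), min(w, j + win + 1))]
            local_mean = sum(vals) / len(vals)
            out[i][j] = 1 if img[i][j] > local_mean + offset else 0
    return out

ridge = adaptive_binarize([[1, 1, 1], [1, 5, 1], [1, 1, 1]])
```

Because the threshold is local, a slowly varying tilt of the scanned stamp does not corrupt the binarization, which is the advantage over a single global threshold.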
In this paper, we describe a segmentation and interpretation method for the automated delineation of regions of interest belonging to an object in gray-level images, with a view to the quantitative 3D reconstruction of the imaged object. The proposed approach is part of a vision-based on-line 3D inspection system. Results on images of manufactured parts acquired under realistic acquisition conditions illustrate the approach.
Analysis of the dermo-epidermal surface in three dimensions is important for evaluating cosmetics. One common approach is based on the active contour model, which extracts local object boundaries in closed-curve form. The dermo-epidermal surface, however, is a plane with open form. We have developed a method for automatically extracting the dermo-epidermal surface from volumetric confocal microscopic images, as well as for constructing a 3-D visual model of the surface from the geometric information contained in the control points. Our method is a 3-D extension of the active contour model, so we call it the active open surface model (AOSM). The initial surface for the AOSM is an open plane, which is guided by a 3-D internal force, a 3-D external constraint force, and a 3-D image force that pull it toward the target surface. The proposed technique has been applied to extract the actual dermo-epidermal surface in volumetric confocal microscopic images.
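A 1-D slice of such a surface evolution can be sketched with an explicit update combining an internal smoothing force with an image force; the coefficients below are illustrative, not those of the AOSM:

```python
def evolve_open_contour(y, target, alpha=0.2, beta=0.5, tau=0.5, iters=300):
    """Explicit evolution of an open 1-D contour: internal smoothing force
    (discrete Laplacian) plus an image force pulling toward `target`."""
    y = list(y)
    for _ in range(iters):
        new = list(y)
        for i in range(1, len(y) - 1):
            internal = y[i - 1] - 2.0 * y[i] + y[i + 1]
            image = target[i] - y[i]
            new[i] = y[i] + tau * (alpha * internal + beta * image)
        new[0], new[-1] = new[1], new[-2]    # free (natural) end conditions
        y = new
    return y

target = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]      # a step on the imaged surface
contour = evolve_open_contour([0.0] * 6, target)
```

The open ends are left free rather than closed into a loop, which is the essential difference between this model and a classical closed snake.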
The study of human movements is the subject of numerous investigations; among them, the study of facial movements, and more particularly the estimation of eye kinetics, represents an important part. A study based on artificial vision is presented here. It characterizes eye movements under normal shooting conditions (mobile subject, ambient lighting). Our approach provides, in a simple way, the localization of the irises and the characterization of their movement in three dimensions. The absolute 3D movement of the eyeballs and their movement relative to the head are obtained, even when the head is moving.
In this paper we describe a new method for modeling objects with known generic shape, such as human faces, from video and range data. The method combines the strengths of active laser scanning and passive shape-from-motion techniques. Our approach consists of first reconstructing a few feature points that can be reliably tracked throughout a video sequence of the object. These features are mapped to corresponding 3D points in a generic 3D model reconstructed from dense and accurate range data acquired only once. The resulting set of 3D-3D matches is used to warp the generic model onto the actual object visible in the video stream using thin-plate spline interpolation. Our method avoids the dense-matching problems encountered in stereo algorithms. Furthermore, in the case of face reconstruction, it provides dense models while not requiring invasive laser scanning of faces.
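The thin-plate spline warping step can be sketched in 2-D (the paper warps a 3-D mesh, where the biharmonic kernel differs); the solved system interpolates the landmark correspondences exactly:

```python
import numpy as np

def tps_U(r):
    # 2-D thin-plate kernel U(r) = r^2 log r, with U(0) = 0
    return np.where(r == 0.0, 0.0, r * r * np.log(r + 1e-300))

def tps_fit(src, dst):
    """Solve the TPS interpolation system [[K, P], [P.T, 0]] w = [dst, 0]."""
    n = len(src)
    K = tps_U(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)

def tps_apply(params, src, pts):
    n = len(src)
    K = tps_U(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ params[:n] + P @ params[n:]

src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
dst = src.copy()
dst[4] = [0.6, 0.6]                      # move the centre landmark
warped = tps_apply(tps_fit(src, dst), src, src)
```

Applying `tps_apply` to every vertex of the generic model (rather than only the landmarks) is what deforms it smoothly toward the tracked face.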
We propose in this paper an application of multiresolution analysis techniques to extract information contained in the growth increments of a bivalve mollusk called Calyptogena. The first stage consists in extracting a range image of the mollusk's shell using a 3-D scanner. Applying a multiresolution analysis enables us to localize the growth increments precisely while preserving relevant details. Moreover, interesting spatial and frequency properties of the multiresolution analysis highlight information contained on the shell. Intra-individual and inter-individual variations are compared to draw conclusions about the ontogenetic evolution of the animal, such as periodicities, which can later be related to regular changes in its environment.
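The way a wavelet detail band localizes a sharp growth increment can be sketched with one level of the 1-D Haar transform on a hypothetical height profile:

```python
def haar_step(signal):
    """One level of the 1-D Haar transform: approximation and detail bands."""
    s2 = 2 ** 0.5
    approx = [(signal[2 * i] + signal[2 * i + 1]) / s2
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / s2
              for i in range(len(signal) // 2)]
    return approx, detail

# hypothetical height profile across the shell: smooth trend + one sharp increment
profile = [0.0, 0.1, 0.2, 0.3, 1.3, 0.4, 0.5, 0.6]
approx, detail = haar_step(profile)
peak = max(range(len(detail)), key=lambda i: abs(detail[i]))
```

The detail coefficients are near zero over the smooth trend and spike at the increment, so thresholding them localizes the growth marks while the approximation band keeps the overall shell shape.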
Process Automation, Characterization, and Control I
This paper combines defect detection and a process control strategy into an efficient vision-based process control system for layered manufacturing. The purpose of our surface inspection, beyond monitoring and classifying defects, is to improve the manufacturing process so as to reduce defects in subsequent stages.
We examine the surface pattern using intensity images combined with CAD information. A hybrid strategy is used for defect analysis: randomly occurring defects are detected by 2D texture analysis, while assignable defects are obtained from 3D shape reconstruction using shape-from-shading. Instead of reconstructing the whole 3D surface, our approach reconstructs a profile from representative signature(s) using a parametric approach.
In vision-based process control, we take defect information as input and determine the appropriate control parameters of the current stage to minimize the expected defects. A linear model is developed and discussed.
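A linear control model of the kind described can be sketched as an ordinary least-squares fit of a defect measure against a control parameter; the logged values below are hypothetical:

```python
def fit_line(u, y):
    """Ordinary least squares for y = a + b*u."""
    n = len(u)
    mu, my = sum(u) / n, sum(y) / n
    b = (sum((ui - mu) * (yi - my) for ui, yi in zip(u, y))
         / sum((ui - mu) ** 2 for ui in u))
    return my - b * mu, b

# hypothetical log of (control parameter, measured defect signature)
u = [0.8, 1.0, 1.2, 1.4]
y = [4.1, 3.0, 2.1, 0.9]
a, b = fit_line(u, y)
u_next = (0.0 - a) / b        # parameter that drives predicted defects to zero
```

Inverting the fitted line to choose the next-stage parameter is the simplest instance of using defect feedback to steer the process.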
This paper describes an application of the visual servoing approach to vision-based control in robotics. The basic idea is to use a vision sensor in the feedback loop of the controlled vision framework. The task consists in tracking arbitrary 3-D objects travelling at unknown velocities in a 2-D space (depth is assumed known). Once the necessary modeling stage is performed, the problem becomes one of automatic control, and stability, performance, and robustness questions naturally arise. Here, we track line segments corresponding to the edges extracted from the image being analyzed. Two representations for a line segment are presented and discussed, and an appropriate representation is derived. A SISO (single-input single-output) model for each parameter of a line segment is then derived and represented by an orthonormal Laguerre network in state-space form. The appeal of this approach is that it eliminates the need for assumptions about the plant order, the time delay, and the unmodeled dynamics. For modeling by Laguerre filters, the system must be stable; this is handled by input-output data filtering, which relocates the poles of the filtered model inside the unit circle. A simple adaptive predictive controller is then used. To illustrate the advantages of the Laguerre network associated with adaptive input-output data filtering over conventional control techniques, we compare it to a PID controller on simulated examples.
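The orthonormal Laguerre network — a first-order low-pass followed by a chain of identical all-pass sections — can be sketched as follows; the pole value is an arbitrary stable choice, not the paper's identification result:

```python
def laguerre_outputs(u, a, n_filters):
    """Outputs of a discrete orthonormal Laguerre network with pole a
    (|a| < 1): L1(z) = g/(1 - a z^-1), then all-pass (z^-1 - a)/(1 - a z^-1)."""
    g = (1.0 - a * a) ** 0.5
    outs = []
    y, prev = [0.0] * len(u), 0.0
    for k, uk in enumerate(u):          # first-order low-pass section
        prev = a * prev + g * uk
        y[k] = prev
    outs.append(y)
    for _ in range(n_filters - 1):      # chain of all-pass sections
        x, y = outs[-1], [0.0] * len(u)
        for k in range(len(u)):
            y[k] = (a * y[k - 1] if k else 0.0) - a * x[k] + (x[k - 1] if k else 0.0)
        outs.append(y)
    return outs

# impulse responses of the first two Laguerre functions
outs = laguerre_outputs([1.0] + [0.0] * 60, a=0.5, n_filters=2)
```

The impulse responses form an orthonormal basis, which is why a plant can be approximated by a weighted sum of the filter outputs without assuming its order or delay.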
In view of the problems associated with under-machine inspection, there is a need for remote diagnostics systems capable of exploring narrow areas, capturing data and images from various modalities, and displaying the results at a remote location, thus making it easier to identify and diagnose various machine problems. In this paper, we present a remotely controlled diagnostics system that can be deployed with a variety of imaging sensors to capture data. The software segments the images and mosaics the data for a thorough inspection.
This paper presents an original approach for a vision-based quality control system built around a cognitive intelligent sensory system. The approach relies on two steps. First, a so-called initialization phase yields structural knowledge about image acquisition conditions, types of illumination sources, etc. Second, the image is iteratively evaluated using this knowledge and complementary information (e.g., CAD models and tolerance information); finally, the information describing the quality of the piece under evaluation is extracted. A further aim of the approach is to enable strategies that determine, for instance, the "next best view" required to complete the current object description, through dynamic adjustment of the knowledge base containing this description. Such techniques primarily require investigation of three areas, dealing respectively with intelligent self-reasoning 3D sensors, 3D image processing for accurate reconstruction, and evaluation software for comparing image-based measurements with CAD data. However, an essential prior step, the modeling of lighting effects, is required; as a starting point, we first modeled pinpoint light sources. After introducing the objectives and principles of the approach in Sections 1 and 2, we present the implementation and the illumination modeling approach in Sections 3 and 4. First results illustrating the approach are presented in Section 5. Finally, we conclude with some future directions for improving the approach.
Process Automation, Characterization, and Control II
An artificial nose is attractive for scientific research and the food industry. This paper proposes that the detection and recognition of odours or chemical concentrations can be achieved by means of passive, compact fiber-optic sensors (fiber Bragg grating technology) forming an olfactory sensor array, together with a fuzzy logic algorithm providing the recognition intelligence. The mathematical model of the fiber Bragg grating olfactory sensor is developed, and the design model of the artificial fiber-optic nose is introduced.
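The basic fiber Bragg grating relations behind such a sensor can be sketched as follows; the effective index, grating period, and photo-elastic coefficient below are typical silica-fiber values, not the paper's calibration:

```python
def bragg_wavelength(n_eff, period_nm):
    # Bragg condition: lambda_B = 2 * n_eff * Lambda
    return 2.0 * n_eff * period_nm

def strained_wavelength(lam_b, strain, p_e=0.22):
    # first-order strain response: d(lambda)/lambda = (1 - p_e) * strain;
    # p_e ~ 0.22 is a typical effective photo-elastic coefficient for silica
    return lam_b * (1.0 + (1.0 - p_e) * strain)

lam = bragg_wavelength(1.45, 535.0)          # grating reflecting near 1551.5 nm
lam_strained = strained_wavelength(lam, 1e-4)
```

A chemically sensitive coating that swells on exposure strains the grating, so each analyte concentration maps to a measurable wavelength shift — the raw input that the fuzzy recognition stage then classifies.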
A non-supervised clustering-based method for classifying paper according to its quality is presented. The method is simple to train, requiring minimal human involvement. The approach is based on Self-Organizing Maps and texture features that discriminate paper texture effectively.
Multidimensional texture feature vectors are first extracted from paper images. The dimensionality of the data is then reduced by a Self-Organizing Map (SOM). In dimensionality reduction, the feature data are projected to a two-dimensional space and clustered according to their similarity. The clusters represent different paper qualities and can be labeled according to the quality information of the training samples. After that, it is easy to find the quality class of the inspected paper by checking where a sample is placed in the low-dimensional space.
Tests based on images taken in a laboratory environment from four different paper quality classes provided very promising results. Local Binary Pattern (LBP) texture features combined with the SOM-based approach classified the test data almost perfectly: the error percentage was only 0.2% with the multiresolution version of LBP and 1.6% with the regular LBP. The improvement over the texture features previously used in paper inspection is huge: the classification error is reduced by a factor of more than 40. In addition to the excellent classification accuracy, the method also offers an intuitive user interface and a synthetic view of the inspected data.
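The basic (non-multiresolution) LBP code referred to above can be sketched as follows:

```python
def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel; a histogram of
    these codes over a patch is the texture feature vector."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    h, w = len(img), len(img[0])
    codes = []
    for i in range(1, h - 1):
        row = []
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offs):
                if img[i + di][j + dj] >= img[i][j]:
                    code |= 1 << bit
            row.append(code)
        codes.append(row)
    return codes

flat = lbp_image([[5, 5, 5], [5, 5, 5], [5, 5, 5]])   # uniform patch
spot = lbp_image([[1, 1, 1], [1, 9, 1], [1, 1, 1]])   # bright spot
```

Because each code depends only on the sign of local gray-level differences, the feature is invariant to monotonic illumination changes, which suits uneven lighting in paper imaging.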
This article describes an artificial vision system for the quality control of cherries. It obtains three types of information describing the fruit: the color, as an indicator of ripeness; the presence of defects such as cracking; and the size. The sorting of cherries according to all these criteria is carried out at a high rate (twenty cherries per second), so optimized image processing algorithms are necessary. We present the architecture of the developed system and demonstrate its efficiency through experimental results.
Special Session: Image Analysis for Face Recognition
Recently, gender estimation from face images has been studied for frontal facial images. However, it is difficult to obtain such facial images consistently in application systems for security, surveillance, and marketing research. To build such systems, a method is required that estimates gender from images of various facial poses. In this paper, three different classifiers using four directional features (FDF) are compared for appearance-based gender estimation: linear discriminant analysis (LDA), support vector machines (SVMs), and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, varying ±45 degrees horizontally and ±30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, the SVM with a Gaussian kernel achieved the best performance (86.0%) over the facial images of all 35 viewpoints. These results suggest that the SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate at each of the 35 viewpoints was quite close to the average estimation rate, which suggests that learning face images from multiple directions as one class is a reasonable way to estimate gender within the experimented range of viewpoints.
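The Gaussian (RBF) kernel used by the best classifier can be sketched as follows; for brevity the sketch classifies by nearest class mean in kernel feature space rather than training a full SVM, and the feature vectors are made up:

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_mean_classify(x, class_samples, gamma=1.0):
    """Assign x to the class whose mean in RBF feature space is closest;
    squared distance = k(x,x) - (2/n) sum_i k(x,xi) + (1/n^2) sum_ij k(xi,xj)."""
    best, best_d = None, float("inf")
    for label, S in class_samples.items():
        n = len(S)
        d = (rbf(x, x, gamma)
             - 2.0 / n * sum(rbf(x, s, gamma) for s in S)
             + sum(rbf(s, t, gamma) for s in S for t in S) / n ** 2)
        if d < best_d:
            best, best_d = label, d
    return best

# hypothetical 2-D feature vectors for the two classes
classes = {"female": [(0.0, 0.0), (0.2, 0.1)],
           "male": [(1.0, 1.0), (0.9, 1.2)]}
pred_f = kernel_mean_classify((0.1, 0.0), classes)
pred_m = kernel_mean_classify((1.1, 1.0), classes)
```

An SVM would additionally learn sample weights via a quadratic program, but the kernel trick — computing feature-space distances through `rbf` alone — is the same.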
In recent years, research on facial image acquisition systems for automobile driver support has been actively conducted. We are developing a driver support system that uses a camera instead of sensors in physical contact with the driver. In this paper, we use a special photography method to remove the background, which disturbs the acquisition of facial parts, and we propose a new method by which only the face region of a driver is stably obtained in the car. Furthermore, the eye region is detected from the obtained facial region image in order to apply the system to drowsiness detection.
We previously proposed a method of 3D caricature generation based on the automatic extraction of facial parts from a 3D facial image. This method sometimes suffers severe degradation in feature extraction caused by variations of the head pose (roll, pitch, and yaw rotations). Therefore, we propose a method of head pose modification that estimates roll and yaw rotations based on the iris positions extracted from the texture image by the Hough transform. This head pose estimation improves the quality of the mean face and the caricatures.
The face is the most effective visual medium for supporting human interfaces and communication. We have previously proposed a typical KANSEI machine vision system to generate facial caricatures. The basic principle of this system uses the "mean face assumption" to extract the individual features of a given face. The system did not provide for feedback from the viewer of the caricature; to allow such feedback, in this paper we propose a caricaturing system that uses KANSEI visual information acquired from an eye camera mounted on the viewer's head, since it is well known that the gaze distribution represents not only where but also how a person looks at a face. The caricatures created in this way can be based on several measures derived from the distribution of fixations over the facial parts: the number of times the gaze visited a particular area of the face, and the matrix of transitions from one facial region to another. These measures of the viewer's KANSEI information were used to create caricatures with feedback from the viewer.
This paper describes a preliminary study aimed at improving the quality of soft blue-veined cheeses through the analysis of magnetic resonance images. MRI measurements were performed on thirty-two samples from two different processing conditions and at three different stages, from day 3 after production to day 37. A segmentation algorithm based on a Self-Organizing Map was used to segment the images into six classes, after which the cavities were extracted. A principal component analysis was computed on variables describing the cavity surface distribution. The results pointed out differences between the two types of cheese, particularly at day 3 and day 37, confirming the interest of using MRI to analyze such products. Further investigations are planned on other characteristics of the cheeses and other segmentation methods.
In this paper we present an application of data fusion to improve the inspection of castings by X-rays. An attenuation spectrum is combined with a radioscopic image through the Dempster-Shafer theory of evidence. X-ray imaging by radioscopy is widely used in casting inspection for automatic defect detection, but its sensitivity to low-contrast defects is rather poor. On the other hand, spectrometry is known to be an accurate method for thickness measurement, but it is not an imaging tool. A profile extraction method was developed to complement classical image segmentation, in such a way as to deliver information similar to spectrometry. This profile approach, together with a confidence-level determination, gives good results compared with image processing alone. First results are shown on a relatively thin sample, for which spectrometry did not reach the expected accuracy; the method nevertheless remains promising for thicker samples.
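Dempster's rule of combination, which fuses the image and spectrometric evidence, can be sketched as follows; the mass values are illustrative, not the paper's measurements:

```python
def combine(m1, m2):
    """Dempster's rule over frozenset focal elements; returns the fused
    (normalized) masses and the conflict mass k."""
    fused, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                fused[inter] = fused.get(inter, 0.0) + a * b
            else:
                conflict += a * b
    return {A: v / (1.0 - conflict) for A, v in fused.items()}, conflict

D = frozenset({"defect"})
G = frozenset({"good"})
U = D | G                                   # ignorance: "defect or good"
m_image = {D: 0.6, U: 0.4}                  # evidence from the radioscopic image
m_profile = {D: 0.5, G: 0.2, U: 0.3}        # evidence from the extracted profile
fused, k = combine(m_image, m_profile)
```

The ability to assign mass to the whole frame `U` (ignorance), rather than forcing a probability split, is what lets each sensor express its own confidence level before fusion.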
We present in this paper a technique that makes use of a virtual X-ray simulation tool both to assess the optimal spectra and to calibrate a dual-energy technique. The proposed method is applied to the selective imaging of glass wool materials. To optimize the choice of energy spectra, a signal-to-noise ratio (SNR) criterion on the estimated material thicknesses is derived under a constraint of constant absorbed energy in the detector. To study its reliability further, the criterion is related to the measurement quality, expressed by a contrast-to-noise ratio of the input projections, and to the inversion stability, expressed by the numerical conditioning of the linear dual-energy attenuation system. Once the choice of energy spectra is settled, apparent thicknesses are modeled as third-order polynomials in the X-ray attenuation measures. The best polynomial fit and the choice of degree can again be advantageously assessed using virtual X-ray imaging. A semi-empirical catalog is used to characterize the X-ray source spectrum, and attenuation coefficients for each compound substance are obtained from standard databases. After these calibration phases, a glass wool phantom composed of PMMA and glass (combined step wedges) is used to validate the selected dual-energy protocol on real experimental data. The worst error on the estimated thickness is about 5% for both the binder and the glass fibers. Quantitative imaging of glass fiber and binder thickness is finally presented.
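The polynomial calibration of apparent thicknesses against attenuation measures can be sketched as follows; the attenuation coefficients and phantom thicknesses are hypothetical, and the noise-free setup makes the fit exact:

```python
import numpy as np

# hypothetical linear attenuation coefficients (mm^-1) at the two spectra
MU = np.array([[0.030, 0.020],    # binder (PMMA-like): (low, high) energy
               [0.080, 0.050]])   # glass

# calibration phantom: known thickness pairs (t_binder, t_glass) in mm
T = np.array([(a, b) for a in (0.0, 5.0, 10.0, 15.0) for b in (0.0, 2.0, 4.0, 6.0)])
A = T @ MU                        # attenuation line integrals, -ln(I/I0)

def design(A):
    # bivariate polynomial basis up to total degree 3 in the two attenuations
    a1, a2 = A[:, 0], A[:, 1]
    return np.stack([a1 ** i * a2 ** j for i in range(4) for j in range(4 - i)],
                    axis=1)

coef, *_ = np.linalg.lstsq(design(A), T, rcond=None)
est = design(A) @ coef            # thicknesses re-estimated from attenuations
```

With real polychromatic spectra the attenuation-to-thickness map is no longer linear (beam hardening), which is precisely why the cubic terms in the calibration polynomial earn their keep.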
Human skin color is a powerful fundamental cue that can be used, in particular at an early stage, for the important applications of face and hand detection in color images, and ultimately, for meaningful human-computer interactions. In this paper, we analyze the distribution of human skin for a large number of three-dimensional (3-D) color spaces (or 2-D chrominance spaces) and for skin images recorded with two different camera systems. By use of seven different criteria, we show that mainly the normalized r-g and CIE-xy chrominance spaces, or spaces constructed as a suitable linear combination or as ratios of normalized r, g and b values, or a space normalized by √(R² + G² + B²), are consistently the most efficient for skin pixel detection and consequently, for image segmentation based on skin color. In particular, in these spaces the skin distribution can be modeled by a simple, single elliptical Gaussian, and it is most robust to a change of camera system.
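Fitting a single elliptical Gaussian to skin pixels in the normalized r-g plane and gating by Mahalanobis distance can be sketched as follows; the RGB samples are made up, not measured skin data:

```python
import numpy as np

def rg_chroma(rgb):
    """Project RGB pixels onto the normalized r-g chrominance plane."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=1, keepdims=True) + 1e-12
    return (rgb / s)[:, :2]

# hypothetical skin and background RGB samples
skin = [(200, 120, 90), (190, 115, 85), (210, 130, 100), (180, 110, 80)]
background = [(90, 140, 200), (60, 60, 60)]

X = rg_chroma(skin)
mean = X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X.T) + 1e-9 * np.eye(2))  # regularized

def mahalanobis2(pixel):
    d = rg_chroma([pixel])[0] - mean
    return float(d @ inv_cov @ d)

is_skin = [mahalanobis2(p) < 9.0 for p in skin + background]  # 3-sigma gate
```

Because the r-g projection divides out overall intensity, the same ellipse covers light and dark renditions of the same skin tone, which is the robustness property the paper measures.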
Multispectral imagery is a large domain with a number of practical applications: thermography, industrial quality control, food science, agronomy, etc. The main interest is to obtain spectral information about objects whose reflectance signal can be associated with physical, chemical, and/or biological properties.
Agronomic applications of multispectral imagery generally involve the acquisition of several images at visible and near-infrared wavelengths.
This paper first presents different kinds of multispectral devices used for agronomic issues, then introduces an original multispectral design based on a single CCD. Finally, early results obtained for weed detection are presented.
Hyperspectral fluorescence images reveal useful information for detecting skin tumors on poultry carcasses. In this paper, a hyperspectral fluorescence imaging system with a fuzzy inference scheme is presented for detecting skin tumors on poultry carcasses. Image samples are obtained from a hyperspectral fluorescence imaging system for 65 spectral bands with wavelengths ranging from 425 nm to 711 nm. The approximation component of the level-1 discrete wavelet transform decomposition is used to reduce the large volume of hyperspectral image data. Features are computed from the two spectral bands corresponding to the two peaks of relative fluorescence intensity. A fuzzy inference system with a small number of fuzzy rules and Gaussian membership functions successfully detects skin tumors on poultry carcasses.
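The shape of such a system can be sketched as a tiny zero-order Sugeno-style inference with Gaussian membership functions; the two inputs stand for the two peak-band features, and the rule centers and widths below are hypothetical placeholders, not the paper's fitted values:

```python
import math

def gauss(x, c, sigma):
    """Gaussian membership function centered at c with width sigma."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def tumor_score(f1, f2):
    """Two illustrative rules combined by product t-norm and weighted
    average defuzzification; centers/widths are made-up examples."""
    w1 = gauss(f1, 0.2, 0.15) * gauss(f2, 0.8, 0.15)   # rule 1 -> tumor (1)
    w2 = gauss(f1, 0.8, 0.15) * gauss(f2, 0.2, 0.15)   # rule 2 -> normal (0)
    if w1 + w2 == 0.0:
        return 0.5                                      # no rule fires
    return (w1 * 1.0 + w2 * 0.0) / (w1 + w2)
```

A real system would fit the membership parameters to labeled tumor/normal samples; the point here is only that a handful of Gaussian rules yields a smooth decision surface.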
The construction of a new structure requires that the soil be characterized in order to predict its future behaviour and judge its ability to host the building. This geotechnical characterization aims at describing the soil both physically (grain size distribution (GSD), water content, fine particle proportion, water sensitivity, etc.) and mechanically (compaction degree, resistance, etc.). At present, this characterization relies on laboratory tests carried out on undisturbed or disturbed samples, and on in situ tests, used mainly because they are generally faster than laboratory tests and because they probe the soil in its natural environment. This is why one of our main research goals is to develop new in situ characterization tools. In contrast to the wide range of in situ tests for mechanical characterization (static and dynamic penetration tests, pressuremeter, vane shear test, etc.), there are very few in situ tests for a physical description. We have therefore developed a new in situ characterization tool based on endoscopy and image analysis techniques: geoendoscopy. Grain size distribution is undoubtedly one of the most important features of a soil or a granular material: it is a major parameter for the identification, classification and prediction of the behaviour of a granular material. For this reason, our first task was to establish a pre-treatment and image analysis procedure for our endoscopic images, in preparation for a future GSD routine. Under field conditions, the presence of water in soils can disturb the treatments and analyses. This article presents the images we worked on, then the treatment procedure used to improve them before applying particle disconnection methods.
Although the organoleptic method of tea testing has traditionally been used for quality monitoring, machine vision offers an advantageous alternative. While three main quality descriptors estimate the overall quality of made tea, viz. the strength, briskness and brightness of the tea liquor, exact colour detection during the fermenting process provides a good quality-monitoring tool. Digital image processing is reported to play an effective role in producing good quality tea, though colour is not the only quality-determining parameter. In this paper, we compare the contribution of the chemical constituents to the final product with the visual appearance at the processing stage, assessed by imaging. Machine intelligence supports the process more invariantly than human decisions and the colorimetric approach. The captured images are matched in colour against a standard image database using the HSI colour model, applying colour dissimilarity measures and perceptron learning to the standard and test images. Moreover, we attempt to correlate the system's performance with the decisions of the organoleptic panel assigned to tea testing and with chemical test results on the final product. It should be noted, however, that optimal results can be achieved only when the other quality parameters, such as withering, flavour (aroma) detection and drying status, are properly maintained.
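The HSI colour model mentioned above has a standard formulation; the paper does not specify which exact variant it uses, so the following is one common form (hue from the arccos formula, saturation from the minimum component):

```python
import math

def rgb_to_hsi(R, G, B):
    """Convert an RGB triple (0-255) to HSI: H in degrees, S and I in [0,1]."""
    r, g, b = R / 255.0, G / 255.0, B / 255.0
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                  # achromatic pixel: hue undefined, use 0
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h        # lower half of the colour circle
    return h, s, i
```

Matching against a standard image database would then compare histograms or means of these H, S, I channels rather than raw RGB, which decouples colour from illumination intensity.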
We describe an automated image processing approach for detecting and characterizing cavitation pits on stainless steel surfaces. The image sets to be examined have been captured by a scanning electron microscope (SEM). Each surface region is represented by a pair of SEM images, one captured before and one after the cavitation-causing process. Unfortunately, some required surface preparation steps between pre-cavitation and post-cavitation imaging can introduce artifacts and change image characteristics in such a way as to preclude simple image-to-image differencing. Furthermore, all of the images were manually captured and are subject to rotation and translation alignment errors as well as variations in focus and exposure. In the presented work, we first align the pre- and post- cavitation images using a Fourier-domain technique. Since pre-cavitation images can often contain artifacts that are very similar to pitting, we perform multi-scale pit detection on each pre- and post-cavitation image independently. Coincident regions labeled as pits in both pre- and post-cavitation images are discarded. Pit statistics are exported to a text file for further analysis. In this paper we provide background information, algorithmic details, and show some experimental results.
As U.S. natural gas supply pipelines are aging, non-destructive inspection techniques are needed to maintain the integrity and reliability of the natural gas supply infrastructure. Ultrasonic waves are one promising method for non-destructive inspection of pipeline integrity. As the waves travel through the pipe wall, they are affected by the features they encounter. In order to build a practical inspection system that uses ultrasonic waves, an analysis method is needed that can distinguish between normal pipe wall features, such as welds, and potentially serious flaws, such as cracks and corrosion. Ideally, the determination between “flaw” and “no-flaw” must be made in real-time as the inspection system passes through the pipe. Because wavelet basis functions share some common traits with ultrasonic waves, wavelet analysis is particularly well-suited for this application. Using relatively simple features derived from the wavelet analysis of ultrasonic wave signatures traveling in a pipe wall, we have successfully demonstrated the ability to distinguish between the “flaw” and “no-flaw” classes of ultrasonic features.
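As a sketch of the kind of wavelet-derived feature described above (the paper does not name its wavelet family, so Haar is assumed here for simplicity), a level-1 decomposition of a sampled ultrasonic signature splits it into approximation and detail bands, and the detail-band energy rises when the wave hits a sharp discontinuity:

```python
import math

def haar_level1(signal):
    """One level of the Haar DWT: approximation and detail coefficients.
    Assumes len(signal) is even."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2)
         for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
         for i in range(len(signal) // 2)]
    return a, d

def detail_energy(signal):
    """Simple 'flaw' feature: energy of the detail band, which grows
    when the signature contains sharp transients."""
    _, d = haar_level1(signal)
    return sum(x * x for x in d)
```

A threshold on such a feature is the sort of cheap test that can run in real time as the inspection system moves through the pipe.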
The inspection and monitoring of grinding tool wear is essential to ensure the quality of the grinding tool and the finished product. Present methods rely on dismounting the grinding tool to examine its surface; often, the state of the surface is checked indirectly by evaluating the quality of the workpiece. This paper describes the application of image processing, which offers an effective means of in situ inspection and monitoring. Using multi-directional illumination and image fusion, an image with a high degree of relevant information is generated, which is then segmented using the wavelet transform (multi-scale analysis, MSA) and classified to distinguish grains from cavities. Results of applying the algorithms to a high-performance grinding wheel with CBN grains embedded in a resin base are presented.
The rapid growth in the use of structural composite materials is driven by their good durability at low specific weight and their resistance to corrosion. Requirements for prolonged service life create a need for more efficient diagnostic methods and techniques.
The main cause of defects in composite structures is the variability of working loads during service. The resulting defects are complex, involving loss of continuity of the reinforcing fibres, matrix cracking, and loss of fibre-matrix adhesion. Defects in composite materials are generally more complicated than in metals, so diagnostic techniques proven on metal structures are of little use for composites. Infrared diagnostics is therefore becoming increasingly popular.
In this paper we present the use of lock-in thermography for detecting damaged areas in composite materials. Lock-in thermography is an NDE method that provides phase images of thermal waves in a sample, yielding a map of internal defects and allowing thermal properties to be evaluated. We used a special lamp as the modulated heat source, synchronized with the IR image acquisition camera. Both simulated and measured results are presented.
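The per-pixel phase image at the heart of lock-in thermography is commonly obtained by four-bucket demodulation: with intensities S1..S4 sampled at quarter periods of the modulation, the thermal-wave phase is atan2(S1−S3, S2−S4). This is a standard formulation, not necessarily the exact processing used in the paper:

```python
import math

def lockin_phase(samples):
    """Four-bucket lock-in demodulation: given per-pixel intensities
    S1..S4 sampled at quarter periods of the modulated heat source,
    return the thermal-wave phase in radians."""
    s1, s2, s3, s4 = samples
    # The DC offset (ambient temperature) cancels in both differences,
    # which is why phase images are insensitive to uneven heating.
    return math.atan2(s1 - s3, s2 - s4)
```

Applying this to every pixel of four synchronized IR frames produces the phase image in which subsurface defects appear as local phase shifts.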
We present new results in applied color image analysis that demonstrate the significant influence of soil on the localization and appearance of polyphenols in grapes. These results were obtained with a new unsupervised classification algorithm based on hierarchical analysis of color histograms. The process is automated thanks to a software platform we developed specifically for color image analysis and its applications.
The work described in this paper concerns the detection of directional structures for particular inspection tasks, such as scratch and marbling defect detection in leather images. Because of the very specific geometry of these structures, we apply a multiscale, orientation-shiftable method. Scratches and marbling have various shapes and sizes, and multiscale approaches using oriented filters have proved efficient at detecting such curvilinear patterns. We first use the increase of gray levels in the image to locate suspicious regions. The detection is then based on steerable filters, which can be steered to any orientation fixed by the user and are synthesized from a limited number of basis filters. These filters are used in a recursive multiscale transform, the steerable pyramid, and the curvilinear structures are extracted from the directional images at different scales.
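The steering property behind such filters: an oriented first-derivative filter is an exact linear combination of two fixed basis kernels, G_θ = cos(θ)·Gx + sin(θ)·Gy, so any orientation costs only two multiplies per tap. The sketch below uses Sobel kernels as stand-ins for the derivative-of-Gaussian basis filters an actual steerable pyramid would use:

```python
import math

# 3x3 x- and y-derivative basis kernels (Sobel stand-ins for
# derivative-of-Gaussian basis filters).
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def steered_kernel(theta):
    """Synthesize an oriented first-derivative filter from the two
    basis kernels: G_theta = cos(theta)*Gx + sin(theta)*Gy."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * GX[i][j] + s * GY[i][j] for j in range(3)]
            for i in range(3)]
```

The pyramid then applies such kernels at successive scales, so a scratch is found whatever its width and orientation.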
A novel method, smooth gray level detection, is proposed for localizing text in images. Smooth gray level detection uses the smoothness of gray levels between neighboring pixels to determine the text blocks in an image. Furthermore, the combination of smooth gray level detection and line detection with variable block size is proposed for localizing text. In the experiments, our proposed method can locate noisy, skewed and variably sized text in images.
In this paper, we propose an improved implementation of the support vector machine (SVM) decision rule applied to real-time image segmentation. We achieve very high speed decisions (approximately 10 ns per pixel), which can be useful for detecting anomalies on manufactured parts. We propose an original combination of classifiers allowing fast and robust classification applied to image segmentation. The SVM is used in a first step to pre-process the training set, rejecting ambiguous samples. A hyperrectangles-based learning algorithm is then applied to the SVM-classified training set. We show that the hyperrectangle method matches the SVM in performance, at a lower implementation cost using reconfigurable computing. We review the principles of the two classifiers, the hyperrectangles-based method and the SVM, and present our combined method applied to image segmentation of an industrial part.
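The appeal of a hyperrectangle classifier for hardware is that classification reduces to per-dimension comparisons. A minimal sketch (function names are ours; the paper's actual learning algorithm may grow several boxes per class rather than one bounding box):

```python
def fit_hyperrectangles(X, y):
    """Learn one axis-aligned bounding box per class label from a
    training set, e.g. one already cleaned of ambiguities by an SVM."""
    boxes = {}
    for xi, yi in zip(X, y):
        lo, hi = boxes.setdefault(yi, ([*xi], [*xi]))
        for d, v in enumerate(xi):
            lo[d] = min(lo[d], v)
            hi[d] = max(hi[d], v)
    return boxes

def classify(boxes, x, default=None):
    """Label a point by box membership; each test is a pair of
    comparators, which is what makes an FPGA implementation cheap."""
    for label, (lo, hi) in boxes.items():
        if all(l <= v <= h for l, v, h in zip(lo, x, hi)):
            return label
    return default
```

Pre-filtering the training set with an SVM matters because a single outlier would otherwise stretch a box across the other class's region.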
The K-means algorithm is a well-known clustering method. However, K-means is suited to finding clusterings composed of compact spherical clusters; if the cluster shapes are not spherical, it fails to find a good clustering. In this study, a genetic clustering algorithm is therefore proposed that finds the clustering whether or not the clusters are spherical. The genetic clustering algorithm can also automatically determine the number of clusters in the data set, so users need not predefine it. Experimental results show that our proposed genetic clustering algorithm achieves better performance than the traditional clustering algorithms.
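For reference, the K-means baseline being improved on is Lloyd's iteration: assign each point to its nearest center, then move each center to its cluster mean. A 1-D sketch (initialization strategy is our choice):

```python
def kmeans(points, k, iters=50):
    """Plain Lloyd's K-means on 1-D data: the baseline that genetic
    clustering is designed to beat on non-spherical clusters."""
    # Initialize centers by taking evenly spaced sorted points.
    centers = sorted(points)[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign p to the nearest current center.
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

Because each step only minimizes within-cluster variance around point centers, elongated or ring-shaped clusters get split incorrectly, which is exactly the failure case the genetic algorithm addresses.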
The commercial face recognition software FaceIt Identification and Surveillance was evaluated using the Facial Recognition Technology (FERET) database. The experimental results show the performance of FaceIt with variations in illumination, expression, age, head size, pose, and the size of the database which all remain difficult problems in face recognition technology.
Caricature is strongly affected by the attribute relationship between an input face and the mean face. This paper proposes a method of facial attribute classification based on the statistics of many mean faces and an input face. The process is built from an estimation function for the input face and an attribute matrix defined by the distances of all facial feature points and their variances. There can be many attribute matrices, each characterized by a face set of different age and gender. The proposed method delivered results sufficient to automate mean face selection and clarification as a new caricature generation principle.
A real-time vision system for TV screen quality inspection is introduced. The whole system consists of eight cameras and one processor per camera. It acquires and processes 112 images in 6 seconds. The defects to be inspected can be grouped into four main categories (bubble, line-out, line reduction and landing) although there exists a large variability among each particular type of defect. The complexity of the whole inspection process has been reduced by dividing images into smaller ones and grouping the defects into frequency and intensity relevant ones. Tools such as mathematical morphology, Fourier transform, profile analysis and classification have been used. The performance of the system has been successfully proved against human operators in normal production conditions.
Vector quantization (VQ) is an efficient technique for signal compression. In traditional VQ, the dominant computation is the search for the nearest codeword in the codebook for every input vector. This paper presents an efficient search method to speed up the encoding process. The search algorithm is based on partial distance elimination (PDE), with binary search used to determine the first search point. In pre-processing, before any compression, we sort the codebook by codeword mean value. The first search point is the codeword whose mean is closest to that of the input vector; PDE then finds the best-matching codeword with reduced search time. The proposed algorithm demonstrates outstanding performance in terms of time saving and arithmetic operations: compared to the full search algorithm, it saves more than 95% of the search time.
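The two ideas combine as sketched below: binary search over the mean-sorted codebook picks the start codeword, and PDE aborts each remaining distance sum as soon as the partial sum exceeds the best distance found so far. The outward-spiral visit order is our illustrative choice, not necessarily the paper's:

```python
from bisect import bisect_left

def pde_search(codebook, x):
    """Nearest-codeword search with a mean-ordered start point and
    partial distance elimination. `codebook` must be sorted by mean."""
    means = [sum(c) / len(c) for c in codebook]
    mx = sum(x) / len(x)
    start = min(bisect_left(means, mx), len(codebook) - 1)
    best = start
    best_d = sum((a - b) ** 2 for a, b in zip(codebook[start], x))
    # Visit the remaining codewords spiralling outward from the start.
    order = sorted(range(len(codebook)), key=lambda i: abs(i - start))
    for i in order[1:]:
        d = 0.0
        for a, b in zip(codebook[i], x):
            d += (a - b) ** 2
            if d >= best_d:          # partial distance elimination
                break
        else:                        # full sum stayed below best_d
            best, best_d = i, d
    return best
```

Starting near the right codeword makes `best_d` small early, so most candidates are eliminated after one or two squared-difference terms; that is where the reported savings come from.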
Very few image processing applications dealt with x-ray luggage scenes in the past. In this paper, a series of common image enhancement techniques are first applied to x-ray data and results shown and compared. A novel simple enhancement method for data de-cluttering, called image hashing, is then described. Initially, this method was applied using manually selected thresholds, where progressively de-cluttered slices were generated and displayed for screeners. Further automation of the hashing algorithm (multi-thresholding) for the selection of a single optimum slice for screener interpretation was then implemented. Most of the existing approaches for automatic multi-thresholding, data clustering, and cluster validity measures require prior knowledge of the number of thresholds or clusters, which is unknown in the case of luggage scenes, given the variety and unpredictability of the scene’s content. A novel metric based on the Radon transform was developed. This algorithm finds the optimum number and values of thresholds to be used in any multi-thresholding or unsupervised clustering algorithm. A comparison between the newly developed metric and other known metrics for image clustering is performed. Clustering results from various methods demonstrate the advantages of the new approach.
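One plausible reading of the de-cluttering step (our interpretation; the paper's exact slicing rule is not given here) is that the chosen thresholds partition the gray-level range and each interval yields one binary slice for the screener:

```python
def hash_slices(pixels, thresholds):
    """Split the 8-bit gray-level range at the given thresholds and
    return one binary slice (mask) per resulting interval."""
    bounds = [0] + sorted(thresholds) + [256]
    return [[1 if lo <= p < hi else 0 for p in pixels]
            for lo, hi in zip(bounds, bounds[1:])]
```

The automation question the abstract addresses is then choosing how many thresholds to use and where to place them, which is what the Radon-transform-based metric scores.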