This paper addresses the indexing of lecture videos for semantic search of learning material. We present a comparative study of the DCT, grayscale, and marginal spaces using the classical k-means technique. For stable segmentation, we introduce an automatic threshold based on moment preservation. We discuss the suitability of each space for different images, then focus on educational video frames, which are not predominantly colorful, and identify the best transformation for segmenting lecture video frames. We also present a heuristic technique for localizing the slide within a frame.
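As an illustration of the automatic threshold, here is a minimal Python sketch of moment-preserving (Tsai-style) thresholding, assuming an 8-bit grayscale frame; the function name and binning are our own choices, not the paper's.

```python
import numpy as np

def moment_preserving_threshold(gray):
    """Tsai-style moment-preserving threshold for an 8-bit grayscale image.

    Picks the threshold so that a two-level image preserves the first three
    gray-level moments of the input (assumes a roughly bimodal histogram).
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                      # normalized histogram
    z = np.arange(256, dtype=np.float64)
    m1, m2, m3 = (p * z).sum(), (p * z**2).sum(), (p * z**3).sum()

    cd = m2 - m1**2                            # gray-level variance
    c0 = (m1 * m3 - m2**2) / cd
    c1 = (m1 * m2 - m3) / cd
    disc = np.sqrt(max(c1**2 - 4.0 * c0, 0.0))
    z0, z1 = (-c1 - disc) / 2.0, (-c1 + disc) / 2.0  # the two preserved levels

    p0 = (z1 - m1) / (z1 - z0)                 # fraction of "dark" pixels
    cumulative = np.cumsum(p)
    return int(np.argmin(np.abs(cumulative - p0)))
```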
In the present work, we have analyzed the optical parameters of thin Ge, Ag, and Au layers prepared by lithography and used as front contact and absorber in a CMOS device structure. The experimental data concern the reflectance R and transmittance T in the visible and near-infrared regions. The interpretation of these results is based on the method of analysis of R and T developed by Tomlin and on Mueller's numerical method for solving the nonlinear equations given by the Abelès method [1]. After comparing these methods, we chose to use Lorenz-Mie scattering theory [2]. The simulation results led us to choose gold to realize our pattern.
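For context, the Abelès characteristic-matrix method mentioned above can be sketched in Python for a single absorbing film at normal incidence; the ambient and substrate indices and the gold index in the example are illustrative assumptions, not values from the paper.

```python
import numpy as np

def film_RT(N_layer, d_nm, wl_nm, n0=1.0, ns=1.5):
    """Normal-incidence R and T of one absorbing thin film via the Abelès
    characteristic matrix. Convention: complex index N = n - 1j*k.
    Ambient n0 = 1 and a glass-like substrate ns = 1.5 are assumptions."""
    delta = 2.0 * np.pi * N_layer * d_nm / wl_nm      # complex phase thickness
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / N_layer],
                  [1j * N_layer * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, ns])                    # stack admittance terms
    r = (n0 * B - C) / (n0 * B + C)
    t = 2.0 * n0 / (n0 * B + C)
    return abs(r) ** 2, (ns / n0) * abs(t) ** 2       # R, T

# e.g. a 20 nm gold film at 633 nm (textbook-style index, for illustration)
print(film_RT(0.18 - 3.0j, 20.0, 633.0))
```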
Inspired by the concept of the colour filter array (CFA), the research community has shown much interest in
adapting the idea of CFA to the multispectral domain, producing multispectral filter arrays (MSFAs). In addition
to newly devised methods of MSFA demosaicking, there exists a wide spectrum of methods developed for CFA.
Among others, some vector-based operations can be adapted naturally for multispectral purposes. In this paper, we focus on two vector-based median filtering methods in the context of MSFA demosaicking. One solves the demosaicking problem by means of vector median filters; the other applies median filtering in spherical space to the demosaicked image, as a subsequent refinement step that reduces the artefacts introduced by demosaicking. To evaluate the performance of these methods, a toolkit was constructed with mosaicking, demosaicking, and quality-assessment capabilities. The experimental results demonstrate that vector median filtering performed less well for natural images, except black-and-white ones; however, the refinement step reduced the reproduction error numerically in most cases. This demonstrates the feasibility of extending CFA demosaicking to the MSFA domain.
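A minimal sketch of the first method's core operation, the vector median filter, assuming an (H, W, C) multispectral image; this is a naive implementation written for clarity, not the authors' code.

```python
import numpy as np

def vector_median(window):
    """Vector median of a set of spectral pixel vectors.

    window: (n, c) array of n pixel vectors with c channels. Returns the
    vector whose summed L2 distance to all the others is minimal."""
    dists = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
    return window[np.argmin(dists.sum(axis=1))]

def vmf_filter(img, radius=1):
    """Apply the vector median filter over an (H, W, C) image with a square
    (2*radius+1)^2 window; a naive sketch, not an optimized implementation."""
    H, W, C = img.shape
    out = img.copy()
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            win = img[y - radius:y + radius + 1,
                      x - radius:x + radius + 1].reshape(-1, C)
            out[y, x] = vector_median(win)
    return out
```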
Many methods have been developed in image processing for face recognition, especially in recent years with the increase
of biometric technologies. However, most of these techniques are used on grayscale images acquired in the visible range
of the electromagnetic spectrum.
The aims of our study are to improve existing tools and to develop new methods for face recognition. The techniques used take advantage of different spectral ranges (visible, optical infrared, and thermal infrared), either combining them or analyzing them separately, in order to extract the most appropriate information for face recognition. We also verify the consistency of several keypoint extraction techniques in the near infrared (NIR) and in the visible spectrum.
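As a sketch of such a consistency check, the following uses OpenCV's ORB detector with cross-checked matching; ORB and the file paths are placeholders for whichever detectors and data the study actually used.

```python
import cv2

def keypoint_consistency(visible_path, nir_path, max_features=500):
    """Compare keypoints detected in visible and NIR images of the same face
    and return a simple consistency ratio (mutual matches over detections)."""
    vis = cv2.imread(visible_path, cv2.IMREAD_GRAYSCALE)
    nir = cv2.imread(nir_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=max_features)
    kp_v, des_v = orb.detectAndCompute(vis, None)
    kp_n, des_n = orb.detectAndCompute(nir, None)
    # Cross-check matching: keep only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_v, des_n)
    return len(matches) / max(len(kp_v), len(kp_n), 1)
```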
In this paper, we propose a simple method for wine color characterization, classification and
reproduction. The aim is to represent the colors of wines with a limited number of hues that we call nuances. Burgundy wines (France) constitute the wine samples in this study, but the method remains general. The method consists of four steps. First, spectral transmittance measurements are taken for a large number of wine samples. Then, standard and gamma-corrected colors are reconstructed from the spectral data. Afterwards, a ΔE-based classification is performed in CIELAB space, which provides good visual uniformity and thus offers the best discrimination between the different samples. The last step is a spectral-based color reproduction using synthetic liquids. The obtained
results are encouraging in that they permit an accurate characterization and reproduction of wine color.
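The ΔE-based classification step can be sketched as hierarchical clustering with a ΔE*ab cutoff (Euclidean distance in CIELAB); the tolerance of 5.0 and the sample coordinates below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def classify_by_delta_e(lab_values, max_delta_e=5.0):
    """Group samples whose CIELAB coordinates differ by less than a ΔE*ab
    tolerance. lab_values: (n, 3) array of (L*, a*, b*) per wine sample."""
    d = pdist(lab_values, metric="euclidean")   # ΔE*ab is Euclidean in Lab
    tree = linkage(d, method="complete")        # keeps each nuance compact
    return fcluster(tree, t=max_delta_e, criterion="distance")

# e.g. three measured samples (hypothetical Lab coordinates)
labs = np.array([[45.0, 48.0, 20.0], [46.0, 47.0, 21.0], [70.0, 10.0, 30.0]])
print(classify_by_delta_e(labs))  # -> two nuances: the first two group together
```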
For agronomical research institutes, field experiments are essential and provide relevant information on crops, such as disease rate, yield components, and weed rate. Although generally accurate, these assessments are done manually and present numerous drawbacks, notably their tediousness, particularly for wheat ear counting. In this context, using color and/or texture image processing to estimate the number of ears per square metre can be an improvement. Different image segmentation techniques based on feature extraction have therefore been tested, using textural information with first- and higher-order statistical methods. The run-length method gives the results closest to manual counts, with an average error of 3%. Nevertheless, the hypotheses made on the values of the classification and description parameters, especially the number of classes and the size of the analysis windows, must be carefully justified through the estimation of a cluster validity index. The first results show that the mean number of classes in a wheat image is 11, which indicates that our initial choice of 3 is not well adapted. To complete these results, we are currently analysing each of the previously extracted classes in order to gather together all the classes characterizing the ears.
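A sketch of the cluster-validity estimation, using the silhouette score as one common choice of index (the abstract does not say which index the study relies on):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_class_count(features, k_range=range(2, 15)):
    """Choose the number of classes via a cluster validity index.
    features: (n_pixels, n_features) array of texture descriptors."""
    X = np.asarray(features, dtype=np.float64)
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels,
                                     sample_size=min(2000, len(X)),  # subsample for speed
                                     random_state=0)
    return max(scores, key=scores.get), scores
```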
There is growing interest in video-based solutions for people monitoring and counting in business and security
applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end, non-calibrated video camera.
The two main challenges addressed in this paper are the robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter are likely to occur whenever multiple persons move close together, e.g. in shopping centers. Several persons may be treated as a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes, and changes in static objects, background subtraction is performed using an adaptive background model
(updated over time based on motion information) and automatic thresholding. Furthermore, post-processing
of the segmentation results is performed, in the HSV color space, to remove shadows. Moving objects are
tracked using an adaptive Kalman filter, allowing robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates.
Experimental results obtained based on the PETS2006 datasets are presented at the end of the paper.
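A compressed sketch of this pipeline in Python with OpenCV: MOG2 stands in for the paper's adaptive background model, its built-in shadow flag replaces the paper's HSV shadow post-processing, and a single constant-velocity Kalman filter illustrates the tracking step; the video file name is a placeholder.

```python
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

kf = cv2.KalmanFilter(4, 2)  # state (x, y, vx, vy), measurement (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)

cap = cv2.VideoCapture("pets2006_clip.avi")  # placeholder file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                       # adaptive background model
    mask = cv2.threshold(mask, 200, 255,         # MOG2 marks shadows as 127,
                         cv2.THRESH_BINARY)[1]   # so this drops them
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    predicted = kf.predict()     # position estimate survives short occlusions
    if contours:
        (x, y), _ = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        kf.correct(np.array([[x], [y]], np.float32))
```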
KEYWORDS: RGB color model, 3D modeling, LCDs, Instrument modeling, Optimization (mathematics), Data modeling, Digital Light Processing, CRTs, Projection devices, Projection systems
We have defined an inverse model for colorimetric characterization of additive displays. It is based on an
optimized three-dimensional tetrahedral structure. In order to minimize the number of measurements, the
structure is defined using a forward characterization model. Defining a regular grid in the device-dependent
destination color space leads to heterogeneous interpolation errors in the device-independent source color space.
The parameters of the function used to define the grid are optimized using a globalized Nelder-Mead simplex
downhill algorithm. Several cost functions are tested on several devices. We have performed experiments with
a forward model which assumes variation in chromaticities (PLVC), based on one-dimensional interpolations for
each primary ramp along X, Y, and Z (3×3×1-D). Results on four devices (two LCD projection devices, one DLP projection device, and one LCD monitor) are shown and discussed.
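The basic tetrahedral interpolation step behind such an inverse model can be sketched as follows: a point in the device-independent source space is expressed in barycentric coordinates of the enclosing tetrahedron, and the same weights are applied to the corresponding device values. The function below is our own illustrative formulation, not the paper's implementation.

```python
import numpy as np

def tetra_interp(p, verts_src, verts_dst):
    """Map point p through one tetrahedron of the characterization structure.

    verts_src: (4, 3) tetrahedron vertices in the source (e.g. XYZ) space;
    verts_dst: (4, 3) corresponding device (e.g. RGB) values."""
    T = (verts_src[1:] - verts_src[0]).T          # 3x3 edge matrix
    w = np.linalg.solve(T, p - verts_src[0])      # barycentric weights w1..w3
    weights = np.concatenate(([1.0 - w.sum()], w))
    assert np.all(weights >= -1e-9), "point lies outside this tetrahedron"
    return weights @ verts_dst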
In the agronomic domain, the simplification of crop counting, which is necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our global project is to design a mobile robot for natural image acquisition directly in the field, Arvalis first proposed that we detect wheat ears in images by image processing before counting them, which will provide the first component of the yield. In this paper we compare different texture-based image segmentation techniques, relying on feature extraction with first- and higher-order statistical methods, applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image: the K-means algorithm is applied, followed by the choice of a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the quality of the detection is currently evaluated visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequency transforms and specific filtering.
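A sketch of this kind of pipeline, using co-occurrence (GLCM) features and K-means; the window size, gray-level count, and feature set below are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def segment_by_texture(gray, win=16, levels=32, n_classes=3):
    """Compute co-occurrence features per window, then cluster the windows
    with K-means; returns a {(row, col): class} map of window labels."""
    g = (gray / 256.0 * levels).astype(np.uint8)   # requantize gray levels
    feats, cells = [], []
    for y in range(0, g.shape[0] - win, win):
        for x in range(0, g.shape[1] - win, win):
            glcm = graycomatrix(g[y:y+win, x:x+win], distances=[1],
                                angles=[0, np.pi/2], levels=levels,
                                symmetric=True, normed=True)
            feats.append([graycoprops(glcm, p).mean()
                          for p in ("contrast", "homogeneity", "energy")])
            cells.append((y, x))
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(feats)
    return dict(zip(cells, labels))
```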
KEYWORDS: Simulation of CCA and DLA aggregates, Image segmentation, Principal component analysis, Landsat, Multispectral imaging, Optical engineering, Image processing, Satellite imaging, Satellites, Earth observing sensors
We describe some applications of linear and nonlinear projection methods for reducing the number of spectral bands in Landsat multispectral images. The nonlinear method is curvilinear component analysis (CCA), and we propose an optimization of it adapted to image processing, based on the use of principal component analysis (PCA, a linear method). The principle of CCA consists in reproducing, in a reduced subspace, the topology of the points projected from the original space, while keeping as much information as possible. Our conclusions are: CCA is an improvement for dimension reduction of multispectral images; CCA is truly a nonlinear extension of PCA; and optimizing CCA through PCA (called CCAinitPCA) reduces the computational burden while providing a result identical to that of CCA.
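The PCA stage used as the baseline and as the CCAinitPCA initialization can be sketched in a few lines; the function and parameter names are our own.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(cube, n_components=3):
    """Linear dimension reduction of a multispectral cube (H, W, B) with PCA;
    the projected pixels can then seed the nonlinear CCA optimization."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B).astype(np.float64)   # one row per pixel
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(H, W, n_components)
```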
This article deals with noisy, variable-size color textures. It also examines quantization methods and how they change the final results. The method we use to analyze the robustness of the texture parameters consists of an auto-classification of modified textures. Texture parameters are computed for a set of original texture samples and stored in a database; such a database is created for each quantization method. Textures from the set of original samples are then modified, possibly quantized, and classified according to classes determined from the precomputed database. A classification is considered incorrect if the original texture is not retrieved. This method is tested with three texture parameters (autocorrelation matrix, co-occurrence matrix, and directional local extrema) and three quantization methods (principal component analysis, color cube slicing, and RGB binary space slicing). The last two methods produce only three RGB bands but could be extended to more. Our results show that, with or without quantization, the autocorrelation matrix parameter is less sensitive to noise and to scaling than the two other tested texture parameters. This implies that the autocorrelation matrix should probably be preferred for texture analysis under uncontrolled conditions, typically industrial applications where images may be noisy. Our results also show that PCA quantization does not change the results, whereas the two other quantization methods change them dramatically.
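A sketch of the autocorrelation texture parameter, computed per band via the FFT (Wiener-Khinchin theorem); the exact descriptor the article derives from this matrix may differ.

```python
import numpy as np

def autocorrelation_matrix(band):
    """Normalized 2-D (circular) autocorrelation of one image band,
    computed through the power spectrum."""
    b = band - band.mean()
    power = np.abs(np.fft.fft2(b)) ** 2        # power spectrum
    ac = np.fft.ifft2(power).real              # circular autocorrelation
    return np.fft.fftshift(ac) / ac.flat[0]    # normalize by zero-lag energy
```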
We present a new approach to optically calibrate a multispectral imaging system based on interference filters. Such a system typically suffers from some blurring of its channel images. Because the effectiveness of spectrum reconstruction depends heavily on the quality of the acquired channel images, and because this blurring negatively affects them, a method for deblurring and denoising them is required. The blur is modeled as a uniform intensity distribution within a circular disk, which allows us to quantitatively characterize the degradation of each channel image. Global blur reduction then consists in choosing the best channel for focus adjustment, such that the corrections to be applied to the other channels are minimal. For a given acquisition, the restoration can then be performed with the computed parameters using adapted Wiener filtering. This optical calibration process is evaluated on real images and shows large improvements, especially when the scene is detailed.
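The restoration step can be sketched with a uniform disk PSF and Wiener filtering, e.g. using scikit-image; the regularization balance below is an illustrative value, not the paper's calibrated parameter.

```python
import numpy as np
from skimage.morphology import disk
from skimage.restoration import wiener

def deblur_channel(channel, blur_radius, balance=0.05):
    """Restore one channel image, modeling the blur as a uniform disk of
    the estimated radius. `channel` should be a float image (e.g. in [0, 1]);
    `balance` trades deblurring against noise amplification."""
    psf = disk(blur_radius).astype(np.float64)
    psf /= psf.sum()                            # unit-energy disk PSF
    return wiener(channel, psf, balance)
```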
Projection displays generally do not reproduce colors evenly at different locations of the display. Depending on the display technology, the non-uniformity may be in luminance only, typically due to optical effects in the lens, or in all color dimensions of luminance, chroma and hue. Even though this non-uniformity often remains unnoticed by the user, for certain applications such as tiling/stitching of projection displays, the non-uniformity is an important problem.
In this study we investigate the feasibility of using an inexpensive webcam to correct projection display non-uniformity. Two main approaches are proposed and evaluated: one based on colorimetric characterization of the camera and display, and the other a closed-loop approach. Both approaches consist of displaying images that should ideally have a uniform color distribution, capturing the displayed images with the webcam, and using these captures to build a correction function, which is then applied to images in order to correct them.
Our results show that the feasibility of the proposed methods depends heavily on the quality of the equipment involved. For standard low-end webcams it is generally difficult to obtain the reliable device-independent color measurements needed for the colorimetric characterization approach, but the direct, closed-loop approach still gives reasonable results.
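A sketch of the direct, closed-loop style of correction: from a capture of a frame that should be uniform, derive a per-pixel attenuation map. Geometric registration between camera and projector is assumed already done, and the functions below are our own illustrative formulation, not the paper's method.

```python
import numpy as np

def build_correction(captured):
    """From a webcam capture of an ideally uniform frame, derive a per-pixel,
    per-channel attenuation that flattens the display to its dimmest region
    (so no pixel ever needs a gain above 1)."""
    captured = captured.astype(np.float64)
    floor = captured.reshape(-1, captured.shape[-1]).min(axis=0)
    gain = floor / np.maximum(captured, 1e-6)
    return gain                                  # values in (0, 1]

def apply_correction(image, gain):
    """Apply the attenuation map to an 8-bit image before display."""
    return np.clip(image.astype(np.float64) * gain, 0, 255).astype(np.uint8)
```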
The TDS project is concerned with high-resolution spectroscopy of spherical top molecules. These molecules are known for the complexity of their spectra as well as for their specific role in advanced fundamental and applied research in molecular physics and quantum chemistry. The prototype of the TDS computer package, which is the concrete result of a collaboration supported by CNRS and the Russian Academy of Sciences, was presented at the Dijon Colloquium in September 1991. A brief documentation of this updated system, runnable on IBM PC and compatibles, is presented here in the form of seven questions and answers, illustrated by selected screenshots. The electronic publication of an operating package (including sample data) via Spectrochimica Acta Electronica is envisaged.