The term pixel, for picture element, was first published in two different SPIE Proceedings in 1965, in articles by Fred C. Billingsley of Caltech's Jet Propulsion Laboratory. The alternative pel was published by William F. Schreiber of MIT in the Proceedings of the IEEE in 1967. Both pixel and pel were propagated within the image processing and video coding field for more than a decade before they appeared in textbooks in the late 1970s. Subsequently, pixel has become ubiquitous in the fields of computer graphics, displays, printers, scanners, cameras, and related technologies, with a variety of sometimes conflicting meanings.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper describes the design and performance of two new high-resolution full-frame architecture CCD imaging devices for use in professional color, digital still-imaging applications. These devices are made using 6.8 μm pixels and contain a dual-split HCCD register with two outputs to increase frame rate. The KODAK KAF-31600 Image Sensor (31 Mp) is designed with microlenses to maximize sensitivity, whereas the KODAK KAF-39000 Image Sensor (39 Mp) is designed without microlenses to maximize incident light-angle response. Of particular interest is the implementation of an under-the-field oxide (UFOX) lateral overflow drain (LOD) and thin light shield process technologies. The new UFOX LOD structure forms the LOD under the thick field oxide, which eliminates a breakdown condition and allows much higher LOD doping levels to be used. The net result is that the LOD may be scaled to smaller dimensions, thereby enabling larger charge capacities without compromising blooming control. The thin light shield process utilizes only the TiW portion of the TiW/Al metal bilayer to form the pixel aperture. This reduces the overall stack height, which helps improve angle response (for pixels using microlenses) or critical crosstalk angles (for pixels without microlenses).
A new architecture found in the KODAK KAC-3100 CMOS Image Sensor has been created to dramatically improve CMOS image-sensor performance in mobile applications. The method of operation and its implementation are explained, and the effect of the improved performance parameters on image quality is discussed. The benefits of the new architecture are discussed in relation to competitive CMOS technologies used in today's most demanding mobile imaging applications.
Precise simulation of digital camera architectures requires an accurate description of how the radiance image is transformed by optics and sampled by the image sensor array. Both for diffraction-limited imaging and for all practical lenses, the width of the optical point-spread function differs at each wavelength. These differences are relatively small compared to coarse pixel sizes (6-8 μm). But as pixel size decreases, to, say, 1.5-3 μm, wavelength-dependent point-spread functions have a significant impact on the sensor response. We provide a theoretical treatment of how the interaction of spatial and wavelength properties influences the response of high-resolution color imagers. We then describe a model of these factors and an experimental evaluation of the model's computational accuracy.
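The scale of the wavelength dependence can be sketched with the diffraction-limited Airy-disk diameter, 2 * 1.22 * lambda * N; the f/2.8 aperture below is an assumed illustrative value, not one from the paper.

```python
# Diffraction-limited Airy-disk diameter across the visible band.
# The f/2.8 aperture is an assumed illustrative value.

def airy_diameter_um(wavelength_nm, f_number):
    """First-zero diameter of the Airy pattern: 2 * 1.22 * lambda * N."""
    return 2 * 1.22 * (wavelength_nm * 1e-3) * f_number  # micrometres

f_number = 2.8
for name, wl in [("blue", 450), ("green", 550), ("red", 650)]:
    print(f"{name:5s} ({wl} nm): Airy diameter = "
          f"{airy_diameter_um(wl, f_number):.2f} um")
```

Against a coarse 7 μm pixel, the roughly 1.4 μm spread between the blue and red spot diameters is minor; for a 1.5-3 μm pixel, the spot spans several pixels and its wavelength dependence directly shapes the sampled color signal.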
As a single-sensor imaging scheme, the direct color-imaging approach is considered promising for acquiring color data with high spatial resolution. The sensor has three photo-sensing layers stacked along its depth direction. Although each pixel thus has three color channels, their spectral sensitivities overlap with each other to a considerable extent, so the direct color-imaging approach suffers from poor color separation. To cope with this problem, this paper presents a hybrid approach between direct color imaging and the color-filter-array approach. Our hybrid approach uses a direct color-imaging sensor with three photo-sensing layers and places green and magenta color filters on the pixel surfaces in a checkered pattern. The checkered green-magenta color filter array improves color separation, but the sensed reddish and bluish color channels still have spectra that differ somewhat from those of the red and blue primary channels. In addition, the sensed green channel and the sensed reddish/bluish channels are sub-sampled in a ratio of 2 to 1 according to the checkered pattern. To recover RGB primary color images at full spatial resolution, both a color transformation of the reddish and bluish channels and an interpolation of the three sensed channels are needed. This paper presents a method for the color transformation and a method for the interpolation. Our methods achieve higher spatial resolution than the pure color-filter-array approach and at the same time improve color separation to some extent.
A novel heterogeneity-projection hard-decision adaptive interpolation (HPHD-AI) algorithm is proposed in this paper for color reproduction from Bayer mosaic images. The proposed algorithm aims to estimate the optimal interpolation direction and perform hard-decision interpolation, in which the decision is made before interpolation. To do so, a new heterogeneity-projection scheme based on spectral-spatial correlation is proposed to decide the best interpolation direction directly from the original mosaic image. Exploiting this scheme, a hard-decision rule can easily be designed to perform the interpolation. We have compared this technique with three recently proposed demosaicing techniques (Lu's, Gunturk's, and Li's methods) on twenty-five natural images from the Kodak PhotoCD. The experimental results show that HPHD-AI outperforms all of them in both PSNR and S-CIELab ΔE*ab measures.
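The hard-decision idea (decide the interpolation direction first, then interpolate along it) can be illustrated for the green channel of a Bayer mosaic. The gradient-based decision rule below is a generic stand-in, not the paper's heterogeneity-projection scheme:

```python
import numpy as np

def demosaic_green_hard_decision(cfa):
    """Hard-decision interpolation of green on a GRBG Bayer mosaic:
    at each non-green site, compare horizontal and vertical gradients
    and average along the smoother direction only. Border pixels are
    left unfilled in this sketch."""
    h, w = cfa.shape
    g = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            if (y + x) % 2 == 0:              # GRBG: green where y+x is even
                g[y, x] = cfa[y, x]
            elif 1 <= y < h - 1 and 1 <= x < w - 1:
                dh = abs(cfa[y, x - 1] - cfa[y, x + 1])
                dv = abs(cfa[y - 1, x] - cfa[y + 1, x])
                if dh <= dv:                  # decide the direction first...
                    g[y, x] = (cfa[y, x - 1] + cfa[y, x + 1]) / 2
                else:                         # ...then interpolate along it
                    g[y, x] = (cfa[y - 1, x] + cfa[y + 1, x]) / 2
    return g
```

On a vertical-stripe pattern, the rule correctly interpolates along the stripes rather than across them.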
A two-step color demosaicing algorithm for Bayer-pattern mosaic images is presented. Missing primary colors are first estimated by an asymmetric average interpolation, and the sharpness of the initial estimate is then improved by an iterative procedure. The intensity variation along an edge is not always uniform in one direction and its opposite with respect to the target pixel to be interpolated. Spatially asymmetric averaging along an edge is hence introduced in this study, where the side with less intensity variation is assumed to be more significant for stable restoration of details. We also restrict ourselves to short-kernel filters for sharpness recovery. Color demosaicing involves spatially adaptive filtering, and both the optical system used for image acquisition and the color filter array (CFA) sampling are subject to the spatio-temporal aperture effect; some blurring of the restored image is therefore unavoidable. To overcome these difficulties and restore a sharp image, an iterative procedure is introduced. Experimental results show favorable performance in terms of objective measures, such as PSNR and CIELAB color difference, and subjective visual appearance, especially in sharpness recovery.
We previously presented a demosaicking method that simultaneously removes image blurs caused by the optical low-pass filter used in a digital color camera with the Bayer RGB color filter array. Our prototypal sharpening-demosaicking method restored only spatial frequency components below the Nyquist frequency of the mosaicking pattern, and it often produced ringing artifacts near color edges. To overcome this difficulty, we later introduced super-resolution into the prototypal method. We formulated the recovery problem in the DFT domain and then introduced super-resolution by total-variation (TV) image regularization into the sharpening-demosaicking approach. The TV-based super-resolution effectively demosaics sharp color images without producing ringing artifacts, preserving image structures in which intensity values are almost constant along edges. However, TV regularization acts as smoothing and tends to suppress small intensity variations excessively; the TV-based super-resolution sharpening-demosaicking approach therefore tends to flatten detail in textured image regions. To remedy this drawback, in this paper we introduce a spatially adaptive technique that controls the TV regularization according to the saliency of color edges around each pixel.
In this paper, we investigate the potential application of multispectral filter array (MSFA) techniques to multispectral imaging, motivated by their low cost, exact registration, and strong robustness. In both human and many animal visual systems, different types of photoreceptors are organized into mosaic patterns. Industry has emulated this behavior to develop the so-called color filter array (CFA) for digital color cameras, in which only one color component is measured at each pixel and the sensed image is a mosaic of different color bands. We extend this idea to multispectral imaging by developing generic mosaicking and demosaicking algorithms. The binary-tree-driven MSFA design process guarantees that the pixel distributions of the different spectral bands are uniform and highly correlated. These spatial features facilitate the design of a generic demosaicking algorithm based on the same binary tree, which considers three interrelated issues: band selection, pixel selection, and interpolation. We evaluate the reconstructed images from two aspects: reconstruction quality and target classification. The experimental results demonstrate that the mosaicking and demosaicking process preserves image quality effectively, which further supports the MSFA technique as a feasible solution for multispectral cameras.
We previously showed the necessity of utilizing dynamic methods to select focus window for passive autofocus in digital imaging systems. One possibility is to track the photographer's pupil through a modified viewfinder so that the region of interest in a target image can be determined, as previously described. Yet this assumes that a user is on site and he/she looks through the viewfinder, which is less and less practiced as a result of the availability of liquid crystal displays (LCD) on most consumer digital imaging systems. An alternative is to use pattern recognition to select focus windows when the imaging targets are known in advance and can be extracted from their background. In this paper, one of such cases, where the imaging targets are humans, is discussed in detail. The theoretical basis for dynamic focus window selection is briefly reviewed. And an example is given to demonstrate the effects of different focus windows on the imaging results. Then the focus window selecting technique using a statistical model of human skin colors is described in detail. The incoming target image in RGB color space is transformed into 2-dimension (r, g) space. Each pixel is binarized according to the relationship between its (r, g) value and the skin color distribution. Thus skin regions in the image are extracted. Morphological operations are then applied to the resultant binary image to reduce the number and irregularity of the skin regions. A rectangle can be fitted to the extracted skin region and used as the focus window. Experimental results are given to demonstrate the advantages of the proposed method.
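The chromaticity-based skin extraction and window fitting can be sketched as follows; the rectangular (r, g) bounds below stand in for the paper's statistical skin-color model and are illustrative only:

```python
import numpy as np

def skin_mask(rgb, r_range=(0.35, 0.55), g_range=(0.25, 0.37)):
    """Binarize pixels by normalized chromaticity r = R/(R+G+B) and
    g = G/(R+G+B). The rectangular (r, g) bounds are illustrative
    stand-ins for a statistical skin-color model."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1) + 1e-9                  # avoid division by zero
    r = rgb[..., 0] / s
    g = rgb[..., 1] / s
    return ((r_range[0] <= r) & (r <= r_range[1]) &
            (g_range[0] <= g) & (g <= g_range[1]))

def focus_window(mask):
    """Fit an axis-aligned rectangle to the detected skin region."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                              # no skin found
    return xs.min(), ys.min(), xs.max(), ys.max()  # left, top, right, bottom
```

The morphological cleanup (opening/closing of the binary mask) would go between these two steps; it is omitted here for brevity.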
This paper presents a combinational system that performs Auto Focus (AF) and Auto Exposure (AE) at the same time in a very efficient manner. In the first step, the system uses a Difference-of-Gaussian (DOG) filter to measure the image's contrast and sharpness simultaneously. A fuzzy-logic-based scheme is then proposed for the adjustment of focus and exposure. The system can be easily implemented with low hardware complexity.
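A minimal sketch of a DOG-based focus measure, assuming illustrative sigma values (the abstract does not specify the filter parameters):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalised 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def dog_response(signal, sigma1=1.0, sigma2=2.0, radius=4):
    """Summed absolute Difference-of-Gaussian response of a 1-D signal.
    The DOG is a band-pass measure: its magnitude grows with local
    contrast and sharpness, which is what makes it usable as a joint
    AF/AE criterion. The sigmas are assumed illustrative values."""
    g1 = np.convolve(signal, gaussian_kernel(sigma1, radius), mode="same")
    g2 = np.convolve(signal, gaussian_kernel(sigma2, radius), mode="same")
    return np.abs(g1 - g2).sum()

# A sharp step edge scores higher than the same edge after defocus blur
sharp = np.array([0.0] * 16 + [1.0] * 16)
blurred = np.convolve(sharp, gaussian_kernel(2.0, 4), mode="same")
print(dog_response(sharp), ">", dog_response(blurred))
```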
Illuminant estimation plays an important role in many applications, such as digital still cameras and mobile phones, where the final image quality can be heavily degraded by poor compensation of ambient illumination effects. In this paper we present a device-independent algorithm for illuminant estimation and compensation directly in the color filter array (CFA) domain of digital still cameras. The proposed algorithm takes into account both chromaticity and intensity information of the image data and performs illuminant compensation by a diagonal transform. It combines a spatial segmentation process with empirically designed weighting functions aimed at selecting the scene objects that carry the most information for estimating the light chromaticity. The algorithm was designed using an experimental framework developed by the authors and evaluated on a database of real scene images acquired under different, carefully controlled illuminant conditions. The results show that a combined multi-domain pixel analysis improves performance compared to single-domain pixel analysis.
The chromaticity of an image reconstructed from a Bayer-pattern image sensor depends heavily on the scene illuminant and requires color correction to match human visual perception. This paper presents a method to white-balance an image that is computationally inexpensive for hardware implementation, has reasonable accuracy without the need to store the full image, and is aligned with current technical developments in the field. The proposed method uses a 2D chromaticity diagram of the image to extract information about the scene reflectance, assuming that the presence of low-saturated colors in the scene increases the probability of retrieving accurate scene color information.
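A minimal sketch of the low-saturation assumption combined with a diagonal correction; the chromaticity-based pixel selection below is a simplified stand-in for the paper's 2D chromaticity-diagram analysis, and the threshold is an assumed value:

```python
import numpy as np

def white_balance_low_sat(rgb, sat_thresh=0.15):
    """Estimate the illuminant from low-saturated (near-neutral) pixels
    and compensate it with a diagonal (von Kries) transform. The
    saturation threshold is an assumed illustrative parameter."""
    img = rgb.astype(float)
    s = img.sum(axis=-1, keepdims=True) + 1e-9
    chroma = img / s                               # (r, g, b), sums to 1
    sat = np.abs(chroma - 1.0 / 3.0).sum(axis=-1)  # distance from neutral
    mask = sat < sat_thresh
    if not mask.any():
        mask = np.ones(img.shape[:2], bool)        # fall back to all pixels
    illum = img[mask].mean(axis=0)                 # estimated illuminant RGB
    gains = illum.mean() / illum                   # diagonal gains
    return np.clip(img * gains, 0, 255)
```

A uniform gray scene under a reddish cast is mapped back to neutral by the per-channel gains.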
This paper proposes a method of filtering a digital sensor image that efficiently reduces noise and improves sharpness. To reduce noise in an image captured by a conventional image sensor, the proposed noise reduction filter selectively outputs either the recursive temporal or the spatial noise-filtering result at each pixel. With this method, image detail is well preserved, and the artifacts that temporal filtering can generate along moving-object boundaries in image sequences are prevented. Since noise filtering inevitably deteriorates sharpness, an adaptive noise-suppressed sharpening filter is also proposed. It generates its filter mask adaptively according to the pixel-similarity information within the mask, and achieves consistent image quality through an easily controllable gain-control algorithm without boosting noise in smooth regions.
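The selective temporal/spatial idea can be sketched as a per-pixel switch driven by the frame difference; the thresholds and the box-filter stand-in below are illustrative assumptions, not the paper's actual filters:

```python
import numpy as np

def temporal_spatial_denoise(prev_out, cur, alpha=0.5, motion_thresh=12):
    """Per-pixel selection between recursive temporal and spatial noise
    filtering: where the frame difference is small, blend with the
    previous filtered frame; where it is large (likely motion), fall
    back to a 3x3 spatial average to avoid ghosting along moving-object
    boundaries. Thresholds and filters are illustrative stand-ins."""
    cur = cur.astype(float)
    temporal = alpha * prev_out + (1 - alpha) * cur
    # 3x3 box average as a stand-in spatial filter
    pad = np.pad(cur, 1, mode="edge")
    spatial = sum(pad[dy:dy + cur.shape[0], dx:dx + cur.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    moving = np.abs(cur - prev_out) > motion_thresh
    return np.where(moving, spatial, temporal)
```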
The classical bilateral filter smoothes images and preserves edges using a nonlinear combination of surrounding pixels. Our modified bilateral filter advances this approach by sharpening edges as well. This method uses geometrical and photometric distance to select pixels for combined low and high pass filtering. It also uses a simple window filter to reduce computational complexity.
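A minimal implementation of the classical bilateral filter that the paper builds on (the sharpening extension itself is not reproduced here):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_d=2.0, sigma_r=20.0):
    """Classical bilateral filter: each output pixel is a weighted
    average of its neighbours, weighted by both geometric distance
    (sigma_d) and photometric distance (sigma_r). Flat regions are
    smoothed while edges are preserved."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1].astype(float)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            wd = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_d ** 2))
            wr = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = wd * wr
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

On a step edge, neighbours across the edge receive near-zero photometric weight, so the edge survives the smoothing; the paper's modification adds a high-pass component on top of this combination to sharpen it further.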
Digital photograph stitching blends multiple images to form a single one with a wide field of view. Artifacts may arise, often due to photometric inconsistency and geometric misalignment among the images. Several existing techniques tackle this problem with methods such as pixel selection or pixel blending, which match intensity, frequency, and gradient among the input images and adjust them to find the optimal match. However, our experience indicates that these methods have not yet fully incorporated the mathematical properties of the photometric inconsistency. In this paper, we first introduce a general mathematical model describing the properties and effects of photometric inconsistency. This model supports our claim that matching in the intensity and even the gradient domain is insufficient. Our method therefore adds the extra requirement of an optimal matching of curvature. Simulations are carried out using our method with input images suffering from different kinds of photometric inconsistency, under both aligned and misaligned conditions. We evaluate the results using both objective and subjective criteria, and find that our method indeed shows an improvement for certain kinds of photometric inconsistency.
The European collaborative research project IST-2000-28008-VITRA ('Veridical Imaging of Transmissive and Reflective Artefacts') developed an innovative system for high-resolution digital image acquisition for conservation and heritage applications. Using a robotic platform to carry both camera and lighting, it can capture colorimetric images up to 15 metres above floor level, thus eliminating the need for scaffold towers. Potential applications include wall-paintings, tapestries, friezes and stained glass windows in historic buildings such as churches, cathedrals, palaces and monuments. Evaluation of the system was conducted at four sites in Germany and the UK. In the course of the project a number of significant technical innovations were made, including a new panoramic image viewer for the Internet.
Source camera identification is the process of discerning which camera has been used to capture a particular image. In this paper, we consider the more fundamental problem of trying to classify images captured by a limited number of camera models. Inspired by previous work that uses sensor imperfection, we propose to use intrinsic lens aberration as a feature in the classification. In particular, we focus on lens radial distortion as the primary distinctive feature. For each image under investigation, parameters from pixel intensities and aberration measurements are obtained. We then employ a classifier to identify the source camera of an image. Simulations are carried out to evaluate the success rate of our method. The results show that this is a viable procedure for source camera identification with high accuracy. Compared with procedures using only image intensities, our approach improves the accuracy from 87% to 91%.
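The radial-distortion feature can be illustrated with the common polynomial distortion model; the coefficients below are hypothetical values for two camera models, not measurements from the paper:

```python
import numpy as np

def radial_distort(xy, k1, k2=0.0):
    """Apply the polynomial radial distortion model often used for lens
    characterisation: r_d = r * (1 + k1*r^2 + k2*r^4), in normalised
    image coordinates centred on the optical axis. Fitted (k1, k2)
    coefficients serve as the kind of per-camera-model fingerprint a
    classifier can exploit; the values below are hypothetical."""
    xy = np.asarray(xy, dtype=float)
    r2 = (xy ** 2).sum(axis=-1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2 ** 2)

# Two hypothetical camera models with different barrel distortion
p = np.array([[0.5, 0.5]])
print(radial_distort(p, k1=-0.10))   # hypothetical camera A
print(radial_distort(p, k1=-0.25))   # hypothetical camera B
```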
The quality of digital cameras has improved remarkably over the last 10 years, and so have the methods used to evaluate them. By the time the first consumer digital cameras were released in 1996, the first ISO standards on test procedures were already under way. At that time, quality was mainly evaluated by visual analysis of images of test charts as well as natural scenes. The ISO standards led the way to more objective and reproducible methods for measuring characteristics such as dynamic range, speed, resolution, and noise. This paper presents an overview of camera characteristics, existing evaluation methods, and their development over recent years. It summarizes the basic requirements for reliable test methods and answers the question of whether it is possible to test cameras without taking pictures of natural scenes under specific lighting conditions. In addition to the evaluation methods, the paper discusses past problems of digital cameras, such as power consumption and shutter lag, and identifies remaining deficits to be solved in the future, such as optimized exposure and gamma control, increasing sensitivity without increasing noise, and further reduction of shutter lag.
The Imatest program was developed to enable photographers and imaging-system developers to conveniently measure the key image quality factors of cameras, lenses, and printers. For cameras and lenses these factors include sharpness (measured by MTF), noise, dynamic range, tonal response (OECF curve), color and exposure accuracy, lens distortion, light falloff (vignetting), and lateral chromatic aberration. For printers they include tonal response, Dmax (the maximum reproducible black density), color response, and color gamut. Although some measurements follow ISO standards, the emphasis is on simple, convenient, and affordable measurement. We begin with an overview of Imatest, briefly describing each module, then focus on the issue of comparing cameras with different degrees of sharpening. Oversharpening, which is common in compact digital cameras, results in "halos" near edges that make small prints look good but can be objectionable at large magnifications; it also exaggerates MTF measurements. Most digital single-lens reflex cameras (DSLRs) apply little sharpening, putting them at a disadvantage in MTF comparisons. Imatest uses an algorithm called standardized sharpening that facilitates comparisons between cameras by adding or removing sharpening to make edge overshoot relatively consistent. The present algorithm adjusts the sharpening amount so that MTF at 0.3 times the Nyquist frequency equals MTF at low spatial frequencies. Determining the optimum sharpening radius R can be challenging because of the large variety of camera edge responses. We discuss considerations in selecting R, and constraints on the sharpening amount that make it difficult to find a unique solution fitting all cameras: noisy compacts as well as low-noise digital SLRs.
Although it is well known that luminance resolution is most important, the ability to accurately render colored details, color textures, and colored fabrics cannot be overlooked. This includes the ability to accurately render single-pixel color details as well as avoiding color aliasing. All consumer digital cameras on the market today record in color, and the scenes people photograph are usually in color. Yet almost all resolution measurements made on color cameras use a black-and-white target. In this paper we present several methods for measuring and quantifying color resolution. The first method, detailed in a previous publication, uses a slanted-edge target of two colored surfaces in place of the standard black-and-white edge pattern. The second method employs the standard black-and-white targets recommended in the ISO standard but records them through colored filters, giving modulation between black and one particular color component; red, green, and blue color-separation filters are used in this study. The third method, conducted at Stiftung Warentest, an independent consumer organization in Germany, uses a white-light interferometer to generate fringe-pattern targets of varying color and spatial frequency.
When the size of a CMOS imaging sensor array is fixed, the only way to increase sampling density and spatial resolution is to reduce pixel size. But reducing pixel size reduces light sensitivity; hence, under these constraints, there is a tradeoff between spatial resolution and light sensitivity. Because this tradeoff involves the interaction of many different system components, we used a full system simulation to characterize performance. This paper describes system simulations that predict the output of imaging sensors with the same die size but different pixel sizes, and presents metrics that quantify the spatial resolution and light sensitivity of these sensors.
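The core tradeoff can be sketched with shot-noise-limited arithmetic; the photon density below is an assumed illustrative number:

```python
# For a fixed die size, shrinking the pixel raises resolution but cuts
# the light collected per pixel in proportion to pixel area; in the
# shot-noise limit SNR falls as the square root of the signal.
# The photon density is an assumed illustrative number.

photons_per_um2 = 1000            # assumed scene/exposure photon density

for pitch_um in (8.0, 4.0, 2.0):
    signal = photons_per_um2 * pitch_um ** 2   # photons per pixel
    snr = signal ** 0.5                        # shot-noise-limited SNR
    print(f"{pitch_um} um pixel: {signal:8.0f} photons, SNR = {snr:6.1f}")
```

Halving the pitch quarters the collected photons and halves the shot-noise-limited SNR, which is why a full system simulation is needed to locate the best balance.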
Many conventional image processing algorithms, such as noise filtering, sharpening, and deblurring, assume additive white Gaussian noise (AWGN) with constant standard deviation throughout the image. However, this noise model does not hold for images captured by typical imaging devices such as digital cameras, scanners, and camera phones. The raw data from the image sensor goes through several processing steps, including demosaicing, color correction, gamma correction, and JPEG compression, so the noise characteristics of the final JPEG image deviate significantly from the widely used AWGN model. When image processing algorithms are applied to such digital photographs, the inaccurate noise model may therefore prevent them from delivering optimal image quality. In this paper, we propose a noise model that better fits images captured by typical imaging devices and describe a simple method to extract the necessary parameters directly from the images, without prior knowledge of the imaging-pipeline algorithms implemented in the devices. We show experimental results for the noise parameters extracted from raw and processed digital images.
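The idea of extracting signal-dependent noise parameters can be sketched on synthetic data under a Poissonian-Gaussian model (variance linear in intensity); the model form and parameters are illustrative, since the paper's method works on real photographs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize flat patches at several intensities under the model
# sigma^2 = a*I + b, measure mean/variance per patch, and fit (a, b)
# by least squares. a_true and b_true are illustrative values.
a_true, b_true = 0.5, 4.0
means, variances = [], []
for level in (20, 60, 120, 200):
    sigma = np.sqrt(a_true * level + b_true)
    patch = level + rng.normal(0, sigma, size=10000)
    means.append(patch.mean())
    variances.append(patch.var())

# Linear fit of variance against mean recovers (a, b)
A = np.vstack([means, np.ones(len(means))]).T
(a_est, b_est), *_ = np.linalg.lstsq(A, np.array(variances), rcond=None)
print(f"a = {a_est:.2f} (true {a_true}), b = {b_est:.2f} (true {b_true})")
```

On real images, the means and variances would instead come from automatically detected flat regions of the photograph.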
Manufacturers of mobile phones are seeking a default procedure for testing the quality of mobile phone cameras. This paper presents such a procedure, based as far as possible on ISO standards and supplemented with additional useful information obtained by easy-to-handle methods. In addition to this paper, which summarizes the measured values with a brief description of the methods used to determine them, a white paper covering the complete procedure will be available.