In this paper we discuss and quantitatively evaluate the mapping of raw sensor chromaticities, i.e., r = R/(R+G+B) and b = B/(R+G+B), into the CIE 1931 xy chromaticity space, under the constraint that only training chromaticities are used, obtained with a color checker under a given illumination. The region
near the Planckian locus is considered to be most relevant and a least-squares weighting scheme is proposed to
minimize the residuals in this region. Furthermore, the Planckian and daylight loci are approximated in the rb
raw sensor chromaticity space using color checker chromaticities at three illuminations, those commonly available
in light-booths. The effect of daylight emulation compared to the standard daylight illumination is evaluated.
In another part of this paper the mapping of rb chromaticities to correlated color temperature is discussed and
evaluated. The proposed method is based on a weighted least-squares fit of a 2nd-order 2D polynomial and
outperforms two other estimation methods. We present a comprehensive set of simulation results with real
measurements of reflectance, sensitivity, and emission spectra.
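The CCT estimation step can be sketched as a weighted least-squares fit of a 2nd-order 2D polynomial in (r, b), solved via the normal equations. This is a minimal illustration of the technique named in the abstract; the basis ordering, weighting, and all sample data below are our assumptions, not the paper's calibration data.

```python
# Weighted least-squares fit of a 2nd-order 2D polynomial mapping raw
# chromaticities (r, b) to correlated color temperature (CCT).
# Training samples and weights are hypothetical placeholders.

def design_row(r, b):
    # 2nd-order 2D polynomial basis: 1, r, b, r^2, r*b, b^2
    return [1.0, r, b, r * r, r * b, b * b]

def solve(A, y):
    # Gaussian elimination with partial pivoting for small dense systems.
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        for k in range(col + 1, n):
            f = M[k][col] / M[col][col]
            for c in range(col, n + 1):
                M[k][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_weighted_poly(samples, weights):
    # Normal equations of weighted least squares: (X^T W X) c = X^T W y.
    # Large weights near the Planckian locus would shrink residuals there.
    n = 6
    A = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    for (r, b, cct), w in zip(samples, weights):
        phi = design_row(r, b)
        for i in range(n):
            rhs[i] += w * phi[i] * cct
            for j in range(n):
                A[i][j] += w * phi[i] * phi[j]
    return solve(A, rhs)

def predict_cct(coeffs, r, b):
    return sum(c * p for c, p in zip(coeffs, design_row(r, b)))
```

Weighting samples near the Planckian locus more strongly, as the abstract proposes, only changes the `weights` vector; the solver is unchanged.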
Image sensor arrays may have defect pixels, either originating from manufacturing or developing over the lifetime of the image sensor array. Continuous defect pixel detection and correction performed during camera runtime is desirable. On-the-fly detection and correction is challenging since edges and high-frequency image content might be identified as defect pixel regions, and intact pixels may become corrupted during defect pixel replacement. We propose a table-based detection and correction method which gradually fills a non-volatile defect table during normal camera operation. In this work we model defect pixels and pixel clusters as stuck at fixed values, or at least confined to a narrow value range, while the local neighborhood of these pixels indicates normal behavior. The idea is to temporally observe the value ranges of small groups of pixels (e.g., 4x4 pixel blocks) and to classify them as defective depending on their variability with respect to their neighbor pixels. Our method is computationally efficient, requires no frame buffer, requires modest memory, and therefore
is appropriate to operate in line-buffer based image signal processing (ISP) systems. Our results indicate high
reliability in terms of detection rates and robustness against high-frequency image content. As part of the defect
pixel replacement system we also propose a simple and efficient defect pixel correction method based on the
mean of medians operating on the Bayer CFA image domain.
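A mean-of-medians replacement on the Bayer CFA can be sketched as follows. One plausible reading of the abstract is used here: same-color neighbors lie two pixels away in the CFA, the axis-aligned and diagonal neighbor groups each yield a median, and the replacement is the mean of those medians. This grouping is our assumption, not the paper's exact rule.

```python
# Hedged sketch of mean-of-medians defect pixel correction on a Bayer
# CFA image. Same-color neighbors of pixel (y, x) sit at distance 2.
from statistics import median

def correct_defect(img, y, x):
    """Replace pixel (y, x) using same-CFA-color neighbors at distance 2."""
    h, w = len(img), len(img[0])
    axis = [(0, -2), (0, 2), (-2, 0), (2, 0)]
    diag = [(-2, -2), (-2, 2), (2, -2), (2, 2)]
    def gather(offsets):
        # Collect in-bounds neighbor values for one direction group.
        return [img[y + dy][x + dx] for dy, dx in offsets
                if 0 <= y + dy < h and 0 <= x + dx < w]
    meds = [median(g) for g in (gather(axis), gather(diag)) if g]
    return sum(meds) / len(meds)
```

Because medians reject the occasional outlier in a group while the final mean smooths between groups, this class of filter tends to be robust near edges while staying cheap enough for a line-buffer ISP.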
A basic concern of computer graphics is the modeling and realistic representation of three-dimensional objects. In this paper we present our reconstruction framework, which determines a polygonal surface from a set of dense points such as those typically obtained from laser scanners. We deploy the concept of adaptive blobs to obtain a first volumetric representation of the object. In the next step we estimate a coarse surface using the marching cubes method. We propose a depth-first-search segmentation algorithm that traverses a graph representation of the obtained polygonal mesh in order to identify all connected components. A so-called supervised triangulation maps the coarse surfaces onto the dense point cloud. We optimize the mesh topology using edge exchange operations. For photo-realistic visualization of objects we finally synthesize optimal low-loss textures from available scene captures of different projections. We evaluate our framework on artificial data as well as real sensed data.
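The connected-component step can be sketched as an iterative depth-first search over the mesh graph, with vertices as nodes and triangle edges as graph edges. The data structures here are generic illustrations, not the paper's implementation.

```python
# Depth-first-search segmentation of a triangle mesh into connected
# components. Triangles are index triples into the vertex array.

def connected_components(num_vertices, triangles):
    # Build an adjacency list from the triangle edges.
    adj = [[] for _ in range(num_vertices)]
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            adj[u].append(v)
            adj[v].append(u)
    seen = [False] * num_vertices
    components = []
    for start in range(num_vertices):
        if seen[start]:
            continue
        # Iterative DFS avoids recursion limits on large meshes.
        stack, comp = [start], []
        seen[start] = True
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
        components.append(sorted(comp))
    return components
```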
The pixel densities of current CMOS sensors increase and bring new challenges for image sensor designers.
Today's sensor modules with miniature lenses often exhibit a considerable amount of color lens shading. This shading is spatially variant and can easily be identified by capturing a flat, textureless Lambertian surface and inspecting the light fall-off and hue change from the image center towards the borders. In this paper we discuss lens shading compensation using spatially dependent gains for each of the four color channels in the Bayer color filter array. We determine reference compensation functions in off-line calibration and efficiently parameterize
each function with a bilinear spline which we fit to the reference function using constrained least-squares and
Lagrangian conditions ensuring continuity between the piece-wise bilinear functions. For each spline function
we optimize a rectilinear grid on which the spline knots are aligned by minimizing the square errors between
reference and approximated compensation function. Our evaluations provide quantitative results with real image
data using three recent CMOS sensor modules.
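Evaluating such a piecewise-bilinear gain function at runtime can be sketched as below. The spline fitting (constrained least squares with continuity conditions) and knot-grid optimization are omitted; this only shows how a compensated pixel value would be computed from a rectilinear knot grid. All names are illustrative.

```python
# Piecewise-bilinear gain evaluation on a rectilinear knot grid, one
# grid per Bayer color channel. Continuity across cells is automatic
# because adjacent cells share knot values.
from bisect import bisect_right

def bilinear_gain(xs, ys, gains, x, y):
    """gains[j][i] is the gain at knot (xs[i], ys[j])."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    g00, g10 = gains[j][i], gains[j][i + 1]
    g01, g11 = gains[j + 1][i], gains[j + 1][i + 1]
    return ((1 - tx) * (1 - ty) * g00 + tx * (1 - ty) * g10
            + (1 - tx) * ty * g01 + tx * ty * g11)

def compensate(value, xs, ys, gains, x, y):
    # Shading compensation: multiply the raw value by the local gain.
    return value * bilinear_gain(xs, ys, gains, x, y)
```

Only the knot coordinates and knot gains need to be stored, which is what makes the spline parameterization attractive compared to a full per-pixel gain map.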
The dynamic range of digital image sensors is growing faster than the reproducible dynamic range of standard display and print devices. In an assessment, we evaluated eight real-time-capable, high-dynamic-range (HDR) tone-mapping operators, using HDR images restricted to 12 bits per channel, and mapping to 8-bit low-dynamic-range (LDR) output data. In our survey, 51 test persons rated the subjective quality of LDR images of 15 scenes. For each scene, the operators were tested against linear tone mapping. From that survey, we derived three operators that performed better than linear mapping and evaluated their impact on exposure when using tone-mapping in an imaging chain. To mathematically predict the subjective quality of tone-mapped images, we tested five figures of merit against the empirical results of the survey. We propose a quality measure derived from the Naka-Rushton equation. We evaluate the proposed metric against four other image quality metrics, with our novel measure matching the subjective results best.
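The Naka-Rushton equation models photoreceptor response as R = L/(L + s), with s a semi-saturation constant. The paper derives a quality *measure* from this equation; the small tone-mapping operator below, and the heuristic choice of s, are only our illustration of the underlying compressive curve, not the paper's metric.

```python
# Minimal Naka-Rushton-style luminance compression sketch.
import math

def naka_rushton(l, s):
    """Compress luminance l >= 0 into [0, 1)."""
    return l / (l + s)

def tone_map(luminances, s=None):
    if s is None:
        # Heuristic (our assumption): semi-saturate at the geometric
        # mean of the scene luminances.
        logs = [math.log(max(v, 1e-6)) for v in luminances]
        s = math.exp(sum(logs) / len(logs))
    return [naka_rushton(v, s) for v in luminances]
```

The curve is monotone and bounded, which is why a global operator of this shape can map 12-bit HDR luminances into an 8-bit LDR range without hard clipping.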
Digital video stabilization is a cost-effective way to reduce the effect of camera shake in handheld video cameras.
We propose several enhancements for video stabilization based on integral projection matching,1 which is a simple and efficient global motion estimation technique for translational motion. One-dimensional intensity projections along the horizontal and vertical axes provide a signature of the image. Global motion estimation seeks the shift that maximizes the similarity between the intensity projections of consecutive frames. The obtained shifts provide information about the global inter-frame motion. Based on the estimated global motion, an output frame of reduced size is determined using motion smoothing. We propose several enhancements
of prior works to improve the stabilization performance and to reduce computational complexity and memory
requirements. The main enhancement is a partitioning of the projection intensities to better cope with in-scene
motion. Logarithmic search is deployed to find the minimum matching error for selected partitions in two subsequent frames. Furthermore, we propose a novel motion smoothing approach we call center-attracted motion damping. We evaluate the performance of the enhancements under various imaging conditions using real video
sequences as well as synthetic video sequences with provided ground-truth motion. The stabilization accuracy is
sufficient under most imaging conditions so that the effect of camera shake is eliminated or significantly reduced
in the stabilized video.
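Integral projection matching can be sketched as follows: each frame is reduced to 1-D row and column intensity projections, and the translational shift between frames is the one minimizing the mean absolute difference of the overlapping projections. An exhaustive search is shown for clarity; the paper uses a logarithmic search over projection partitions.

```python
# Integral projection matching for translational global motion.

def projections(frame):
    # 1-D signatures: mean intensity per row and per column.
    h, w = len(frame), len(frame[0])
    rows = [sum(frame[y]) / w for y in range(h)]
    cols = [sum(frame[y][x] for y in range(h)) / h for x in range(w)]
    return rows, cols

def best_shift(p, q, max_shift):
    """Shift of q relative to p minimizing the mean absolute
    difference over the overlapping samples."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(p[i], q[i + s]) for i in range(len(p))
                 if 0 <= i + s < len(q)]
        err = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

def estimate_motion(prev, cur, max_shift=8):
    # Returns (vertical shift, horizontal shift) between the frames.
    pr, pc = projections(prev)
    cr, cc = projections(cur)
    return best_shift(pr, cr, max_shift), best_shift(pc, cc, max_shift)
```

Because only two 1-D signatures per frame are needed, no frame buffer is required, which is the property that makes the approach attractive for low-cost hardware.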
In many modern CMOS imagers employing pixel arrays the optical integration time is controlled by the method known as
rolling shutter. This integration technique, combined with fluorescent illuminators exhibiting an alternating light intensity,
causes spatial flicker in images varying through the sequence. This flicker can be avoided when the integration time of
the imager is adjusted to a multiple of the flicker period. Since the flicker frequency can vary with the local AC power frequency, a classification must be performed beforehand. This is done either by utilizing an additional illumination intensity detector or, in the case we focus on, by using image information only. In this paper we review state-of-the-art techniques of flicker detection and frequency classification, and propose two robust classification methods based on a clear mathematical model of the illumination flicker problem. Finally, we present an approach for compensating flicker in single images suffering from these artifacts by inverting the flicker model function. To this end, the flicker phase, amplitude, and frequency are estimated adaptively. As a consequence, the shutter width is no longer limited to a multiple of the flicker period. We present our simulation results with synthesized image series as well as with real captured sequences under different illumination frequencies; our approaches classify robustly in most imaging situations.
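Model inversion can be sketched with a sinusoidal flicker model: under a rolling shutter, row y is scaled by roughly g(y) = 1 + a*sin(2*pi*y/p + phi), so dividing each row by the estimated gain removes the banding. The model form is the standard sinusoidal approximation; the adaptive estimation of a, p, and phi (the core of the paper's approach) is not shown.

```python
# Single-image flicker compensation by inverting a sinusoidal row-gain
# model. Parameters: amplitude a, period p in rows, phase phi.
import math

def flicker_gain(y, a, p, phi):
    # Relative exposure of row y under the flickering illuminant.
    return 1.0 + a * math.sin(2.0 * math.pi * y / p + phi)

def compensate_image(img, a, p, phi):
    # Divide every pixel in row y by that row's modeled gain.
    return [[v / flicker_gain(y, a, p, phi) for v in row]
            for y, row in enumerate(img)]
```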