This PDF file contains the front matter associated with SPIE Proceedings Volume 8660, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Modern computer vision applications increasingly take advantage of multichannel cameras (e.g., RGB cameras) to obtain not only gray values but also per-pixel color information. The currently most common approach to multichannel camera calibration is the straightforward application of methods developed for single-channel cameras. These conventional calibration methods can perform quite poorly, producing color fringes and displaced features, especially for high-resolution multichannel cameras. To suppress these undesired effects, a novel multichannel camera calibration approach is introduced and evaluated in this paper. This approach considers each channel individually and involves different transversal chromatic aberration (TCA) models. Compared with the standard approach, the proposed approach provides more accurate calibration results in most cases and should subsequently lead to more reliable estimates for computer vision tasks. Moreover, besides the existing TCA model, further TCA models and correction methods are introduced that are superior to the existing ones. Since the proposed approach builds on the most popular calibration routine, only minimal modifications to existing implementations are required to obtain the improved calibration quality.
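As a rough illustration of the per-channel idea, the sketch below runs OpenCV's standard chessboard calibration on each color plane separately; the chessboard geometry and file names are hypothetical, and the paper's specific TCA models are not reproduced.

```python
# Sketch: calibrate each color channel independently instead of calibrating
# a single gray image. Assumes OpenCV; pattern size and files are placeholders.
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners of a hypothetical chessboard target
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

def calibrate_channel(paths, channel):
    """Standard single-channel calibration applied to one color plane."""
    obj_pts, img_pts, size = [], [], None
    for path in paths:
        plane = cv2.imread(path)[:, :, channel]  # B=0, G=1, R=2 in OpenCV
        size = plane.shape[::-1]
        found, corners = cv2.findChessboardCorners(plane, PATTERN)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    return cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

# One intrinsics/distortion set per channel; the differences between the
# per-channel models capture transversal chromatic aberration.
views = ["view%02d.png" % i for i in range(10)]
per_channel = {c: calibrate_channel(views, c) for c in (0, 1, 2)}
```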
Spectral reflectance is an inherent property of objects that is useful for many computer vision tasks. The spectral
reflectance of a scene can be described as a spatio-spectral (SS) datacube, in which each value represents the
reflectance at a spatial location and a wavelength. In this paper, we propose a novel method that reconstructs
the SS datacube from raw data obtained by an image sensor equipped with a multispectral filter array. In our
proposed method, we describe the SS datacube as a linear combination of spatially adaptive SS basis vectors.
In a previous method, spatially invariant SS basis vectors are used for describing the SS datacube. In contrast,
we adaptively generate the SS basis vectors for each spatial location. Then, we reconstruct the SS datacube
by estimating the linear coefficients of the spatially adaptive SS basis vectors from the raw data. Experimental results demonstrate that our proposed method reconstructs the SS datacube more accurately than the method using spatially invariant SS basis vectors.
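A toy sketch of the reconstruction step, assuming the filter sensitivities F and the per-location basis B are known; all matrices here are random stand-ins, not the paper's data.

```python
# Toy sketch: reconstruct a spectrum as a linear combination of spatially
# adaptive basis vectors. F (filters x wavelengths) holds the MSFA filter
# sensitivities contributing at one spatial location, B (wavelengths x K)
# the basis generated for that location, m the raw sensor measurements.
import numpy as np

rng = np.random.default_rng(0)
W, K, M = 31, 5, 8          # wavelength samples, basis size, measurements
F = rng.random((M, W))      # filter responses (assumed known)
B = rng.random((W, K))      # spatially adaptive basis at this pixel
s_true = B @ rng.random(K)  # ground-truth spectrum in the basis span
m = F @ s_true              # raw MSFA measurements

# Estimate the linear coefficients by least squares, then reconstruct.
c, *_ = np.linalg.lstsq(F @ B, m, rcond=None)
s_hat = B @ c
print(np.max(np.abs(s_hat - s_true)))  # ~0 when m determines c uniquely
```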
Focus stacking and high dynamic range (HDR) imaging are two paradigms of computational photography. Focus
stacking aims to produce an image with greater depth of field (DOF) from a set of images taken with different focus
distances, whereas HDR imaging aims to produce an image with higher dynamic range from a set of images taken
with different exposure settings. In this paper, we present an algorithm which combines focus stacking and HDR
imaging in order to produce an image with both higher dynamic range and greater DOF than any of the input
images. The proposed algorithm includes two main parts: (i) joint photometric and geometric registration and (ii)
joint focus stacking and HDR image creation. In the first part, images are first photometrically registered using an
algorithm that is insensitive to small geometric variations, and then geometrically registered using an optical flow
algorithm. In the second part, images are merged through weighted averaging, where the weights depend on both
local sharpness and exposure information. We provide experimental results with real data to illustrate the algorithm.
The algorithm has also been implemented on a smartphone running the Android operating system.
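A minimal sketch of the merging part, assuming per-pixel weights built from Laplacian sharpness and a Gaussian well-exposedness term; the exact weighting of the paper is not reproduced.

```python
# Sketch of the merge: a weighted average where each pixel's weight combines
# local sharpness (Laplacian energy) and well-exposedness. Parameters are
# hypothetical illustrations.
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def merge(images):          # images: list of float luminance maps in [0, 1]
    acc = np.zeros_like(images[0])
    wsum = np.zeros_like(images[0])
    for im in images:
        sharp = gaussian_filter(np.abs(laplace(im)), 2.0)    # local sharpness
        exposed = np.exp(-((im - 0.5) ** 2) / (2 * 0.2**2))  # well-exposedness
        w = sharp * exposed + 1e-8
        acc += w * im
        wsum += w
    return acc / wsum       # pixels sharp AND well exposed dominate
```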
The FUJIFILM X10 is a high-end enthusiast compact digital camera using an unusual sensor design.
Unfortunately, upon its Fall 2011 release, the camera quickly became infamous for the uniquely disturbing "white
orbs" that often appeared in areas where the sensor was saturated. FUJIFILM's first attempt at a fix was firmware
released on February 25, 2012, but it had little effect. In April 2012, a sensor replacement essentially solved the
problem.
This paper explores the "white orb" phenomenon in detail. After FUJIFILM's attempted firmware fix failed, the author created a post-processing tool that could automatically repair existing images. DeOrbIt was released as a free tool on March 7, 2012. To better understand the problem and how to fix it, the WWW form version of the tool logs images, processing parameters, and user evaluations. The current paper describes the technical problem, the novel computational photography methods used by DeOrbIt to repair affected images, and the public perceptions revealed by this experiment.
Gigapixel-class cameras present new challenges in calibration, mechanical testing, and optical performance evaluation. The AWARE-2 gigapixel camera has nearly one hundred micro-cameras covering a field of view 120 degrees wide by 40 degrees tall, with one pixel spanning an 8 arcsec field angle. Viewing the imagery requires stitching
the sub-images together by applying an accurate mapping of registration parameters over the entire field of view.
For this purpose, a testbed has been developed to automatically calibrate and test each micro-camera in the
array. Using translation stages, rotation stages, and a spatial light modulator for object space, this testbed can
project any test scene into a specified micro-camera, building up image quality metrics and a registration look-up
table over the entire array.
This paper proposes a novel adaptive dictionary learning approach for single-image super-resolution based on sparse representation. Adaptive dictionary learning for sparse representation is very powerful for image restoration tasks such as image denoising. Existing adaptive dictionary learning requires training image patches of the same resolution as the output image. Because of this requirement, adaptive dictionary learning for single-image super-resolution is not trivial, since the resolution of the input low-resolution image, which is what is available for dictionary learning, differs essentially from that of the output high-resolution image. It is known, however, that natural images exhibit high across-resolution patch redundancy: similar patches can be found across images of different resolutions. Exploiting this redundancy, the proposed approach learns the dictionary adaptively from the input image itself. Our experimental comparisons demonstrate that the proposed across-resolution adaptive dictionary learning approach outperforms state-of-the-art single-image super-resolution methods.
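A rough sketch of the underlying idea, using scikit-learn's dictionary learning as a stand-in; patch size, atom count, and sparsity level are illustrative, not the paper's settings.

```python
# Sketch: learn a patch dictionary from the input image itself (exploiting
# across-resolution patch redundancy) and sparse-code patches against it.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def adaptive_dictionary(lr_image, patch=8, atoms=128):
    patches = extract_patches_2d(lr_image, (patch, patch), max_patches=2000,
                                 random_state=0)
    X = patches.reshape(len(patches), -1)
    X = X - X.mean(axis=1, keepdims=True)        # remove patch DC component
    dico = MiniBatchDictionaryLearning(n_components=atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=5,
                                       random_state=0)
    return dico.fit(X)

# dico.transform(patches) then gives sparse codes whose product with
# dico.components_ yields the patch reconstructions.
```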
Computational aesthetics applied to digital photography is becoming an interesting topic in different frameworks (e.g., photo album summarization, image acquisition devices). Although it is widely believed, and can often be experimentally demonstrated, that aesthetics is mainly subjective, we aim to find formal or mathematical explanations of aesthetics in photographs. We propose a scoring function that gives an aesthetic evaluation of digital portraits and group pictures, taking into account the faces' aspect ratios, their perceptual goodness in terms of skin lighting, and their position. Well-known composition rules (e.g., the rule of thirds) are also considered, especially for single portraits. Both subjective and quantitative experiments have confirmed the effectiveness of the proposed methodology.
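As a small illustration of one such composition cue, the hypothetical score below measures how close a face center lies to a rule-of-thirds intersection; the normalization is arbitrary and not the paper's scoring function.

```python
# Hypothetical rule-of-thirds cue: 1.0 when the face center sits exactly on
# a thirds intersection, decreasing with distance from the nearest one.
import numpy as np

def thirds_score(face_center, img_w, img_h):
    pts = [(img_w * i / 3, img_h * j / 3) for i in (1, 2) for j in (1, 2)]
    d = min(np.hypot(face_center[0] - x, face_center[1] - y) for x, y in pts)
    return 1.0 - d / np.hypot(img_w, img_h)   # normalize by image diagonal
```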
This paper presents a correction method for saturated images that operates in the YCbCr color space. The algorithm is based on two human visual characteristics: visual sensitivity to color differences, and the Hunt effect. During color correction, the MacAdam ellipse model mapped to the YCbCr color space is used to search for the nearest color. During the quantification of the YCbCr components for digital implementation, regions with high luminance are given less saturation, based on the Hunt effect. Experimental results show that the proposed method is effective in correcting saturated pixels, especially in regions with lower luminance and higher colorfulness.
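A minimal sketch of the Hunt-effect-motivated step, assuming full-range BT.601 YCbCr; the attenuation curve is a hypothetical illustration, not the paper's mapping.

```python
# Sketch: in YCbCr, shrink the chroma of high-luminance pixels (Hunt effect:
# perceived colorfulness grows with luminance, so bright regions need less
# encoded saturation). The BT.601 full-range matrix is standard.
import numpy as np

M = np.array([[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]])

def desaturate_highlights(rgb):          # rgb: float array in [0, 1], HxWx3
    ycc = rgb @ M.T
    y = ycc[..., 0]
    atten = np.clip(1.0 - 2.0 * (y - 0.8), 0.0, 1.0)  # fade chroma above Y=0.8
    ycc[..., 1:] *= np.where(y > 0.8, atten, 1.0)[..., None]
    return ycc @ np.linalg.inv(M).T      # back to RGB
```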
We report the development and experimental results of fully operating matrices of spectrally tunable pixels based on the Transverse Field Detector (TFD). Unlike digital imaging sensors based on color filter arrays or layered junctions, the TFD has the peculiar feature of electrically tunable spectral sensitivities. The sensor color space is therefore not fixed a priori but can be adjusted in real time, e.g., for better adaptation to the scene content or for multispectral capture. These advantages come at the cost of increased complexity in both the photosensitive elements and the readout electronics. The challenges in realizing a matrix of TFD pixels are analyzed in this work. First experimental results on an 8×8 (×3 colors) and on a 64×64 (×3 colors) matrix are presented, analyzed in terms of colorimetric and noise performance, and compared to simulation predictions.
Computer simulations have played an important role in the design and evaluation of imaging sensors with applications in
remote sensing and consumer photography. In this paper, we provide an example of computer simulations used
to guide the design of imaging sensors for a biomedical application: We consider how sensor design, illumination,
measurement geometry, and skin type influence the ability to detect blood oxygen saturation from non-invasive
measurements of skin reflectance. The methodology we describe in this paper can be used to design, simulate, and evaluate other biomedical imaging systems.
A set of hyperspectral image data is made available, intended for use in modeling of imaging systems. The set contains images of faces, landscapes, and buildings. The data cover wavelengths from 0.4 to 2.5 micrometers, spanning the visible, NIR, and SWIR spectral ranges. The images were recorded with two HySpex line-scan imaging spectrometers covering the spectral ranges 0.4 to 1 micrometer and 1 to 2.5 micrometers. The hyperspectral data set includes measured illuminants and software for converting the radiance data to estimated reflectance. The images are being made available for download at http://scien.stanford.edu.
Plenoptic cameras enable capture of a 4D light field, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements as well as the microlens array and sensor characteristics.

In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e., a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition.1–3

We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in earlier work.4 Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype based on a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally, we evaluate the spectral reconstruction performance of a spectral plenoptic camera based on both simulation and measurements obtained from the prototype.
Inspired by the concept of the colour filter array (CFA), the research community has shown much interest in adapting the idea of the CFA to the multispectral domain, producing multispectral filter arrays (MSFAs). In addition to newly devised methods of MSFA demosaicking, there exists a wide spectrum of methods developed for CFAs; among others, some vector-based operations can be adapted naturally for multispectral purposes. In this paper, we focus on studying two vector-based median filtering methods in the context of MSFA demosaicking. One solves the demosaicking problem by means of vector median filters; the other applies median filtering to the demosaicked image in spherical space as a subsequent refinement step to reduce artefacts introduced by demosaicking. To evaluate the performance of these measures, a toolkit was constructed with the capability of mosaicking, demosaicking, and quality assessment. The experimental results demonstrated that vector median filtering performed less well for natural images, except black-and-white images; however, the refinement step reduced the reproduction error numerically in most cases. This demonstrates the feasibility of extending CFA demosaicking to the MSFA domain.
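For reference, a direct (unoptimized) vector median filter for a multispectral image might look like the following sketch; the window size is illustrative.

```python
# Sketch of a vector median filter: each output pixel is the neighborhood
# vector with minimum total L2 distance to all other vectors in the window,
# so output values are always genuine spectral vectors from the input.
import numpy as np

def vector_median(img, radius=1):        # img: HxWxC float array
    H, W, C = img.shape
    out = img.copy()
    for i in range(radius, H - radius):
        for j in range(radius, W - radius):
            win = img[i-radius:i+radius+1, j-radius:j+radius+1].reshape(-1, C)
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2)
            out[i, j] = win[np.argmin(d.sum(axis=1))]
    return out
```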
We recently proposed a natural-scene-statistics-based image quality assessment (IQA) metric named STAIND, which extracts nearly independent components from natural images, i.e., the divisive normalization transform (DNT) coefficients, and evaluates the perceptual quality of a distorted image by measuring the degree of dependency between neighboring DNT coefficients. To improve the performance of STAIND, its feature selection strategy is thoroughly analyzed in this paper.

The basic neighbor relationships in STAIND include scale, orientation, and space. By analyzing the joint histograms of different neighbor relationships and comparing the performance of diverse feature combination schemes on publicly available databases such as LIVE, CSIQ, and TID2008, we draw the following conclusions: 1) the spatial neighbor relationship contributes most to the model design, the scale relationship takes second place, and orientation neighbors may introduce negative effects; 2) in the spatial domain, second-order spatial neighbors are beneficial supplements to first-order spatial neighbors; 3) the combined neighborhood over scales, space, and the introduced spatial parents is very efficient for blind IQA metric design.
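A simplified stand-in for the DNT step (STAIND's actual transform operates on a multi-scale decomposition; here a plain local-energy pooling illustrates the divisive normalization itself, with hypothetical window size and stabilizing constant).

```python
# Divisive normalization sketch: each coefficient is divided by a pooled
# local energy estimate, which decorrelates neighboring coefficients.
import numpy as np
from scipy.ndimage import uniform_filter

def divisive_normalization(coeffs, size=3, sigma0=0.1):
    energy = uniform_filter(coeffs ** 2, size=size)  # neighbor energy pool
    return coeffs / np.sqrt(sigma0 ** 2 + energy)
```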
Image quality assessment (IQA) aims to predict perceived image quality consistently with the corresponding subjective perceptual quality. Finding features that efficiently represent natural images and investigating their statistics are fundamental to the design of IQA models. In this context, we previously proposed a reduced-reference (RR) IQA model in which groups of so-called edge patterns represent the local distribution of zero-crossings (ZCs) for both natural images and their distorted counterparts. In this paper, we focus on the edge patterns related to natural images: which edge patterns are good at representing the ZC distribution of natural images, and how should we use them for IQA model design? Along these lines, we extract 39 groups of edge patterns from 110 natural pictures by a defined curvature rule. Combined with error tolerance, the 39 groups of edge patterns can well represent the ZC distribution of both the reference and distorted images. Based on them, an RR IQA model is built on the statistical analysis of the selected edge patterns. Experimental results show that the proposed model performs favorably compared to its competitor.
Image Quality Evaluation Methods/Standards for Mobile and Digital Photography I: Joint Session with Conferences 8653, 8660, and 8667C
Despite the acceptable performance of current full-reference image quality assessment (IQA) algorithms, the need for a reference signal limits their application and calls for reliable no-reference algorithms. Most no-reference IQA approaches are distortion-specific, aiming to measure image blur, JPEG blocking, or JPEG2000 ringing artifacts respectively. In this paper, we propose a no-reference IQA algorithm, named SPCA, based on statistical properties of principal component analysis on natural images; it does not assume any specific type of image distortion. The method gathers statistics of discrete cosine transform coefficients from the distorted image's principal components. These features are trained by support vector regression and finally tested on the LIVE database. The experimental results show a high correlation with human perception of quality (SROCC scores above 0.90 on average), which is fairly competitive with existing no-reference IQA metrics.
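The overall pipeline shape, sketched with illustrative features and off-the-shelf SVR; the paper's exact feature set and regression parameters are not reproduced.

```python
# Illustrative pipeline: blockwise DCT statistics as features, regressed to
# subjective scores with support vector regression.
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVR

def dct_features(img, block=8):
    H, W = img.shape
    coeffs = [dctn(img[i:i + block, j:j + block], norm='ortho').ravel()[1:]
              for i in range(0, H - block + 1, block)
              for j in range(0, W - block + 1, block)]
    c = np.abs(np.asarray(coeffs))
    # mean, spread, and a kurtosis-like peakedness statistic
    return np.array([c.mean(), c.std(),
                     (c ** 4).mean() / (c ** 2).mean() ** 2])

# model = SVR(kernel='rbf').fit(X_train, y_train)  # X: features, y: scores
```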
Measurement of visual quality is of fundamental importance for numerous image and video processing applications. This paper presents a novel and concise reduced-reference (RR) image quality assessment (IQA) method. Statistics of local binary patterns (LBPs) are introduced for the first time as a similarity measure to form an RR IQA method. With this method, the test image is first decomposed with a multi-scale transform. Second, LBP encoding maps are extracted for each subband image. Third, histograms are extracted from the LBP encoding maps to form the RR features. In this way, the image structure primitive information needed for RR feature extraction is greatly reduced, and the resulting RR IQA method uses at most 56 RR features. Experimental results on two large-scale IQA databases show that the LBP statistics are robust and reliable for the RR IQA task. The proposed method shows strong correlations with subjective quality evaluations.
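A sketch of the per-subband feature extraction, using scikit-image's LBP implementation with illustrative parameters.

```python
# Sketch: LBP histogram of one subband as a compact RR feature vector.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(subband, P=8, R=1.0):
    codes = local_binary_pattern(subband, P, R, method='uniform')
    # 'uniform' LBP yields only P + 2 distinct codes, so each subband
    # contributes a short histogram and the total RR payload stays small.
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist
```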
Image Quality Evaluation Methods/Standards for Mobile and Digital Photography II: Joint Session with Conferences 8653, 8660, and 8667C
Although there is steady progress in sensor technology, imaging with a high dynamic range (HDR) remains difficult for motion imaging with high image quality. This paper presents our new approach for video acquisition with high dynamic range. The principle is based on optical attenuation of some of the pixels of an existing image sensor. This well-known method traditionally trades spatial resolution for an increase in dynamic range. In contrast to existing work, we use a non-regular pattern of optical ND filters for attenuation, which allows an image reconstruction that recovers high-resolution images. The reconstruction is based on the assumption that natural images have a nearly sparse representation in transform domains, which allows recovery of scenes with high detail. The proposed combination of non-regular sampling and image reconstruction leads to a system with increased dynamic range without sacrificing spatial resolution. In this paper, a further evaluation of the achievable image quality is presented. In our prototype we found that crosstalk is present and significant; the discussion thus shows the limits of the proposed imaging system.
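As a toy illustration of reconstruction from non-regular samples under a sparsity assumption, an iterative-thresholding sketch in the DCT domain; the sparsity level, threshold schedule, and iteration count are hypothetical, and the paper's actual reconstruction may differ.

```python
# Toy reconstruction: alternate between enforcing the valid (ND-attenuated)
# samples and projecting onto a sparse DCT representation.
import numpy as np
from scipy.fft import dctn, idctn

def reconstruct(y, mask, iters=100, keep=0.1):
    # y: observation with invalid (saturated) pixels zeroed; mask: 1 = valid
    x = y.copy()
    for _ in range(iters):
        x = np.where(mask, y, x)                # enforce the known samples
        X = dctn(x, norm='ortho')
        thr = np.quantile(np.abs(X), 1 - keep)  # keep the largest 10%
        X[np.abs(X) < thr] = 0.0                # sparsity prior
        x = idctn(X, norm='ortho')
    return np.where(mask, y, x)
```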
The spectral response function of a camera maps the relative sensitivity of the camera imaging system as a function
of the wavelength of the light. The spectral response function of the colour channels of a commercial-off-the-shelf
(COTS) Red/Green/Blue (RGB) camera is often unknown and not typically provided by the manufacturer
of the camera. Knowledge of this response can be useful for a wide variety of applications such as simulating
animal vision, colour correction and colour space transformations of the images captured by the camera. COTS
cameras are widely used due to their low cost and ease of implementation. We investigate a method of using a
Linear Variable Edge Filter (LVEF) and a low-cost spectrometer to characterise an RGB camera. This method has the advantage over previous methods of simplicity and the small number of measurements needed for spectral characterisation. Results are presented for three cameras: a consumer-level digital SLR and two point-and-shoot consumer-grade cameras, one of them an underwater camera.
We present a new method for accurately determining the best focus position of a camera lens in the context of image
quality evaluation and modulation transfer function (MTF) measurement. Our method makes use of the “live preview”
function of digital cameras to image a test chart containing spatially and rotationally invariant alignment patterns. The
patterns can be located to sub-pixel accuracy even under defocus using the technique of blur-invariant phase correlation,
which leads to an absolute measure of focus position, independent of any backlash in the lens mechanism. We describe
an efficient closed feedback loop algorithm which makes use of this to drive the lens rapidly to best focus. This method
achieves the peak focus position to within a single step of the focus drive motor, typically allowing the peak focus MTF
to be measured to within 1.4% RMS. The mean time taken to find the peak focus position and drive the focus motor back
to that position ready for a comprehensive test exposure is 11.7 seconds, with maximum time 26 seconds, across a
variety of lenses of varying focal lengths.
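A bare-bones sketch of the phase-correlation core (integer-pixel version); the paper's method additionally achieves sub-pixel accuracy, and the blur insensitivity rests on symmetric defocus blur contributing little to the Fourier phase.

```python
# Sketch: locate the relative shift between two images from the peak of the
# normalized cross-power spectrum, keeping phase only.
import numpy as np

def phase_correlate(a, b):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    R = Fa * np.conj(Fb)
    R /= np.abs(R) + 1e-12                 # discard magnitude, keep phase
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx                          # integer shift; refine to sub-pixel
```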
In this paper, we describe frequency division multiplexed imaging (FDMI), where multiple images are captured
simultaneously in a single shot and can later be extracted from the multiplexed image. This is achieved by
spatially modulating the images so that they are placed at different locations in the Fourier domain. The
technique assumes that the images are band-limited and they are placed at non-overlapping frequency regions
through the modulation process. The FDMI technique can be used for extracting sub-exposure information and
in applications where multiple cameras or captures are needed, such as high-dynamic-range and stereo imaging.
We present experimental results to illustrate the FDMI idea.
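A toy demonstration of the FDMI principle along one dimension of the Fourier plane, with hypothetical carrier frequency and bandwidths; real FDMI performs the modulation optically at capture time rather than in software.

```python
# 1-D toy of frequency division multiplexing: a cosine carrier shifts the
# second image to a separate Fourier band; band filtering plus demodulation
# recovers both, assuming each image is band-limited below bw.
import numpy as np

def multiplex(im1, im2, f=0.25):
    carrier = np.cos(2 * np.pi * f * np.arange(im1.shape[1]))[None, :]
    return im1 + im2 * carrier            # im2 moved to bands around +/- f

def demultiplex(mux, f=0.25, bw=0.1):
    W = mux.shape[1]
    lowpass = (np.abs(np.fft.fftfreq(W)) < bw)[None, :]
    F = np.fft.fft(mux, axis=1)
    im1 = np.real(np.fft.ifft(F * lowpass, axis=1))     # baseband image
    carrier = np.cos(2 * np.pi * f * np.arange(W))[None, :]
    demod = (mux - im1) * 2 * carrier                   # shift im2 back down
    D = np.fft.fft(demod, axis=1)
    im2 = np.real(np.fft.ifft(D * lowpass, axis=1))     # drop the 2f residue
    return im1, im2
```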
We propose a Bayesian method for estimating the system spectral sensitivities of a color imaging device, such as a scanner or a camera, from an acquired color chart image. The system sensitivities are defined as the product of the camera's spectral sensitivities and the spectral power distribution of the illuminant, and they characterize color separation. In addition, we propose a scheme for predicting the optimal filter to increase the color accuracy of the device based on the estimated sensitivities. The predicted filter is attached to the front of the camera and modifies the system spectral sensitivities. This study aims to improve the color reproduction of an imaging device in a practical way even when its spectral sensitivities are unknown. The proposed method incorporates non-negativity, smoothness, and zero boundaries of the sensitivity curves as prior information. All hyperparameters in the proposed Bayesian model can be determined automatically by the marginalized likelihood criterion. The modified system sensitivities and their color accuracy are predicted computationally. An experiment was carried out to test the performance of the proposed method in predicting the color accuracy improvement using two scanners. The average color difference was reduced from 3.07 to 2.04 for one scanner and from 2.11 to 1.77 for the other.
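A much-simplified sketch of the estimation idea, replacing the full Bayesian treatment (with marginalized-likelihood hyperparameter selection) by a fixed Tikhonov smoothness term and a non-negative least-squares solve; R, m, and lam are hypothetical stand-ins.

```python
# Sketch: recover a non-negative, smooth sensitivity curve from chart data.
# R: (num_patches x num_wavelengths) patch reflectances at the sampled
# wavelengths; m: (num_patches,) camera responses for one channel.
import numpy as np
from scipy.optimize import nnls

def estimate_sensitivity(R, m, lam=1.0):
    W = R.shape[1]
    D = np.diff(np.eye(W), n=2, axis=0)        # second-difference operator
    A = np.vstack([R, np.sqrt(lam) * D])       # augmented rows enforce
    b = np.concatenate([m, np.zeros(W - 2)])   # smoothness of the curve
    s, _ = nnls(A, b)                          # nnls enforces non-negativity
    return s
```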
Several recent NASA missions have used the state-of-the-art wavelet-based ICER Progressive Image Compressor
for lossy image compression. In this paper, we describe a methodology for using evolutionary computation to
optimize wavelet and scaling numbers describing reconstruction-only multiresolution analysis (MRA) transforms
that are capable of accepting as input test images compressed by ICER software at a reduced bit rate (e.g., 0.99 bits
per pixel [bpp]), and producing as output images whose average quality, in terms of mean squared error (MSE),
equals that of images produced by ICER’s reconstruction transform when applied to the same test images
compressed at a higher bit rate (e.g., 1.00 bpp). This improvement can be attained without modification to ICER’s
compression, quantization, encoding, decoding, or dequantization algorithms, and with very small modifications to
existing ICER reconstruction filter code. As a result, future NASA missions will be able to transmit greater amounts
of information (i.e., a greater number of images) over channels with equal bandwidth, thus achieving a no-cost
improvement in the science value of those missions.
The objective and repeatable measurement of the color of artifacts is a much-needed practice in archaeological research. Indeed, in many cases, color information is crucial for the interpretation of cultural products. To avoid the risks of an overly subjective visual assessment, the Munsell system is commonly adopted. This method requires a human operator to match the perceived color to its standardized version in the Munsell charts. This approach has significant limitations that can mislead archaeologists in their daily work. The alternative would be the use of accurately calibrated sensors in a controlled illumination environment, but such equipment is rarely available for most "on field" studies. In this paper, a simple, economical, semi-automatic method of color detection, based on consumer-level electronics and sensors, is presented for accurately and precisely selected regions of digital images of ancient pottery. The proposed method uses only the data from a common CCD sensor supported by a simple color measurement pipeline. Our tool aims to prevent subjective errors during color identification and to speed up the identification process itself. The results obtained, and the percentage of successful matches with human Munsell color identification, show statistically that our proposal is an interesting starting point for developing a complete, cheap, easy-to-use system that could facilitate aspects of the archaeologist's work.
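A sketch of the matching step, with a hand-rolled sRGB-to-CIELAB conversion and a nearest-chip lookup; the chip table is a hypothetical stand-in for a digitized Munsell chart, and the real pipeline would first color-correct the CCD data.

```python
# Sketch: convert a measured sRGB value to CIELAB (D65) and pick the nearest
# reference chip by Euclidean distance in Lab.
import numpy as np

M_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.9505, 1.0, 1.089])             # D65 white point

def srgb_to_lab(rgb):                              # rgb in [0, 1]
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    xyz = M_XYZ @ lin / WHITE
    f = np.where(xyz > (6/29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6/29) ** 2) + 4/29)
    return np.array([116 * f[1] - 16,
                     500 * (f[0] - f[1]),
                     200 * (f[1] - f[2])])

def nearest_chip(rgb, chips):                      # chips: {name: Lab triple}
    lab = srgb_to_lab(np.asarray(rgb))
    return min(chips, key=lambda k: np.linalg.norm(lab - np.asarray(chips[k])))
```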
Complex multidimensional capturing setups such as plenoptic cameras (PC) introduce a trade-off between various
system properties. Consequently, established capturing properties, like image resolution, need to be described
thoroughly for these systems. Models and metrics that assist in exploring and formulating this trade-off are therefore highly beneficial for studying as well as designing complex capturing systems. This work demonstrates the
capability of our previously proposed sampling pattern cube (SPC) model to extract the lateral resolution for
plenoptic capturing systems. The SPC carries both ray information as well as focal properties of the capturing
system it models. The proposed operator extracts the lateral resolution from the SPC model throughout an
arbitrary number of depth planes giving a depth-resolution profile. This operator utilizes focal properties of the
capturing system as well as the geometrical distribution of the light containers which are the elements in the SPC
model. We have validated the lateral resolution operator for different capturing setups by comparing the results
with those from Monte Carlo numerical simulations based on the wave optics model. The lateral resolution
predicted by the SPC model agrees with the results from the more complex wave optics model better than both
the ray based model and our previously proposed lateral resolution operator. This agreement strengthens the
conclusion that the SPC fills the gap between ray-based models and the real system performance, by including
the focal information of the system as a model parameter. The SPC is proven a simple yet efficient model for
extracting the lateral resolution as a high-level property of complex plenoptic capturing systems.