This PDF file contains the front matter associated with SPIE Proceedings Volume 6575, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
This paper reports on the development of a software module that performs autonomous object detection, recognition, and tracking in outdoor urban environments. The purpose of the project was to endow a commercial PTZ camera with object tracking and recognition capabilities in order to automate some surveillance tasks. The module can discriminate between various moving objects, identify the presence of pedestrians or vehicles, track them, and zoom in on them in near real time. The paper gives an overview of the module's characteristics and its operational uses within the commercial system.
Rollover incidents involving military vehicles have resulted in soldiers being injured or losing their lives. A recent report identified that one cause of vehicle rollovers is the driver's inability to assess rollover threats such as a cliff, soft ground, water, or a culvert on the passenger side of the vehicle; the vehicle's width restricts the driver's field of view. To reduce the number of military vehicle rollovers, a road edge detection and driver
warning system is being developed to warn the driver of potential rollover threats and keep the driver from
veering off the side of the road. This system utilizes a unique, ultra-fast, image-processing algorithm based on
the neurobiology of insect vision, specifically fly vision. The system consists of a Long-Wavelength Infrared
(LWIR) camera and visible spectrum monochrome video camera system, a long-range laser scanner, a
processing module in which a biomimetic image processor detects road edges in real-time, and a Driver's
Vision Enhancer (DVE) which displays the road image, detected boundaries and road-side terrain steepness
in real-time for the driver.
Skew detection in document images is an important pre-processing step for several document analysis algorithms. In this
work, we propose a fast method that estimates skew angles based on a local-to-global approach. Many existing
techniques based on connected component analysis group pixels together to form small document objects and then use a Hough transform to estimate the skew angle; the connected component detection process introduces an undesired overhead. Nearest-neighbor-based techniques rely only on local groups and thus fail to achieve high skew accuracy. Techniques based on projections create 1-D profiles by successively rotating the document over a range of angles; detection can be accelerated by considering rotations from coarse to fine, but the rotation and projection steps can be relatively slow. The proposed technique is characterized by both high processing speed
and high skew estimation accuracy. First, local ring-shaped areas are analyzed for an initial skew estimation by building
angle histograms between random points and the ring centers. Following a ring selection process, a single histogram is
obtained. A range of angles around the best candidates obtained from the initial skew estimation is further examined.
Experimental results have shown that the proposed technique yields superior results in terms of estimation accuracy and
speed compared to existing techniques.
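As an illustration of the angle-histogram idea described above, the following sketch builds a histogram of angles between one ring center and randomly sampled foreground pixels inside the ring; the function and parameter names are illustrative, and the ring selection and refinement stages of the paper are omitted.

```python
import numpy as np

def ring_angle_histogram(binary, center, r_min=20.0, r_max=60.0,
                         n_samples=2000, n_bins=180, seed=0):
    """Histogram of angles between a ring center and random foreground pixels
    inside the ring; text lines produce peaks near the skew angle (mod 180 deg)."""
    ys, xs = np.nonzero(binary)                    # foreground pixel coordinates
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(xs), size=min(n_samples, len(xs)), replace=False)
    dx, dy = xs[idx] - center[0], ys[idx] - center[1]
    r = np.hypot(dx, dy)
    keep = (r >= r_min) & (r <= r_max)             # restrict to the ring-shaped area
    angles = np.degrees(np.arctan2(dy[keep], dx[keep])) % 180.0
    return np.histogram(angles, bins=n_bins, range=(0.0, 180.0))

# A coarse skew estimate is the bin with the highest count; the paper then
# refines a narrow range of angles around the best candidates.
```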
Feature-specific imaging (FSI) refers to any imaging system that directly measures linear projections of an object irradiance distribution. Numerous reports of FSI (also called compressive imaging) using static projections can be found in the literature. In this paper we will present adaptive methods of FSI suitable for the applications of (a) image reconstruction and (b) target detection. Adaptive FSI for image reconstruction is based on Principal Component and Hadamard features. The adaptive algorithm employs an updated training set in order to determine the optimal projection vector after each measurement. Adaptive FSI for detection is based on a sequential hypothesis testing framework. The probability of each hypothesis is updated after each measurement and in turn defines a new optimal projection vector. Both of these new adaptive methods will be compared with static FSI. Adaptive FSI for detection will also be compared with conventional imaging.
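A minimal sketch of the detection variant is given below, assuming a Gaussian measurement model, a discrete set of hypothesis templates, and a simple heuristic choice of the next projection vector; all names and the particular projection rule are illustrative, not the paper's optimality criterion.

```python
import numpy as np

def adaptive_detection(scene, templates, priors, n_meas=10, noise_sigma=0.01):
    """Sequential, adaptive feature measurements for a detection/classification task.
    'templates' (K x N) holds the mean object under each hypothesis; the projection
    for the next measurement is derived from the current posterior."""
    post = np.asarray(priors, dtype=float)
    for _ in range(n_meas):
        mean = post @ templates                     # posterior-weighted mean object
        f = templates[np.argmax(post)] - mean       # heuristic discriminating direction
        f /= np.linalg.norm(f) + 1e-12
        y = f @ scene + noise_sigma * np.random.randn()     # one projective measurement
        lik = np.exp(-((y - templates @ f) ** 2) / (2 * noise_sigma ** 2))
        post = post * lik                           # Bayesian update of each hypothesis
        post /= post.sum()
    return post
```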
Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are
over 250 different tools that embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. This identification process is termed the steganalysis problem, which covers both blind identification, in which only normal images are available for training, and multi-class identification, in which both clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (Expectation Maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is addressed as both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with embedding rates between 1% and 10%. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
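A minimal sketch of EM-based clustering for this task using a Gaussian mixture is shown below, assuming a precomputed steganalysis feature vector per image; the feature set, mixture family, and number of clusters used in the paper are not specified here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_stego_clusters(features, n_clusters):
    """Fit a mixture model by Expectation-Maximization; clusters are later
    associated with clean images and with stego images at various embedding rates.
    'features' is an (n_images, n_features) array of steganalysis features."""
    gmm = GaussianMixture(n_components=n_clusters, covariance_type='full',
                          n_init=5, random_state=0)
    gmm.fit(features)
    return gmm

# Multi-class use (illustrative): assign each test image to its most probable cluster.
# labels = fit_stego_clusters(train_features, n_clusters=11).predict(test_features)
```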
Most man-made objects provide characteristic straight line edges and, therefore, edge extraction is a commonly used target detection tool. However, noisy images often yield broken edges that lead to missed detections, and extraneous edges that may contribute to false target detections. We present a sliding-block approach for target detection using weighted power spectral analysis. In general, straight line edges appearing at a given frequency are represented as a peak in the Fourier domain at a radius corresponding to that frequency, and a direction corresponding to the orientation of the edges in the spatial domain. Knowing the edge width and spacing between the edges, a band-pass filter is designed to extract the Fourier peaks corresponding to the target edges and suppress image noise. These peaks are then detected by amplitude thresholding. The frequency band width and the subsequent spatial filter mask size are variable parameters to facilitate detection of target objects of different sizes under known imaging geometries. Many military objects, such as trucks, tanks and missile launchers, produce definite signatures with parallel lines and the algorithm proves to be ideal for detecting such objects. Moreover, shadow-casting objects generally provide sharp edges and are readily detected. The block operation procedure offers advantages of significant reduction in noise influence, improved edge detection, faster processing speed and versatility to detect diverse objects of different sizes in the image. With Scud missile launcher replicas as target objects, the method has been successfully tested on terrain board test images under different backgrounds, illumination and imaging geometries with cameras of differing spatial resolution and bit-depth.
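A minimal sketch of the per-block spectral test described above follows, assuming the edge-spacing-dependent band [r_lo, r_hi] and the amplitude threshold are supplied from the known imaging geometry; names are illustrative.

```python
import numpy as np

def detect_parallel_edges(block, r_lo, r_hi, thresh):
    """Band-pass the magnitude spectrum of an image block and threshold it.
    Parallel target edges appear as spectral peaks at a radius set by their
    spacing; r_lo and r_hi bracket that radius for the expected target size."""
    F = np.fft.fftshift(np.fft.fft2(block))
    mag = np.abs(F)
    n, m = block.shape
    yy, xx = np.mgrid[0:n, 0:m]
    r = np.hypot(yy - n / 2, xx - m / 2)
    band = (r >= r_lo) & (r <= r_hi)             # annular band-pass mask
    peaks = np.argwhere(band & (mag > thresh))   # (row, col) of candidate peaks
    # the angle of each peak about the spectrum center gives the edge orientation
    return peaks
```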
A local statistics based contrast enhancement technique for enhancing the reconstructed high resolution image from a set of shifted and rotated low resolution images is proposed in this paper. Planar shifts and rotations in the low resolution images are determined by a phase correlation approach performed on the polar coordinate representations of their Fourier transforms. The pixels of the low resolution images are expressed in the coordinate frame of the reference image and the image values are interpolated on a regular high-resolution grid. The non-uniform interpolation technique which allows for the reconstruction of functions from samples taken at non-uniformly distributed locations has relatively low computational complexity. Since bi-cubic interpolation produces blurred edges due to its averaging effect, the edges of the reconstructed image are enhanced using a local statistics based approach. The center-surround ratio is adjusted using global statistics of the reconstructed image and used as an adaptive gamma correction to achieve the local contrast enhancement which increases the image sharpness. Performance of the proposed algorithm is evaluated by conducting experiments on both synthetic and real image sets and the results are encouraging in terms of visual quality.
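For reference, a minimal sketch of the phase-correlation step used for the planar shift estimate (the rotation estimate in the paper applies the same idea to polar representations of the Fourier transforms); subpixel refinement and the subsequent interpolation and enhancement stages are omitted.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer translation between two images from the peak of the
    inverse FFT of the normalized cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12               # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```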
An innovative approach for navigation in non-GPS environments is presented, based on all-source adaptive fusion of any available information, encompassing passive imaging data, digital elevation terrain data, IMU/GPS, altimeters, and star trackers. The approach provides continuous navigation through non-GPS environments and yields improved navigation in the presence of GPS. The approach also provides reduced target location error and moving target indication.
A camera or display usually has a smaller dynamic range than the human eye. For this reason, objects that can be detected by the naked eye may not be visible in recorded images. Lighting is an important factor here; improper local lighting impairs the visibility of details or even entire objects. When a human observes a scene with different kinds of lighting, such as shadows, he or she needs to see details in both the dark and the light parts of the scene. For grey-value images such as IR imagery, algorithms have been developed in which the local contrast of the image is enhanced using locally adaptive techniques. In this paper, we present how such algorithms can be adapted so that details in color images are enhanced while color information is retained. We propose to enhance the contrast of color images by applying a grey-value contrast enhancement algorithm to the luminance channel of the color signal. The color coordinates of the signal remain the same, and care is taken that the saturation change is not too high. Gamut mapping is performed so that the output can be displayed on a monitor. The proposed technique can, for instance, be used by operators monitoring movements of people in order to detect suspicious behavior. To do this effectively, specific individuals should be easy both to recognize and to track. This requires good local contrast and is sometimes greatly helped by color, for example when tracking a person in colored clothes. In such applications, enhanced local contrast in color images leads to more effective monitoring.
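A minimal sketch of the luminance-only enhancement follows, using CLAHE as a stand-in for the grey-value local contrast algorithm and a simple clip as the gamut mapping; the saturation check described above is omitted, and the input is assumed to be an RGB float image in [0, 1].

```python
import numpy as np
from skimage import color, exposure

def enhance_color_contrast(rgb):
    """Apply a grey-value local contrast enhancement (here CLAHE, as a stand-in)
    to the luminance channel only, leaving the chromatic coordinates untouched,
    then clip back into the displayable gamut."""
    ycbcr = color.rgb2ycbcr(rgb)                  # luminance + chroma
    y = ycbcr[..., 0]
    y_norm = np.clip((y - 16.0) / 219.0, 0.0, 1.0)   # Y range is [16, 235]
    y_enh = exposure.equalize_adapthist(y_norm, clip_limit=0.02)
    ycbcr[..., 0] = y_enh * 219.0 + 16.0
    out = color.ycbcr2rgb(ycbcr)
    return np.clip(out, 0.0, 1.0)                 # simple gamut mapping
```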
Computing architectures to process image data and optimize an objective criterion are identified. One such
objective criterion is the energy in the error function. The data is partitioned and the error function is
optimized in stages. Each stage consists of identifying an active partition and performing the optimization
with the data in this partition. The other partitions of the data are inactive, i.e., they maintain their current values. The optimization progresses by switching between the currently active partition and the remaining inactive partitions. In this paper, sequential and parallel update procedures within the active partition are presented. These procedures are applied to retrieve image data from linearly degraded samples. In addition, the local gradient of the error functional is estimated from the observed image data using simple linear convolution operations. This optimization process remains effective as the dimensions of the data and the number of partitions increase. The purpose of developing such data processing strategies is to emphasize the conservation of resources such as available bandwidth, computation, and storage in present-day Web-based technologies and multimedia information transfer.
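A minimal sketch of the sequential (partition-by-partition) update for a linear degradation model is shown below, assuming the error energy ||y - Ax||^2 as the objective; the partition definitions, step size, and the parallel variant are illustrative choices, not the paper's.

```python
import numpy as np

def partitioned_least_squares(A, y, partitions, sweeps=20, step=None):
    """Minimize the error energy ||y - A x||^2 by activating one partition of x
    at a time and taking a gradient step on it while the others are held fixed."""
    x = np.zeros(A.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # safe step size for the gradient
    for _ in range(sweeps):
        for p in partitions:                      # p = index array of the active partition
            r = y - A @ x                         # current residual
            grad_p = -A[:, p].T @ r               # local gradient (estimable by convolution
            x[p] -= step * grad_p                 #   when A is a blurring operator)
    return x
```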
It is known that the distributions of wavelet coefficients of natural images at different scales and orientations can
be approximated by generalized Gaussian probability density functions. We exploit this prior knowledge within
a novel statistical framework for multi-frame image restoration based on the maximum a-posteriori (MAP) algorithm.
We describe an iterative algorithm for obtaining a high-fidelity object estimate from multiple warped,
blurred, and noisy low-resolution images. We compare our new method with several other techniques including
linear restoration and restoration using Markov Random Field (MRF) object priors, and we discuss the relative performance of these algorithms.
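A sketch of the MAP objective implied by the abstract, in generic notation: H_k denotes the warp/blur/downsample operator of frame k, W the wavelet transform, sigma_n the noise standard deviation, and lambda_j, p_j the per-subband generalized-Gaussian scale and shape. This notation is assumed, not taken from the paper.

```latex
\hat{x} \;=\; \arg\min_{x} \; \sum_{k} \frac{\lVert y_k - H_k x \rVert_2^2}{2\sigma_n^2}
\;+\; \sum_{j} \lambda_j \,\bigl| (W x)_j \bigr|^{p_j}
```

The first term is the Gaussian data-fidelity term over all low-resolution frames; the second is the negative log of the generalized Gaussian prior on the wavelet coefficients of the object estimate.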
Image registration is the process of aligning two images taken from different views, at different times, or by different
modalities. In this article, we propose a new framework that incorporates prior deformation knowledge in the
registration process. First, an elastic image registration method is used to obtain deformation fields by modeling the
nonrigid deformations as locally affine and globally smooth flow fields. Next, the estimated geometric transformation
maps are used to train a prior deformation model using two subspace projection techniques, namely principal
component analysis (PCA) and independent component analysis (ICA). A smooth deformation is now guaranteed by
projecting the locally calculated deformation onto a subspace of allowed deformations. One advantage of our approach
is in its ability to guarantee smoothness without the need for iterative regularization. The new algorithms were validated
using the Amsterdam library of images (ALOI). Our experiments demonstrate promising results in terms of mean square
error.
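A minimal sketch of the PCA branch, assuming training deformation fields stored as arrays; the ICA variant and the elastic registration that produces the fields are omitted.

```python
import numpy as np

def train_deformation_prior(fields, k):
    """PCA prior over deformation fields; each field is flattened to one row."""
    X = np.stack([f.ravel() for f in fields])
    mean = X.mean(axis=0)
    # principal directions of the centered training deformations
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                            # k retained components

def project_deformation(d, mean, components):
    """Constrain a locally estimated deformation to the subspace of allowed
    (smooth) deformations, replacing iterative regularization."""
    c = (d.ravel() - mean) @ components.T
    return (mean + c @ components).reshape(d.shape)
```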
Previous papers have studied the relationship between a bit-map digital image and a given object, called the search object; in particular, how to signal whether it is likely that the search object appears, at least partially, in the image. Edges in the search object and in the digital image are represented as objects, in the object-oriented programming sense. Each edge or edge segment is represented as a normalized cubic Bezier parameterized curve, where the normalization removes the effect of the size of the edge or edge segment. If the edges match and their orientations agree, the system signals that the object is likely to appear in the image and returns the object's coordinates in the image. The functioning of the algorithm is not dependent on scaling,
rotation, translation, or shading of the image. To begin the data mining process, a collection of search objects is
generated. A database is constructed using a number of images and storing information concerning the combination
of search objects that appear in each image, time and space relationships between the various search objects, along
with identifying information about the image. This database would then be subjected to traditional data mining
techniques in order to generate useful relationships within the data. These relationships could then be used to
advantage in supplying information for defense, corporate, or law enforcement intelligence.
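One plausible reading of the normalized cubic Bezier representation is sketched below: control points are fit by least squares over a chord-length parameterization, and translation and scale are removed; the paper's exact normalization and matching criteria may differ.

```python
import numpy as np

def fit_cubic_bezier(points):
    """Least-squares fit of a cubic Bezier curve to ordered 2-D edge points."""
    pts = np.asarray(points, dtype=float)
    # chord-length parameterization in [0, 1]
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    # Bernstein basis matrix for the cubic Bezier
    B = np.column_stack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                         3 * t ** 2 * (1 - t), t ** 3])
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
    return ctrl                                   # four control points

def normalize(ctrl):
    """Remove translation and scale so edges of different sizes can be compared."""
    c = ctrl - ctrl.mean(axis=0)
    scale = np.linalg.norm(c)
    return c / scale if scale > 0 else c
```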
We present a novel method for computing the information content of an image. We introduce the notion
of task-specific information (TSI) in order to quantify imaging system performance for a given task. This
new approach employs a recently discovered relationship between Shannon mutual information and minimum estimation error. We demonstrate the utility of the TSI formulation by applying it to several familiar imaging systems, including (a) geometric imagers, (b) diffraction-limited imagers, and (c) projective/
compressive imagers. Imaging system TSI performance is analyzed for two tasks: (a) detection, and
(b) classification.
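The relationship between Shannon mutual information and minimum estimation error referred to above is presumably the I-MMSE identity; for a scalar Gaussian channel it reads

```latex
\frac{d}{d\,\mathrm{snr}} \, I\!\left(X;\ \sqrt{\mathrm{snr}}\,X + N\right)
\;=\; \tfrac{1}{2}\,\mathrm{mmse}(\mathrm{snr}),
\qquad N \sim \mathcal{N}(0,1),
```

where mmse(snr) is the minimum mean-square error of estimating X from the noisy observation. TSI then scores a candidate imager by the mutual information between its noisy measurements and the task-relevant variable (e.g., target present/absent, or class label).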
Nanoparticles, particles with a diameter of 1-100 nanometers (nm), are of interest in many applications, including device fabrication, quantum computing, and sensing, because their size may give them properties that are very different from those of bulk materials. Further advancement of nanotechnology cannot be achieved without an increased understanding of nanoparticle properties such as size (diameter) and size distribution, frequently evaluated using transmission electron microscopy (TEM). In the past, these parameters have been obtained from digitized TEM images by manually measuring and counting many of these nanoparticles, a task that is highly subjective and labor intensive.
More recently, computer imaging particle analysis has emerged as an objective alternative that counts and measures objects in a binary image. This paper describes the procedures used to preprocess a set of grayscale TEM images so that they can be correctly thresholded into binary images, allowing a more accurate assessment of the size and frequency (size distribution) of nanoparticles. Several preprocessing methods, including pseudo flat-field correction and rolling-ball background correction, were investigated, with the rolling-ball algorithm yielding the best results. Examples of particle analysis are presented for different types of materials and different magnifications. In addition, a method based on the results of particle analysis for identifying and removing small noise particles is discussed. This filtering technique identifies the locations of small particles in the binary image and removes them without affecting the size of other, larger particles.
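A minimal sketch of such a pipeline with scikit-image, using rolling-ball background correction as described above; the Otsu threshold, ball radius, and minimum-area values are illustrative stand-ins, since the paper's own thresholding choices are covered in a companion study.

```python
from skimage import io, filters, measure, morphology, restoration

def measure_particles(path, ball_radius=50, min_area=20):
    """Rolling-ball background correction, global thresholding, and particle
    measurement for a grayscale TEM image (radii/areas in pixels).
    Assumes bright particles on a darker, slowly varying background;
    invert the image otherwise."""
    img = io.imread(path, as_gray=True)
    background = restoration.rolling_ball(img, radius=ball_radius)
    corrected = img - background
    binary = corrected > filters.threshold_otsu(corrected)
    # noise-particle filter: drop connected components below min_area pixels
    binary = morphology.remove_small_objects(binary, min_size=min_area)
    labels = measure.label(binary)
    return [(r.area, r.equivalent_diameter) for r in measure.regionprops(labels)]
```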
According to the Shannon-Nyquist theory, the number of samples required to reconstruct a signal is
proportional to its bandwidth. Recently, it has been shown that acceptable reconstructions are possible
from a reduced number of random samples, a process known as compressive sampling. Taking advantage of this realization has a radical impact on power consumption and communication bandwidth, which is crucial in applications based on small/mobile/unattended platforms such as UAVs and distributed sensor networks. Although the benefits of these compression techniques are self-evident, the reconstruction process requires the solution of nonlinear signal processing problems, which limits applicability in portable and real-time systems. In particular, (1) the power consumption associated with the difficult computations offsets the power savings afforded by compressive sampling, and (2) limited computational power prevents these algorithms from keeping pace with the data-capturing sensors, resulting in undesirable data loss. FPGA-based computers offer low power consumption and high computational capacity, providing a solution to both problems simultaneously. In this paper, we present an architecture that implements the algorithms
central to compressive sampling in an FPGA environment. We start by studying the computational profile
of the convex optimization algorithms used in compressive sampling. Then we present the design of a
pixel pipeline suitable for FPGA implementation, able to compute these algorithms.
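As one representative of the sparse-recovery solvers whose computational profile is at issue here, the sketch below shows iterative soft-thresholding (ISTA) for the l1-regularized reconstruction; the actual convex-optimization algorithm and data path implemented on the FPGA may differ.

```python
import numpy as np

def ista_reconstruct(Phi, y, lam=0.01, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||y - Phi x||^2 + lam*||x||_1.
    The dense matrix-vector products and the elementwise threshold are the
    operations that map naturally onto a hardware pipeline."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)             # gradient of the quadratic term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```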
Thresholding is an image processing procedure used to convert an image consisting of gray-level pixels into a black-and-white binary image. One application of thresholding is particle analysis. Once foreground objects are separated from the background, a quantitative analysis that characterizes the number, size, and shape of particles is obtained, which can then be used to evaluate a series of nanoparticle samples.
Numerous thresholding techniques exist, differing primarily in how they deal with variations in noise, illumination, and contrast. In this paper, several popular thresholding
algorithms are qualitatively and quantitatively evaluated on transmission electron
microscopy (TEM) and atomic force microscopy (AFM) images. Initially, six
thresholding algorithms were investigated: Otsu, Ridler-Calvard, Kittler, Entropy, Tsai, and Maximum Likelihood. The Ridler-Calvard algorithm was not included in the
quantitative analysis because it did not produce acceptable qualitative results for the
images in the series.
Two quantitative measures were used to evaluate these algorithms: one compares object area before and after thresholding, the other object diameter. For AFM images, the Kittler algorithm yielded the best results, followed by the Entropy and Maximum Likelihood techniques. The Tsai algorithm yielded the top results for TEM images, followed by the Entropy and Kittler methods.
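A minimal sketch of the area-based evaluation measure, assuming a bright foreground and a reference segmentation; only the Otsu and Ridler-Calvard (isodata) thresholds are available directly in scikit-image, so the remaining algorithms would need separate implementations.

```python
from skimage import filters

def area_error(gray, reference_mask, threshold_func):
    """Relative change in total object area after thresholding, compared with a
    reference segmentation (assumes foreground pixels are brighter than background)."""
    binary = gray > threshold_func(gray)
    return abs(float(binary.sum()) - float(reference_mask.sum())) / float(reference_mask.sum())

# e.g. scores = {name: area_error(img, ref, f)
#                for name, f in [("Otsu", filters.threshold_otsu),
#                                ("Ridler-Calvard", filters.threshold_isodata)]}
```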
The distribution employed for the noise-free data and the accuracy of the parameters involved play key roles in the performance of estimators such as the maximum a posteriori (MAP) estimator. In this paper, we select a suitable model for the distribution of wavelet coefficients and present a new image denoising algorithm. We model the wavelet coefficients in each subband with a mixture of two bivariate Laplacian probability density functions (pdfs) using local parameters for the mixture model. This pdf simultaneously captures the heavy-tailed nature of the wavelet coefficients, exploits the interscale dependencies between adjacent scales, and models the intrascale dependencies of coefficients in each subband. We propose a MAP estimator for image denoising using this mixture model and the estimated local
parameters. We compare our method with other techniques employing mixture pdfs, such as a univariate Laplacian mixture model with local parameters and a bivariate Laplacian mixture model without local parameters. Despite the simplicity of our proposed method, the simulation results reveal that it outperforms these techniques and several recently published methods, both visually and in terms of peak signal-to-noise ratio (PSNR).
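For context, with a single (non-mixture) bivariate Laplacian prior on a coefficient and its parent, the MAP estimator has the well-known closed-form bivariate shrinkage rule; the mixture model above generalizes this by weighting two such pdfs with locally estimated parameters:

```latex
\hat{w}_1 \;=\; \frac{\left(\sqrt{\,y_1^{2}+y_2^{2}\,} \;-\; \frac{\sqrt{3}\,\sigma_n^{2}}{\sigma}\right)_{\!+}}
                     {\sqrt{\,y_1^{2}+y_2^{2}\,}}\; y_1 ,
```

where y_1 and y_2 are the noisy coefficient and its parent, sigma_n is the noise standard deviation, sigma is the (locally estimated) marginal standard deviation of the coefficients, and (.)_+ denotes max(., 0).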
In order to automatically enhance and restore images, especially those taken in underwater environments, where scattering and absorption by the medium strongly influence the imaging results even over short distances, it is critical to have access to an objective measure of the quality of the images obtained. This contribution presents an approach to measuring the sharpness of an image based on the weighted gray-scale angle (GSA) of detected edges. Images are first decomposed by a wavelet transform to remove random noise and part of the medium-induced noise, improving the chances of true edge detection. The sharpness of each edge is then determined by regressing the gray-scale values of the edge pixels against their locations; the resulting slope is the tangent of a gray-scale angle. The overall sharpness of the image is the average of the measured GSAs, weighted by the ratio of the power of the first-level decomposition details to the total power of the image. Adaptive determination of edge widths is facilitated by values associated with the image noise variances. To further remove noise contamination, edges whose widths are smaller than the corresponding noise variances, or that fail the regression requirement, are discarded. Without loss of generality, and although the approach is easily extendable, only horizontal edge widths are used in this study. Standard test images as well as images taken in the field are compared subjectively. Initial restoration results from field-measured underwater images based on this approach, as well as weaknesses of the metric, are also presented and discussed.
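A minimal sketch of the per-edge GSA computation, assuming a 1-D gray-value profile sampled across a detected horizontal edge; the wavelet-based weighting and the noise-variance tests are omitted, and all names are illustrative.

```python
import numpy as np

def gray_scale_angle(profile):
    """GSA of one edge: slope of a least-squares line fit to gray values versus
    pixel position across the edge, returned as an angle in degrees."""
    x = np.arange(len(profile), dtype=float)
    slope, _intercept = np.polyfit(x, np.asarray(profile, dtype=float), 1)
    return np.degrees(np.arctan(abs(slope)))

# Overall sharpness (sketch): average the per-edge GSAs, weighted by the ratio of
# first-level wavelet detail power to total image power, as described above.
```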
There are many uses of an image quality measure. It is often used to evaluate the effectiveness of an image processing
algorithm, yet there is no single widely used objective measure. In many papers, the mean squared error (MSE) or peak signal-to-noise ratio (PSNR) is used. Though these measures are well understood and easy to implement, they do not correlate well with perceived image quality. This paper presents an image quality metric that analyzes image structure rather than relying entirely on individual pixel values. It extracts image structure using quadtree decomposition. A similarity comparison function based on contrast, luminance, and structure is presented.
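A minimal sketch of a quadtree decomposition that could supply the structural blocks for such a metric, splitting on block variance; the split criterion and the contrast/luminance/structure comparison itself are not specified by the abstract and are assumptions here.

```python
import numpy as np

def quadtree_blocks(img, min_size=8, var_thresh=25.0):
    """Recursively split the image into quadrants until a block is either small
    or nearly uniform; the resulting blocks carry the image structure on which
    a block-wise similarity comparison can be computed."""
    blocks = []

    def split(y, x, h, w):
        block = img[y:y + h, x:x + w]
        if h <= min_size or w <= min_size or block.var() < var_thresh:
            blocks.append((y, x, h, w))           # leaf block: position and size
            return
        h2, w2 = h // 2, w // 2
        split(y, x, h2, w2)
        split(y, x + w2, h2, w - w2)
        split(y + h2, x, h - h2, w2)
        split(y + h2, x + w2, h - h2, w - w2)

    split(0, 0, img.shape[0], img.shape[1])
    return blocks
```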