This PDF file contains the front matter associated with SPIE Proceedings Volume 8558, including the Title Page, Copyright information, Table of Contents, and the Symposium and Conference Committees listing.
Interactive technologies have advanced rapidly in recent years, especially in the projection field. At present, however, most interactive projection systems rely on specially designed interactive pens or whiteboards, which is inconvenient and limits the user experience.
In this paper, we introduce our recent progress on theoretically modeling a real-time interactive projection system. The system allows the user to operate or draw directly on the projection screen with a finger, without any auxiliary equipment. The projector projects infrared fringe patterns onto the screen, and a CCD captures the deformed image. We resolve the finger's position and track its movement by processing the deformed image in real time. A new criterion for determining whether the finger touches the screen is proposed: at the moment of contact, the first deformed fringe on the fingertip and the first fringe in the finger's shadow are the same one. Once this correspondence is established, the location parameters can be computed by triangulation. Simulation results are given, and errors are analyzed.
Because the forming process of a transparent surface is entirely manual, it is difficult to meet real-time and high-precision requirements. In this paper, a laser monitoring system is proposed to solve this problem. The system consists of three parts: a laser, a CCD camera, and a computer. The laser beam is directed at the transparent surface and received by the CCD camera, and the image is processed by the computer in real time. From the changes of the laser spot during the forming process, a method is proposed to calculate the difference between two spot images, which determines the change in height. The experimental results show that when the transparent surface grows by 1 mm, the effective axial length changes by 30 pixels. After multiple measurements, we obtain the relationship curve between the height of the transparent surface and the effective axial length. According to this curve, the measurement error is 2.725%. The processing speed of the computer was also measured: it can process 10 images per second. The algorithm performs well in both accuracy and processing speed.
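The calibration curve relating surface height to effective axial length can be recovered with a least-squares line fit; the sample points below are made up to illustrate the quoted 30 px/mm slope, not measured data from the paper.

```python
# Minimal least-squares line fit (slope, intercept) for the
# height-vs-axial-length calibration described above.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

heights_mm = [0.0, 1.0, 2.0, 3.0]
axial_px = [0.0, 30.0, 60.0, 90.0]   # idealized: 30 px per mm of growth
slope, intercept = fit_line(heights_mm, axial_px)
print(slope)  # 30.0
```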
An algorithm based on the histogram of edge differences between a model seal (MS) and a sample seal (SS) is proposed to verify Chinese seal imprints on bank checks. The difference between the MS and an elaborately faked SS may be slight, while that between the MS and a genuine SS may be considerable owing to varying imprinting conditions. Edge differences fully reflect the geometrical differences between SS and MS. To evaluate the similarity between MS and SS, the edge difference is quantified by two parameters: the distance between non-overlapping corresponding edges and the length of each piece of non-overlapping seal edge. A histogram of the product of these two parameters is proposed as the input feature vector of a support vector machine (SVM), which classifies the SS as genuine or fake. In experiments, 4810 seal imprints (2450 genuine and 2360 fake) were verified, and the correct recognition rate was 99.42%. Moreover, the classification results can be customized according to user requirements: when both the false-acceptance and false-rejection error rates are required to be close to 0, the rejection rate is about 3%.
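The feature construction above can be sketched as follows; the bin count, range, and input values are invented for illustration, not the paper's settings.

```python
# Hedged sketch: for each non-overlapping edge piece, the product of its
# distance to the corresponding model edge and its length is accumulated
# into a fixed-bin histogram, which serves as the SVM input vector.
def edge_diff_histogram(distances, lengths, n_bins=8, max_product=80.0):
    hist = [0] * n_bins
    width = max_product / n_bins
    for d, l in zip(distances, lengths):
        idx = min(int(d * l / width), n_bins - 1)  # clamp overflow to last bin
        hist[idx] += 1
    return hist

print(edge_diff_histogram([1.0, 2.0, 5.0], [3.0, 4.0, 20.0]))  # [2, 0, 0, 0, 0, 0, 0, 1]
```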
To enhance the imaging speed of 3D imaging lidar (light detection and ranging) and achieve high-speed 3D imaging under static conditions, we propose a new 3D imaging lidar based on a laser diode and a high-speed 2D laser scanner. The proposed lidar mainly comprises a transmitter, a laser scanner, a receiver, and a processor. This paper first introduces the components and principle of the proposed 3D imaging lidar. Experiments were then carried out to evaluate its performance in terms of scanning field, measuring precision, scanning speed, and image resolution. The results show that the scanning field is about 26°×12°, the measuring precision is better than 5 cm at a 4 m distance, the scanning speed exceeds 30 frames per second (fps), and the image resolution reaches 16×101. In addition, the lidar obtains both a 3D image and an intensity image of the target at the same time.
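Converting one scanner sample (azimuth, elevation, range) into a 3D point is the standard final step for such a system; the angles and range below are hypothetical values, not measurements from the paper.

```python
import math

# Illustrative polar-to-Cartesian conversion of a single lidar sample,
# with y pointing along the scanner's boresight.
def polar_to_xyz(azimuth_deg, elevation_deg, range_m):
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.sin(az)
    y = range_m * math.cos(el) * math.cos(az)
    z = range_m * math.sin(el)
    return x, y, z

x, y, z = polar_to_xyz(0.0, 0.0, 4.0)  # target straight ahead at 4 m
print(round(y, 3))  # 4.0
```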
To enhance the imaging speed of 3D imaging lidar and achieve high-speed 3D imaging under static conditions, we propose a novel high-speed 2D laser scanner with an asymmetric 16-plane rotating mirror. This paper first analyzes the principles and characteristics of laser scanners commonly used in 3D imaging lidars, mainly the symmetric rotating mirror scanner, the vibrating mirror scanner, the oval-line scanner, and the double optical wedge scanner. We then propose an asymmetric 16-plane rotating mirror with a novel structure, which performs faster 2D scanning with only one rotating mirror. The scanning principle and main structure of the rotating mirror are introduced in detail. Based on the proposed asymmetric rotating mirror, a new high-speed laser scanner for 3D imaging lidar is implemented with several advantages: high scanning speed, large scanning field, and high reflectivity. Finally, a laser scanning experiment was carried out with the proposed scanner. The experimental results show that the scanning speed is above 30 frames per second, the scanning field is about 32°×12°, the vertical resolution of each frame is 16, and the laser reflectivity is above 0.9. The proposed laser scanner can be applied to ground-based, vehicle-borne, and airborne 3D imaging lidars.
A compressive-imaging-based APT (CI-APT) system is studied for free-space optical (FSO) communication. Linear combinations of object pixels, referred to as features, are measured, and the reconstructed objects are then used for target locating. Because it is implementation-friendly, Hadamard projection is employed for CI-APT. Spatial-domain and wavelet-domain OMP methods are studied for signal reconstruction. To demonstrate the idea, we use 64 randomly selected Hadamard features to locate a 3×3 target in a 256×256 object. The average location error is less than 2 pixels.
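The Hadamard feature measurement can be sketched in a few lines; this toy version uses a 1-D 8-element "object" and the Sylvester construction of the Hadamard matrix, and is only an assumed illustration of the measurement model, not the paper's implementation.

```python
# Each compressive feature is the inner product of the flattened object
# with one row of a Hadamard matrix (entries +1/-1).
def hadamard(n):  # n must be a power of two (Sylvester construction)
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def measure(obj_flat, rows):
    H = hadamard(len(obj_flat))
    return [sum(h * v for h, v in zip(H[r], obj_flat)) for r in rows]

obj = [0, 0, 1, 0, 0, 0, 0, 0]   # toy object: one bright pixel at index 2
print(measure(obj, [0, 1, 2]))   # [1, 1, -1]
```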
Spectral imaging technology is being studied ever more extensively in the field of material evidence examination, and UV spectral imaging is an important part of full-spectrum imaging technology. This paper summarizes the application of UV imaging technology in evidence examination and investigates the characteristic UV spectra of latent fingerprints on common objects, showing that latent traces of crime can be revealed by the ultraviolet spectral imaging method.
A real-time people counting system using ranging technology with human head-shoulder profile is discussed in
this work. To obtain the profile, the system is installed above an entrance/exit gate with vertically downward
view. Line structured light is used to detect the height of a person's head and shoulders. Compared with image-processing approaches, this method is cost-effective, computationally simple, and more accurate. The system can also detect the walking direction of each person using a second structured-light source.
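The counting idea can be sketched as thresholding the height profile recovered from the structured-light line: each contiguous run of samples above a head-height threshold is counted as one person. The threshold and profile values are invented for illustration.

```python
# Toy sketch of counting head-shoulder peaks in a 1-D height profile.
def count_people(height_profile_cm, head_threshold_cm=120):
    count, inside = 0, False
    for h in height_profile_cm:
        if h > head_threshold_cm and not inside:
            count, inside = count + 1, True   # entered a new peak
        elif h <= head_threshold_cm:
            inside = False                    # left the peak
    return count

profile = [0, 0, 150, 165, 150, 0, 0, 140, 155, 0]
print(count_people(profile))  # 2
```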
Fire detection based on video surveillance is a very effective method for large-area outdoor fire prevention, but the unpredictability of place and time makes automatic fire detection a difficult problem. This paper adopts loose color selection and frame differencing to narrow down possible fire regions, where every pixel's temporal color variations are analyzed by 3-state Markov models. One Markov model is used to examine brightness variation, and the other measures fire-color likeness by color difference. To eliminate false detections, fractal dimension calculation and texture matching are performed. Experimental results show the proposed method is feasible and suitable for outdoor or indoor fire detection in surveillance videos.
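A 3-state Markov model over a pixel's temporal states can be scored as a product of transition probabilities; the states and the transition matrix below are invented for the sketch, not taken from the paper.

```python
# Illustrative 3-state Markov chain for a pixel's brightness behavior
# (hypothetical states: 0 = stable, 1 = small change, 2 = large flicker).
FIRE_MODEL = [
    [0.2, 0.4, 0.4],   # transitions from state 0
    [0.3, 0.3, 0.4],   # transitions from state 1
    [0.3, 0.4, 0.3],   # transitions from state 2
]

def sequence_prob(states, model):
    """Probability of an observed state sequence under the chain."""
    p = 1.0
    for a, b in zip(states, states[1:]):
        p *= model[a][b]
    return p

print(sequence_prob([0, 2, 1, 2], FIRE_MODEL))  # 0.4 * 0.4 * 0.4
```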
Instead of considering only the amount of fluorescent signal spatially distributed on the image of milled rice grains, this paper shows how our single-wavelength spectral-imaging-based Thai jasmine (KDML105) rice identification system can be improved by analyzing the shape and size of the image of each milled rice variety, especially during the image thresholding operation. The image of each milled rice variety is expressed as chain codes and elliptic Fourier coefficients. A feed-forward back-propagation neural network model is then applied, resulting in an improved average FAR of 11.0% and FRR of 19.0% in distinguishing KDML105 milled rice from the four unwanted milled rice varieties.
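The chain-code representation mentioned above can be sketched directly: a traced boundary becomes a Freeman 8-direction code, the usual input to elliptic Fourier analysis. The boundary here is a toy unit square, not real grain data.

```python
# Freeman 8-direction chain code from a list of 8-connected boundary points.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(square))  # [0, 2, 4, 6]
```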
Automatically recognizing people through visual surveillance is very necessary for public security. Gait-based identification aims to recognize a person automatically from walking video using computer vision and image processing. As a potential biometric, human gait identification has attracted more and more researchers. Current methods can be divided into two categories: model-based and motion-based. In this paper, a human gait identification method based on two-dimensional principal component analysis (2DPCA) and temporal-space analysis is proposed. Using background estimation and image subtraction, we obtain a binary image sequence from the surveillance video. By comparing adjacent images in this gait sequence, we obtain a sequence of binary difference images, each of which indicates the body's movement during walking. The temporal-space features are extracted from this sequence as follows: projecting one difference image onto the Y axis or X axis yields two vectors, and projecting every difference image in the sequence yields two matrices that characterize the walking style. 2DPCA is then used to transform these two matrices into two vectors while preserving maximum separability. Finally, the similarity of two gait image sequences is calculated as the Euclidean distance between the two vectors. The performance of our method is illustrated on the CASIA Gait Database.
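The axis-projection step described above is simple to model: sum each binary difference image along its rows and columns. The 3×4 "image" below is a toy example.

```python
# Project a binary difference image onto the X axis (column sums)
# and the Y axis (row sums), as in the temporal-space feature step.
def axis_projections(img):
    y_proj = [sum(row) for row in img]        # one value per row
    x_proj = [sum(col) for col in zip(*img)]  # one value per column
    return x_proj, y_proj

diff = [[0, 1, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 1, 0]]
x_proj, y_proj = axis_projections(diff)
print(x_proj, y_proj)  # [1, 2, 3, 1] [2, 4, 1]
```

Stacking one such projection per frame gives the two matrices that 2DPCA then compresses.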
A new algorithm is presented for feature point tracking with multi-view constraints. Dynamic scenes with multiple, independently moving objects are considered, in which objects may temporarily disappear, enter, or leave the field of view. Unlike most existing approaches to feature point tracking, a 3D spatial geometry constraint is utilized to make tracking more stable and better fit the real 3D motion of objects, and linking flags and matching flags are proposed to describe the complex relationships of feature points observed by multiple cameras. Results for synthetic motion sequences are presented.
As the color level of the rice leaf corresponds to the nitrogen status of rice in the field, farmers use a leaf color chart
(LCC) to identify the color level of the rice leaf in order to estimate the amount of N fertilizer needed for the rice field.
However, the farmers' varying ability and the degradation of the LCC colors affect the accuracy of reading the rice leaf color level. In this paper, we propose a mobile-device-based rice leaf color analyzer called "BaiKhao" (meaning "rice leaf" in Thai). Our key idea is to simultaneously capture and process the two-dimensional (2-D) data scattered and reflected from the rice leaf and its surrounding reference, thus eliminating expensive external components and mitigating environmental fluctuations while achieving high accuracy. Our field tests using an Android-based mobile phone show that all important leaf color levels (1, 2, 3, and 4) can be correctly identified. Additional key features include low cost and ease of implementation, with highly efficient distribution through the internet.
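One plausible reading of the reference-normalization idea is sketched below: divide the leaf's mean green value by that of the surrounding reference patch, then map the ratio to the nearest LCC level. The `LEVEL_RATIOS` calibration table and all values are invented assumptions, not the app's actual calibration.

```python
# Hypothetical calibration: reference-normalized green ratio per LCC level.
LEVEL_RATIOS = {1: 0.90, 2: 0.75, 3: 0.60, 4: 0.45}

def lcc_level(leaf_green_mean, reference_green_mean):
    """Map a leaf/reference green ratio to the nearest LCC level."""
    ratio = leaf_green_mean / reference_green_mean
    return min(LEVEL_RATIOS, key=lambda lv: abs(LEVEL_RATIOS[lv] - ratio))

print(lcc_level(120.0, 200.0))  # ratio 0.60 -> level 3
```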
We propose a very-low-cost fixed interferential polarizing phase-contrast scope suitable for studying translucent objects. Our key design approach relies on the arrangement of a circular polarizer sheet, a mirror, and a digital camera in a retro-reflective optical structure. The linear polarizer embedded in the circular polarizer sheet acts as both a polarization beam splitter and a polarization beam combiner, while the quarter-wave plate inside the circular polarizer sheet functions as a fixed phase plate without narrowing the field of view of the digital camera. The retro-reflective configuration doubles the phase difference between the two orthogonally polarized optical beams, thus automatically creating an initially dark background. An experimental demonstration using an off-the-shelf digital microscope with built-in white light-emitting diodes and a specified 400x maximum magnification, a circular polarizer sheet, and a mirror shows that onion cells and Steinernema thailandense nematodes can be clearly observed with striking color, high contrast, and a three-dimensional appearance.
This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6-inch CMOS image sensor is used to acquire HD images with 1280×800 pixels. The camera interface (CAMIF) of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standard; image rotation (90 degrees clockwise) and image processing functions are performed by the CAMIF. The decode engine of the processor plays back or records HD video at 30 frames per second, and the built-in HDMI interface transmits high-definition images to an external display. Image processing procedures such as demosaicking, color correction, and auto white balance are realized on the A8 platform; other functions are selected through OSD settings. An LCD panel displays the real-time images. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through a USB interface. The camera head measures 4×4.8×15 mm with a working distance of more than 3 meters. The whole endoscope system can be powered by a lithium battery, offering miniaturization, low cost, and portability.
A fiber speckle reduction device designed and built on the basis of temporal coherence theory is introduced in this paper, motivated by the decisive influence of mode dispersion on signal transmission in large-diameter step-index multimode fiber. Contrast experiments under different exposure times verify that the device can effectively suppress fiber speckle noise and improve signal contrast. The device has good application prospects in laser illumination monitoring systems, multimode fiber property test systems, laser mapping systems, etc.
In this paper, we first introduce the concept of the depth of field (DOF) in machine vision systems, which serves as a
basic building block for our study. Then, related work on the generalization of the fundamental methods and current
status with regard to extending the DOF is presented, followed by a detailed analysis of the principles and performances
of some representative extended depth-of-field (EDOF) technologies. Finally, we make some predictions about the
prospects of EDOF technologies.
Recently, indoor LED lighting has been considered for building green infrastructure with energy savings while additionally providing LED-IT convergence services, such as visible light communication (VLC) based location awareness and navigation. In a large, complex shopping mall, for example, location awareness for navigating to a destination is an important issue, but conventional GPS navigation does not work indoors, and alternative WLAN-based location services suffer from low positional accuracy. In particular, it is difficult to estimate height exactly; if the height error exceeds the floor-to-floor spacing, serious problems may result. Conventional navigation is therefore inappropriate indoors. A possible alternative is VLC-based location awareness: because indoor LED infrastructure will already be installed to provide lighting, it can offer relatively high positioning accuracy when combined with VLC technology. In this paper, we present a new VLC-based positioning system using visible LED lights and image sensors. Our system uses the locations of the image sensor lens and of the reception plane; by using more than two image sensors, we can determine the transmitter position with less than 1 m of position error. Through simulation, we verify the validity of the proposed VLC-based positioning system.
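The geometric core of such a system can be sketched in 2-D: each image sensor yields a bearing line through its lens center toward the LED, and the transmitter estimate is the intersection of two such lines. All coordinates and directions below are illustrative assumptions.

```python
# Intersect two bearing lines p + t*d (one per image sensor) to locate
# the LED transmitter in a 2-D sketch of the positioning geometry.
def intersect(p1, d1, p2, d2):
    # Solve p1 + t*d1 = p2 + s*d2 for t via the 2x2 determinant.
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    t = ((p2[0] - p1[0]) * (-d2[1]) - (p2[1] - p1[1]) * (-d2[0])) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Sensors at (0,0) and (4,0), bearings at +/-45 degrees: LED at (2, 2).
led = intersect((0.0, 0.0), (1.0, 1.0), (4.0, 0.0), (-1.0, 1.0))
print(led)  # (2.0, 2.0)
```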
In this paper, we propose a brand-new communication concept called "vision communication," based on an LED array and an image sensor. The system consists of an LED array as the transmitter and a digital device containing an image sensor, such as a CCD or CMOS sensor, as the receiver. To transmit data, the proposed scheme simultaneously uses digital image processing and optical wireless communication, so cognitive communication becomes possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED can emit a multi-spectral optical signal such as visible, infrared, or ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. This multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our scheme, a data packet is composed of sync data and information data. The sync data are used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data contained in each image snapshot using image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
Although CMOS cameras with USB interfaces are popular, they are not small enough, and their working lengths are not long enough, for use as industrial endoscopes. Here we present a small-sized image acquisition system for a high-definition industrial electronic endoscope based on a USB 2.0 high-speed controller and a 1/6-inch CMOS image sensor with a resolution of 1 megapixel. Signals from the CMOS image sensor are transferred to a computer through the USB interface in slave-FIFO mode for processing, storage, and display. LVDS technology is used for the image data stream between the sensor and the USB controller to achieve a long working distance, high signal integrity, and low noise. The maximum pixel clock runs at 48 MHz, supporting 30 fps in QSXGA mode or 15 fps in SXGA mode, and the data transmission rate can reach 36 megabytes per second. The imaging system is simple in structure, low-power, low-cost, and easy to control. Based on multi-threading, software is also designed that implements automatic exposure, automatic gain, and AVI video recording.
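As a rough sanity check on the quoted transfer budget, the sustained data rate is frame size times frame rate; the one-byte-per-pixel assumption below is hypothetical (a raw Bayer readout), so the result is an illustration, not the paper's exact figure.

```python
# Back-of-the-envelope USB transfer budget: width x height x fps x bytes/px.
def data_rate_mb_s(width, height, fps, bytes_per_pixel=1):
    return width * height * fps * bytes_per_pixel / 1_000_000

# 1280x800 at 30 fps, assuming 1 byte/pixel -> about 31 MB/s,
# comfortably within the quoted 36 MB/s ceiling.
print(data_rate_mb_s(1280, 800, 30))  # 30.72
```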
Today, solid-state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging, and industrial vision systems. These image sensors require the integration of complex image processing algorithms in (or near) the focal plane. Such devices must meet constraints on the quality of the acquired images, the speed and performance of the embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly; for example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, this article presents the design and test of a 64×64-pixel image sensor with non-linear image processing, built in a standard 0.35 μm CMOS technology. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on an analog Minima/Maxima Unit (MMU), which calculates the minimum and maximum values (non-linear functions) in real time over a 2×2 pixel neighbourhood. Each MMU needs 52 transistors, and the pixel pitch is 40×40 μm; the total area of the 64×64-pixel array is 12.5 mm². Our tests have validated the main functions of the new sensor, including fast image acquisition (10K frames per second) and minima/maxima calculation in less than one millisecond.
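A software model of the MMU's function helps make the analog behavior concrete: for each non-overlapping 2×2 neighbourhood, output the minimum and maximum pixel value. The 4×4 test image is illustrative.

```python
# Software model of the analog Minima/Maxima Unit: (min, max) per
# non-overlapping 2x2 block, scanned row-major.
def mmu(img):
    out = []
    for r in range(0, len(img) - 1, 2):
        for c in range(0, len(img[0]) - 1, 2):
            block = [img[r][c], img[r][c + 1],
                     img[r + 1][c], img[r + 1][c + 1]]
            out.append((min(block), max(block)))
    return out

img = [[1, 2, 8, 7],
       [3, 4, 6, 5],
       [9, 9, 0, 1],
       [9, 9, 2, 3]]
print(mmu(img))  # [(1, 4), (5, 8), (9, 9), (0, 3)]
```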
Multiview video coding (MVC) is essential for applications of auto-stereoscopic three-dimensional displays. However, the computational complexity of MVC encoders is tremendous, so fast algorithms are highly desirable for practical applications of MVC. Based on joint early termination, selective inter-view prediction, and an optimized Inter8×8 mode decision by comparison, a fast macroblock (MB) mode selection algorithm is presented. Compared with the full mode decision in MVC, the experimental results show that the proposed algorithm reduces encoding time by 78.13% on average, and by up to 90.21%, with only a small increase in bit rate and a small loss in PSNR.
The theory of iterated function systems (IFS) has been used to generate fractal graphics. In this paper, a method is proposed to generate fractal Chinese characters with an IFS. A Chinese character consists of strokes, and in most cases one stroke is modeled with one affine mapping. The finite set of contractive affine mappings that model all the strokes constitutes the IFS for that character. Formulas to determine the coefficients of the IFS are deduced, and the random iteration algorithm is used to render the fractal Chinese character. Experimental results show that the generated fractal Chinese characters are self-similar and visually pleasing.
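The random iteration (chaos game) algorithm mentioned above can be sketched in a few lines. The Sierpinski-triangle IFS below is a stand-in example, since the paper's stroke-derived coefficients are not given; a character's IFS would have one contractive map per stroke.

```python
import random

def random_iteration(ifs, n_points=10000, seed=0):
    """Render the attractor of an IFS with the random iteration (chaos
    game) algorithm.  Each entry of `ifs` is (a, b, c, d, e, f, p): the
    affine map (x, y) -> (a*x + b*y + e, c*x + d*y + f), picked with
    probability p."""
    rng = random.Random(seed)
    maps = [m[:6] for m in ifs]
    probs = [m[6] for m in ifs]
    x, y = 0.0, 0.0
    points = []
    for i in range(n_points + 20):               # first 20 steps: transient
        a, b, c, d, e, f = rng.choices(maps, weights=probs)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        if i >= 20:
            points.append((x, y))
    return points

# stand-in IFS: three half-scale contractions (Sierpinski triangle)
sierpinski = [(0.5, 0, 0, 0.5, 0.00, 0.0, 1 / 3),
              (0.5, 0, 0, 0.5, 0.50, 0.0, 1 / 3),
              (0.5, 0, 0, 0.5, 0.25, 0.5, 1 / 3)]
pts = random_iteration(sierpinski)
```

Plotting `pts` as a scatter reveals the self-similar attractor; probabilities are usually chosen proportional to the area each map covers so the rendering fills evenly.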
With the development of 3-D imaging techniques, three-dimensional point cloud partitioning has become a key research field. In this paper, two data partition algorithms are proposed. Each algorithm includes two parts: data re-organization and data classification. Two methods for data re-organization are proposed: dimension reduction and triangle mesh reconstruction. Data classification is based on edge detection of depth data; edge detection algorithms for gray images are adapted to depth data partitioning. For the triangulation method, the partition is realized by region growing. Simulation results show that both methods can partition point cloud data of a standard template and a real scene, and on the standard template the total error rates of the two algorithms are both less than 3%.
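As a toy illustration of edge detection on depth data, the sketch below marks pixels adjacent to a depth discontinuity; the 4-neighbour rule and the `jump` threshold are assumptions for illustration, not the paper's exact operator.

```python
import numpy as np

def depth_edges(depth, jump=0.1):
    """Mark edge pixels wherever the depth jump to a 4-neighbour exceeds
    `jump` -- a minimal analogue of running a grey-image edge detector on
    range data."""
    gx = np.abs(np.diff(depth, axis=1))
    gy = np.abs(np.diff(depth, axis=0))
    edges = np.zeros(depth.shape, dtype=bool)
    edges[:, :-1] |= gx > jump     # pixel on the left of a horizontal jump
    edges[:, 1:] |= gx > jump      # pixel on the right of it
    edges[:-1, :] |= gy > jump
    edges[1:, :] |= gy > jump
    return edges

# two flat planes at different depths: the edge sits on the boundary columns
d = np.zeros((4, 6))
d[:, 3:] = 1.0
e = depth_edges(d)
```

The connected components of the non-edge pixels then give the partition; for real scans the threshold would be set relative to sensor noise.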
This paper introduces a novel vehicle detection method combining probability-voting-based hypothesis generation (HG) and SVM-based hypothesis verification (HV), specialized for complex-background airborne traffic video. In the HG stage, a statistics-based road area extraction method is applied and the lane marks are eliminated. The remaining areas are clustered, and the Canny algorithm is then used to detect edges within them. A voting strategy is designed to detect rectangular objects in the scene. In the HV stage, every candidate vehicle area is rotated to align the vehicle vertically, its vertical and horizontal gradients are calculated, and an SVM classifies vehicle versus non-vehicle. The proposed method has been applied to several traffic scenes, and the experimental results show that it is effective and accurate for vehicle detection.
Gait-based human identification is very useful for automatic person recognition in visual surveillance and has attracted more and more researchers. A key step is to extract the human silhouette from an image sequence. Current silhouette extraction methods are mainly based on simple color subtraction, and they perform poorly when the color of some body parts is similar to the background. In this paper a cosegmentation-based human silhouette extraction method is proposed. Cosegmentation is typically defined as the task of jointly segmenting "something similar" in a given set of images. A human gait image sequence can be divided into several step cycles, each consisting of 10-15 frames, and the frames exhibit the following similarities: every frame is similar to the next or previous frame; every frame is similar to the corresponding frame in the next or previous step cycle; and every pixel can find similar pixels in other frames. The extraction process works as follows: initially, only points with high contrast to the background are used as foreground kernel points and points in the background are used as background kernel points; then points similar to the foreground set are added to it, and points similar to the background set are added to it. The similarity measure takes the context of each point into account. Experimental results show that our method outperforms traditional human silhouette extraction methods.
Keywords: Human gait
This paper presents a novel bottom-up attention model based only on the C1 features of the HMAX model, which is efficient and consistent. Although similar orientation-based features are commonly used by most bottom-up attention models, we adopt different activation and combination approaches to obtain the final map. We compare two operations for activation and combination, MAX and SUM, and argue that they are often complementary. We further argue that for a general object recognition system the traditional evaluation rule, accordance with human fixations, is inappropriate. We suggest new evaluation rules and approaches for bottom-up attention models that focus on the information unloss rate and useful rate relative to the labeled attention area. We formally define the unloss rate and useful rate, and give an efficient algorithm to compute them from the labeled and output attention areas. We also discard the center-surround assumption commonly adopted in bottom-up attention models. Compared with GBVS under the suggested evaluation rules and approaches on complex street scenes, our model shows excellent performance.
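A minimal sketch of the MAX versus SUM combination of per-orientation maps; the toy bar responses below are invented, whereas the paper's C1 maps come from the HMAX hierarchy.

```python
import numpy as np

def combine(maps, op):
    """Fuse per-orientation activation maps into one map with either MAX
    (winner-take-all) or SUM (evidence accumulation)."""
    stack = np.stack(maps)
    return stack.max(axis=0) if op == "max" else stack.sum(axis=0)

# toy orientation responses: a horizontal and a vertical bar
h = np.zeros((5, 5)); h[2, :] = 1.0
v = np.zeros((5, 5)); v[:, 2] = 1.0
s_max = combine([h, v], "max")   # keeps the strongest single orientation
s_sum = combine([h, v], "sum")   # rewards multi-orientation support
```

At the crossing pixel SUM doubles the response while MAX leaves it flat, which is one sense in which the two operations are complementary.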
Fractal graphics generated with iterated function systems (IFS) have been applied in broad areas. Since the collage regions of different IFS may differ, it is difficult to display the attractors of several iterated function systems in the same region of a computer screen with one program without modifying the display parameters. An algorithm is proposed in this paper to solve this problem. A set of transforms is repeatedly applied to the coefficients of the IFS so that the collage region of the resulting IFS converges to the unit square. Experimental results demonstrate that the collage region of any IFS can be normalized to the unit square with the proposed method.
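The normalization idea can be sketched as a single affine conjugation that maps an estimated attractor bounding box onto the unit square. This is a simplification of the paper's repeated coefficient transforms, and it assumes equal map probabilities; the bounding box is estimated by the chaos game.

```python
import random

def attractor_bbox(ifs, n=5000, seed=1):
    """Estimate the attractor's bounding box by random iteration."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    xs, ys = [], []
    for i in range(n + 20):                      # discard a short transient
        a, b, c, d, e, f = rng.choice(ifs)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i >= 20:
            xs.append(x); ys.append(y)
    return min(xs), min(ys), max(xs), max(ys)

def normalize_ifs(ifs):
    """Conjugate every map w by the change of variables N that sends the
    bounding box to [0,1]x[0,1]; the new maps are N o w o N^-1, so the
    normalized attractor fills the unit square."""
    x0, y0, x1, y1 = attractor_bbox(ifs)
    w, h = x1 - x0, y1 - y0
    return [(a, b * h / w,
             c * w / h, d,
             (a * x0 + b * y0 + e - x0) / w,
             (c * x0 + d * y0 + f - y0) / h)
            for a, b, c, d, e, f in ifs]

# Sierpinski-like IFS whose attractor spans roughly [1,3] x [1,3]
ifs = [(0.5, 0, 0, 0.5, 0.5, 0.5),
       (0.5, 0, 0, 0.5, 1.5, 0.5),
       (0.5, 0, 0, 0.5, 1.0, 1.5)]
norm = normalize_ifs(ifs)
```

After normalization, re-estimating the bounding box of `norm` returns approximately the unit square, so all attractors can share one display window.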
Image registration is crucial in various image fusion tasks such as super-resolution. For successful super-resolution reconstruction, it is essential to obtain highly accurate subpixel motion estimates between the input images. This paper proposes a frequency-domain motion estimation algorithm for under-sampled infrared images. The algorithm, which considers only pure translational motion, is based on phase-only correlation. Because of the discrete Fourier transform and the subpixel displacement, the signal peak is not always concentrated at integer coordinates; the samples adjacent to the peak are therefore used to estimate the aliasing influence. Excellent results are obtained for subpixel translation estimation. The algorithm is also compared with other algorithms, and the analyses show that it is more robust and accurate.
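A sketch of the integer-pixel core of phase-only correlation; the paper's subpixel refinement from the samples adjacent to the peak is omitted here.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer-pixel translation of `b` relative to `a` from the peak of
    the phase-only correlation surface, i.e. np.roll(a, shift) ~ b."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                    # keep phase information only
    corr = np.real(np.fft.ifft2(F))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    dy = (-py) % a.shape[0]                   # wrap peak index to a shift
    dx = (-px) % a.shape[1]
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))  # known translation
```

On this synthetic pair the estimator recovers the planted (3, -5) shift; a subpixel extension would fit the peak and its neighbours instead of taking a bare argmax.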
An illumination- and affine-invariant descriptor is proposed for registering aerial images with large illumination changes and affine transformations, low overlapping areas, monotonous backgrounds, or similar features. Firstly, triangle regions are detected from the K-nearest-neighbor (K-NN) graph of the initial matches produced by the Scale-Invariant Feature Transform (SIFT). To improve accuracy, region growing is applied to enlarge small and slender triangles. An illumination- and affine-invariant descriptor, named IIMSA, is then defined to describe the triangle regions and measure their similarity; it combines MultiScale Autoconvolution (MSA) with multiscale retinex (MSR). The performance of the descriptor is evaluated on optical aerial images, and the experimental results demonstrate that the proposed IIMSA descriptor is more distinctive than MSA and SIFT.
A real-time matching method for locating an object in a binocular stereo vision system and measuring its distance to the camera is proposed in this paper. The 3D image is composed of a left image and a right image captured by the binocular cameras and is displayed on a 3D liquid crystal TV. The target is chosen with a user-controlled cursor constrained to the left image. The point in the right image matching the current mouse position in the left image is then found by an effective point matching algorithm, and a right cursor is generated at the matched point. Meanwhile, the distance is calculated from the parallax between the paired points. The algorithm was implemented in a custom C# program. The results show that our method matches the selected pixel accurately and achieves real-time distance measurement. Moreover, the method is low-cost, with few hardware requirements.
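The distance computation from parallax reduces to the standard triangulation formula Z = fB/d for a rectified rig; the rig parameters below are hypothetical.

```python
def distance_from_parallax(x_left, x_right, focal_px, baseline_m):
    """Triangulate object distance from the horizontal parallax between a
    matched point pair in a rectified binocular rig: Z = f * B / d."""
    disparity = x_left - x_right           # pixels; > 0 for a finite depth
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity

# hypothetical rig: 800 px focal length, 6 cm baseline, 16 px parallax
z = distance_from_parallax(412, 396, focal_px=800, baseline_m=0.06)
```

With these made-up numbers the target lies 3 m from the cameras; note that depth resolution degrades quadratically with distance, since a one-pixel disparity error matters more as the disparity shrinks.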
Wet paper coding is a model mainly used in the field of image coding. Based on the wet paper code model and the human visual system, this paper constructs a new wet-paper-code steganographic method. According to the regional complexity and other characteristics of the host image, the secret bits are adaptively embedded into the wavelet coefficients of the image subbands with wet paper codes. Receivers of the secret information do not need to know the specific embedding method; a simple matrix multiplication suffices to extract the secret information, which improves the security of the steganographic algorithm in several ways. Experiments show that the method has good visual invisibility and resistance to active steganalysis attacks.
Color fusion technology, as one of the typical fusion technologies, has received worldwide attention: multiband images are fused into a single color image. Several effective visible/thermal-infrared color fusion algorithms have been proposed, and we have successfully run a real-time natural-sense visible/infrared color fusion algorithm on DSP and FPGA hardware processing platforms. Nevertheless, gray image fusion has its own unique applications depending on the task.
Based on our natural-sense visible/infrared color image fusion algorithm, we propose a visible/infrared gray image fusion algorithm: we first perform YUV color fusion and then output the luminance of the fused result as the gray fusion image. The algorithm is compared with typical fusion algorithms (weighted averaging, the Laplacian pyramid, and the Haar wavelet) using several objective evaluation indicators. The objective and subjective comparisons show that the proposed algorithm has clear advantages, indicating that multiband gray image fusion in color space is feasible.
The algorithm is implemented in real time on a DSP hardware image processing platform with a TI chip as the kernel processor, which integrates natural-sense color fusion and gray fusion of visible (low-light-level) and thermal imaging. Users can conveniently choose between natural-sense color fusion and gray fusion for real-time video output.
Target acquisition is of great importance for a shipborne range-gated night vision system, which performs target finding, target tracking, and ranging. A digital image processing algorithm is developed for this night vision equipment. The target contour is extracted with the Canny edge detection algorithm based on self-adaptive Otsu threshold segmentation, and edge thinning, edge connection, and morphological methods are then applied to refine the contour. Pixels inside the contour are collected by a horizontal-vertical traversal. Tests on all ship targets from the range-gated equipment show that both the target contour and the inner pixels can be acquired with this algorithm.
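The self-adaptive Otsu threshold used ahead of Canny edge detection can be sketched as a histogram-based maximization of between-class variance; the toy "ship" image below is invented for illustration.

```python
import numpy as np

def otsu_threshold(img):
    """Self-adaptive Otsu threshold for an 8-bit image: choose the grey
    level that maximizes the between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # class-0 partial mean
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0       # empty classes score zero
    return int(np.argmax(sigma_b))

# bimodal toy image: dark sea background with a bright "ship" patch
img = np.full((32, 32), 30, dtype=np.uint8)
img[8:24, 8:24] = 200
t = otsu_threshold(img)
mask = img > t        # segmentation; edge detection then runs on this mask
```

Because the threshold is recomputed per frame from the histogram, it adapts to changing gain and illumination in the range-gated imagery.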
The image fusion method directly affects the stitching result. If there are moving objects in the images, the same object may be blended twice after fusion and cause ghosting, so a fusion method that effectively eliminates exposure differences and ghosting is essential for image mosaicking. The optimal-seam method can eliminate blended ghosting, but the seam with the smallest overall strength value may pass through individual error points and thus be chosen incorrectly, leaving residual ghosting. To avoid the impact of error points, an improved optimal seam based on feature points is used in this paper. The scale-invariant feature transform and random sample consensus are used to ensure that the detected feature points are accurate. Unlike the original optimal seam, which extends toward the point with the smallest criterion value, the improved seam extends toward feature points, whose strength values are given an appropriate weight so that the seam contains more of them; in this way error points and dynamic elements are avoided. Since the two sides of the optimal seam still exhibit exposure differences, an image fusion step is needed for a smooth and natural mosaic. Poisson fusion can synthesize the fragments seamlessly, so it is used here to eliminate the exposure differences.
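The baseline optimal seam (before the paper's feature-point weighting) can be sketched as a dynamic program over a cost map; note how even this version routes around an isolated error point on an otherwise cheap path.

```python
import numpy as np

def optimal_seam(cost):
    """Dynamic-programming seam through a cost map (e.g. colour difference
    over the mosaic overlap): each row extends from the cheapest of the
    three neighbours in the row above, minimizing total seam strength."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    seam = [int(np.argmin(acc[-1]))]            # backtrack from the bottom
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(acc[y, lo:hi])))
    return seam[::-1]                           # seam[y] = column in row y

# zero-cost corridor in column 2 with a single expensive "error point" on it
c = np.ones((5, 5))
c[:, 2] = 0.0
c[2, 2] = 10.0
seam = optimal_seam(c)
```

The paper's improvement would additionally down-weight the cost at verified feature points so the seam is pulled through them rather than merely around high-cost pixels.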
Generally, the brightness of a midcourse missile due to reflected sunlight depends on the sunlight direction and the observation direction, and the method for calculating the directional correlation coefficients differs for differently shaped objects. This paper introduces a simple way to calculate the correlation coefficients through a vector integral. As an example, a cylindrical midcourse missile is used to illustrate the vector integral method; the results agree with those reported in the literature. A characteristic of the vector integral method is that it operates in rectangular coordinates, so it is not limited by the geometric form of the space object, and the rule for selecting a coordinate system makes the calculation easy. Depending on the selected coordinate system and the integration plane (xoy, xoz, or yoz), the expression of the integral differs.
Compressive spectral imaging is a novel spectral imaging technique that combines traditional spectral imaging with the new concept of compressive sensing. Spatial-coding compressive spectral imaging realizes snapshot imaging and dimension reduction of the acquired data cube by successive modulation, dispersion, and stacking of the light signal; it reduces the amount of acquired data, increases the imaging signal-to-noise ratio, realizes snapshot imaging over a large field of view, and has already been applied in areas such as high-speed imaging and fluorescence imaging.
In this paper, the physical model of single-dispersion spatial-coding compressive spectral imaging is reviewed, its data flow is analyzed, and its reconstruction problem is formulated. Existing sparse reconstruction methods are investigated, and a module based on the two-step iterative shrinkage/thresholding algorithm is built to reconstruct the imaging data. A total-variation regularizer is included in the unconstrained minimization problem, so the smoothness of the restored data cube can be controlled through its tuning parameter. To verify the system model and the reconstruction method, a simulation experiment is carried out on a scene with both spatial and spectral features. The root-mean-square errors of the whole-band reconstructed spectral images under different regularization tuning parameters are calculated, revealing the relation between data fidelity and the tuning parameter; imaging quality is also evaluated by visual inspection of the resulting images and spectral curves.
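The shrinkage step at the heart of iterative shrinkage/thresholding can be illustrated with plain ISTA on an l1-regularized least-squares problem; the paper uses TwIST (its two-step accelerated variant) with a total-variation regularizer, so this is only the simplest relative.

```python
import numpy as np

def ista(A, y, lam, step, iters=100):
    """Plain iterative shrinkage/thresholding for
    min ||A x - y||^2 / 2 + lam * ||x||_1: a gradient step on the data
    term followed by soft-thresholding (the shrinkage operator)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # shrinkage
    return x

# with A = I and step = 1 the iteration collapses to one soft-threshold of y,
# which makes the shrinkage operator easy to see in isolation
A = np.eye(4)
y = np.array([3.0, -0.005, 0.5, 0.0])
x = ista(A, y, lam=0.01, step=1.0)
```

Raising `lam` zeroes more coefficients (smoother, sparser solutions); lowering it favours data fidelity, mirroring the tuning-parameter trade-off studied in the paper for the TV regularizer.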
A fast and accurate TV logo detection method is presented based on real-time image filtering, noise elimination, and recognition of image features including edge and gray-level information. It is important to accurately extract the optical template from the sample video stream using the time-averaging method; different templates are then used to match different logos in separate video streams of different resolutions based on the topological features of the logos. Twelve video streams with different logos are used to verify the proposed method, and the experimental results demonstrate an accuracy of up to 99%.
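The time-averaging template extraction can be sketched as follows; the frame sizes, background pattern, and variance threshold are invented for illustration.

```python
import numpy as np

def logo_template(frames):
    """Time-averaged template extraction: static logo pixels survive the
    temporal mean while moving programme content averages out, and low
    temporal variance marks the candidate logo mask."""
    stack = np.stack(frames).astype(float)
    return stack.mean(axis=0), stack.var(axis=0)

# 16 synthetic frames: a constant 3x3 "logo" over a changing background
frames = []
for i in range(16):
    f = np.full((8, 8), float((i * 16) % 256))   # background varies per frame
    f[:3, :3] = 200.0                            # static logo patch
    frames.append(f)
mean, var = logo_template(frames)
mask = var < 1.0          # low-variance pixels form the logo mask
```

In practice the averaging window must span enough frames that the programme content decorrelates, and the mean image inside the mask becomes the matching template.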
Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, filtered back-projection (FBP) is still the classical and most commonly used algorithm in clinical MI. In FBP, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Since the simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise, this paper uses improved wavelet denoising combined with the parallel-beam FBP algorithm to enhance the quality of the reconstructed image. In the experiments, the reconstruction results were compared between the improved wavelet denoising and other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimal reconstruction, different algorithms and different wavelet bases combined with three filters were each tested. The experimental results show that the improved FBP algorithm reconstructs better than the others. Comparing the results of the different algorithms under two evaluation criteria, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the improved FBP based on db2 and the Hanning filter at decomposition scale 2 was best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
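The wavelet denoising step can be illustrated with a one-level Haar shrinkage on a 1-D "projection"; the paper's best setting is db2 at decomposition scale 2 combined with a Hanning filter, and Haar is used here only to keep the sketch dependency-free.

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar wavelet shrinkage: soft-threshold the detail
    coefficients, keep the approximation, and invert the transform."""
    s = np.asarray(signal, dtype=float)          # length must be even
    a = (s[0::2] + s[1::2]) / np.sqrt(2)         # approximation coefficients
    d = (s[0::2] - s[1::2]) / np.sqrt(2)         # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    out = np.empty_like(s)
    out[0::2] = (a + d) / np.sqrt(2)             # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

clean = np.repeat([0.0, 1.0], 32)                # ideal step "projection"
rng = np.random.default_rng(1)
noisy = clean + rng.normal(0.0, 0.1, clean.size)
denoised = haar_denoise(noisy, thresh=0.2)
```

Because the transform is orthonormal and the clean signal's details vanish, shrinking the details can only reduce the error here; in the paper this denoising is applied to each projection before the ramp-filtered back-projection.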
In this paper, a novel age estimation method using an active appearance model (AAM) combined with local texture features is presented, which overcomes drawbacks of the plain AAM. Multi-scale local binary patterns (MLBP) are used as the local texture descriptors to obtain rotation-invariant texture features, and the combined AAM model is built with MLBP features, so that both global face features and local texture features are used. Support vector regression (SVR) is used to estimate the facial age on the FG-NET face aging data set. Experimental results demonstrate that the MLBP-combined AAM achieves a lower mean absolute error (MAE) and higher estimation accuracy than other methods.
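A basic rotation-invariant LBP at two radii can be sketched as follows; integer diagonal offsets are used instead of the circular interpolation of true MLBP, so this is a simplified variant.

```python
import numpy as np

def lbp_codes(img, r=1):
    """8-neighbour LBP at radius r using integer offsets.  Rotation
    invariance: each code is replaced by the minimum over its 8 circular
    bit shifts, so rotating the neighbourhood leaves the code unchanged."""
    offs = [(-r, -r), (-r, 0), (-r, r), (0, r),
            (r, r), (r, 0), (r, -r), (0, -r)]     # circular neighbour order
    h, w = img.shape
    core = img[r:h - r, r:w - r]
    code = np.zeros_like(core, dtype=np.uint16)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[r + dy:h - r + dy, r + dx:w - r + dx]
        code |= (nb >= core).astype(np.uint16) << bit
    rots = np.stack([((code << k) | (code >> (8 - k))) & 0xFF
                     for k in range(8)])
    return rots.min(axis=0)

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (10, 10))
codes_r1 = lbp_codes(img, r=1)      # fine texture scale
codes_r2 = lbp_codes(img, r=2)      # coarser scale of the multi-scale set
```

Histograms of the codes at each radius, concatenated, give a multi-scale texture descriptor of the kind combined with the AAM shape and appearance parameters.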
Traditional template matching usually uses a single template, but a single template is easily affected by geometric distortion: the distortion of a few pixels can affect the whole template. To solve this problem, we divide the original template into five small templates. A small template has fewer pixels than the original, so geometric distortion affects only the relevant small template. We match the remaining templates against the image with a new matching method based on an improved maximum pixel count (MPC) criterion, which considers not only the number of similar points but also the error value. Experimental results demonstrate that the method proposed in this paper has better accuracy, precision, and robustness.
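An improved MPC criterion, counting similar pixels while also accounting for the error value, might look like the sketch below; the penalty weight `lam` and the tolerance are hypothetical choices, not the paper's, and a single template is matched for brevity rather than the five sub-templates.

```python
import numpy as np

def mpc_score(patch, tmpl, tol=10, lam=0.01):
    """Improved MPC similarity: count pixels whose absolute difference is
    below `tol`, then subtract a small mean-error penalty so that, at
    equal counts, the closer match wins."""
    err = np.abs(patch.astype(int) - tmpl.astype(int))
    return (err < tol).sum() - lam * err.mean()

def match(image, tmpl, tol=10, lam=0.01):
    """Exhaustive search for the window maximizing the improved MPC score."""
    th, tw = tmpl.shape
    best, pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = mpc_score(image[y:y + th, x:x + tw], tmpl, tol, lam)
            if s > best:
                best, pos = s, (y, x)
    return pos

rng = np.random.default_rng(2)
image = rng.integers(0, 256, (30, 30))
tmpl = image[12:20, 5:13].copy()    # plant the template at (12, 5)
```

With sub-templates, the same scoring would run per sub-template and the distorted one could be discarded before combining the remaining positions.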
Restoration of atmospheric-turbulence-degraded images is an urgent problem in astronomical space technology. The point spread function of turbulence is unknown, varies with time, and is hard to describe with mathematical models, and various noises (such as sensor noise) are introduced during imaging, so images of space targets are edge-blurred and heavily noisy, which makes it difficult for any single restoration algorithm to meet the restoration requirements. Focusing on the heavily noisy, turbulence-degraded images of space targets acquired by ground-based optical telescopes, this paper discusses the adjustment and reorganization of several algorithm structures and the selection of their parameters, combining a nonlinear filtering algorithm based on the spatial characteristics of the noise, a regularization-based restoration algorithm for heavily turbulence-degraded images of space targets, and an EM restoration algorithm based on statistical theory. To test the validity of the compound algorithm, a series of restoration experiments were performed on heavily noisy turbulence-degraded images of space targets. The results show that the new compound algorithm achieves noise suppression and detail preservation simultaneously, and is effective and practical; furthermore, both definition measures and relative definition measures show that it outperforms the traditional algorithms.
In electronic image stabilization of an image sequence, target scenes at different depths of field have different motion vectors, so no single compensation amount can stabilize both close-range and long-range scenes. This article analyzes the reasons from the optical imaging model. For video sequences with a wide depth-of-field distribution, the Harris corner algorithm is used to detect feature points of targets at different depths, the features are matched, and the motion vectors of targets at different depths are calculated to validate the preceding theoretical derivation. A motion vector compensation method based on image-quality assessment for electronic image stabilization is proposed: after compensation, the distribution of motion vectors is adjusted according to assessment feedback from the inter-frame difference map. Results show that compensation with the optimized motion vector value performs better than global motion vector compensation.
The sun is used as the light source for spectral analysis of atmospheric materials by passing sunlight through the atmosphere: the stronger the sunlight entering the detector, the higher the achievable accuracy. However, due to the inhomogeneity of the atmosphere, the gray image of the sun is irregular, and interference from clouds may even divide the sun into separate parts. When the light intensity, shape, and position of the sun in the image keep changing, obtaining and tracking the position of the strongest sunlight accurately is critical. In this paper, a novel method of sun scene simulation for observation of the sun through the atmosphere is presented. The method, based on an active optical control system, simulates sun scenes with varying light intensity, shape, and position; its theory is simple and it is easy to realize. The simulation system, composed of a computer, projection devices, a micro deformable mirror, and an optical lens group, can simulate the optical properties of the atmosphere under different densities, humidities, and air flow rates, ensuring accurate and real-time sun scene simulation.
Biological inspiration has produced successful solutions for different imaging systems. Inspired by the compound eye of insects, this paper presents image processing techniques used in a spherical compound eye imaging system. By analyzing the relationship between the large-field-of-view (FOV) system and each lens, an imaging system based on compound eyes has been designed in which 37 lenses pointing in different directions are arranged on a spherical substrate. By studying the relationship between lens position and the corresponding image geometry to realize large-FOV detection, the image processing technique is proposed. To verify the technique, experiments were carried out on the designed compound eye imaging system. The results show that an image with an FOV over 166° can be acquired while keeping excellent image quality.
With the rapid development of science and technology, optical imaging systems have become widely used, and the performance requirements keep rising: lighter weight, smaller size, larger field of view, and higher sensitivity to moving targets. With its large field of view, high agility, and multiple channels, the compound eye is drawing increasing attention from academia and industry. In this work, an artificial spherical compound eye imaging system formed by several mini cameras is proposed to obtain a large field of view. By analyzing the relationship between the field of view of each single camera and that of the whole system, the geometric arrangement of the cameras is studied and the compound eye structure is designed; the system can then be manufactured with precision machining technology. To verify the performance of this system, experiments were carried out in which the compound eye was formed by seven mini cameras placed centripetally along a spherical surface so that each camera points in a different direction. Pictures taken by these cameras were mosaicked into a complete image with a large field of view. The experimental results confirm the validity of the design method and the fabrication technology. By increasing the number of cameras, an even larger field of view, up to panoramic imaging, can be realized with this artificial compound eye.
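The geometric arrangement described above, one central camera plus a ring of cameras tilted around the optical axis, can be sketched numerically. The 60° per-camera FOV and 55° tilt used here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ring_camera_directions(tilt_deg, n_ring=6):
    """Unit pointing vectors for one central camera plus a ring of
    n_ring cameras tilted by tilt_deg from the optical axis (+z)."""
    t = np.radians(tilt_deg)
    dirs = [np.array([0.0, 0.0, 1.0])]          # central camera
    for k in range(n_ring):
        phi = 2 * np.pi * k / n_ring            # azimuth around the axis
        dirs.append(np.array([np.sin(t) * np.cos(phi),
                              np.sin(t) * np.sin(phi),
                              np.cos(t)]))
    return np.array(dirs)

# With a per-camera FOV of 60 deg and a 55 deg ring tilt, the combined
# half-angle is roughly tilt + FOV/2 = 85 deg, i.e. ~170 deg total FOV.
dirs = ring_camera_directions(55.0)
total_fov = 2 * (55.0 + 60.0 / 2)
```

Overlap between adjacent cameras (tilt smaller than the per-camera FOV) is what makes seamless mosaicking possible.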
A novel moving-target detection method based on Fourier descriptors of the fractal edge is presented in this paper. The blanket-covering fractal method is used to detect the edge features of targets at high scale, and the features are binarized with maximum-entropy threshold segmentation. After labeling the targets and removing small-area noise targets, the elliptic Fourier descriptors of the target shapes are extracted. By accumulating the descriptors over multiple frames, the frequency of the Fourier descriptors decides which targets are moving and which are not.
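The descriptor-extraction step can be illustrated with the classic complex-coordinate Fourier descriptors, used here as a simplified stand-in for the elliptic Fourier descriptors named in the text:

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=8):
    """Translation/scale-normalized Fourier descriptors of a closed
    contour given as an (N, 2) array of (x, y) points. This is the
    classic complex-coordinate variant, a simplified stand-in for the
    elliptic Fourier descriptors used in the paper."""
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    F[0] = 0                    # drop DC term -> translation invariance
    mag = np.abs(F)
    mag /= mag[1]               # normalize by 1st harmonic -> scale invariance
    return mag[1:1 + n_coeffs]

# A circle is dominated by its first harmonic.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
d = fourier_descriptors(circle)
```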
Because echo-transponder satellite laser ranging (SLR) has the familiar R² dependence of signal strength on range, overcoming the prohibitively large R⁴ losses of conventional passive SLR, its operational distance is much greater. Echo-transponder SLR can be used for satellite ranging at great distances, and even for deep-space probe ranging. It replaces the retro-reflector on the satellite with a laser transponder and implements ranging by echoing the ranging signal from the master station. However, the ranging accuracy of echo-transponder SLR is reduced, because jitter of the additional delay time occurs in the transponder's response. The principle of the laser echo-transponder is introduced, the causes of the delay time and its jitter are analyzed, and a technique for controlling the delay-time jitter in the response is studied. An experimental platform for the laser echo-transponder was constructed to validate the jitter-control technique and to measure its control accuracy. The experimental results indicate that the delay-time jitter of the laser echo-transponder can be controlled effectively to within 1 ns, so the corresponding ranging error is less than 15 cm, provided the laser transmitter is suitably selected and precise temperature control, automatic power control, precise Q-switching, and accurate detection of the pulse moment are applied in the system design of the laser echo-transponder. The feasibility of echo-transponder SLR is thereby validated.
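The link between the 1 ns jitter bound and the quoted sub-15 cm ranging error is direct two-way timing arithmetic:

```python
c = 299_792_458.0              # speed of light, m/s
jitter = 1e-9                  # controlled delay-time jitter, 1 ns

# Two-way timing: a timing error dt maps to a range error of c*dt/2.
range_error = c * jitter / 2   # ~0.15 m, i.e. the <15 cm figure in the text
```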
In this paper, we describe in detail the hierarchical model and X (HMAX) model of Riesenhuber and Poggio. The HMAX model, which accounts for visual processing and makes plausible predictions founded on prior information, is built up by alternating simple-cell and complex-cell layers. We summarize the principal facts about the ventral visual stream and argue that a hierarchy of brain areas mediates object recognition in visual cortex. Then, in order to obtain the features of the object, we implement Gabor filters and alternately apply template-matching and maximum operations to the input image. Finally, according to target feature saliency and position information, we introduce a novel algorithm for object recognition in clutter based on the HMAX architecture. The improved model is competitive with current recognition algorithms on standard databases, such as the UIUC car database and the Caltech101 database, which includes a large number of diverse categories. We also show that combining the spatial position information of parts with feature fusing can further promote the recognition rate. The experimental results demonstrate that the proposed approach recognizes objects more precisely and outperforms the standard model.
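The alternation of Gabor filtering (simple-cell, S1-like units) and maximum operations (complex-cell, C1-like pooling) can be sketched as follows; the filter parameters and pooling size are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gabor_kernel(size, theta, lam=4.0, sigma=2.0, gamma=0.5):
    """Real, zero-mean Gabor filter at orientation theta (an S1-like unit)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = (np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
         * np.cos(2 * np.pi * xr / lam))
    return g - g.mean()

def c1_max_pool(s1, pool=4):
    """C1-like local max pooling over non-overlapping pool x pool cells,
    which buys position invariance at the cost of spatial precision."""
    h, w = s1.shape
    h, w = h - h % pool, w - w % pool
    blocks = s1[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return blocks.max(axis=(1, 3))

k = gabor_kernel(9, 0.0)                  # one S1 filter
s1 = np.zeros((8, 8)); s1[3, 5] = 2.0     # toy S1 response map
pooled = c1_max_pool(s1)                  # -> 2x2 C1 map keeping the max
```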
Electronic image stabilization, a new generation of image stabilization technology, obtains distinct and stable image sequences by detecting the inter-frame offset of image sequences and compensating for it by image processing. As a high-precision image processing algorithm, SIFT can be applied to object recognition and image matching; however, its extremely low processing speed makes it inapplicable in electronic image stabilization systems, which are speed-critical. To address this defect, this paper presents an improved SIFT algorithm for electronic image stabilization that combines SIFT with the Harris algorithm. Firstly, the Harris operator is used to extract corners from two frames as feature points. Secondly, the gradients of each pixel within the 8×8 neighborhood of a feature point are calculated. Then the feature point is described by its main direction, after which its eigenvector descriptor is calculated. Finally, matching is conducted between the feature points of the current frame and the reference frame, and compensation of the image is performed after the global motion vector is calculated from the local motion vectors. According to the experimental results, the improved Harris-SIFT algorithm is less complex than the traditional SIFT algorithm while maintaining the same matching precision at a faster processing speed, so it can be applied in real-time scenarios. More than 80% of the matching time for every two frames can be saved compared with the original algorithm. At the same time, the proposed algorithm remains valid when there are slight rotations between the two matched frames. It is of significance for electronic image stabilization technology.
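The Harris corner-extraction step can be sketched in a few lines; the 3×3 window and k = 0.04 here are the usual textbook defaults, not necessarily the paper's settings:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response map for a float grayscale image: a minimal
    sketch using central-difference gradients and 3x3 box smoothing."""
    Iy, Ix = np.gradient(img)                  # gradients along rows, cols
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):                                # 3x3 box filter via shifted sums
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy                # det of the structure tensor
    trace = Sxx + Syy
    return det - k * trace * trace             # >0 at corners, <0 on edges

# Corners of a bright square score high; edges negative; flat areas zero.
img = np.zeros((20, 20)); img[5:15, 5:15] = 1.0
R = harris_response(img)
```

Thresholding R and keeping local maxima yields the feature points that the SIFT-style descriptors are then computed on.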
A SIFT (Scale Invariant Feature Transform) feature based registration algorithm is presented to prepare for seal verification, especially the verification of high-quality counterfeit sample seals. The similarities and the spatial relationships between matched SIFT features are combined for the registration. SIFT features extracted from the binary model seal and sample seal images are matched according to their similarities, and the matching rate is used to identify a sample seal as similar to its model seal. For such a similar sample seal, false matches are eliminated according to the position relationships. Then the homography between the model seal and the sample seal is constructed and denoted H_S, while the theoretical homography is denoted H. The accuracy of registration is evaluated by the Frobenius norm of H - H_S. In the experiments, translation, filling, and rotation transformations are applied to seals with different shapes, stroke numbers, and structures. After registering the transformed seals with their model seals, the maximum Frobenius norm of H - H_S is no more than 0.03. The results prove that the algorithm accomplishes accurate registration, is invariant to translation, filling, and rotation transformations, and places no limit on the seal shapes, stroke numbers, or structures.
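The evaluation metric, the Frobenius norm of H - H_S, can be computed directly once both homographies are normalized to a common scale; the rotation angle and perturbation below are illustrative values, not from the experiments:

```python
import numpy as np

def registration_error(H, Hs):
    """Frobenius norm of H - H_S. Homographies are defined only up to
    scale, so both are normalized so that the (2, 2) entry is 1 first."""
    H = H / H[2, 2]
    Hs = Hs / Hs[2, 2]
    return np.linalg.norm(H - Hs)          # Frobenius norm by default

# A pure-rotation ground truth vs. a slightly perturbed estimate.
a = np.radians(10)
H = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
Hs = H.copy(); Hs[0, 2] += 0.02            # small translation error
err = registration_error(H, Hs)            # -> 0.02, within the 0.03 bound
```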
In order to obtain high-quality images, modern wide-bit cameras (12-, 14-, or 16-bit) are usually used to acquire enough original information. Nevertheless, some data selection and transformation is necessary to display such images on an 8-bit PC display, and the output image should preserve both high overall contrast and clear details. This paper proposes a method with two major steps. The first step is based on partially overlapped sub-block histogram equalization (POSHE), but changes the way sub-block images are equalized: each sub-block is separated recursively into different gray ranges. The second step applies a kind of pseudo-color processing based on HSI space to enhance the visual effect, so that the image has rich layers and is consistent with human perception. Experimental results show that the algorithm keeps the local details and the mean of the original brightness at the same time and enhances the image effectively. Considering its adaptability to different scenarios and objectives and its reasonable time complexity, the method can meet the requirements of practical engineering applications.
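The core idea of the first step, mapping a wide-bit image to 8 bits through histogram equalization, can be sketched globally (the POSHE variant applies the same mapping per overlapping sub-block and blends the results); the bit depths below are illustrative:

```python
import numpy as np

def equalize_to_8bit(img16, bins=65536):
    """Map a wide-bit image to 8 bits with global histogram equalization.
    The per-sub-block POSHE scheme in the text applies this same idea to
    partially overlapped tiles instead of the whole image."""
    hist, _ = np.histogram(img16, bins=bins, range=(0, bins))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[img16] * 255).astype(np.uint8)         # gray -> 8-bit lookup

# A synthetic 16-bit image with a strongly non-uniform histogram.
img16 = (np.linspace(0, 4095, 256 * 256) ** 1.5).reshape(256, 256)
img16 = (img16 / img16.max() * 65535).astype(np.uint16)
img8 = equalize_to_8bit(img16)
```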
In the photoelectric tracking system, Bayer images are traditionally decompressed on the CPU. However, this becomes too slow when the images are large, for example 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) that support the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallel part is optimized with OpenMP techniques, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as 1D parallel IDWTs. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed; the experimental results show that it achieves a 3 to 5 times speedup compared with the serial CPU method.
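The key observation behind the IDWT parallelization, that the 2D inverse transform decomposes into independent 1D passes over columns and then rows, can be demonstrated with a Haar wavelet (chosen here for brevity; the paper's codec presumably uses a longer filter):

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal 1D Haar DWT (approx || detail)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def ihaar_1d(y):
    """Inverse of haar_1d."""
    n = len(y) // 2
    a, d = y[:n], y[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def haar_2d(img):
    tmp = np.apply_along_axis(haar_1d, 1, img)    # rows
    return np.apply_along_axis(haar_1d, 0, tmp)   # then columns

def ihaar_2d(c):
    """2D inverse as independent 1D passes over columns, then rows --
    exactly the row/column independence a GPU kernel exploits, since
    every 1D transform can run in its own thread."""
    tmp = np.apply_along_axis(ihaar_1d, 0, c)
    return np.apply_along_axis(ihaar_1d, 1, tmp)

img = np.random.default_rng(0).random((8, 8))
rec = ihaar_2d(haar_2d(img))                      # perfect reconstruction
```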
Optical equipment is widely used to measure flight parameters in target flight performance tests, but the equipment is sensitive to the sun's rays. To prevent sunlight from shining directly into the camera lens while target flight parameters are being measured, the angle between the observation direction and the line connecting the camera lens and the sun should be kept large. This article introduces the calculation of the solar azimuth and altitude relative to the optical equipment at any time and at any place on Earth, the model of the equipment's observation direction, and the model for computing the angle between the observation direction and the equipment-sun line. The article also presents simulations of the effect of the solar position on the optical equipment at different times, dates, and months and for different target flight directions.
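A low-accuracy textbook version of the solar azimuth/altitude calculation reads as follows; the real equipment model would use a full ephemeris, so this is only a sketch with simplified declination and hour-angle formulas:

```python
import math

def solar_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar altitude and azimuth (degrees, azimuth from
    north) from latitude, day of year and local solar time. A textbook
    low-accuracy sketch, not the model used by the equipment."""
    # Simplified declination: -23.44 deg * cos(360/365 * (N + 10))
    dec = math.radians(-23.44) * math.cos(2 * math.pi / 365 * (day_of_year + 10))
    h = math.radians(15 * (solar_hour - 12))      # hour angle, 15 deg/hour
    lat = math.radians(lat_deg)
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(h))
    alt = math.asin(sin_alt)
    az = math.atan2(-math.cos(dec) * math.sin(h),
                    math.cos(lat) * math.sin(dec)
                    - math.sin(lat) * math.cos(dec) * math.cos(h))
    return math.degrees(alt), math.degrees(az)

# Near noon at the equinox on the equator the sun is close to the zenith.
alt, az = solar_position(0.0, 81, 12.0)
```

Given the solar vector from (alt, az) and the observation direction as unit vectors, the angle to avoid is just the arccosine of their dot product.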
Target tracking is of great importance in imaging systems and can be applied in surveillance, as well as in salvage and rescue, where 3D spatial coordinates are used to locate the target. A range-gated imaging system is capable of acquiring the range information of targets, but azimuth is also necessary to provide the spatial coordinates for target tracking. This paper presents a target azimuth estimation method for a range-gated imaging system, aiming at obtaining essential information for vision-based automatic tracking. Because of the noise and low contrast of range-gated images, a median filter and histogram equalization are applied first. Then the Otsu method is performed to segment the target from the background. After segmentation, morphological transformations are applied to remove false targets. With the target pixels extracted from the image, the centroid is derived. Next, the pinhole camera model is applied to work out the azimuth coordinate. Since the focal length of the camera is needed in the formula, an NC (Numerical Control) zoom module is developed. In this module, a sliding potentiometer is connected to the focus motor of the camera and serves as feedback of the focus; an MCU with an AD converter is used to read the focal length and control the focus motor. Once the target azimuth information is obtained, the pan-tilt control unit can track the target step by step automatically.
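The final geometric step, from segmented target pixels to an azimuth through the pinhole model, can be sketched as follows; the focal length and principal point below are illustrative assumptions:

```python
import numpy as np

def target_azimuth(mask, focal_px, cx):
    """Azimuth (degrees) of a segmented target under the pinhole model:
    the centroid's column offset from the principal point cx, divided by
    the focal length in pixels, gives tan(azimuth)."""
    ys, xs = np.nonzero(mask)
    u = xs.mean()                                   # centroid column
    return np.degrees(np.arctan((u - cx) / focal_px)), u

# A segmented target blob right of the image center.
mask = np.zeros((120, 160), dtype=bool)
mask[40:60, 100:110] = True
az, u = target_azimuth(mask, focal_px=800.0, cx=80.0)
```

This is why the focal-length feedback matters: focal_px changes with the zoom position, so the potentiometer reading enters the formula directly.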
According to models of object recognition in cortex, the brain uses a hierarchical approach in which simple, low-level features having high position and scale specificity are pooled and combined into more complex, higher-level features having greater location invariance. At higher levels, spatial structure becomes implicitly encoded into the features themselves, which may overlap, while explicit spatial information is coded more coarsely. In this paper, the importance of sparsity and localized patch features in a hierarchical model inspired by visual cortex is investigated. As in the model of Serre, Wolf, and Poggio, we first apply Gabor filters at all positions and scales; feature complexity and position/scale invariance are then built up by alternating template matching and max pooling operations. In order to improve generalization performance, sparsity is introduced and the data dimension is reduced by means of compressive sensing theory and a sparse representation algorithm. Similarly, within computational neuroscience, imposing sparsity on the number of feature inputs and on feature selection is critical for learning biologically plausible models from the statistics of natural images. Then, a redundant dictionary of patch-based features that can distinguish the object class from other categories is designed, and object recognition is implemented through iterative optimization. The method is tested on the UIUC car database. The success of this approach provides support for this account of object class recognition in visual cortex.
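The sparse-representation step can be illustrated with orthogonal matching pursuit, one standard greedy sparse-coding algorithm; the paper does not name its exact solver, so both the solver choice and the dictionary sizes below are assumptions:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the dictionary atom
    most correlated with the residual, then re-fit all picked atoms by
    least squares. A minimal sketch of the sparse-coding step."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((40, 60))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x_true = np.zeros(60); x_true[[5, 17, 40]] = [1.0, -2.0, 0.5]
x_hat = omp(D, D @ x_true, n_nonzero=3)     # recovers the sparse code
```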
Imaging of objects through the atmospheric turbulence environment is inevitable for space-based systems, such as those used in astronomy, remote sensing, and so on. The observed images are seriously blurred, so restoration is required to reconstruct turbulence-degraded images. In order to enhance restoration performance, a novel enhanced nonnegativity and support constraints recursive inverse filtering (ENAS-RIF) algorithm is presented, based on a reliable support region and an enhanced cost function. Firstly, a Curvelet denoising algorithm is used to weaken the image noise. Secondly, reliable estimation of the object support region is used to accelerate convergence. Then, the average gray level is set as the gray level of the image background pixels. Finally, an object construction limit and a logarithm function are added to enhance the stability of the algorithm. The experimental results show that the novel ENAS-RIF algorithm converges faster than the NAS-RIF algorithm and performs better in image restoration.
The light curve of space debris can provide basic target identification information. Photometric data of space targets obtained by ground-based equipment are affected by weather conditions, the observation equipment, detector performance, etc. Among these factors, poor weather during the observations causes the worst degradation of the photometric data. Light curves are smooth and regular under photometric conditions but irregular on non-photometric nights. In addition, we cannot distinguish from the traditional photometric data alone whether abnormal brightness variations were caused by the weather or by other factors. Our study shows that by obtaining simultaneous light curves of background stars with an independent telescope close to the main telescope, using dedicated observing tactics, we can identify the factors influencing the photometric variations on non-photometric nights.
Star sensors have been developed in recent decades to acquire orientation information more accurately than other attitude measuring instruments. A star camera photographs the night sky to obtain star maps, and an important step in acquiring attitude knowledge is to compare the features of the observed stars in the maps with those of cataloged stars using star identification algorithms. Before this step, the star images must be extracted from the star maps so that their centroids can be calculated. However, with the development of electronic imaging devices, large and ultra-large imaging detectors are being applied to acquire star maps for star sensors, so star image extraction occupies a growing portion of the whole attitude measurement period. Star image extraction time must therefore be shortened to achieve a higher response rate. In this paper, a novel star image extraction algorithm is proposed which fulfills this task efficiently. By scanning the star map, the pixels brighter than a gray threshold are found, and their coordinates and brightness are stored in a cross-linked list. The data of these pixels are linked by pointers, while the other pixels are neglected. A region growing algorithm can then be used by choosing the first element in the list as a starting seed. New seeds are added when neighboring pixels brighter than the threshold are found, and each processed seed is deleted from the list. The search continues until no neighboring pixels remain in the list; at that point, one star image has been extracted and its centroid is calculated. The other star images are extracted in the same way, with examined seeds deleted so they are never considered again. Each new star image search begins from the first element, avoiding unnecessary scanning. Experiments have shown that for a 1024×1024 star map, image extraction takes nearly 16 milliseconds; when a CMOS APS is used to transfer the image data, near-real-time extraction can be achieved.
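The scan-and-grow procedure can be sketched in plain Python; a set of bright-pixel coordinates stands in for the cross-linked list, and an intensity-weighted centroid is returned per extracted star:

```python
import numpy as np

def extract_stars(img, threshold):
    """Extract star images by thresholding and 4-neighbor region growing,
    returning the intensity-weighted centroid of each connected blob.
    A set replaces the cross-linked list of the text: popped pixels are
    never examined again, so each bright pixel is visited once."""
    bright = {(int(r), int(c)) for r, c in zip(*np.nonzero(img > threshold))}
    stars = []
    while bright:
        seeds = [bright.pop()]                 # start a new star image
        blob = []
        while seeds:
            r, c = seeds.pop()
            blob.append((r, c))
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nb in bright:               # grow into bright neighbors
                    bright.remove(nb)
                    seeds.append(nb)
        w = np.array([img[p] for p in blob], dtype=float)
        pts = np.array(blob, dtype=float)
        stars.append((pts * w[:, None]).sum(axis=0) / w.sum())
    return stars

img = np.zeros((64, 64))
img[10:13, 20:23] = 100.0                      # a 3x3 star image
img[40, 50] = 80.0                             # a single-pixel star
centroids = extract_stars(img, threshold=50.0)
```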
The echo broadening effect (EBE) is inherent in three-dimensional range-gated imaging (3DRGI). The effect impacts the range-intensity profile of the gate images, which is crucial in the three 3DRGI approaches based on depth scanning, super-resolution depth mapping, and gain modulation. In this paper, we give a space-time model of the EBE which illustrates three typical range-intensity profiles under different temporal parameters of the laser pulse and gate pulse. A head zone and a tail zone exist on the two sides of the profiles. Our research demonstrates that the EBE should be suppressed in the depth scanning and gain modulation methods and exploited in super-resolution depth mapping.
Coastal surveillance is very important because it is useful for search and rescue, countering illegal immigration, harbor security, and so on, and range estimation is critical for precisely locating a target. A range-gated laser imaging sensor is suitable for high-accuracy ranging, especially at night without moonlight. Generally, before the target can be detected, the delay time must be adjusted until the target is captured. The sensor has two operating modes: a passive imaging mode and a gate viewing mode. First, the sensor operates in passive mode, only capturing scenes with the ICCD; once an object appears in the monitored area, its coarse range can be obtained from the imaging geometry and projective transform. Then the sensor switches to gate viewing mode; applying microsecond laser pulses and sensor gate widths, the range of targets can be obtained from at least two consecutive images with trapezoid-shaped range-intensity profiles. This technique overcomes the depth resolution limitation of 3D active imaging and enables super-resolution depth mapping with reduced imaging data processing. Based on the first step, the rough range can be calculated and the delay time at which the target is detected can be fixed quickly. With these two steps, the distance between the object and the sensor can be obtained quickly.
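The delay-to-range conversion underlying both steps is simple time-of-flight arithmetic; the 10 µs delay and 1 µs gate below are illustrative values:

```python
c = 3.0e8                       # speed of light, m/s

def gate_range(delay_s):
    """Round-trip gate delay to target range for a range-gated sensor."""
    return c * delay_s / 2

def gate_depth(gate_width_s):
    """Depth of the slice imaged by one gate opening."""
    return c * gate_width_s / 2

# A 10 us delay with a 1 us gate images a ~150 m deep slice ~1.5 km out.
r = gate_range(10e-6)
d = gate_depth(1e-6)
```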
Dynamic speckle pattern interferometry has been widely applied to measure vibration or continuous deformation. As a promising technique, temporal phase analysis reduces the 2D phase retrieval task to 1D and gives a wider measurement range. In this paper, several classical and recently proposed temporal phase retrieval techniques, such as the windowed Fourier transform, the wavelet transform, and the Hilbert transform, are comparatively studied. The advantages and drawbacks of each algorithm are discussed and evaluated in simulation experiments.
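Of the three techniques, the Hilbert-transform route is the simplest to sketch: the FFT-based analytic signal yields the instantaneous phase of a temporal fringe signal directly. The test signal below is a noiseless single tone, an idealized assumption:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (the discrete Hilbert-transform
    construction): zero the negative frequencies, double the positive.
    Assumes an even-length input."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    h[1:N // 2] = 2
    h[N // 2] = 1
    return np.fft.ifft(X * h)

# Recover the instantaneous phase of a temporal fringe signal at one pixel.
n = np.arange(512)
phase_true = 2 * np.pi * 0.0625 * n          # known linear phase ramp
x = np.cos(phase_true)
phase = np.unwrap(np.angle(analytic_signal(x)))
```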
Tracking methods based on the Co-Training framework treat object tracking as a semi-supervised learning problem. This paper proposes a new on-line tracking method based on the Co-Training framework. The method fuses two features to describe the object and applies random affine deformations to positive examples to increase their number. Experimental results demonstrate that the proposed on-line method works robustly in long-term tracking and effectively avoids tracking drift.
A novel multi-scale cue combination contour detection method is presented. The contour detector is derived from the local image brightness, color, and texture channels at each image pixel (x, y). To build the contour detector, the brightness, color, and texture gradients of the image are defined. Then a posterior probability model of the boundary, G, is introduced by using learning techniques for multi-scale cue combination. Finally, experiments show the performance of the raw detector and the multi-scale cue combination detector.
The imaging spectrometer is a promising remote sensing instrument widely used in many fields, such as hazard forecasting and environmental monitoring, and the reliability of its spectral data is decisive for the scientific community. The wavelength position at the focal plane of the imaging spectrometer will shift as pressure and temperature vary or under mechanical vibration. It is difficult for an onboard calibration instrument to maintain the spectral reference accuracy itself, and it also adds weight and volume to the remote sensing platform. Because the spectral images carry atmospheric signatures, from carbon dioxide, water vapor, and oxygen absorption as well as solar Fraunhofer lines, onboard wavelength calibration can be performed with the spectral images themselves. In this paper, wavelength calibration is based on modeled and measured atmospheric absorption spectra. The modeled spectra are constructed with an atmospheric radiative transfer code. The spectral angle is used to determine the best spectral similarity between the modeled and measured spectra and thus to estimate the wavelength position. The smile shape can be obtained by running the matching process across all columns of the data. The method is successfully applied to Hyperion data: the wavelength shift is obtained by shape matching of the oxygen absorption feature, and its characteristics are comparable to the prelaunch measurements.
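The spectral-angle matching step can be sketched as follows; the Gaussian absorption line and the 765.3 nm center are synthetic illustrations, not Hyperion values:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between a measured and a modeled
    spectrum; the smallest angle marks the best wavelength match."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Sliding a modeled absorption line across candidate center positions
# and keeping the minimum-angle shift estimates the smile-induced offset.
wl = np.linspace(755, 775, 200)                   # nm, around the O2 A-band
line = lambda c: 1 - 0.5 * np.exp(-(wl - c)**2 / 0.5)
measured = line(765.3)                            # synthetic "measured" spectrum
shifts = np.linspace(-1, 1, 81)
angles = [spectral_angle(measured, line(765.0 + s)) for s in shifts]
best = 765.0 + shifts[int(np.argmin(angles))]     # recovered center, ~765.3
```

Repeating this per detector column traces out the smile curve across the focal plane.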
A spatial-temporal detection method is proposed to detect weak, slowly moving point targets in infrared sequences containing evolving cloud clutter. First, a temporal filter for point-target detection called the triple temporal filter (TTF) is introduced. Since theoretical analysis shows that the TTF performs poorly under temporal noise, a nonlinear spatial-temporal filter over neighboring pixels in prior and posterior frames, which takes every possible target trace into account, is used to suppress noise before the recursion. Then a detector for weak targets is put forward that fuses, with a linear rule, the TTF outputs computed in forward and reverse sequence order through the nonlinear spatial-temporal filter; it is called the bilateral TTF in this paper. Finally, its performance is analyzed. Experimental results show that, compared with the original TTF, the proposed method achieves a higher signal-to-clutter ratio gain and effectively detects dim targets moving at low velocity even when the target signal-to-clutter ratio falls to 3 or lower.
Under the application background of sea-surface target surveillance based on optical remote sensing imagery, automatic recognition of sea-surface ship targets against complicated backgrounds is discussed in this paper. The technology is divided into two parts: feature classification training and component class discrimination. In the feature classification training process, large numbers of sample images are used for feature selection and classifier determination for ship targets and false targets. Component tree characteristic discrimination extracts suspected target areas from the complicated remote sensing image, and their features are input to a Fisher classifier for ship target recognition. Experimental results show that the method can deal with complex sea-surface environments and avoid interference from cloud cover, sea clutter, and islands. The method can effectively achieve ship target recognition against a complex sea background.
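The Fisher discrimination step can be sketched as a standard two-class Fisher linear discriminant. This is a minimal sketch of that classical technique, not the paper's trained classifier; the feature matrices `X0` (false targets) and `X1` (ships) stand in for whatever features the authors extract.

```python
import numpy as np

def fisher_train(X0, X1):
    """Two-class Fisher linear discriminant: project onto w = Sw^-1 (m1 - m0),
    thresholding at the midpoint between the projected class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    threshold = w @ (m0 + m1) / 2.0
    return w, threshold

def fisher_predict(x, w, threshold):
    """Return 1 for a suspected ship target, 0 for a false target."""
    return int(w @ x > threshold)
```

In the pipeline above, each suspected area extracted by the component-tree stage would be reduced to a feature vector `x` and passed through `fisher_predict`.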
In order to improve the performance of heterogeneous image matching and registration, a Weighted Voting Accumulation Measure (WVAM) based on edge features and an image registration algorithm based on steepest descent of the likelihood function are proposed. The WVAM resists interference from noise and similar regions and achieves matching localization of the template. On this basis, the likelihood function of edge-set registration is established on the Gaussian Mixture Model (GMM) of the point sets. To achieve registration between the template and the matching area and to solve for the optimum transformation parameters by the steepest descent method, the likelihood function is taken as the objective function and the affine transformation parameters as the optimization variables. Simulation experiments demonstrate the good performance of the template matching and registration.
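The registration step (GMM likelihood of point sets, optimized over affine parameters by steepest descent) can be sketched as below. This is a simplified stand-in, assuming equal mixture weights, an isotropic fixed-bandwidth GMM centered on the destination edge points, and a numeric gradient; the authors' analytic gradient and exact model may differ.

```python
import numpy as np

def gmm_neg_loglik(params, src, dst, sigma=1.0):
    """Negative log-likelihood of affine-transformed src edge points under an
    isotropic GMM whose components sit on the dst edge points."""
    A, t = params[:4].reshape(2, 2), params[4:]
    moved = src @ A.T + t                                  # (N, 2)
    d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    m = d2.min(axis=1, keepdims=True)                      # log-sum-exp trick
    ll = (-m[:, 0] / (2 * sigma**2)
          + np.log(np.exp(-(d2 - m) / (2 * sigma**2)).sum(axis=1)))
    return -ll.sum()

def steepest_descent(src, dst, lr=1e-3, steps=300, eps=1e-5):
    """Minimize the negative log-likelihood over the 6 affine parameters."""
    p = np.array([1.0, 0, 0, 1.0, 0, 0])                   # identity affine
    for _ in range(steps):
        g = np.zeros_like(p)
        for i in range(6):                                 # numeric gradient
            dp = np.zeros(6); dp[i] = eps
            g[i] = (gmm_neg_loglik(p + dp, src, dst)
                    - gmm_neg_loglik(p - dp, src, dst)) / (2 * eps)
        p -= lr * g
    return p
```

The WVAM matching stage would first localize the template, so the descent only needs to refine a nearby initial alignment.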
This paper proposes noise reduction methods for the two different noise types (i.e., temporal noise and spatial
noise) based on the 3-D noise model of a scientific-grade CCD. Noise reduction calculations have been made with these
methods, and the results show that they are effective for reducing both temporal and spatial noise. Experimental
data have verified the effectiveness of these methods for a scientific-grade CCD.
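The abstract does not detail the specific methods, but the two standard approaches for these noise types can be sketched as follows: frame averaging for temporal (random) noise, and a two-point dark/flat-field correction for spatial (fixed-pattern) noise. These are illustrative textbook techniques, not necessarily the paper's exact procedures.

```python
import numpy as np

def reduce_temporal_noise(frames):
    """Average N frames; temporal (random) noise falls by roughly sqrt(N)."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

def reduce_spatial_noise(image, dark, flat):
    """Two-point non-uniformity correction: remove fixed-pattern (spatial)
    noise using a dark frame and a flat-field (uniformly illuminated) frame."""
    gain = (flat - dark).mean() / (flat - dark)   # per-pixel gain correction
    return (image - dark) * gain
```

After both corrections, the residual noise is dominated by whatever temporal noise remains in the single corrected frame.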
It is important to accurately fit the unknown probability density functions of the background or object. To solve this problem,
the Burr distribution is introduced; the three-parameter Burr distribution can cover a wide range of distributions. The
expectation-maximization (EM) algorithm is used to handle the estimation difficulty in the Burr distribution model: starting
from a set of appropriately selected initial parameter values, it iterates the expectation step and maximization step until
convergence to produce the final parameters. The experimental results show that the Burr distribution can depict quite
successfully the probability density functions of a significant class of images, and the method has comparatively low
computational complexity.
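The Burr (type XII) density and its log-likelihood can be written down directly. The abstract does not spell out the EM updates, so the sketch below substitutes a crude maximum-likelihood grid search for the shape parameters in place of the paper's EM fit; it is an illustration of the distribution, not the authors' estimator.

```python
import numpy as np

def burr_pdf(x, c, k, s=1.0):
    """Three-parameter Burr (type XII) density with shapes c, k and scale s:
    f(x) = (c k / s) (x/s)^(c-1) / (1 + (x/s)^c)^(k+1), x > 0."""
    z = np.asarray(x, dtype=float) / s
    return (c * k / s) * z ** (c - 1) / (1 + z ** c) ** (k + 1)

def log_likelihood(data, c, k, s=1.0):
    return float(np.sum(np.log(burr_pdf(data, c, k, s))))

def fit_grid(data, cs, ks, s=1.0):
    """Crude stand-in for the EM fit: pick (c, k) maximizing the likelihood."""
    best = max(((log_likelihood(data, c, k, s), c, k) for c in cs for k in ks))
    return best[1], best[2]
```

Because the Burr CDF is F(x) = 1 - (1 + (x/s)^c)^(-k), samples are easy to draw by inverse-transform, which makes the fit simple to sanity-check.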
The effect of chromatic background on the luminance contrast-sensitivity function (CSF) is studied. We selected three
backgrounds (grey, orange, and yellow-green) from the CIE 17 color centers; the mean luminances of these colors are
approximately equal. A CRT monitor was used to display rectangular stripes at six spatial
frequencies (0.4, 1, 2, 3.5, 7, and 14 cpd). The method of limits was used in the experiment, and 5 observers with normal
vision and visual acuity above 1.0 participated. The results show that luminance
contrast sensitivity on a chromatic background is lower than on the grey background.
Fitting results show that the Movshon model fits better than the Barten model, especially for the chromatic backgrounds; both
models deviate in the high-spatial-frequency region.
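The quantities measured above reduce to two standard definitions: the Michelson contrast of a luminance grating, and contrast sensitivity as the reciprocal of the threshold contrast found by the method of limits. A minimal sketch:

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast of a luminance grating: (Lmax - Lmin)/(Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

def sensitivity(threshold_contrast):
    """Contrast sensitivity is the reciprocal of the threshold contrast."""
    return 1.0 / threshold_contrast
```

One sensitivity value per spatial frequency, measured on each background, gives the CSF curves to which the Movshon and Barten models are fitted.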