Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904501 (2013) https://doi.org/10.1117/12.2052782
This PDF file contains the front matter associated with SPIE Proceedings Volume 9045, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904502 (2013) https://doi.org/10.1117/12.2032845
In uncooled infrared imaging systems based on optical readout with a micro-cantilever array, the acquired images contain obvious noise, generated by the imaging-device fabrication process and by stray light. The noise degrades image quality, and contiguous noisy pixels can even form holes that make the target difficult to identify. Therefore, a novel infrared image enhancement method based on mask convolution is presented. In this method, the mask parameters, such as its size, are obtained by the Hough transform, and the convolution mask is built from the characteristics of the optical-readout uncooled infrared imaging system. The mask is moved over the image pixel by pixel, and a convolution is computed between the mask and the underlying pixels. An evaluation-parameter threshold is then established to eliminate noise from the image. After denoising, a mean filter is applied to fill the noisy pixels and the gaps between micro-cantilever units. Finally, the enhanced infrared image is obtained.
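The abstract does not give the Hough-derived mask or the threshold value, so the following is only a minimal sketch of the general idea under assumed parameters: flag pixels whose deviation from their mask-neighbourhood mean exceeds a threshold, then fill them by the mean (all names and values are illustrative, not the authors' implementation).

```python
import numpy as np

def denoise_mask_convolution(img, mask_size=3, thresh=40.0):
    """Toy stand-in for mask-convolution denoising: a pixel whose
    value deviates from the mean of its mask neighbourhood by more
    than `thresh` is flagged as noise and replaced by that
    neighbourhood mean (mean-filter fill)."""
    h, w = img.shape
    r = mask_size // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    out = img.astype(float).copy()
    for y in range(h):
        for x in range(w):
            win = padded[y:y + mask_size, x:x + mask_size]
            # neighbourhood mean excluding the centre pixel
            mean = (win.sum() - padded[y + r, x + r]) / (mask_size ** 2 - 1)
            if abs(img[y, x] - mean) > thresh:   # evaluation threshold
                out[y, x] = mean                 # fill the noise pixel
    return out

# a flat image with one hot "noise hole" pixel
img = np.full((5, 5), 100.0)
img[2, 2] = 255.0
clean = denoise_mask_convolution(img)
```

The isolated outlier at (2, 2) is pulled back to the background level, while pixels whose neighbourhood deviation stays under the threshold are untouched.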
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904503 (2013) https://doi.org/10.1117/12.2034082
This paper describes the development of a motor driving system for circular-scanning ultrasonic endoscopic imaging equipment. It was designed to keep the motor rotating at a nearly constant speed under load fluctuations, which result from the bending and twisting of the flexible shaft connecting the probe to the motor. A hardware feedback circuit based on the LM331 frequency-to-voltage converter and the LM2576-ADJ step-down voltage regulator was designed to ensure steady rotation under load fluctuations, and a D/A module provided by the MCU was used to regulate the rotary speed in real time. The feedback response cycle is about 20 μs according to theoretical analysis. Experimental results show a maximum error of ±1 r/min over the normal operating range (300-1500 r/min) under load fluctuations, reducing the average instability to 0.11%, compared with 0.94% for a motor drive based on the MCU alone. Both theoretical analysis and experimental results indicate that the motor driving system offers high accuracy, fast response, excellent reliability, and good versatility and portability. It reliably maintains smooth motion of the load-varying PWM (pulse-width modulation) motor, thereby ensuring imaging quality and improving the efficiency and accuracy of diagnosis.
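The feedback principle (measure speed, compare with the set-point, correct the drive) can be illustrated with a toy discrete-time model; the gain, units, and load model below are illustrative assumptions, not the paper's analog circuit.

```python
def regulate_speed(target_rpm, load_torque, steps=200, kp=0.5):
    """Toy discrete-time model of a speed-feedback loop: the
    measured speed (the frequency-to-voltage path) is compared with
    the set-point and the drive level is corrected proportionally,
    so the motor holds its speed despite a load disturbance."""
    speed, drive = 0.0, 0.0
    for _ in range(steps):
        error = target_rpm - speed
        drive += kp * error          # feedback correction of the drive
        speed = drive - load_torque  # the load pulls the speed down
    return speed

final = regulate_speed(1500, 50)     # settles at the 1500 r/min set-point
```

With this update rule the speed error halves every step, so after a few iterations the disturbance is fully rejected, mirroring how the hardware loop keeps the speed error within ±1 r/min.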
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904504 (2013) https://doi.org/10.1117/12.2034090
This paper presents a real-time endoscopic ultrasonic digital imaging system based on an FPGA and applied to gastrointestinal examination. The FPGA design comprises four modules: scan-line data processing, coordinate transformation and interpolation, cache read/write control, and transmit/receive control. By adopting ultrasound probes of different frequencies in a single insertion of the endoscope, the system provides a high-speed data-processing mechanism capable of producing images with various display effects. A high-precision modified coordinate-calibration CORDIC (HMCC-CORDIC) algorithm performs coordinate transformation and interpolation simultaneously, and its precision and reliability are greatly improved by a pipeline structure based on sequential logic. Real-time control from a host computer is achieved over a USB 2.0 interface. Experimental validation confirms the feasibility and correctness of the data-processing mechanism, the HMCC-CORDIC algorithm, and the USB real-time control. Finally, a tissue-mimicking phantom was imaged in real time (25 frames per second) with an image size of 1024×1024. With these imaging parameters, the requirements of clinical examination are well satisfied.
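The HMCC variant is specific to the paper, but the underlying CORDIC iteration used for coordinate transformation is standard; the sketch below shows the plain rotation-mode CORDIC, which computes sines and cosines with only additions and multiplications by powers of two (shifts in hardware).

```python
import math

def cordic_rotate(x, y, angle, iterations=24):
    """Plain rotation-mode CORDIC: rotates the vector (x, y) by
    `angle` (radians) via shift-and-add micro-rotations, the kind of
    operation behind hardware polar-to-Cartesian scan conversion."""
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))  # gain compensation
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                  # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)                # residual angle
    return x * K, y * K

cx, cy = cordic_rotate(1.0, 0.0, math.pi / 6)        # -> (cos 30°, sin 30°)
```

Each extra iteration adds roughly one bit of precision, which is why a pipelined FPGA implementation can trade stages for accuracy.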
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904505 (2013) https://doi.org/10.1117/12.2036943
The mean shift tracking algorithm relies on a histogram feature that carries little information and ignores the target's moving direction and velocity, so the target is easily lost; moreover, the traditional algorithm cannot adapt the window size to the size of the target. To overcome these weaknesses, we introduce a target feature representation that adapts to window-size adjustment, incorporates spatial characteristics, and exploits the nature of the kernel function, so that the probability need not be estimated over all regions. Experimental results show that, compared with Kalman filtering or the mean shift algorithm alone, the proposed weighted-mean improved filtering algorithm greatly reduces the instability of target tracking and improves the robustness of moving-target tracking.
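The core iteration that all of these variants build on is the mean shift step itself; the sketch below shows it with a flat kernel on a 2-D point cloud (the kernel choice and bandwidth are illustrative assumptions).

```python
import numpy as np

def mean_shift(points, start, bandwidth=2.0, iters=50):
    """Kernel-weighted mean shift: repeatedly move the window centre
    to the mean of the samples inside the kernel window, converging
    on the local density mode - the iteration that drives mean-shift
    tracking."""
    c = np.asarray(start, float)
    for _ in range(iters):
        d2 = np.sum((points - c) ** 2, axis=1)
        inside = d2 < bandwidth ** 2        # flat-kernel window
        if not inside.any():
            break
        c = points[inside].mean(axis=0)     # shift to the local mean
    return c

rng = np.random.default_rng(0)
cluster = rng.normal(loc=(5.0, 5.0), scale=0.5, size=(200, 2))
mode = mean_shift(cluster, start=(3.5, 3.5))   # climbs to ~(5, 5)
```

Replacing the flat kernel with a weighted one, and adapting the bandwidth to the target size, is precisely the kind of extension the abstract describes.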
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904506 (2013) https://doi.org/10.1117/12.2035028
Newton’s rings patterns often blur the scanned image when a film is digitized with a film scanner. The phenomenon is a form of equal-thickness interference caused by the air gap between the film and the scanner glass. Many methods have been proposed to prevent the interference, such as film holders, anti-Newton’s-rings glass, and emulsion direct imaging technology, but they are expensive and inflexible. In this paper, the Newton’s rings pattern is shown to be a 2-D chirp signal. The fractional Fourier transform, which can be understood as a chirp-based decomposition, is then introduced to process the pattern, and a digital filtering method in the fractional Fourier domain is proposed to reduce it. The effectiveness of the proposed method is verified by simulation. Compared with traditional optical methods, the proposed method is more flexible and lower in cost.
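The reason a fractional-Fourier (chirp-based) domain helps can be seen in one dimension: a chirp is spread across the ordinary Fourier domain, but after multiplication by a matched conjugate chirp its energy collapses into a single spectral peak, where it is easy to filter. The chirp rate below is an arbitrary illustrative value.

```python
import numpy as np

N = 1024
n = np.arange(N)
alpha = 0.002                              # assumed chirp rate
chirp = np.exp(1j * np.pi * alpha * n ** 2)

# ordinary spectrum: the chirp energy is smeared over many bins
spec_raw = np.abs(np.fft.fft(chirp))

# dechirp (the essence of chirp-based decomposition): energy
# concentrates into one bin, where a notch filter can remove it
spec_dechirped = np.abs(np.fft.fft(chirp * np.exp(-1j * np.pi * alpha * n ** 2)))

concentration = spec_dechirped.max() / spec_raw.max()
```

In the 2-D case the Newton's rings pattern plays the role of the chirp, and filtering in the matched fractional domain removes the rings while leaving the film content comparatively untouched.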
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904507 (2013) https://doi.org/10.1117/12.2038144
We present the use and characterization of a single-photon detector (SPD) for active micro-pulse laser imaging. Laser active imaging obtains two-dimensional (2-D) intensity information about objects using continuous or pulsed laser illumination and an image sensor array. The maximum range of laser active imaging is limited by the performance of the image sensor, whose noise can seriously lower the obtainable SNR and degrade the quality of the reconstructed image. This paper presents a photon-counting micro-pulse laser active imaging method that uses an SPD as the receiver and a micro-pulsed laser as the source. The SPD detects the laser echo; through a repeated multi-cycle detection strategy, every detected photon event is treated as an independent measurement of the laser echo, and the intensity information of the object is acquired by estimating the echo response probability. We chose a Geiger-mode avalanche photodiode (GM-APD) based approach, extending the methods of micro-pulse laser active imaging. In our implementation, the number of TTL pulses output by the GM-APD within the pixel dwell time was recorded by a LabVIEW-programmed instrument, and the laser echo response probability of the GM-APD was then established with a full-waveform analysis algorithm. This approach combines remote imaging with single-photon sensitivity and laser active imaging.
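The statistical heart of the multi-cycle strategy is simple: each micro-pulse cycle is an independent Bernoulli trial of the detector firing, and the per-pixel intensity estimate is the fraction of cycles with a detection event. A minimal simulation (detection probability and cycle count are illustrative):

```python
import random

def echo_response_probability(detect_prob, cycles=10000, seed=1):
    """Multi-cycle photon counting: each cycle is an independent
    Bernoulli trial of the Geiger-mode APD firing on the laser echo;
    the intensity estimate is the observed firing fraction."""
    rng = random.Random(seed)
    hits = sum(rng.random() < detect_prob for _ in range(cycles))
    return hits / cycles

p_hat = echo_response_probability(0.3)   # estimates the true value 0.3
```

The estimator's standard error shrinks as 1/sqrt(cycles), which is why repeating the measurement over many micro-pulse cycles recovers intensity even at single-photon signal levels.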
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904508 (2013) https://doi.org/10.1117/12.2035370
In this paper, we consider the task of hole filling in depth maps with the help of an associated color image, taking a supervised learning approach. The model is learnt from a training set containing pixels with known depth values; supervised learning is then applied to predict the depth values in the holes. Our model uses a regional Markov random field (MRF) that incorporates multiscale absolute and relative features (computed from the color image) and models depth not only at individual points but also between adjacent points. Experiments show that the proposed approach recovers fairly accurate depth values and achieves a high-quality depth map.
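The pairwise part of such an MRF encodes the prior that depth is smooth between adjacent points that look alike in the color image. A greatly simplified, non-learned stand-in for that idea fills each hole pixel from its valid neighbours, weighted by colour similarity (all parameters here are illustrative):

```python
import numpy as np

def fill_depth_holes(depth, color, sigma=10.0):
    """Simplified colour-guided hole filling: each hole pixel
    (depth == 0) takes a weighted average of its valid 4-neighbours,
    weighted by colour similarity, mimicking the MRF smoothness term
    between adjacent points."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            if depth[y, x] != 0:
                continue
            num = den = 0.0
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] != 0:
                    wgt = np.exp(-((color[y, x] - color[ny, nx]) ** 2) / sigma ** 2)
                    num += wgt * depth[ny, nx]
                    den += wgt
            if den > 0:
                out[y, x] = num / den
    return out

# the hole pixel matches the left region in colour, so it takes depth 5
depth = np.array([[5.0, 5.0, 5.0], [5.0, 0.0, 9.0], [5.0, 9.0, 9.0]])
color = np.array([[10.0, 10.0, 10.0], [10.0, 10.0, 90.0], [10.0, 90.0, 90.0]])
filled = fill_depth_holes(depth, color)
```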
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904509 (2013) https://doi.org/10.1117/12.2036679
We introduce a new superpixel segmentation algorithm whose real-time performance makes it practical for machine vision systems. The algorithm has two steps. First, simple linear clustering with O(N) complexity is used for efficient initial segmentation. Second, to further refine boundary localization, a region-competition step is applied to the superpixels’ edge points and then iterated on the unstable edge points. Because only the superpixels’ edge points are considered, and most edge points become stable quickly, the clustering samples are significantly compressed, speeding up the process. Experimental results on the Berkeley BSDS500 dataset show that the segmentation quality of the proposed method is slightly better than that of SLIC, a state-of-the-art superpixel segmentation algorithm. In addition, the method achieves a speedup of about 5× over the original SLIC algorithm, processing the 481×321 images of BSDS500 at more than 30 frames per second.
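The linear clustering step both here and in SLIC assigns each pixel to the nearest cluster centre in a joint (colour, position) space. A minimal assignment pass (the compactness weight `m` and grid step `S` are illustrative; a full implementation would restrict each pixel to nearby centres, which is what makes the pass O(N)):

```python
import numpy as np

def slic_assignment(img, centers, m=10.0, S=4):
    """One SLIC-style assignment pass: each pixel joins the centre
    that minimises colour distance plus m/S-weighted spatial
    distance.  (The toy image has only two centres; in a real SLIC
    pass each pixel checks a constant number of nearby centres.)"""
    h, w = img.shape
    labels = np.zeros((h, w), int)
    for y in range(h):
        for x in range(w):
            best, best_d = 0, np.inf
            for k, (ci, cy, cx) in enumerate(centers):
                dc = (img[y, x] - ci) ** 2                 # colour term
                ds = (y - cy) ** 2 + (x - cx) ** 2         # spatial term
                d = dc + (m / S) ** 2 * ds
                if d < best_d:
                    best, best_d = k, d
            labels[y, x] = best
    return labels

# two flat regions: each half of the image joins its own centre
img = np.hstack([np.zeros((4, 4)), np.full((4, 4), 255.0)])
centers = [(0.0, 2, 2), (255.0, 2, 6)]    # (intensity, y, x)
labels = slic_assignment(img, centers)
```

The proposed method's refinement then touches only the pixels on superpixel boundaries, which is why it runs several times faster than re-clustering every pixel.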
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450A (2013) https://doi.org/10.1117/12.2036877
We present a way to construct a complete set of scaling, rotation, and translation invariants extracted directly from Zernike moments. A Zernike moment can be constructed from radial moments, so our approach to making the Zernike moment invariant is to make its component radial moments invariant. We use matrix notation to express the relationship between radial and Zernike moments, which makes the derivation easier to follow. The translation-invariant radial moment is introduced first, since it is the most complicated of the three invariants; rotation and scaling invariance are then achieved by normalizing out the factors caused by rotation and scaling. The invariant radial moment is formed by combining the three invariant parts. Experiments were conducted to test the invariance: on an image library of 23,329 files, generated by translating, rotating, and zooming a single original Latin-character image, the ratio of standard deviation to mean of the proposed moments is mostly near 1%. In addition, a retrieval experiment on the MPEG-7 CE-Shape-1 Part A library tests discrimination ability: the recall rate is 96.6% on Part A1 and 100% on Part A2.
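The rotation-invariance mechanism can be demonstrated directly: rotating the image multiplies a complex radial moment only by a unit-modulus phase factor, so its magnitude is unchanged. The sketch below uses a raw complex moment of order (p, q) over the unit disk (the radial-moment building block, without the Zernike radial polynomial) and an exact 90° rotation so the grid maps onto itself:

```python
import numpy as np

def radial_moment(img, p, q):
    """Complex moment sum f(x, y) * r^p * exp(-i*q*theta) over the
    unit disk.  Rotation by phi multiplies it by exp(-i*q*phi), so
    its magnitude is rotation invariant."""
    n = img.shape[0]
    y, x = np.mgrid[:n, :n]
    cx = cy = (n - 1) / 2.0
    r = np.sqrt((x - cx) ** 2 + (y - cy) ** 2) / (n / 2.0)
    theta = np.arctan2(y - cy, x - cx)
    mask = r <= 1.0
    return np.sum(img * (r ** p) * np.exp(-1j * q * theta) * mask)

rng = np.random.default_rng(2)
img = rng.random((33, 33))
m_orig = abs(radial_moment(img, 2, 2))
m_rot = abs(radial_moment(np.rot90(img), 2, 2))   # exact 90° rotation
```

Translation and scale invariance require the extra normalisation steps the paper derives; only the rotation part is shown here.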
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450B (2013) https://doi.org/10.1117/12.2037173
Polarization detection provides polarization information about objects that conventional detection techniques cannot obtain. To make full use of this information, various polarization image fusion algorithms have been developed. In this research, we propose a polarization image fusion algorithm based on an improved pulse-coupled neural network (PCNN). The improved PCNN uses the polarization parameter images to generate a fused polarization image that retains object detail for polarization analysis, with the matching degree M as the fusion rule. The improved-PCNN fused image is compared with images fused by the Laplacian pyramid (LP), wavelet, and standard PCNN algorithms, and several performance indicators are introduced to evaluate them. The comparison shows that the presented algorithm yields images of much higher quality that preserve more detail of the objects.
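The paper plugs the matching degree M into a PCNN, which is too involved to reproduce here; the sketch below shows the matching-degree fusion rule on its own, in a common standalone form (window size, threshold, and the energy-based saliency choice are assumptions): where the two sources' local energies match, average them; where they disagree, keep the more salient source.

```python
import numpy as np

def fuse_matching_degree(a, b, win=3, thresh=0.75):
    """Matching-degree fusion rule: M compares the local energies of
    the two sources; M > thresh means they agree (average them),
    otherwise the pixel from the higher-energy source is kept."""
    pad = win // 2
    A = np.pad(a.astype(float), pad, mode="edge")
    B = np.pad(b.astype(float), pad, mode="edge")
    out = np.zeros_like(a, dtype=float)
    h, w = a.shape
    for y in range(h):
        for x in range(w):
            wa = A[y:y + win, x:x + win]
            wb = B[y:y + win, x:x + win]
            ea, eb = (wa ** 2).sum(), (wb ** 2).sum()
            M = 2 * (wa * wb).sum() / (ea + eb + 1e-12)  # matching degree
            if M > thresh:                               # sources agree
                out[y, x] = (a[y, x] + b[y, x]) / 2.0
            else:                                        # keep salient one
                out[y, x] = a[y, x] if ea >= eb else b[y, x]
    return out

a = np.full((3, 3), 10.0)                 # detail only present in b
b = np.full((3, 3), 10.0)
b[1, 1] = 200.0
fused = fuse_matching_degree(a, b)        # the detail survives fusion
```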
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450C (2013) https://doi.org/10.1117/12.2037223
To address the low precision and poor real-time performance of image registration, this paper presents an algorithm for extracting and matching image feature points based on the complementary combination of Harris corner detection and the SIFT algorithm. First, exploiting the fast computation of the Harris operator, the algorithm extracts corner points in the image as the initial feature points. SIFT descriptors are then computed for these pre-selected feature points in scale space, yielding a descriptor for each point. Accurate matching between the two images to be stitched is achieved by finding, for each SIFT descriptor, the feature point at minimum Euclidean distance in the other image. Experiments demonstrate that the algorithm combines the speed of the Harris operator with the scale-space invariance of SIFT and is robust to translation, rotation, and scaling. In experiments on 100 images subjected to translation, rotation, or scaling, the proportion of matching points whose coordinates agree exactly with the true coordinates exceeds 95%. The algorithm quickly extracts high-precision feature points for matching, producing seamless stitched images.
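The Harris stage is the cheap part of this pipeline: it scores every pixel by the corner response R = det(M) - k·trace(M)² of the gradient structure tensor. A minimal sketch (box smoothing and k = 0.04 are conventional choices, not taken from the paper):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response from the gradient structure tensor;
    large positive R marks corners, negative R marks edges."""
    gy, gx = np.gradient(img.astype(float))

    def box3(a):
        """3x3 box sum centred at each pixel (simple smoothing)."""
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    ixx, iyy, ixy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    det = ixx * iyy - ixy ** 2
    return det - k * (ixx + iyy) ** 2

# a bright square on a dark background: strong response at its
# corners, negative response along its straight edges
img = np.zeros((12, 12))
img[3:9, 3:9] = 1.0
R = harris_response(img)
```

Only the points that survive this corner test get the (much more expensive) SIFT description, which is where the speed of the combined algorithm comes from.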
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450D (2013) https://doi.org/10.1117/12.2037345
Subcutaneous vein images are often obtained by exploiting the difference in near-infrared (NIR) absorption between a vein and its surrounding tissue under NIR illumination. High-quality vein images are critical for biometric identification, which requires segmenting the vein skeleton accurately from the original images. To address this issue, we propose a vein image segmentation method based on the simple linear iterative clustering (SLIC) method and the Niblack thresholding method. SLIC pre-segments the original images into superpixels, and the information in the superpixels is collected into a block matrix. Niblack thresholding is then applied to binarize the block matrix, from which the segmented vein images are obtained. In several experiments, a larger part of the vein skeleton is revealed than with the traditional Niblack segmentation algorithm.
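The Niblack stage is a classic local threshold T = m + k·s, with m and s the mean and standard deviation in a sliding window; here it is sketched directly on pixels rather than on the paper's superpixel block matrix (window size and k = -0.2 are conventional values):

```python
import numpy as np

def niblack_threshold(img, win=3, k=-0.2):
    """Niblack local thresholding: T = local mean + k * local std;
    pixels darker than T are marked as vein (foreground)."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), bool)
    for y in range(h):
        for x in range(w):
            wdw = p[y:y + win, x:x + win]
            T = wdw.mean() + k * wdw.std()
            out[y, x] = img[y, x] < T      # dark vein pixel
    return out

# a dark vertical "vein" on a brighter background
img = np.full((5, 5), 200.0)
img[:, 2] = 60.0
veins = niblack_threshold(img)
```

Because T adapts to each neighbourhood, the dark line is picked out even though no single global threshold would separate it cleanly in an unevenly lit NIR image.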
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450E (2013) https://doi.org/10.1117/12.2037553
Because the spatial resolution of the low-resolution (LR) images from a plenoptic camera is tightly constrained by the number of micro-lenses, multi-frame super-resolution methods can be applied to enhance it. Multi-frame super-resolution reconstruction obtains a high-resolution image from several low-resolution images of the same scene. Among the various super-resolution methods, regularized methods are widely used because of their advantages in solving ill-posed problems. In this paper, several regularized super-resolution methods are applied to enhance the spatial resolution of light field images. Reconstruction results on synthetic low-resolution images confirm that all of the regularized super-resolution algorithms suppress Gaussian noise while preserving edge information, and experiments on real data also confirm the effectiveness of the applied algorithms.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450F (2013) https://doi.org/10.1117/12.2034076
This paper describes a corrected tracking algorithm that improves the precision and accuracy of the Camshift algorithm for tracking vehicles. An improved three-frame difference is combined with Camshift to recognize the exact region of a moving vehicle. First, to correct the error introduced by the three-frame difference and obtain an accurate tracking window automatically, a simplified real-time vehicle template is established during the three-frame difference procedure; the data of the resulting tracking window are then used by Camshift to track the vehicle. Our algorithm eliminates the spatial and temporal redundancy of inter-frame and three-frame differencing, and improves both the precision of vehicle identification and the accuracy of vehicle tracking. The algorithm was tested on a PC with data from a real video. Experimental results show that it increases vehicle tracking accuracy to 96.15%, compared with 33.33% for inter-frame differencing.
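Plain three-frame differencing, the starting point the paper improves on, declares a pixel moving only if it changed both from frame 1 to 2 and from frame 2 to 3, which suppresses the "ghost" that simple inter-frame differencing leaves at the object's old position. A minimal sketch (the threshold is illustrative):

```python
import numpy as np

def three_frame_difference(f1, f2, f3, thresh=30):
    """Three-frame difference: moving pixels must differ across
    BOTH frame pairs, so the ghost left at the old position by
    simple inter-frame differencing is rejected."""
    d12 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    d23 = np.abs(f3.astype(int) - f2.astype(int)) > thresh
    return d12 & d23

# a bright "vehicle" pixel moving one column per frame
f1, f2, f3 = (np.zeros((3, 5), int) for _ in range(3))
f1[1, 1] = f2[1, 2] = f3[1, 3] = 255
mask = three_frame_difference(f1, f2, f3)
```

Only the middle-frame position survives: the old position (column 1) fails the second difference and the new one (column 3) fails the first, so the mask localises the object in the reference frame.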
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450G (2013) https://doi.org/10.1117/12.2034377
Much pedestrian detection research has focused on improving detection performance without considering detection speed, making the resulting algorithms unsuitable for the real-world requirement of real-time processing. To address this, we first propose a pre-processing structure, hierarchical HOG matrices, to replace the traditional integral histogram of gradients; it stores more data in the pre-processing phase to reduce computation time. A matrix-based detection structure is also proposed, which organizes the massive computations of the scanning detection process into matrix operations to optimize overall speed. We then add multiple-instance learning to the fast pedestrian detection algorithm to further improve its accuracy. Experiments demonstrate that the proposed fast and robust pedestrian detection algorithm based on multiple-instance features achieves accuracy comparable to the latest algorithms, with the best speed among algorithms of the same accuracy level.
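The integral structures this paper builds on all descend from the summed-area table: after one linear-time pass, the sum over any rectangular block is read off with four look-ups, so a scanning-window detector never re-sums its cells. A minimal sketch on raw intensities (a HOG variant would keep one such table per orientation bin):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border row/column, built in one
    O(N) pass of cumulative sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def block_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1): four table look-ups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
s = block_sum(ii, 1, 1, 3, 3)   # sum of img[1:3, 1:3] = 5 + 6 + 9 + 10
```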
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450H (2013) https://doi.org/10.1117/12.2036495
For infrared object detection applications, a novel generalized cumulative sum (GCUSUM) processing scheme is presented. In a typical infrared search and track (IRST) system, the appearance and disappearance of an object can be regarded as a change-point detection problem in statistics, for which generalized cumulative sum processing is an effective solution. The analysis focuses on the selection of the GCUSUM detection threshold and the relations among the threshold, false-alarm rate, detection probability, and signal-to-noise ratio. Further work extends a single-band IRST system to a multi-band IRST system and improves the implementation of the GCUSUM algorithm. Results of theoretical analysis and simulation show that the modified algorithm achieves excellent object detection performance on infrared image sequences from a real IRST system.
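The plain one-sided CUSUM underlying the generalized scheme accumulates the excess of each sample over the background level, clamps the sum at zero, and raises an alarm when it crosses a threshold h; choosing h trades false alarms against detection delay, which is the trade-off the paper analyses. A noise-free sketch (background level, drift, and h are illustrative):

```python
def cusum(samples, mu0=0.0, drift=0.5, h=5.0):
    """One-sided CUSUM change-point detector: accumulate
    (sample - mu0 - drift), clamped at zero; alarm when the
    statistic exceeds the threshold h."""
    g = 0.0
    for i, x in enumerate(samples):
        g = max(0.0, g + x - mu0 - drift)
        if g > h:
            return i          # change detected at this sample index
    return -1                 # no change detected

# background level 0; a target of amplitude 2 appears at index 20
data = [0.0] * 20 + [2.0] * 20
alarm = cusum(data)           # fires a few samples after onset
```

With these numbers the statistic grows by 1.5 per sample after onset, so the alarm fires at index 23: a three-sample detection delay bought in exchange for immunity to sub-threshold fluctuations.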
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450I (2013) https://doi.org/10.1117/12.2036686
By transferring prior knowledge from source domains and synthesizing it with new knowledge extracted from the target domain, learning performance can be improved when training data in the target domain are insufficient. In this paper we propose a new method to transfer a deformable part model (DPM) for object detection, using sharable filters from offline-trained auxiliary DPMs of similar categories together with new filters learnt from the target training samples to improve the target object detector. A DPM consists of a collection of root and part filters. The filters of the auxiliary detectors capture sharable appearance features and can be used as prior knowledge. The sharable filters are employed by the new detector with a coefficient-reweighting algorithm to fit the target object better. Meanwhile, the target object still has some distinct local appearance features that the part filters in the auxiliary filter pool cannot represent, so new part filters are learnt from the training samples of the target object and added to the filter pool as a complement. The final model is an assembly of transferred auxiliary filters and additional target filters. With a latent transfer learning algorithm, appropriate local features are extracted both to transfer the auxiliary filters and to describe the distinct target filters. Our experiments demonstrate that the proposed strategy outperforms several state-of-the-art methods.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450J (2013) https://doi.org/10.1117/12.2037512
Real-time, accurate motion detection is a key step in many visual applications, such as object detection and smart video surveillance. Although considerable research effort has been devoted to it, it remains challenging because of illumination variation and other factors. To enhance robustness to illumination changes, many block-based motion detection algorithms have been proposed. However, these methods usually neglect the influence of different block sizes and cannot choose the background-modeling scale automatically as the environment changes, which limits their flexibility and the scenes in which they can be applied. In this paper, we propose a multi-scale motion detection algorithm that benefits from different block sizes. An adaptive linear fusion strategy is designed by analyzing the accuracy and robustness of the background models at different scales, and the weights of the different scales are adjusted during detection as the scene changes. In addition, to reduce the computational cost at each scale, we design an integral-image structure for the HOG features of the different scales, so that all features need to be computed only once. Experiments on several outdoor scenes demonstrate the performance of the proposed model.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450K (2013) https://doi.org/10.1117/12.2037839
Crowd density estimation is a hot topic in the computer vision community. Established algorithms mainly focus on moving crowds, employing background modeling to obtain crowd blobs. However, people’s motion is not obvious in many settings, such as an airport waiting hall or a railway station lobby. Moreover, conventional algorithms cannot yield desirable results at all crowding levels because of occlusion and clutter. We propose a hybrid method to address these problems. First, statistical learning is introduced for background subtraction, comprising a training phase and a test phase: the crowd images are gridded into small blocks labeled as foreground or background, HOG features are extracted from each block and fed into a binary SVM, and crowd blobs are obtained from the classification results of the trained classifier. Second, the crowd images are treated as texture images, so the estimation problem can be formulated as texture classification, and the density level is derived from the classification results. We validate the proposed algorithm on real scenarios where crowd motion is not obvious. Experimental results demonstrate that our approach obtains the foreground crowd blobs accurately and works well for different levels of crowding.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450L (2013) https://doi.org/10.1117/12.2038084
Hand gesture recognition has attracted increasing interest in computer vision and image processing. Recent work on hand gesture recognition faces two major problems: how to detect and extract the hand region from color-confusing background objects, and the expensive computational cost of the kinematic hand model with up to 27 degrees of freedom. This paper proposes a stable, real-time static hand gesture recognition system. Our contributions are as follows. First, to deal with color-confusing background objects, we take RGB-D (RGB plus depth) information into account, so that foreground and background objects can be segmented well; additionally, a coarse-to-fine model that exploits skin color helps us extract the hand region robustly and accurately. Second, since the principal direction of the hand region is arbitrary, we use principal component analysis (PCA) to estimate and then compensate for it. Finally, to avoid the expensive computational cost of traditional optimization, we design a fingertip filter and detect extended fingers simply by computing their distances to the palm center and their curvature; the number of extended fingers is then reported as the recognition result. Experiments verify the stability and speed of our algorithm. On a data set captured by the depth camera, it recognizes the 6 pre-defined static hand gestures robustly with an average accuracy of about 98.0%. Furthermore, the average computation time for each 640×480 image is 37 ms, so the method can be extended to many real-time applications.
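The PCA-based direction compensation can be illustrated on a binary hand mask: the principal eigenvector of the pixel-coordinate covariance gives the region's dominant axis, whose angle can then be compensated by rotation. A minimal sketch; the thin-bar test mask is ours, not the paper's data.

```python
import numpy as np

def principal_angle(mask):
    """Principal direction (degrees, mod 180) of a binary region via PCA
    on its pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                      # center the point cloud
    cov = pts.T @ pts / len(pts)
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]             # axis of largest variance
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# A thin horizontal bar: its principal axis should be ~0 degrees.
mask = np.zeros((20, 20), dtype=bool)
mask[9:11, 2:18] = True
angle = principal_angle(mask)
```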
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450M (2013) https://doi.org/10.1117/12.2034253
A high-speed banknote sorting system must process massive data volumes and complex algorithms in real time. This paper proposes an embedded processing system that realizes high-speed acquisition and real-time processing of banknote images. The system is a customized, flexible architecture consisting of one large-scale FPGA and four high-performance DSP chips; the five processors communicate with each other over a RapidIO bus. After evaluating the computational overhead, the data throughput, and the hardware characteristics, we present the partitioning of the whole processing program between the FPGA and the DSPs. To make full use of the FPGA's high parallelism and the DSPs' deep pipelines, the FPGA runs parallel algorithms with a large amount of computation but low control-flow complexity, and the remaining algorithms are assigned to the four DSPs. Finally, the whole image-processing program runs at 40 frames per second on the embedded platform. The system has been successfully used in a high-speed banknote sorting device, where it has proved stable and reliable, and its processing capability has been verified in large-scale operation.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450N (2013) https://doi.org/10.1117/12.2038174
The digital holographic method is used to characterize the phase modulation depth of a phase-only LCOS device. Compared with conventional approaches, digital holography obtains information over the whole field of view, and it is a non-contact, lossless, high-fidelity way to retrieve the phase distribution. In this paper, lensless Fourier transform digital holography is employed for its simple setup and reconstruction process. In an LCOS device, the phase modulation is controlled by displaying gray-level images on its active area. For conventional phase modulation characterization, gray levels from 0 to 255 are displayed in steps of 10, with one recording each, which makes a complete calibration time-consuming. In our method, a mask covering the entire gray-level range 0-255 is displayed on the LCOS active area and a single hologram is recorded; its reconstruction gives the phase modulation depth of the LCOS over the entire gray-level range. To avoid aberrations, a double-exposure method is used in which two holograms are recorded, one with the 0-255 mask and the other with a zero-gray-level mask. The sorting-by-reliability, following-a-noncontinuous-path (SNRCP) phase unwrapping algorithm is used to unwrap the final result. The main advantages of this method are the smaller number of recorded holograms and the easy, real-time calibration. The results are compared with those of the conventional Young's double-slit method, which is widely used to obtain the phase modulation depth of LCOS devices, and they are in good agreement, verifying the efficiency of our method.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450O (2013) https://doi.org/10.1117/12.2041743
Fluorescence in situ hybridization (FISH) is a modern molecular biology technique for detecting genetic abnormalities in the number and structure of chromosomes and genes. FISH is typically employed for prenatal diagnosis of congenital dementia in obstetrics and gynecology departments. It is also routinely used to identify breast cancer patients who are known to respond well to Her2-targeted therapy. During the microscopic observation phase, the technician counts the green probe dots and red probe dots contained in a single nucleus and calculates their ratio, and this procedure must be repeated for hundreds of nuclei. Successful implementation of FISH tests critically depends on a suitable fluorescence microscope, which is primarily imported from overseas because the complexity of such a system is beyond the maturity of the domestic optoelectronic industry. In this paper, the typical requirements of a fluorescence microscope suitable for FISH applications are first reviewed. The focus is on the system design and computational methods of an automatic fluorescence microscope with high-magnification APO objectives, a fast-spinning automatic filter wheel, an automatic shutter, a cooled CCD camera as the photodetector, and a software platform for image acquisition, registration, pseudo-color generation, multi-channel fusion, and multi-focus fusion. Preliminary FISH experiments indicate that the system satisfies routine FISH microscopic observation tasks.
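The dot-counting step reduces to thresholding each fluorescence channel, counting connected components, and taking the green/red ratio per nucleus. A minimal sketch with a hypothetical toy nucleus; the thresholds and intensities are made up, and the paper's software surely uses a more robust spot detector.

```python
import numpy as np

def count_dots(channel, thresh):
    """Count bright probe dots: threshold, then count 4-connected components."""
    mask = channel > thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                count += 1
                stack = [(r, c)]
                seen[r, c] = True
                while stack:                       # flood-fill one dot
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

# Toy nucleus: two "green" dots and one "red" dot (intensities are made up).
green = np.zeros((16, 16)); green[2:4, 2:4] = 200; green[10:12, 10:12] = 180
red = np.zeros((16, 16)); red[6:8, 6:8] = 150
ratio = count_dots(green, 100) / count_dots(red, 100)   # green/red ratio
```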
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450P (2013) https://doi.org/10.1117/12.2042127
We propose an integral imaging scheme in which the micro-lens array used in the pickup process (MLA 1) and the one used in the display process (MLA 2) have different specifications. The elemental image array EIA 1 is captured through MLA 1 in the pickup process. We derive a pixel mapping algorithm, comprising virtual display and virtual pickup processes, to generate the elemental image array EIA 2 matched to MLA 2. The 3D images reconstructed from EIA 2 with MLA 2 suffer no scaling or distortion. The experimental results confirm the correctness of our theoretical analysis.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450Q (2013) https://doi.org/10.1117/12.2035100
This paper presents simulation results for a high-performance readout integrated circuit (ROIC) with high dynamic range (HDR) designed for long-wave infrared (LWIR) detectors. A special input unit-cell architecture accommodates the wide scene dynamic range, providing more than 70 dB of dynamic range. A capacitive transimpedance amplifier (CTIA) provides a low-noise detector interface capable of operating at low input currents, and a folded-cascode amplifier with a gain of 73 dB is designed. A 6.4 pF integration capacitor, which can store 80 Me-, supports the wide scene dynamic range; because of layout-area constraints, four unit cells share one integration capacitor. A sample-and-hold capacitor is also part of the input unit cell, which allows the infrared focal plane array (IRFPA) to operate in full-frame snapshot mode and provides the maximum available integration time; the integration time is controlled electronically by an external clock pulse. The simulation results show that the circuit works well under a 5 V power supply, the calculated nonlinearity is less than 0.1%, and the total power dissipation is less than 150 mW.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450R (2013) https://doi.org/10.1117/12.2037587
To obtain high-resolution multispectral images, we propose an algorithm for fusing MS and PAN images based on the NSCT and an improved fusion rule. The method considers two aspects: the spectral similarity between the fused image and the original MS image, and the enhancement of the spatial resolution of the fused image. The local spectral similarity between the MS and PAN images guides the selection of high-frequency detail coefficients from the PAN image, which are then injected into the MS image. Thus, spectral distortion is limited while spatial resolution is enhanced. The experimental results demonstrate that the proposed fusion algorithm achieves improvements in integrating MS and PAN images.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450S (2013) https://doi.org/10.1117/12.2037673
Interferograms obtained by temporally-spatially modulated Fourier transform spectrometers are recovered to spectra by the fast Fourier transform (FFT). However, the interferogram is sometimes nonuniformly sampled, so the FFT cannot be applied directly. In this paper, we propose a wavelet-basis fitting method to interpolate the interferogram onto an equally spaced grid, after which the FFT can be used to recover the spectrum. Simulation of the recovered spectrum indicates that the proposed wavelet-basis fitting method interpolates the nonuniformly sampled interferogram effectively, and preliminary results show that it introduces smaller errors than polynomial fitting.
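The resample-then-FFT pipeline can be illustrated with plain linear interpolation standing in for the paper's wavelet-basis fitting; the simulated single-line interferogram, sample count, and fringe frequency below are our assumptions.

```python
import numpy as np

# Nonuniformly sampled interferogram of a single spectral line (simulated).
rng = np.random.default_rng(0)
x_nonuni = np.sort(rng.uniform(0, 1, 512))        # jittered OPD samples
signal = np.cos(2 * np.pi * 40 * x_nonuni)        # 40 fringes across the scan

# Resample onto an equally spaced grid, then recover the spectrum with FFT.
x_uni = np.linspace(0, 1, 512, endpoint=False)
resampled = np.interp(x_uni, x_nonuni, signal)
spectrum = np.abs(np.fft.rfft(resampled))
peak_bin = int(np.argmax(spectrum[1:])) + 1       # skip the DC bin
```

With 512 samples over a unit scan, bin k of the FFT corresponds to k fringes, so the peak should land near bin 40.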
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450T (2013) https://doi.org/10.1117/12.2038065
This paper proposes an interactive psoriasis lesion segmentation algorithm based on the Gaussian mixture model (GMM). Psoriasis is an incurable skin disease that affects a large population worldwide. PASI (Psoriasis Area and Severity Index) is the gold standard used by dermatologists to monitor the severity of psoriasis, and computer-aided methods of calculating PASI are more objective and accurate than human visual assessment. Psoriasis lesion segmentation is the basis of the whole calculation, and it differs from common foreground/background segmentation problems. Our algorithm is inspired by GrabCut and consists of three main stages. First, the skin area is extracted from the background scene by transforming the RGB values into the YCbCr color space. Second, a rough segmentation of normal skin and psoriasis lesion is obtained by thresholding a single Gaussian model; the thresholds are adjustable, which enables user interaction. Third, two GMMs, one for the initial normal skin and one for the psoriasis lesion, are built to refine the segmentation. Experimental results demonstrate the effectiveness of the proposed algorithm.
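Stages two and three can be sketched in one dimension: an adjustable threshold gives the rough skin/lesion split, and a two-component GMM fitted by EM refines it. The "redness" values and the threshold below are hypothetical, and the paper's GMMs are built on color features rather than this 1-D toy.

```python
import numpy as np

def gmm_refine(values, init_labels, iters=20):
    """Refine an initial skin/lesion split with a 2-component 1-D GMM (EM)."""
    mu = np.array([values[init_labels == k].mean() for k in (0, 1)])
    var = np.array([values[init_labels == k].var() + 1e-6 for k in (0, 1)])
    pi = np.array([np.mean(init_labels == k) for k in (0, 1)])
    for _ in range(iters):
        # E-step: responsibilities under each Gaussian
        lik = pi / np.sqrt(2 * np.pi * var) * \
              np.exp(-(values[:, None] - mu) ** 2 / (2 * var))
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = resp.sum(axis=0)
        pi = nk / len(values)
        mu = (resp * values[:, None]).sum(axis=0) / nk
        var = (resp * (values[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return resp.argmax(axis=1)

rng = np.random.default_rng(1)
skin = rng.normal(60, 5, 300)          # hypothetical redness of normal skin
lesion = rng.normal(90, 5, 200)        # psoriasis lesion is redder
values = np.concatenate([skin, lesion])
init = (values > 75).astype(int)       # stage-2 rough threshold (adjustable)
labels = gmm_refine(values, init)      # stage-3 GMM refinement
```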
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450U (2013) https://doi.org/10.1117/12.2037185
Video monitoring of stationary scenes is a widely used surveillance mode. To improve the video SNR (signal-to-noise ratio) in stationary scenes, an adaptive 3D denoising scheme based on background subtraction and block judgment is presented. A multi-frame averaging method based on inter-frame differences estimates the background; the background is updated with a weighted average of the average frame and the original background frame, and temporal filtering is completed during this update. Moving pixels are first detected with a background-difference algorithm and then re-judged with the block judgment method. The proposed algorithm is implemented on a DSP platform. Experimental results on low-SNR video show that the noise is reduced noticeably while most edges and details are retained and ghosting is avoided, achieving a significant improvement in video quality.
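The background-update-plus-detection loop described above can be sketched as follows; the update rate and foreground threshold are hypothetical values, and the paper's block judgment stage is omitted.

```python
import numpy as np

ALPHA = 0.05          # background update rate (hypothetical value)
TAU = 30.0            # foreground threshold on the absolute difference

def update_and_detect(frame, background):
    """Background-difference motion mask plus weighted-average background update."""
    diff = np.abs(frame.astype(float) - background)
    moving = diff > TAU
    # Update the background only at static pixels, so moving objects
    # are not absorbed into the model (avoids ghosting).
    background = np.where(moving, background,
                          (1 - ALPHA) * background + ALPHA * frame)
    return moving, background

bg = np.full((8, 8), 100.0)
frame = bg.copy()
frame[2:4, 2:4] = 200.0                 # a moving object enters
mask, bg2 = update_and_detect(frame, bg)
```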
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450V (2013) https://doi.org/10.1117/12.2032442
Quantitative phase imaging of cells with high accuracy in a completely noninvasive manner is a challenging task. To address this need, an interferometric phase microscope is described that relies on off-axis interferometry, confocal microscopy, and high-speed image capture. Phase retrieval from a single interferogram is performed with algorithms based on the fast Fourier transform, the traditional Hilbert transform, and a two-step Hilbert transform, respectively. Furthermore, a phase aberration compensation approach is applied to correct the phase distributions of red blood cells obtained via the three methods, without prior knowledge, removing the wavefront curvature introduced by the microscope objectives, off-axis imaging, and other factors, which would otherwise hinder the phase reconstruction. The improved results reveal the inner structures of the red blood cells more clearly. These developments shed light on future directions and applications of quantitative phase imaging in basic and clinical research.
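The Fourier-transform variant of single-interferogram phase retrieval can be sketched in one dimension: isolate the +1 diffraction order around the carrier frequency, shift it to baseband, and take the phase angle. The carrier frequency, bandwidth, and test phase below are our assumptions, not the paper's parameters.

```python
import numpy as np

def retrieve_phase(fringe, carrier_bin, halfwidth):
    """FFT phase retrieval from a 1-D off-axis fringe trace: keep only the
    +1 order around the carrier, shift it to DC, take the angle."""
    spec = np.fft.fft(fringe)
    sideband = np.zeros_like(spec)
    lo, hi = carrier_bin - halfwidth, carrier_bin + halfwidth + 1
    sideband[lo:hi] = spec[lo:hi]                   # keep only the +1 order
    analytic = np.fft.ifft(np.roll(sideband, -carrier_bin))
    return np.angle(analytic)                        # wrapped object phase

n = 512
x = np.arange(n) / n
phi = 1.2 * np.sin(2 * np.pi * x)                    # a smooth "cell" phase
fringe = 1 + np.cos(2 * np.pi * 64 * x + phi)        # carrier at bin 64
rec = retrieve_phase(fringe, 64, 20)
err = np.max(np.abs(rec - phi))
```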
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450W (2013) https://doi.org/10.1117/12.2032837
A retina-like sensor is characterized by a space-variant resolution mimicking the distribution of photoreceptors in the human retina. It is divided into two areas, a central area and a peripheral area: pixel density is highest in the center and decreases monotonically toward the periphery. Such space-variant imaging allows high-resolution tasks using the central region while the lower-resolution periphery provides relevant information about the background. In a high-speed forward motion field, because the imaging system approaches or recedes from the object at high speed, the recorded image is blurred radially; this radial blur can, however, be reduced by changing the pixel layout of the retina-like sensor. Image quality assessment of the output images of different retina-like sensor structures can therefore provide theoretical guidance for establishing the optimal sensor layout. This paper first analyzes the distortion process of the retina-like sensor output image in high-speed forward motion and finds that the distortion comprises under-sampling distortion and radial blur distortion. Based on the characteristics of the distorted image, we propose a full-reference image quality assessment method for such images, named Qretina. Following the distortion process, the method first performs an under-sampling evaluation and a radial-blur evaluation, and then weights the two parts to obtain the final image quality score. The experimental results show that Qretina performs better than SSIM.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450X (2013) https://doi.org/10.1117/12.2033054
To accurately identify the parameters of a high-frequency vibration blur model from a single TDI image, this study analyzes the imaging function when high-frequency vibration occurs in TDI mode. A method for simplifying the vibration model is presented and verified, showing that the MTF depends only on the motion angle and the vibration amplitude. Three algorithms for identifying the motion direction are compared: the Radon transform, autocorrelation analysis, and the cepstral method; the comparison shows that the cepstral method measures the motion angle most accurately. Four algorithms for identifying the vibration amplitude are compared: the quadratic Radon transform, cepstral analysis, autocorrelation analysis, and direct analysis of the frequency spectrum; direct analysis of the log frequency spectrum proves the most accurate for the vibration amplitude. The study suggests that combining the cepstral method with direct analysis of the log frequency spectrum can yield highly accurate parameters of the high-frequency vibration model.
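The cepstral method rests on the fact that periodic structure in the blur shows up as a peak in the inverse FFT of the log magnitude spectrum. A minimal 1-D sketch, using an echoed noise signal as a stand-in for a vibration-blurred image row; the lag and echo strength are made-up values.

```python
import numpy as np

def cepstrum(signal):
    """Real cepstrum: inverse FFT of the log magnitude spectrum. Periodic
    spectral ripple appears as a peak at the corresponding quefrency."""
    spec = np.abs(np.fft.fft(signal)) + 1e-12       # avoid log(0)
    return np.real(np.fft.ifft(np.log(spec)))

# An echo at lag 32 mimics the spectral ripple of periodic motion blur.
rng = np.random.default_rng(2)
s = rng.normal(size=1024)
s[32:] += 0.8 * s[:-32]
c = cepstrum(s)
lag = int(np.argmax(c[8:200])) + 8                  # search away from quefrency 0
```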
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450Y (2013) https://doi.org/10.1117/12.2033168
Eye detection plays a vital role in intelligent recognition systems. In this paper, a novel robust and fast eye detection algorithm is proposed. The detection process is divided into two stages: first, coarse eye candidates are extracted based on weighted multi-neighborhood blocks; second, the candidates are validated with a support vector machine (SVM). Data from different face databases, including a self-built database, JAFFE, and FERET, demonstrate the effectiveness and robustness of the proposed method.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90450Z (2013) https://doi.org/10.1117/12.2033179
The optical triangulation probe (OTP), which consists of a light-spot projector and a camera, has found widespread application in three-dimensional (3D) measurement and quality control of products in industrial manufacturing. OTP calibration is extremely important, since performance characteristics such as accuracy and repeatability depend crucially on the calibration results. This paper presents a flexible approach for modeling and calibrating the OTP that only requires planar patterns observed from a few different orientations, together with light spots projected on the planes. In the calibration procedure, the structural parameters of the OTP are computed: the camera's extrinsic and intrinsic parameters, including the lens distortion coefficients, and the directional equation of the projector's light axis. For the measuring procedure, the 3D computation is formulated concisely in terms of the calibration results. Experimental tests on a real system confirm suitable accuracy and repeatability. Furthermore, the proposed technique is easily generalized to OTP integration in robot arms or coordinate measuring machines (CMMs).
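Once the camera and the projector's light axis are calibrated, the core 3-D computation can be sketched as the closest point between two rays: the camera ray through the detected spot and the projector's light axis. The geometry below is a made-up example, not the paper's setup.

```python
import numpy as np

def triangulate(cam_origin, cam_dir, proj_origin, proj_dir):
    """Closest point between the camera ray and the projector's light axis
    (midpoint of the shortest connecting segment)."""
    d1 = cam_dir / np.linalg.norm(cam_dir)
    d2 = proj_dir / np.linalg.norm(proj_dir)
    w = cam_origin - proj_origin
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b                 # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = cam_origin + s * d1
    p2 = proj_origin + t * d2
    return (p1 + p2) / 2

# Camera at the origin looking along +z; laser from (100, 0, 0) aimed at
# (0, 0, 200). Both rays pass through (0, 0, 200) exactly.
p = triangulate(np.array([0., 0., 0.]), np.array([0., 0., 1.]),
                np.array([100., 0., 0.]), np.array([-100., 0., 200.]))
```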
Hong-Dong Zhao, Yi-Yang Yao, Fei Sun, Qin Zhang, Xiao-Hui Yang
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904510 (2013) https://doi.org/10.1117/12.2033411
A chromaticity diagram is also needed in a non-contact instrument for measuring the color of printed material. The purpose of this paper is to design a chromaticity diagram consistent with CIE 1931 and to implement it in MATLAB using digital image processing. A binary (black-and-white) representation of the chromaticity diagram is used, in which the boundary of every color region is delimited by a closed black line. More than 20 colors are selected according to the psychophysiology of vision and the CIE 1931 standard, and their RGB values are given. After each region is filled with its color, the closed black boundary lines are removed and their RGB values are updated to those of the nearest color region. The program, including filters in RGB space, runs until all steps between every two colors conform to the psychophysiology of vision, yielding the chromaticity diagram. The RGB values at every position in the chromaticity diagram can then be read out.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904511 (2013) https://doi.org/10.1117/12.2034176
The traditional median filtering algorithm is designed mainly for a stationary noise density; it smooths the image but blurs edges. The noise density of an Electron Multiplying CCD (EMCCD) image varies with the gain. In this paper, a new noise detection and fuzzy adaptive median filter (NDFAMF) is proposed to overcome these drawbacks. First, noise pixels at the center of the filter window are identified. Second, thresholds are introduced for the detected noise points; based on these thresholds and the median of the filtering window, a fuzzy membership function for the noise points is constructed and used to filter them. Finally, the filtering window adapts its size according to the noise density within it. Simulation and experimental results show that the new algorithm removes noise pixels effectively while preserving image details well. Its performance is better than that of other median filters at low noise densities and remains relatively stable at high noise densities.
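A window-growing adaptive median filter in the spirit described above can be sketched as follows. This is the classic adaptive-median scheme, not the paper's fuzzy-membership variant; the window sizes and test image are assumptions.

```python
import numpy as np

def adaptive_median(img, max_win=7):
    """Median filter that replaces only detected impulse pixels and grows
    its window where the local noise density is high."""
    out = img.astype(float).copy()
    pad = max_win // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            for win in range(3, max_win + 1, 2):
                k = win // 2
                patch = padded[r + pad - k:r + pad + k + 1,
                               c + pad - k:c + pad + k + 1]
                med = np.median(patch)
                if patch.min() < med < patch.max():        # window usable
                    if not (patch.min() < img[r, c] < patch.max()):
                        out[r, c] = med                    # impulse: replace
                    break                                   # else keep pixel
            else:
                out[r, c] = med    # window hit its limit: fall back to median
    return out

img = np.full((9, 9), 128.0)
img[4, 4] = 255.0                  # one salt pixel
clean = adaptive_median(img)
```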
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904512 (2013) https://doi.org/10.1117/12.2034301
The Electron Multiplying Charge Coupled Device (EMCCD) performs outstandingly in low-light imaging thanks to its high sensitivity, high quantum efficiency, and low noise. Clear low-light images are generally obtained by increasing the EMCCD multiplication gain; however, as the gain increases the noise also grows rapidly, strongly degrading imaging quality. Existing noise parameter estimation algorithms for the EMCCD mainly include maximum-likelihood estimation and expectation-maximization methods; these algorithms are complicated and highly sensitive to initial values, which makes them difficult to apply. In contrast, the moment estimation method used in this paper has lower complexity and wider applicability. We therefore study the particularity and complexity of the EMCCD noise distribution model, establish a noise distribution model suitable for image processing, and estimate the EMCCD noise parameters by the method of moments, obtaining high-accuracy estimates. We then apply a wavelet semi-soft threshold algorithm to filter EMCCD images corrupted with mixed Poisson-Gaussian noise simulated from the moment estimates. The simulation results show that the algorithm filters out noise effectively, restores clear images, and preserves image details and edge information.
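Moment estimation for a mixed Poisson-Gaussian model can be sketched via the identity Var = gain × mean + read-noise variance: a straight-line fit of sample variance against sample mean over flat patches recovers both parameters. This is a generic method-of-moments sketch, not the paper's exact estimator; the gain and read-noise values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
GAIN, READ_VAR = 2.0, 9.0                # hypothetical camera parameters

# Flat patches at several illumination levels, mixed Poisson-Gaussian noise.
means, variances = [], []
for level in (10, 40, 90, 160, 250):
    patch = GAIN * rng.poisson(level, 20000) \
            + rng.normal(0, np.sqrt(READ_VAR), 20000)
    means.append(patch.mean())
    variances.append(patch.var())

# Method of moments: Var = GAIN * Mean + READ_VAR, so a linear fit of
# sample variance against sample mean recovers both parameters.
a, b = np.polyfit(means, variances, 1)   # slope ~ GAIN, intercept ~ READ_VAR
```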
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904513 (2013) https://doi.org/10.1117/12.2034854
The super-resolution image restoration algorithm proposed in this paper uses maximum-likelihood estimation (MLE) to restore 2D step images scanned by a differential confocal imaging system, assuming that the image data follow a Poisson distribution. For the optical imaging system, the paper puts forward a more accurate point spread function (PSF) and the concept of image interval matching, and introduces an automatic acceleration method and an iteration termination criterion. Experiments on 2D images of standard steps indicate that a lateral resolution of 0.1 μm is achieved and that the restoration time is noticeably shortened.
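Under a Poisson data model, MLE restoration is commonly computed with the Richardson-Lucy iteration; the abstract does not state that the paper uses exactly this scheme, so the following 1-D sketch is illustrative only (the step heights and PSF are assumptions, and circular convolution is used for simplicity).

```python
import numpy as np

def richardson_lucy(observed, psf_pad, iters=50):
    """MLE restoration under a Poisson imaging model (Richardson-Lucy):
    each iteration multiplies the estimate by a back-projected ratio."""
    otf = np.fft.fft(psf_pad)
    est = np.full_like(observed, observed.mean())
    for _ in range(iters):
        blurred = np.real(np.fft.ifft(np.fft.fft(est) * otf))
        ratio = observed / (blurred + 1e-12)
        est *= np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(otf)))
    return est

n = 64
truth = np.where(np.arange(n) < 32, 10.0, 100.0)    # a step, reduced to 1-D
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])       # assumed symmetric PSF
psf_pad = np.zeros(n); psf_pad[:5] = psf
psf_pad = np.roll(psf_pad, -2)                       # center the PSF at index 0
observed = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf_pad)))
restored = richardson_lucy(observed, psf_pad)
```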
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904514 (2013) https://doi.org/10.1117/12.2035110
In this paper, a multi-modal biometric recognition method for palmprint and hand vein based on feature-level fusion is proposed, combining an improved canonical correlation analysis (CCA) with two-dimensional principal component analysis (2DPCA). After pre-processing, feature vectors of the palmprint and hand-vein images are extracted using 2DPCA and then fused at the feature level using the improved CCA, so that identification can finally be performed by a nearest-neighbor classifier. With this method, the two kinds of biometric information can be fused, the redundancy between features can be effectively eliminated, and the problem of high dimensionality with small sample size can be overcome. Simulation results show that the proposed method can effectively improve the recognition rate.
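The 2DPCA step can be sketched as follows: each image matrix is projected onto the leading eigenvectors of the image covariance matrix, so the 2D structure is kept and no vectorization is needed. This is a generic illustration, not the paper's exact configuration.

```python
import numpy as np

def two_d_pca(images, n_components=5):
    """2DPCA feature extraction for a stack of image matrices
    (shape: n x h x w). Projects each image onto the top eigenvectors
    of the image covariance matrix, preserving the 2D layout."""
    mean = images.mean(axis=0)
    centred = images - mean
    # image covariance: average of A^T A over the centred samples
    G = np.einsum('nij,nik->jk', centred, centred) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    proj = eigvecs[:, ::-1][:, :n_components]  # leading eigenvectors
    return images @ proj, proj                 # features: h x d per image
```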
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904515 (2013) https://doi.org/10.1117/12.2035635
More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible, since facial plastic surgery was already practiced in the early 20th century and even earlier, when doctors dealt with facial war injuries. However, the post-operative result is not always satisfying, because patients cannot see any preview beforehand. In this paper, by combining facial plastic surgery with computer graphics, a novel method for simulating the post-operative appearance is given that demonstrates the modified face from different viewpoints. The 3D human face data are obtained using 3D fringe-pattern imaging systems and CT imaging systems and then converted into the STL (STereoLithography) file format, which is made up of small 3D triangular primitives. The triangular mesh is reconstructed using a hash function. The topmost triangles in depth are picked out by a ray-casting technique. Mesh deformation is based on the frontal triangular mesh during the simulation, deforming the region of interest rather than individual control points. Experiments on a face model show that the proposed 3D animated facial plastic surgery can effectively demonstrate the simulated post-operative appearance.
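The hash-function step for reconstructing shared-vertex connectivity from STL's independent triangles can be sketched like this; the merge tolerance `tol` is an assumed parameter of the sketch.

```python
def build_mesh(triangles, tol=1e-6):
    """Rebuilds shared-vertex connectivity from STL 'triangle soup' by
    hashing rounded vertex coordinates: coincident corners of adjacent
    triangles map to the same dictionary key and hence the same index."""
    vertices, faces, index = [], [], {}
    for tri in triangles:          # tri: three (x, y, z) tuples
        face = []
        for v in tri:
            key = tuple(round(c / tol) for c in v)
            if key not in index:
                index[key] = len(vertices)
                vertices.append(v)
            face.append(index[key])
        faces.append(tuple(face))
    return vertices, faces
```

Two triangles sharing an edge then yield four unique vertices instead of six, which is what makes neighbourhood queries for the later deformation step possible.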
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904516 (2013) https://doi.org/10.1117/12.2035686
Aerial images captured with time delay and integration (TDI) charge-coupled devices (CCDs) can be blurred by three types of motion: forward image motion, turbulence disturbance, and high-frequency vibration. This work proposes a method to construct the three deterministic models separately by discerning or calculating their parameters from a single image blurred by all three. Based on these models, we capture and separate the features that appear in the power spectrum, and select the identification method with the best accuracy for each parameter. The results show that the approach determines the parameters accurately, which helps improve the result of blind restoration algorithms.
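As a minimal illustration of where such features live, the centred log power spectrum can be computed as below; linear motion blur, for example, shows up there as dark parallel stripes whose spacing and orientation encode the blur length and direction.

```python
import numpy as np

def log_power_spectrum(image):
    """Centred log power spectrum of an image: the diagram in which
    motion-blur and vibration signatures are separated."""
    F = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    return np.log1p(np.abs(F) ** 2)   # log1p keeps values finite and >= 0
```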
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904517 (2013) https://doi.org/10.1117/12.2035796
Face recognition in surveillance is a hot topic in computer vision due to the strong demand for public security, and it remains a challenging task owing to large variations in camera viewpoint and illumination. In surveillance, image sets are the most natural form of input once tracking is incorporated. Recent advances in set-based matching also show its great potential for exploring the feature space by making use of multiple samples of each subject. In this paper, we propose a novel method that exploits the salient facial features (eyes, nose, mouth) in set-based matching. To represent image sets, we adopt the affine hull model, which can generate unseen appearances as affine combinations of the sample images. In our proposal, a robust part detector first finds four salient parts in each face image: the two eyes, the nose, and the mouth. For each part, we construct an affine hull model from the local binary pattern histograms of multiple samples of that part; we also construct an affine hull model for the whole face region. We then take the closest distance between corresponding affine hull models as the similarity between parts/face regions, and a weighting scheme combines the five distances (four parts plus the whole face region) into the final distance between two subjects. In the recognition phase, a nearest-neighbor classifier is used. Experiments on the public ChokePoint dataset and our own dataset demonstrate the superior performance of our method.
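The closest distance between two affine hulls has a simple least-squares form, sketched below for sample sets whose columns are feature vectors; this is a generic illustration, not the authors' exact implementation.

```python
import numpy as np

def affine_hull_distance(X1, X2):
    """Closest distance between the affine hulls of two sample sets
    (columns are feature vectors). Each hull is mean + span of the
    centred samples; the minimum of ||(mu1 + U1 a) - (mu2 + U2 b)||
    is found by one least-squares solve."""
    mu1, mu2 = X1.mean(axis=1), X2.mean(axis=1)
    U1 = X1 - mu1[:, None]
    U2 = X2 - mu2[:, None]
    A = np.hstack([U1, -U2])
    coef, *_ = np.linalg.lstsq(A, mu2 - mu1, rcond=None)
    return np.linalg.norm(mu1 + A @ coef - mu2)
```

For two parallel planes one unit apart, the distance comes out as exactly 1.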
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904518 (2013) https://doi.org/10.1117/12.2035938
With the pits and lands of multi-level optical disks using signal waveform modulation smaller than those in DVD disks, the intersymbol interference (ISI) and nonlinear attenuation of the read-out signal become more serious. The ordinary approach uses an equalizer at the sample rate 1/T; we propose instead a method of designing the equalizer at a fixed sample rate with digital interpolation. From an analysis of the multi-level optical disk channel, we obtain the target frequency-response curve and implement it with a seventh-order FIR filter. Read-out experiments with a multi-level optical disk show that the clock of the RF signal can be recovered with the proposed equalizer.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904519 (2013) https://doi.org/10.1117/12.2036249
A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. Firstly, a set of daytime color reference images is analyzed and a false-color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after the mapping, differences between the infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. A novel nonlinear color mapping model is then given by introducing the geometric mean of the input visual and infrared gray values together with a weighted average. To determine the control parameters of the mapping model, boundary conditions are listed according to the principle above. Fusion experiments show that the new method achieves a near-natural appearance of the fused image and, compared with the traditional TNO algorithm, enhances color contrast and highlights bright infrared objects. Moreover, its low complexity makes real-time processing easy to realize, so it is well suited to nighttime imaging apparatus.
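The abstract does not give the mapping model or its boundary-condition parameters, so the following is only a toy illustration of the two ingredients it names, the geometric mean of the visual and infrared gray values and a weighted average; the channel assignments are invented for this sketch.

```python
import numpy as np

def fuse_vis_ir(vis, ir, w=0.5):
    """Toy false-color fusion sketch (NOT the paper's model): combines a
    weighted average and the geometric mean of the two gray images, and
    pushes IR-hot regions toward red, as the mapping principle suggests."""
    vis = vis.astype(float)
    ir = ir.astype(float)
    geo = np.sqrt(vis * ir)            # geometric-mean component
    base = w * vis + (1 - w) * ir      # weighted-average component
    r = np.clip(base + (ir - vis) / 2, 0, 255)
    g = np.clip(geo, 0, 255)
    b = np.clip(base - (ir - vis) / 2, 0, 255)
    return np.stack([r, g, b], axis=-1).astype(np.uint8)
```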
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451A (2013) https://doi.org/10.1117/12.2036546
Detecting the enemy's targets while remaining undetectable plays an increasingly important role in modern warfare. Hyperspectral images provide a large spectral range and high spectral resolution, which are invaluable for discriminating between camouflaged targets and backgrounds. As supervised classification requires prior knowledge that cannot be acquired easily, unsupervised classification is usually adopted to process hyperspectral images for camouflaged target detection, but its low detection accuracy limits this application. Most research on hyperspectral image processing focuses exclusively on the spectral domain and ignores the spatial domain; yet current hyperspectral images offer high spatial resolution, which contains useful information for camouflaged target detection. A new method combining spectral and spatial information is proposed to increase the detection accuracy of unsupervised classification. The method has two steps. In the first step, a traditional unsupervised classifier (e.g. K-MEANS or ISODATA) classifies the hyperspectral image to acquire basic classes or clusters. In the second step, a 3×3 window model and spectral angle mapping are used to test the spatial character of the image; the spatial character is defined as spatial homogeneity and is calculated via the spectral angle. Theoretical analysis and experiments show the method is reasonable and efficient: camouflaged targets are extracted from the background, different camouflaged targets are recognized, and the proposed algorithm outperforms K-MEANS in detection accuracy, robustness, and edge distinction. This demonstrates that the new method is meaningful for camouflaged target detection.
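The spectral angle used to measure spatial homogeneity in the second step is simply the angle between two spectra:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra. In the method's
    second step, each pixel is compared with its 3x3 neighbours using
    this angle to quantify spatial homogeneity."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

Parallel spectra give an angle of 0, so the measure is insensitive to overall illumination scaling, which is the property that makes it useful for camouflage discrimination.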
Guo-ming Xu, Meng-zi Zhang, Guo-chun Zhu, Lei-ji Lu
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451B (2013) https://doi.org/10.1117/12.2036656
Low-resolution images of the same scene at different polarization phase angles (orientations) contain much redundant and complementary information that can be used to construct a high-resolution image. In this paper, we propose a super-resolution (SR) algorithm via sparse and redundant representation that considers the non-local self-similarity across different polarization orientation images. As a redundant over-complete dictionary has many irrelevant atoms, which reduce both the computational efficiency of sparse coding and the representation accuracy, we learn local dictionaries by applying the principal component analysis (PCA) technique. For each image patch to be coded, the best-fitting sub-dictionary is selected by an adaptive sparse-domain selection strategy. To improve the stability and accuracy of sparse coding, a centralized sparse coding algorithm is used. Extensive experimental results demonstrate that the proposed method effectively reconstructs polarization images, preserving edge structure and recovering detail, in terms of PSNR, SSIM, and visual perception.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451C (2013) https://doi.org/10.1117/12.2036660
To get better denoising results, prior knowledge of natural images should be taken into account to regularize the ill-posed inverse problem. In this paper, we propose an image denoising algorithm via non-local similar neighbor embedding in the sparse domain. Firstly, a local statistical feature, the histogram of oriented gradients of image patches, is used to perform clustering: the whole training set is partitioned into subsets with similar local geometric structure, and the centroid of each subset is obtained. Secondly, we apply principal component analysis (PCA) to learn a compact sub-dictionary for each cluster. Then, through sparse coding over the sub-dictionary and neighborhood selection, each image patch to be synthesized is approximated by its top k neighbors. Extensive experimental results validate the effectiveness of the proposed method in both PSNR and visual perception.
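The per-cluster PCA sub-dictionary learning step can be sketched as follows; it is a minimal illustration, and the atom count is an assumed parameter.

```python
import numpy as np

def pca_subdictionary(patches, n_atoms=16):
    """Learns a compact PCA sub-dictionary for one cluster of patches
    (rows are vectorised patches): the leading principal directions of
    the cluster serve as the dictionary atoms."""
    mean = patches.mean(axis=0)
    centred = patches - mean
    cov = centred.T @ centred / len(patches)   # patch covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    atoms = eigvecs[:, ::-1][:, :n_atoms]      # top principal directions
    return mean, atoms
```

Because the atoms are orthonormal, sparse coding over such a sub-dictionary reduces to simple projections, which is what makes the per-cluster dictionaries cheap to use.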
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451D (2013) https://doi.org/10.1117/12.2036665
It is difficult to reproduce the original color of targets faithfully in different illuminating environments using traditional methods. A function that reconstructs the reflection characteristics of every point on the target surface is therefore urgently required to improve the authenticity of color reproduction; this function is known as the bidirectional reflectance distribution function (BRDF). A method of color reproduction based on BRDF measurement is introduced in this paper. Radiometry is combined with colorimetric theory to measure the irradiance and radiance of the GretagMacbeth 24-patch ColorChecker using a PR-715 radiation spectrophotometer from Photo Research, Inc., USA. The BRDF and BRF (bidirectional reflectance factor) values of every color patch relative to the reference area are calculated from the irradiance and radiance, and the color tristimulus values of the 24 ColorChecker patches are thus reconstructed. The results reconstructed by the BRDF method are compared with values calculated from the reflectance measured by the PR-715; finally, the chromaticity coordinates in color space and the color differences between the two are analyzed. The experimental results show that the average color difference and sample standard deviation between the method proposed in this paper and the traditional reflectance-based reconstruction are 2.567 and 1.3049 respectively. Theoretical and experimental analysis indicates that color reproduction based on the BRDF describes the color information of an object more fully than hemispherical reflectance, and that the proposed method is effective and feasible for chromaticity reproduction.
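The core quantities follow directly from the measurements: the BRDF of a patch is the reflected radiance divided by the incident irradiance, and the BRF is the BRDF scaled by π (the ratio to an ideal Lambertian reference).

```python
import math

def brdf(radiance, irradiance):
    """BRDF and BRF of a patch from measured quantities: reflected
    radiance (W m^-2 sr^-1) divided by incident irradiance (W m^-2);
    the BRF is pi times the BRDF."""
    f_r = radiance / irradiance
    brf = math.pi * f_r
    return f_r, brf
```

A perfect Lambertian reflector has BRDF 1/π and hence BRF exactly 1, which is a convenient sanity check for the measurement chain.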
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451E (2013) https://doi.org/10.1117/12.2036759
During the exposure time of a space-borne TDICCD push-broom camera, the charge transfer speed in the push-broom direction and the line-by-line scanning speed of the sensor must match each other strictly. However, since satellite attitude disturbance and camera vibration are inevitable, the speed mismatch cannot be eliminated; it makes the signals of different targets overlay each other and degrades image resolution. The effects of velocity mismatch can be visually observed and analyzed by simulating the image degradation caused by vibration of the optical axis, which is significant for image quality evaluation and for designing restoration algorithms. The first problem to solve is how to model the imaging process in the time and space domains. Because the vibration information used for simulation is usually given as a continuous curve while the pixels of the original image matrix and the sensor matrix are discrete, the two cannot always match each other well, and discrete sampling within the integration time also affects the simulation. An appropriate discrete modeling and simulation method is therefore essential for simulation accuracy and efficiency. This paper analyzes discretization schemes in the time and space domains and, based on the principle of the TDICCD sensor, presents a method to simulate the image quality of the optical system under line-of-sight vibration. The gray value of each pixel in the sensor matrix is obtained by weighted arithmetic, which solves the pixel-mismatch problem. Comparison with a hardware test indicates that this simulation system performs well in accuracy and reliability.
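The weighted arithmetic for obtaining a gray value at a non-integer sensor position is typically bilinear, as in this sketch; the paper's exact weighting scheme is not specified in the abstract.

```python
import numpy as np

def bilinear_sample(image, y, x):
    """Weighted gray value at a non-integer (y, x) position: the four
    surrounding pixels are combined with area weights. Positions must
    lie inside the image (no bounds handling in this minimal sketch)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * image[y0, x0]
            + (1 - dy) * dx * image[y0, x0 + 1]
            + dy * (1 - dx) * image[y0 + 1, x0]
            + dy * dx * image[y0 + 1, x0 + 1])
```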
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451F (2013) https://doi.org/10.1117/12.2036963
A retina-like image sensor is based on the non-uniform sampling of the human eye and log-polar coordinate theory. It offers high-quality data compression and elimination of redundant information. However, retina-like image sensors built with the CMOS process have drawbacks such as high cost, low sensitivity, low signal-output efficiency, and inconvenient updating. This paper therefore proposes a retina-like image sensor based on a space-variant lens array, focusing on the circuit design that supports the whole system. The circuit includes the following parts: (1) a photo-detector array behind the lens array to convert optical signals into electrical signals; (2) a strobe circuit for time-gating of the pixels and parallel paths for high-speed data transmission; (3) in every path, a high-precision digital potentiometer for I-V conversion, ratio normalization, and sensitivity adjustment, a programmable-gain amplifier for automatic gain control (AGC), and an A/D converter; (4) an LCD on which the digital data are displayed and DDR2 SDRAM in which they are stored temporarily; (5) a USB port to transfer the data to a PC; (6) an FPGA controlling the whole system. This circuit offers lower cost, more pixels, convenient updating, and higher signal-output efficiency. Experiments have proved that the grayscale output of every pixel basically matches the target and that a non-uniform image of the target is achieved in real time. The circuit can provide adequate technical support for retina-like image sensors based on space-variant lens arrays.
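The log-polar pixel addressing underlying retina-like sensors can be sketched as follows; the ring growth factor and the ring/sector counts are assumed layout parameters chosen for illustration only.

```python
import math

def log_polar_index(x, y, r0=1.0, n_rings=32, n_sectors=64, growth=1.1):
    """Log-polar addressing: the ring index grows logarithmically with
    eccentricity (coarser pixels toward the periphery, as in the human
    retina) and the sector index grows linearly with angle."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x) % (2 * math.pi)
    ring = int(math.log(max(r, r0) / r0, growth))
    sector = int(theta / (2 * math.pi) * n_sectors)
    return min(ring, n_rings - 1), sector
```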
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451G (2013) https://doi.org/10.1117/12.2036987
We demonstrate single-shot quasi-on-axis digital holography capable of simultaneously capturing two-step phase-shifting interferograms. A dual-channel interferometer is employed to monitor the Gouy phase shift between two orthogonally polarized references, introduced by two confocal lenses. A new algorithm is derived for reconstructing the complex field of the object wavefront according to the character of the Gouy phase shift. Simulations were carried out and recovery software was developed. The proposed approach can be applied to single-shot quasi-on-axis digital holography for real-time measurement.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451H (2013) https://doi.org/10.1117/12.2037282
An improved anomaly detection and classification algorithm based on high-order statistics is presented, in order to solve several challenging problems such as initializing the projection, quantifying the anomaly classes, and evaluating performance. Firstly, the projection vectors are initialized using the idea of the global RX algorithm, which gives priority to detecting the anomalies with the most energy. Secondly, the current data are analyzed for remaining anomaly information, which determines the termination conditions and the number of anomaly classes. Thirdly, two methods evaluate the classification performance quantitatively: one matches the results against reference images to evaluate anomaly detection and background suppression; the other segments the resulting images to calculate features such as the classification rate, the number of detected anomalies, and the number of false alarms. Simulated and experimental results show that the improved algorithm is more robust and detects anomalies better under complex unknown backgrounds than the traditional algorithm does.
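The global RX detector mentioned for initializing the projection scores each pixel by the Mahalanobis distance of its spectrum from the scene mean:

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean under the global covariance.
    cube shape: (rows, cols, bands)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(h, w)
```

Pixels whose spectra deviate most strongly from the background statistics receive the highest scores, which is why this initialization favors the most energetic anomalies.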
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451I (2013) https://doi.org/10.1117/12.2037456
Real-time holographic display faces the heavy computational load of computer-generated holograms and the need for precise intensity modulation of 3D images reconstructed by phase-only holograms. In this study, we demonstrate a method for reducing memory usage and modulating intensity in 3D holographic display. The proposed method eliminates redundant hologram information by employing a non-uniform sampling technique; combined with the novel look-up-table method, a 70% reduction in storage can be reached. The gray-scale modulation of 3D images reconstructed by phase-only holograms can also be extended. We perform both numerical simulations and optical experiments to verify the practicability of this method, and the results match well with each other. We believe the proposed method can be used in 3D dynamic holographic display and in the design of diffractive phase elements.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451J (2013) https://doi.org/10.1117/12.2037473
Traditional visualization algorithms based on three-dimensional (3D) laser point cloud data consist of two steps: separating the point cloud into different target objects, and establishing 3D surface models of those objects using point interpolation or surface fitting to realize the visualization. However, most of these algorithms suffer from disadvantages such as low efficiency and loss of image detail. To cope with these problems, a 3D visualization algorithm based on space slices is proposed in this paper, comprising two steps: data classification and image reconstruction. In the first step, edge detection is used to check parametric continuity and extract edges, classifying the data into different target regions preliminarily. In the second step, the divided data are split further into space slices according to their coordinates, and one-dimensional interpolation is applied to each slice to smooth the curve through each group of points. Finally, the interpolation points obtained from each group are used to compute the fitting surface, yielding the visual morphology of the objects. Simulation results compared with real scenes show that the final images have explicit details and a natural overall appearance.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451K (2013) https://doi.org/10.1117/12.2037489
In an optical imaging system, a deep depth of focus brings a larger imaging space, thereby obtaining more information from the object space, and also corrects defocus errors caused by various sources; it therefore has profound significance in practical applications. An imaging system based on wave-front coding is of general interest in current research on focal depth extension. A specially designed phase mask added to the optical system encodes the object information obtained within the designed focal depth range, making the OTF and MTF insensitive to defocus. Equally blurred intermediate images are thus obtained; by processing them with the response of the phase mask, which is known from both design and testing, together with digital image processing techniques, a final clear image with extended depth of focus can be acquired. This paper discusses the details of a novel image restoration algorithm for the wave-front coding system. Aiming at a specific wave-front coding imaging lens, we use edge conditions and the wavelet transform in an improved Wiener filtering process. Simulation and experimental results show that this algorithm can quickly decode the blurred intermediate image and, while retaining more details, produces a good restoration over the whole designed depth of focus. The peak signal-to-noise ratio (PSNR) and the information entropy are improved, as is the control of blurred edges and ringing artifacts.
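The decoding step is built on Wiener filtering; a minimal frequency-domain version, without the paper's edge conditions and wavelet refinement, looks like this, where `H` is the known system OTF sampled on the image grid and `k` an assumed noise-to-signal ratio.

```python
import numpy as np

def wiener_deconvolve(blurred, H, k=0.01):
    """Frequency-domain Wiener filter: attenuates frequencies where the
    OTF magnitude is small relative to the noise-to-signal ratio k,
    instead of blindly inverting them."""
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(W * G))
```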
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451L (2013) https://doi.org/10.1117/12.2037677
The precision glass molding process (GMP) is a promising way to manufacture small precision optical elements in large volume. In this paper, we report on the fabrication of a molded chalcogenide glass lens as an optical element. A mold set was designed and manufactured from silicon carbide for the molding tests. The mold set is semi-closed and detachable, which allows easy, non-invasive release of the molded lens. The surfaces of the mold cores are coated with a thin protective DLC film to relieve adhesion problems and extend the working life. Experiments were performed on a Toshiba GMP-311V precision glass molding machine to determine the molding parameters, i.e. molding temperature, pressure, and cooling rate. Glass lens breakage during the precision molding process was analyzed in light of the glass properties and the molding parameters. By modifying the mold design and optimizing the processing parameters, the desired molded lens was ultimately achieved.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451M (2013) https://doi.org/10.1117/12.2038018
In the field of automatic target recognition, recognition of targets on water is increasingly important for maritime security and defense, and sea-land-line extraction against a complicated sea-land-sky background plays an important role in water-surface target recognition. A weighted optimum-neighborhood algorithm is proposed based on the features of complicated sea-land-sky background images. First, preprocessing operations and the Hough transform are applied to the image to find the potential sea-land-lines, among which there are several false lines and one true line. Next, the weighted value of each fitted line's neighborhood is calculated, and the fitted line with the largest weighted value is taken as the correct sea-land-line. Experimental results show that the algorithm detects the sea-land-line correctly and effectively under complicated sea-land-sky backgrounds, with strong robustness, good accuracy, and high practical value.
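The selection step — scoring each Hough candidate by its neighborhood and keeping the best — can be sketched as follows. The weighting used here (sky/sea intensity contrast across the candidate row) is a hypothetical stand-in for the paper's weighted optimum-neighborhood score, and horizontal candidate lines are assumed for simplicity:

```python
import numpy as np

def best_sea_land_line(img, candidate_rows, half=3):
    """Pick the candidate horizon row whose neighborhood shows the
    strongest sky/sea contrast (assumed neighborhood weight)."""
    best, best_w = None, -1.0
    for r in candidate_rows:
        above = img[max(r - half, 0):r].mean()   # mean intensity just above
        below = img[r:r + half].mean()           # mean intensity just below
        w = abs(above - below)                   # neighborhood weight
        if w > best_w:
            best, best_w = r, w
    return best

# synthetic scene: bright sky over dark sea, true line at row 20
img = np.vstack([np.full((20, 32), 200.0), np.full((12, 32), 60.0)])
line = best_sea_land_line(img, candidate_rows=[5, 20, 27])
```

The two false candidates (rows 5 and 27) sit inside homogeneous regions and score near zero, so the true line wins.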
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451N (2013) https://doi.org/10.1117/12.2038027
With the continuous development of 3D vision technology, digital watermarking, as the method of choice for copyright protection, has gradually been fused with it. This paper proposes a blind watermarking scheme for 3D motion models based on the wavelet transform and loads the result into the Vega real-time visual simulation system. First, the 3D model is put through an affine transform, and the distances from the center of gravity to the vertices of the 3D object are taken to generate a one-dimensional discrete signal; this signal is then wavelet-transformed, its frequency coefficients are modified to embed the watermark, and finally the watermarked 3D motion model is generated. In the fixed affine space, robustness to translation, rotation, and scaling transforms is achieved. The results show that this approach performs well both in robustness and in watermark invisibility.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451O (2013) https://doi.org/10.1117/12.2038072
The accuracy of the Shack-Hartmann wavefront sensor (SHWS) in measuring a distorted wavefront depends mainly on the measurement accuracy of the focal-spot centroid. Many methods have been presented to improve centroid measurement accuracy, but most address only a single point of improvement. We propose a complete centroid-optimization method. Based on an analysis of the background noise of the focal-spot image, an adaptive-threshold denoising method is introduced; the sub-aperture detection windows are then optimized by a modified watershed algorithm, and the centroid of the focal spot is calculated by a higher-moment centroid algorithm, with linear interpolation, inside the optimized window. Simulation and experimental results show that the centroid detection window automatically adjusts its size to match the spot distribution area, and that the proposed method achieves high precision and repeatability of the focal-spot centroid at moderate SNR.
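The thresholded higher-moment centroid inside one detection window can be sketched as below. The threshold value and moment power `p` are assumed illustrative parameters, and the watershed window optimization and linear interpolation of the paper are omitted:

```python
import numpy as np

def spot_centroid(win, threshold=0.0, p=1.5):
    """Higher-moment centroid of a focal spot in a detection window.
    Pixels at or below `threshold` are zeroed (a stand-in for the
    adaptive-threshold denoising); intensities are raised to power p
    so the bright core dominates the estimate."""
    w = np.where(win > threshold, win, 0.0) ** p
    total = w.sum()
    ys, xs = np.indices(win.shape)
    return (ys * w).sum() / total, (xs * w).sum() / total

# Gaussian spot centred at row 6.0, column 9.0 on a 16x16 window
ys, xs = np.indices((16, 16))
spot = np.exp(-((ys - 6.0) ** 2 + (xs - 9.0) ** 2) / 4.0)
cy, cx = spot_centroid(spot, threshold=0.01, p=1.5)
```

Raising the intensities to a power greater than one is what makes this a "higher-moment" centroid: it suppresses the residual background relative to the spot core.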
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451P (2013) https://doi.org/10.1117/12.2038085
The quartz crystal chips discussed in this paper are semitransparent crystals 0.1-0.2 mm thick, generally packaged in blocks of 100 or 200 pieces. Counting is mostly accomplished by weighing the chips, but thickness differences between crystals lead to inaccurate counts. A new counting method based on imaging and signal processing is proposed in this paper. First, edge images of the crystals are acquired and the edge information is converted into edge signals; the signals are then enhanced and denoised, and finally an accurate count is obtained from the edge signals. Being contactless, efficient, and accurate, the method has good practical value.
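The final counting step — turning a 1-D edge signal into a chip count — can be sketched as a threshold-crossing counter. The triangular pulse shape and the threshold level are assumptions for illustration, not the paper's actual signal model:

```python
import numpy as np

def count_chips(edge_signal, level):
    """Count chips as the number of rising crossings of `level`
    in the 1-D edge signal (one crossing per chip edge)."""
    above = edge_signal > level
    rising = np.logical_and(~above[:-1], above[1:])
    return int(rising.sum())

# synthetic edge signal: 5 chip edges as triangular pulses
sig = np.zeros(500)
for c in range(50, 500, 100):
    sig[c - 5:c + 5] += np.concatenate(
        [np.linspace(0, 1, 5), np.linspace(1, 0, 5)])
n = count_chips(sig, level=0.5)
```

After the enhancement and denoising described in the abstract, each chip edge should produce exactly one pulse, so the crossing count equals the chip count.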
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451Q (2013) https://doi.org/10.1117/12.2038089
In this paper, we propose a real-time action recognition algorithm based on the 3D human skeleton positions provided by a depth camera. Our contributions are threefold. First, since skeleton positions at corresponding times are similar across instances of an action, we adopt the Naive-Bayes-Nearest-Neighbor (NBNN) method for classification. Second, to distinguish different but similar actions that would otherwise noticeably decrease the recognition rate, we present a hierarchical model that increases the recognition rate significantly. Third, for real-time application, we apply a sliding window to buffer the input and use a threshold on the ratio of the second-nearest to the nearest distance to smooth the output; our method also rejects undefined actions. Experimental results on the Microsoft Research Action3D dataset demonstrate that our algorithm outperforms other state-of-the-art methods in both recognition rate and computing speed, increasing the recognition rate by about 10% while averaging 30 fps (at 640×480 resolution).
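The NBNN classification rule can be sketched in a few lines: each class keeps a pool of training descriptors, and a query is assigned to the class minimizing the sum of nearest-neighbor distances over its descriptors. The toy skeleton "descriptors" below are assumed synthetic data, not the paper's features, and the hierarchical model and rejection threshold are omitted:

```python
import numpy as np

def nbnn_classify(query_desc, class_pools):
    """NBNN: for each class, sum over the query's descriptors the
    squared distance to the nearest stored descriptor; return the
    class with the minimal total."""
    totals = {}
    for label, pool in class_pools.items():
        # pairwise squared distances: query (m,d) vs pool (n,d)
        d2 = ((query_desc[:, None, :] - pool[None, :, :]) ** 2).sum(-1)
        totals[label] = d2.min(axis=1).sum()
    return min(totals, key=totals.get)

# toy descriptors: two action classes clustered around different means
rng = np.random.default_rng(0)
wave = rng.normal(0.0, 0.1, (30, 4))
kick = rng.normal(1.0, 0.1, (30, 4))
query = rng.normal(1.0, 0.1, (10, 4))   # a "kick"-like sequence
label = nbnn_classify(query, {"wave": wave, "kick": kick})
```

NBNN needs no training phase beyond storing descriptors, which is part of why it suits real-time recognition.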
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451R (2013) https://doi.org/10.1117/12.2038090
The quartz crystal in an oscillator is a basic element of modern electronic technology. The main specification of a crystal is its frequency, but surface defects also affect stability and working life. At present, defect inspection of crystals is mostly performed by human visual inspection. A new crystal defect inspection method using machine vision is proposed in this paper. The crystal image is acquired under special-angle annular dark-field illumination, and the relationships between physical features and vision features are discussed. Inspection algorithms for each kind of defect are then designed on the basis of those relationships. A large number of inspection experiments executed with these algorithms indicate that the method has good practical value owing to its high efficiency and high accuracy.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451S (2013) https://doi.org/10.1117/12.2038111
The spatial-temporally modulated imaging spectrometer obtains two-dimensional spatial images modulated by interference fringes; the complete interferogram of a given target must be extracted from many images, so the instrument is highly sensitive to platform stability. A corner-cube mirror can extend the spectral range of the imaging spectrometer, especially into the mid and far infrared. The blurring of the interference fringes caused by corner-cube vibration across and along its axis is analyzed: fringe displacement is caused mainly by corner-cube motion along the axis, so installation stability in that direction must be ensured.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451T (2013) https://doi.org/10.1117/12.2038139
Based on pulse time-of-flight measurement, 3D imaging LADAR can obtain the profile of a target surface. Because the reflected pulse is the convolution of the footprint's impulse response with the probe pulse, it is distorted whenever the footprint spans a range of distances. After discrimination, this distortion introduces a timing error into the distance measurement, which in turn deforms the system image. From a discussion of the time-dependent scattering cross section, this deformation is determined mainly by the slope of the target surface once the system parameters and target distance are fixed. A height-error compensation method based on the detected slope is put forward. First, the slope distribution of the detected surface is calculated from the point-cloud data by a two-way difference method. Then, approximating each footprint as a tilted plane, the approximate height-error compensation is obtained from the slope-error relationship. Adding this to the detected data yields a first approximation of the target surface; repeating these three steps drives the compensated result toward the true value. As an example, a simulation of Gaussian-pulse imaging detection is given; the results show that the compensation method is effective and efficient.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451U (2013) https://doi.org/10.1117/12.2038150
Spectral calibration of an imaging spectrometer plays an important role in acquiring accurate target spectra. There are essentially two types of spectral calibration: wavelength scanning and characteristic-line sampling. In wavelength-scanning methods, only the calibrated pixel is used and its spectral response function (SRF) is constructed from the pixel itself, with the different wavelengths generated by a monochromator. In characteristic-line sampling methods, the SRF is constructed from the pixels adjacent to the calibrated one; the pixels are illuminated by a narrow spectral line whose center wavelength is exactly known. The scanning method is precise but takes much time and data to process and cannot be used in field or space environments; the characteristic-line method is simple, but its calibration precision is not easy to confirm. A standard spectroscopic lamp, which supplies a high-resolution and uniform spectral signal, is used to calibrate our convex-grating imaging spectrometer of Offner concentric structure. A Gaussian fitting algorithm determines the center position and full width at half maximum (FWHM) of each characteristic spectral line, and the central wavelengths and FWHMs of the spectral pixels are calibrated by cubic polynomial fitting. By setting a fitting-error threshold and discarding the maximum-deviation point, the calculation is optimized. Integrated calibration equipment was developed to enhance calibration efficiency. The spectral-lamp calibration results are verified against monochromator wavelength-scanning calibration; the comparison shows that the calibration uncertainties of both FWHM and center wavelength are less than 0.08 nm, or 5.2% of the spectral FWHM.
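The Gaussian-fit step — estimating a line's center and FWHM — can be sketched by fitting a parabola to the logarithm of the intensities, which is equivalent to a least-squares Gaussian fit for clean data. The 546.07 nm line and 1.2 nm width below are assumed example values, not the paper's calibration data:

```python
import numpy as np

def gaussian_fit(x, y):
    """Estimate centre and FWHM of a spectral line by a parabolic fit
    to log(y); points near the baseline are excluded."""
    mask = y > y.max() * 0.1
    x0 = x[mask].mean()                       # centre x for conditioning
    a, b, c = np.polyfit(x[mask] - x0, np.log(y[mask]), 2)
    centre = x0 - b / (2 * a)
    sigma = np.sqrt(-1.0 / (2 * a))
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma   # FWHM = 2.355 sigma
    return centre, fwhm

# synthetic line: centre 546.07 nm, FWHM 1.2 nm (assumed values)
x = np.linspace(540.0, 552.0, 200)
s = 1.2 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
y = np.exp(-(x - 546.07) ** 2 / (2.0 * s ** 2))
centre, fwhm = gaussian_fit(x, y)
```

For noisy lamp data, a nonlinear least-squares Gaussian fit is more robust than the log-parabola trick, but the recovered parameters are the same in the noiseless case.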
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451V (2013) https://doi.org/10.1117/12.2038158
Compression is a kernel procedure in hyperspectral image processing because the massive data volume creates great difficulty in storage and transmission. In this paper, a novel hyperspectral compression algorithm is proposed, based on hybrid encoding that combines optimized band grouping with the wavelet transform. Given the correlation coefficients between adjacent spectral bands, an optimized band-grouping and reference-frame selection method first groups the bands adaptively. Then, according to the number of bands in each group, redundancy in the spatial and spectral domains is removed by spatial-domain entropy coding and minimum-residual-based linear prediction. Embedded code streams are finally obtained by encoding the residual images with a SPIHT method based on an improved embedded zerotree wavelet. In the experiments, hyperspectral images collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) were used to validate the algorithm. The results show good performance in both reconstructed image quality and computational complexity: the average peak signal-to-noise ratio (PSNR) is increased by 0.21-0.81 dB over other off-the-shelf algorithms at the same compression ratio.
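The adaptive band-grouping step can be sketched as a scan over adjacent bands that starts a new group whenever inter-band correlation drops. The threshold value and grouping rule here are simplified assumptions; the paper's method also selects a reference frame per group:

```python
import numpy as np

def group_bands(cube, thresh=0.9):
    """Group adjacent spectral bands: start a new group whenever the
    correlation coefficient with the previous band falls below thresh."""
    groups, current = [], [0]
    for b in range(1, cube.shape[0]):
        r = np.corrcoef(cube[b - 1].ravel(), cube[b].ravel())[0, 1]
        if r >= thresh:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

# toy cube: bands 0-2 are near-copies of each other, band 3 is unrelated
rng = np.random.default_rng(1)
base = rng.normal(size=(8, 8))
cube = np.stack([base,
                 base + 0.01 * rng.normal(size=(8, 8)),
                 base + 0.01 * rng.normal(size=(8, 8)),
                 rng.normal(size=(8, 8))])
groups = group_bands(cube, thresh=0.9)
```

Highly correlated groups compress well with inter-band linear prediction, while the uncorrelated band is better coded on its own — which is the motivation for grouping before prediction.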
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451W (2013) https://doi.org/10.1117/12.2038170
The radiator grille is an important feature for distinguishing vehicle styles and is thus helpful for automatic recognition of vehicle type. The radiator grille image, segmented from the vehicle face, is treated as a texture image. Visual features are extracted by analyzing the Fourier spectrum of the grille image, and according to these features the grilles are classified into different sorts, such as longitudinal or transverse. Compared with other feature-extraction methods, test results show that the proposed method is effective, with an accuracy above 80%.
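A minimal version of the Fourier-spectrum feature can be sketched by comparing spectral energy along the two frequency axes. The mapping of horizontal bars to "transverse" and vertical bars to "longitudinal" is an assumption here, as is the simple axis-energy feature itself:

```python
import numpy as np

def grille_orientation(img):
    """Classify a grille texture from its Fourier spectrum: horizontal
    bars concentrate energy on the vertical-frequency axis, vertical
    bars on the horizontal-frequency axis."""
    spec = np.abs(np.fft.fft2(img - img.mean()))
    v_energy = spec[1:, 0].sum()   # energy along vertical-frequency axis
    h_energy = spec[0, 1:].sum()   # energy along horizontal-frequency axis
    return "transverse" if v_energy > h_energy else "longitudinal"

# synthetic bar textures with period 8 pixels
ys, xs = np.indices((64, 64))
horiz = grille_orientation(np.sin(2 * np.pi * ys / 8.0))  # horizontal bars
vert = grille_orientation(np.sin(2 * np.pi * xs / 8.0))   # vertical bars
```

Subtracting the mean removes the DC component so the comparison reflects only the bar structure.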
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451X (2013) https://doi.org/10.1117/12.2038287
Because traditional block matching with a large search window tends to fall into local optima, it is unsuitable for stabilizing images with large panning shake; a new image-stabilization algorithm is therefore proposed. It is still based on diamond search, but the large-template pattern is modified into a ring, the search domain is partitioned and sorted by size, and the global motion vector is extracted by maximum-sample statistics and compensated with an adaptive mean filter. A numerical example shows that the method searches more efficiently than exhaustive search, three-step search, four-step search, and traditional diamond search; maximum-sample statistics effectively avoid foreground interference when the global motion vector is extracted; adaptive filter compensation offers better real-time behavior and smoothness than a fixed-size filter; and the PSNR of the processed images is significantly higher than before processing. These results verify the accuracy and feasibility of the algorithm.
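For reference, the classic diamond search that the algorithm modifies can be sketched as below: repeat the large diamond pattern until the center wins, then refine once with the small pattern. This shows the baseline method, not the paper's ring template or maximum-sample statistics, and the synthetic frames are assumed smooth so the search converges:

```python
import numpy as np

LDSP = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2),
        (-1, -1), (-1, 1), (1, -1), (1, 1)]          # large diamond
SDSP = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]    # small diamond

def sad(ref, cur, y, x, dy, dx, b):
    """SAD between the current block and the reference block
    displaced by (dy, dx); inf outside the frame."""
    yy, xx = y + dy, x + dx
    if yy < 0 or xx < 0 or yy + b > ref.shape[0] or xx + b > ref.shape[1]:
        return np.inf
    return np.abs(ref[yy:yy + b, xx:xx + b] - cur[y:y + b, x:x + b]).sum()

def diamond_search(ref, cur, y, x, b=8):
    mv, cost = (0, 0), sad(ref, cur, y, x, 0, 0, b)
    while True:                                   # large-pattern descent
        cands = [(mv[0] + dy, mv[1] + dx) for dy, dx in LDSP]
        costs = [sad(ref, cur, y, x, v[0], v[1], b) for v in cands]
        i = int(np.argmin(costs))
        if costs[i] >= cost:
            break
        mv, cost = cands[i], costs[i]
    cands = [(mv[0] + dy, mv[1] + dx) for dy, dx in SDSP]
    costs = [sad(ref, cur, y, x, v[0], v[1], b) for v in cands]
    return cands[int(np.argmin(costs))]           # small-pattern refinement

# smooth synthetic frame shifted by (3, 2) between ref and cur
ys, xs = np.indices((64, 64))
ref = np.exp(-((ys - 30.0) ** 2 + (xs - 34.0) ** 2) / 300.0)
cur = np.roll(np.roll(ref, -3, axis=0), -2, axis=1)
mv = diamond_search(ref, cur, 20, 24, b=8)
```

On landscapes with many local minima — the large-pan case the abstract targets — this descent can stall, which is exactly what motivates the proposed ring template and partitioned search domain.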
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451Y (2013) https://doi.org/10.1117/12.2042075
This paper presents a machine vision system for automated label inspection, with the goal of reducing labor cost and ensuring consistent product quality. First, because the inspected object is approximately cylindrical, the images captured by each single camera are distorted; this paper therefore proposes an algorithm based on inverse cylinder projection, in which label images are rectified by distortion compensation. Second, to overcome each single camera's limited field of view, our method combines the images of all cameras into a panorama for label inspection. Third, to cope with production-line vibration and electronic-signal error, we design a real-time image registration that calculates the offsets between the template and the inspected images. Experimental results demonstrate that the system is accurate and real-time, and can be applied to many inspection tasks on approximately cylindrical objects.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 90451Z (2013) https://doi.org/10.1117/12.2042224
A phase-shifting digital holography setup with pre-magnification is designed. To fully utilize the bandwidth of the camera, four-step phase-shifting digital holography is adopted to retrieve the complex distribution of the object. To further enhance the resolution of the reconstructed image without phase aberration, two microscope objectives (MOs) are placed in front of the object and the reference mirror; the MO in the reference arm provides a parallel beam at the PZT plane, improving the precision of the phase shifting. A 1951 USAF negative resolution target is used as the sample, and the experimental results demonstrate the feasibility of the proposed method.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904520 (2013) https://doi.org/10.1117/12.2042323
This paper improves the algorithm for reversible integer linear transforms on the finite interval [0,255], realizing a reversible integer linear transform over the whole number axis while shielding the data's LSB (least significant bit). First, the original image is transformed by an integer wavelet transform based on the lifting scheme, and the transformed high-frequency areas are selected as the information-hiding region; the high-frequency coefficient blocks are then transformed in an integer linear way and the secret information is embedded in the LSB of each coefficient, after which the hiding is completed by the inverse steps. To extract the data bits and recover the host image, a similar reverse procedure is conducted, and the original host image can be recovered losslessly. Simulation results after the CDF(m,n) and DD(m,n) series of wavelet transforms show that the method offers good secrecy and concealment. It can be applied in information-security domains such as medicine, law, and the military.
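The LSB-embedding step itself can be sketched as below, operating on a handful of assumed integer high-frequency coefficients; the lifting-scheme wavelet transform and the reversible integer linear transform around it are omitted:

```python
import numpy as np

def embed_lsb(coeffs, bits):
    """Embed watermark bits into the LSBs of integer transform
    coefficients; each coefficient changes by at most 1."""
    out = coeffs.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | bits
    return out

def extract_lsb(coeffs, n):
    """Read the first n embedded bits back from the LSBs."""
    return coeffs[:n] & 1

# hypothetical high-frequency coefficients from an integer lifting transform
coeffs = np.array([4, -7, 10, 3, -2, 8])
bits = np.array([1, 0, 1, 1])
stego = embed_lsb(coeffs, bits)
recovered = extract_lsb(stego, 4)
```

Because the integer transform is exactly invertible and each coefficient is perturbed by at most one unit, the scheme stays reversible: after extraction, restoring the original LSBs recovers the host losslessly.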
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904521 (2013) https://doi.org/10.1117/12.2042371
As one of the most popular optical remote-sensing products, MODIS (Moderate Resolution Imaging Spectroradiometer) images are widely used in many areas. However, processing MODIS image data is cumbersome and time-consuming, especially for long-time-series earth-observation research, so automatic processing technology is needed; yet because of the complex image-matching procedure and the high requirements of location calibration, these images are processed manually in most studies. This paper presents an automatic processing method for MODIS image products (mainly Level 1B; it can also be applied to the 8-day snow-observation product and daily snow-cover optical image data). With the automatic processing system, the efficiency of optical remote-sensing image processing is sharply increased while calibration accuracy remains the same as with traditional processing. The working flowchart of the system is introduced for those who must handle large volumes of MODIS data in their research. Finally, an automatic snow-cover monitoring system based on MODIS L1B data in the ENVI/IDL environment is discussed as a practical application to long-time-series snow-cover monitoring over Northeast China. Its performance shows that data-processing time can be reduced from 48 manual working days to about 2 working days (10.41 hours) of automatic computation, demonstrating that the processing efficiency for long-time-series remote-sensing data, especially MODIS L1B data, can be greatly increased, freeing researchers from burdensome repetitive work.
Proceedings Volume 2013 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 904522 (2013) https://doi.org/10.1117/12.2043147
Optical compressive spectral imaging is a novel spectral-imaging technique inspired by compressed sensing, with advantages such as reduced acquisition data volume, snapshot imaging, and increased signal-to-noise ratio. Because sampling quality influences the final image quality, previously reported systems matched the sampling interval to the modulation interval, but the reduced sampling rate sacrificed part of the original spectral resolution. To overcome this defect, the requirement of matching the sampling interval to the modulation interval is dropped, and the number of spectral channels of the designed experimental device is more than tripled compared with the previous method. An imaging experiment is carried out with this apparatus, and the spectral data cube of the target is reconstructed from the acquired compressed image with two-step iterative shrinkage/thresholding algorithms. The experimental results indicate that the number of spectral channels increases effectively while the reconstructed data remain high-fidelity; the images and spectral curves accurately reflect the spatial and spectral character of the target.