This paper presents an automatic method for defogging a single hazy image. To restore a foggy image, an accurate depth map is estimated with a multi-level estimation method that fuses depth maps obtained from the dark channel prior at different patch sizes. A Markov random field (MRF) is applied to label the depth levels of adjacent regions, compensating for wrongly estimated regions. Accurate estimation of scene depth yields good restoration in visibility and contrast without oversaturation. The algorithm is verified on a set of foggy and hazy images. Experimental results demonstrate that the defogging method recovers high-quality images through accurate depth-map estimation.
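As a rough illustration of the multi-patch fusion idea, the sketch below computes dark channels at several patch sizes and averages the resulting transmission estimates. The patch sizes, uniform weights, and clipping bounds are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def dark_channel(img, patch):
    """Per-pixel minimum over RGB, then a min filter over a patch x patch
    window. img: H x W x 3 float array in [0, 1]."""
    h, w = img.shape[:2]
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def fused_transmission(img, patches=(3, 9, 15), omega=0.95, weights=None):
    """Fuse transmission maps estimated with different patch sizes.
    Small patches preserve edges; large patches give stabler estimates.
    Simple weighted averaging stands in for the paper's MRF-based fusion."""
    weights = weights or [1.0 / len(patches)] * len(patches)
    t = np.zeros(img.shape[:2])
    for p, wgt in zip(patches, weights):
        t += wgt * (1.0 - omega * dark_channel(img, p))
    return np.clip(t, 0.1, 1.0)  # lower bound avoids oversaturation
```

With the transmission map t and an airlight estimate A, the scene radiance would then be recovered as J = (I - A) / t + A, following the standard haze imaging model.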
This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface with a small size but a large display for photo capture and management. The wearable vision system is implemented on embedded systems and achieves real-time performance. The hardware includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, which has a small physical volume but can project a large screen. A triple-buffering mechanism is designed for efficient memory management, and software functions are partitioned and pipelined for effective parallel execution. Gesture recognition first performs color classification based on the expectation-maximization (EM) algorithm and a Gaussian mixture model (GMM). To improve GMM performance, we devise a lookup-table (LUT) technique. Finally, fingertips are extracted and geometric features of the fingertip shapes are matched to recognize the user's gesture commands.
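The triple-buffering mechanism mentioned above can be sketched as follows. This is an assumption of how such a scheme typically works, not the paper's implementation: a capture thread always has a free buffer to write, the processing thread always reads the most recently completed frame, and neither blocks the other.

```python
import threading

class TripleBuffer:
    """Minimal triple-buffering sketch: three buffers rotate between the
    roles back (being written), pending (last completed), and front
    (being read)."""
    def __init__(self, make_buffer):
        self._buffers = [make_buffer() for _ in range(3)]
        self._back, self._pending, self._front = 0, 1, 2
        self._fresh = False          # True when pending holds an unread frame
        self._lock = threading.Lock()

    def write_slot(self):
        """Buffer the producer may fill without synchronization."""
        return self._buffers[self._back]

    def publish(self):
        """Producer marks the written buffer as the latest completed frame."""
        with self._lock:
            self._back, self._pending = self._pending, self._back
            self._fresh = True

    def read_latest(self):
        """Consumer gets the newest completed frame; never blocks the producer."""
        with self._lock:
            if self._fresh:
                self._pending, self._front = self._front, self._pending
                self._fresh = False
        return self._buffers[self._front]
```

Only the index swaps are locked, so the lock is held for a few instructions rather than for the duration of a frame copy, which is what makes the scheme attractive on an asymmetric dual-core design.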
To verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos, including the challenges of colorful backgrounds, low illumination, and flickering. The whole system, including gesture recognition, runs at 22.9 frames per second. Experimental results show a 99% recognition rate, demonstrating that this small-size, large-screen wearable system offers an effective gesture interface with real-time performance.
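The GMM-with-LUT idea in the abstract can be sketched as below: evaluate the mixture density once over a quantized color grid, store the skin/non-skin decision in a table, and reduce per-pixel classification to an index lookup. The GMM parameters, quantization depth, and threshold here are illustrative assumptions (a real system would fit the parameters with EM on labeled skin pixels), not the paper's trained model.

```python
import numpy as np

# Illustrative skin-color GMM in RGB (assumed values, not the paper's model).
MEANS = np.array([[200.0, 150.0, 130.0], [170.0, 120.0, 100.0]])
COVS = np.array([np.eye(3) * 400.0, np.eye(3) * 600.0])
WEIGHTS = np.array([0.6, 0.4])

def gmm_likelihood(colors):
    """Weighted sum of Gaussian densities; colors: N x 3 array."""
    lik = np.zeros(len(colors))
    for mu, cov, w in zip(MEANS, COVS, WEIGHTS):
        diff = colors - mu
        inv = np.linalg.inv(cov)
        norm = 1.0 / np.sqrt(((2 * np.pi) ** 3) * np.linalg.det(cov))
        lik += w * norm * np.exp(-0.5 * np.einsum('ni,ij,nj->n', diff, inv, diff))
    return lik

def build_lut(bits=4, threshold=1e-7):
    """Quantize each channel to 2**bits levels and precompute the decision
    at each bin center, so classification becomes a table lookup."""
    levels = 2 ** bits
    step = 256 // levels
    grid = (np.indices((levels,) * 3).reshape(3, -1).T * step + step // 2).astype(float)
    return (gmm_likelihood(grid) > threshold).reshape((levels,) * 3)

def classify(img, lut, bits=4):
    """Per-pixel skin mask via LUT; img: H x W x 3 uint8."""
    idx = (img >> (8 - bits)).astype(int)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]
```

The exponentials and matrix products run once at startup over the 4096-entry grid; at run time each pixel costs three shifts and one memory read, which is the kind of saving that makes real-time GMM classification feasible on an embedded DSP.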