Grayscale difference sensitivity reflects the visual characteristics of the human eye, and people now mostly view images and videos on LCD (liquid crystal display) screens, so research on grayscale difference sensitivity on LCD screens is interesting and meaningful. To date, experiments on the sensitivity of human eyes to brightness change fall into two categories: one focuses on rods or other specific structures of the retina, which requires highly precise control of illumination devices; the other selects some contrasts at different spatial resolutions and treats the eye as a whole rather than as specific structures. After analyzing the advantages and disadvantages of the two kinds of experiments, we propose an experiment based on Weber's law to measure grayscale difference sensitivity on an LCD screen, which helps distinguish the selected change and the region of interest. The experiment is conducted on all gray levels under actual illumination. The procedure is as follows: first, present an image in a certain gray range as the background on the LCD screen; second, randomly select an area as the foreground; third, the tester gradually adjusts the foreground grayscale until he or she can perceive the difference, and the foreground position is then marked clearly to verify the result given by the tester; finally, the background and foreground grayscales are recorded simultaneously. The experiment was conducted under indoor illumination with 100 student volunteers who have normal or corrected visual acuity. To verify the experimental results, an image grayscale compression algorithm is proposed. The experimental results show that the distribution of the grayscale difference sensitivity data is regular and that the experiment conforms to Weber's law.
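The quantity the experiment records at each trial is the Weber fraction ΔI/I, which Weber's law predicts to be roughly constant across background intensities. A minimal sketch of that computation follows; the `trials` values are hypothetical (background, first-perceived foreground) gray-level pairs invented for illustration, not data from the paper:

```python
import numpy as np

def weber_fraction(background, foreground):
    """Weber fraction ΔI/I for one just-noticeable-difference trial."""
    return abs(foreground - background) / background

# Hypothetical recorded (background, first-perceived foreground) gray pairs.
trials = [(40, 43), (80, 86), (120, 129), (160, 172)]
fractions = [weber_fraction(b, f) for b, f in trials]

# Weber's law predicts the fraction stays roughly constant across gray
# levels; a strong deviation would indicate a non-Weber regime.
spread = np.ptp(fractions)
```

In this illustrative data every trial gives ΔI/I = 0.075, so `spread` is zero; real measurements would show a constant-plus-noise pattern, often with deviations at the darkest and brightest gray levels.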
Recognizing objects in a digital image is now one of the main focuses of image analysis. Texture is an important and valuable feature for describing the coarseness and regularity of patterns on an object's surface. We present an effective technique for segmenting different textures by integrating color information and Laws' texture energy. The first step converts the image from RGB to HSV color space to obtain the hue channel as the basic feature. The second step calculates Laws' texture energy at each pixel using statistical measures (mean and variance) over a series of multi-scale moving windows; the several variances produced in this step form a vector, which is used as an additional feature. After segmentation with the basic feature, this work thresholds the difference between neighboring vectors to distinguish coarseness within a region. In addition, for a region containing many colors, it calculates the mean hue difference of each color in a 5 × 5 window and thresholds that mean to judge the similarity between colors. We examined images with several textures from the Berkeley Segmentation Dataset (BSDS), using a difference threshold of 70 between neighboring vectors and a mean hue threshold of 10. The results show that 70.6% of the texture segmentations are acceptable after combining color information and Laws' texture energy, a favorable result for texture segmentation.
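The core of the second step is standard Laws filtering: 1D level/edge/spot vectors are combined into 2D masks by outer products, the image is convolved with each mask, and the local variance of each response serves as a texture-energy feature. A minimal NumPy-only sketch under those standard definitions (the function names and the per-image variance summary are illustrative choices, not the paper's exact pipeline):

```python
import numpy as np

# 1D Laws vectors: level (L5), edge (E5), spot (S5).
L5 = np.array([1, 4, 6, 4, 1], dtype=float)
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)
S5 = np.array([-1, 0, 2, 0, -1], dtype=float)

def conv2d_valid(img, kernel):
    """Plain 'valid'-mode 2D convolution, NumPy only (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def laws_energy_vector(gray):
    """Texture descriptor: variance of each Laws filter response,
    stacked into a feature vector (one value per 2D mask)."""
    gray = gray - gray.mean()                  # remove illumination offset
    masks = [np.outer(a, b) for a in (L5, E5, S5) for b in (L5, E5, S5)]
    feats = []
    for m in masks[1:]:                        # skip L5L5 (pure low-pass)
        resp = conv2d_valid(gray, m)
        feats.append(resp.var())               # energy = response variance
    return np.array(feats)
```

A rough region then yields a vector with larger entries than a smooth one, and the paper's segmentation step compares neighboring vectors against the difference threshold (70) to decide whether two areas share the same coarseness.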
The traditional way to recognize an answer sheet is an optical mark reader (OMR). A given OMR recognizes only answer sheets of a certain fixed format, which limits its universality. We propose a recognition method for answer sheets of arbitrary format. After a new answer sheet is designed or an existing one is adopted, the printed sheets filled in during an exam are converted to images by high-definition (HD) scanning, and the images are then recognized automatically with image processing techniques. Using the positioning crosses found on the answer sheet, tilted images are corrected. Candidate number recognition, option recognition, and page number recognition are then carried out in the order specified by the user. The method of maximum between-cluster variance (Otsu's method) is used for candidate number and option recognition, while the page number of the answer sheet is recognized by template matching. Experimental results show that the accuracy can reach 100%. The method is easy to implement, low in cost, and highly universal.
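Maximum between-cluster variance is Otsu's classic thresholding method: for every candidate threshold, split the gray histogram into two classes and pick the threshold that maximizes the weighted variance between the class means. A self-contained sketch of that criterion (a generic implementation, not the paper's code):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: return the gray level t that maximizes the
    between-class variance w0 * w1 * (mu0 - mu1)^2 over the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue                              # one class is empty
        mu0 = (levels[:t] * prob[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a scanned answer sheet this separates penciled marks (dark class) from paper (bright class) without any fixed, hand-tuned threshold, which is what makes the recognition robust to different scanners and lighting.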
In dismounting and assembling the drop switch for the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, gripper, and the bolts used to fix the drop switch. To solve it, we study the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch, and we propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs three steps. First, the target points are marked in the left and right views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by comparing the similarity of the target points' backgrounds in the two views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line and generates a sequence of candidate regions containing matching points from the neighborhood of the epipolar line; the optimal matching image is confirmed by computing the correlation between the template image from the left view and each region in the sequence. Finally, the precise coordinates of the target points in both views are calculated from the optimal matching image. Experimental results indicate that the positioning accuracy is within 2 pixels in image coordinates and within 3 mm in the world coordinate system, so the positioning accuracy of the binocular vision satisfies the requirements of dismounting and assembling the drop switch.
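The correlation-matching step can be illustrated with zero-mean normalized cross-correlation (NCC), a standard similarity measure for template matching; the sketch below slides a left-view template along one image row of the right view as a stand-in for the epipolar-line neighborhood search (for rectified stereo the epipolar line is a row). The function names are illustrative, and rectification is assumed, so this is a simplified model of the paper's second step rather than its exact algorithm:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size patches,
    in [-1, 1]; 1.0 means a perfect (up to gain/offset) match."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def match_along_row(right_img, template, row):
    """Slide the left-view template along one row of the right view
    (stand-in for the epipolar-line neighborhood) and return the column
    of the best NCC score together with that score."""
    th, tw = template.shape
    best_col, best_score = 0, -1.0
    for col in range(right_img.shape[1] - tw + 1):
        score = ncc(right_img[row:row + th, col:col + tw], template)
        if score > best_score:
            best_score, best_col = score, col
    return best_col, best_score
```

With the best-matching column found in the right view, the pair of image coordinates can be triangulated through the calibrated binocular geometry to obtain the world coordinates of the bolt or gripper target.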