Binocular stereoscopic measurement technology is widely used in robot vision and 3D measurement, where measurement precision is a critical factor. In 3D coordinate measurement in particular, high accuracy places stringent demands on the distortion of the optical system. To improve the measurement accuracy of image points, the optical system must satisfy an extra-low distortion requirement of less than 0.1%. A transmissive visible-light lens with a telecentric beam path in image space was therefore designed, which adopts the imaging model of binocular stereo vision and images a drone at finite distance. The optical system uses a complex double-Gauss structure with the aperture stop placed at the front focal plane of the rear lens group, which puts the exit pupil at infinity and realizes the image-space telecentric beam path. The main optical parameters are as follows: the spectral range covers the visible waveband, the effective focal length is f' = 30 mm, the relative aperture is 1/3, and the field of view is 21°. The final design results show that the RMS spot radius of the lens at the maximum field of view is 2.3 μm, less than one pixel (3.45 μm); the distortion is below 0.1%, so the system has extra-low distortion and avoids subsequent image distortion correction; the modulation transfer function is 0.58 at 145 lp/mm, so the imaging quality is close to the diffraction limit; and the structure is simple while satisfying the required optical specifications. Finally, measurement of a drone at finite distance was achieved based on the binocular stereo vision imaging model.
Binocular stereoscopic vision can be used for space-based close observation of space targets. To solve the problem that a traditional binocular vision system cannot work normally after being disturbed, an online self-referencing calibration method for binocular stereo measuring cameras is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object into the edge of the main optical path so that it is imaged on the same focal plane as the target, which is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system or the parameters of the imaging device are disturbed, the image of the standard reference changes accordingly in the imaging plane while the position of the standard reference object itself does not change, so the cameras' external parameters can be re-calibrated from the visual relationship of the standard reference object. Experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in pitch. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve its anti-jamming ability.
Traditional three-dimensional (3D) calibration targets consist of two or three mutually orthogonal planes (each plane containing several control points constituted by corners or circular points) that cannot be captured simultaneously by cameras in front view. Therefore, large perspective distortions exist in the images of the calibration targets, resulting in inaccurate image-coordinate detection of the control points. Besides, in order to eliminate mismatches, recognition of the control points usually needs manual intervention, consuming a large amount of time. A new design of 3D calibration target is presented for automatic and accurate camera calibration. The target employs two parallel planes instead of orthogonal planes to reduce perspective distortion, and both planes can be captured simultaneously by cameras in front view. The control points of the target are constituted by carefully designed circular coded markers, which enable automatic recognition without manual intervention. Due to perspective projection, the projections of the circular coded markers' centers deviate from the centers of their corresponding imaging ellipses; the collinearity of the control points is used to correct these perspective distortions of the imaging ellipses. Experimental results show that the calibration target can be automatically and correctly recognized under large illumination and viewpoint changes. The image extraction errors of the control points are under 0.1 pixels. When applied to binocular camera calibration, the mean reprojection errors are less than 0.15 pixels, and the 3D measurement errors are less than 0.2 mm in the x and y axes and 0.5 mm in the z axis, respectively.
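The mean reprojection error used above as a calibration quality metric can be sketched in a few lines of numpy (the camera matrix and points below are hypothetical toy values, not from the paper):

```python
import numpy as np

def reprojection_error(K, R, t, points_3d, points_2d):
    """Mean distance between projected 3D control points and detected 2D centers."""
    P = K @ np.hstack([R, t.reshape(3, 1)])              # 3x4 projection matrix
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    proj = (P @ homog.T).T
    proj = proj[:, :2] / proj[:, 2:3]                    # perspective division
    return np.linalg.norm(proj - points_2d, axis=1).mean()

# Toy check: identity pose and unit-focal camera, with exact projections
K = np.eye(3)
pts3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.5, 2.0]])
pts2d = pts3d[:, :2] / pts3d[:, 2:3]
err = reprojection_error(K, np.eye(3), np.zeros(3), pts3d, pts2d)
```

With exact correspondences the error is zero; with real detections it measures sub-pixel calibration accuracy as reported in the abstract.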
Scale Invariant Feature Transform (SIFT) has been proven to perform better in distinctiveness and robustness than other features, but it cannot satisfy the needs of low-contrast image matching, and its matching results are sensitive to 3D viewpoint changes of the camera. To improve the performance of SIFT on low-contrast images and images with large 3D viewpoint change, a new matching method based on improved SIFT is proposed. First, an adaptive contrast threshold is computed for each initial key point in low-contrast image regions using the pixels in its 9×9 local neighborhood, and this threshold is then used to eliminate initial key points in low-contrast regions. Second, a new SIFT descriptor with 48 dimensions is computed for each key point. Third, a hierarchical matching method based on the epipolar line and differences of the key points' dominant orientations is presented. Experimental results prove that the method greatly enhances the performance of SIFT for low-contrast image matching. Furthermore, when applied to stereo image matching together with the hierarchical matching method, the number of correct matches and the matching efficiency are greatly improved.
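The idea of a per-keypoint adaptive contrast threshold drawn from a 9×9 neighborhood could be sketched as follows. This is only an illustration under assumed formulas; the constants `base` and `k` and the scaling rule are hypothetical choices, not the paper's exact method:

```python
import numpy as np

def adaptive_contrast_threshold(dog, x, y, base=0.04, k=0.5):
    """Scale a base contrast threshold by local 9x9 neighborhood statistics.

    dog     : a difference-of-Gaussians layer (2D array)
    (x, y)  : initial keypoint location (row, column)
    base, k : hypothetical tuning constants (not from the paper)
    """
    x0, x1 = max(0, x - 4), min(dog.shape[0], x + 5)
    y0, y1 = max(0, y - 4), min(dog.shape[1], y + 5)
    local_contrast = dog[x0:x1, y0:y1].std()   # low std => low-contrast region
    # Lower the threshold where local contrast is low, so genuine keypoints
    # in dark or flat regions are not discarded by a fixed global threshold.
    return base * (k + (1 - k) * local_contrast / (dog.std() + 1e-12))

rng = np.random.default_rng(0)
layer = rng.normal(size=(64, 64))
t = adaptive_contrast_threshold(layer, 32, 32)
```

The design point is that a fixed global contrast threshold (as in standard SIFT) rejects too many keypoints in low-contrast regions, whereas a locally scaled one keeps them.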
At least three stellar images, taken from different points in space with different camera orientations, are needed, and calibration is realized by finding corresponding stars across the stellar images. The method does not need any knowledge of the camera orientations; the calibration is based only on the stellar image correspondences. In this method, the homography between stellar images induced by stars (called the star-homo for short) is used to approximate the infinite homography (the inf-homo for short). It is well known that the inf-homo provides constraints on the image of the absolute conic (IAC), which is related to the camera internal parameters. We therefore use the star-homo in place of the inf-homo to compute the IAC; once the IAC is computed, the camera internal parameters can be decomposed from it. When computing the IAC, an unknown scale factor exists, which makes the constraints on the IAC nonlinear. To transform the nonlinear equations into linear ones, we precompute the scale factor from an initial estimate of the principal point. The advantage of linear equations is that they are easier to solve and the results are more accurate and robust. Experimental results show that the proposed method is feasible and can calibrate a space camera with high precision: under a 1 pixel star-point extraction error, the relative errors of the camera internal parameters are below 0.7%.
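The final decomposition step, recovering the internal parameter matrix K from the IAC, follows from the standard relation ω = (K Kᵀ)⁻¹: the Cholesky factor of ω is K⁻ᵀ, so K can be read off directly. A minimal sketch (the camera values below are hypothetical):

```python
import numpy as np

def K_from_iac(omega):
    """Recover the upper-triangular calibration matrix K from the IAC.

    omega = inv(K @ K.T), so its Cholesky factor L (omega = L @ L.T,
    L lower triangular) equals K^{-T}, giving K = inv(L).T.
    """
    L = np.linalg.cholesky(omega)
    K = np.linalg.inv(L).T
    return K / K[2, 2]            # fix the arbitrary scale so K[2,2] = 1

# Round-trip check with a hypothetical camera matrix
K_true = np.array([[800.0,   0.0, 320.0],
                   [  0.0, 780.0, 240.0],
                   [  0.0,   0.0,   1.0]])
omega = np.linalg.inv(K_true @ K_true.T)
K_est = K_from_iac(omega)
```

In practice ω comes from the (noisy) star-homo constraints rather than from a known K, which is why the abstract's linearization of those constraints matters for accuracy.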
To improve the robustness and real-time performance of SURF-based image matching algorithms, a method of constructing the SURF descriptor based on sector-area partitioning of a circular region is proposed, reducing the dimension of the descriptor from 64 to 32. The new descriptor is computed in a circular local region (radius set to 10s). Firstly, the local region is divided into 8 equal sector areas, counted anticlockwise from the dominant orientation. Secondly, the dominant orientation and its orthogonal direction are defined as the x and y axes of the key point's local frame. Thirdly, the Haar wavelet responses in the x and y directions are computed within the key point's local region. To reduce boundary effects and outer noise, the Haar wavelet responses in grid cells shared by adjacent sectors are assigned to both sectors with different weights, and a Gaussian weighting function is then applied. Histograms of the Haar wavelet responses and of their absolute values are computed, so each sector sub-region yields a vector with 4 dimensions. Finally, a descriptor with 32 dimensions is constituted and normalized to achieve illumination invariance. Experimental results indicate that the average matching speed of the new method increases by about 31.18%.
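The sector-partitioning step, assigning each pixel offset to one of the 8 sectors counted anticlockwise from the dominant orientation, might look like the following sketch (an illustration, not the paper's code):

```python
import numpy as np

def sector_index(dx, dy, dominant, n_sectors=8):
    """Index of the sector (counted anticlockwise from the dominant
    orientation) containing the offset (dx, dy) from the keypoint."""
    angle = (np.arctan2(dy, dx) - dominant) % (2 * np.pi)
    return int(angle // (2 * np.pi / n_sectors))

# With dominant orientation 0, the offset (1, 0) lies in sector 0,
# and (0, 1) -- 90 degrees anticlockwise -- lies in sector 2 of 8.
s0 = sector_index(1.0, 0.0, 0.0)
s1 = sector_index(0.0, 1.0, 0.0)
```

Because the sector index is measured relative to the dominant orientation, rotation invariance is obtained without resampling the region, which is where the speed gain over the standard grid layout comes from.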
To increase the operation speed and matching ability of the SIFT algorithm, the SIFT descriptor and matching strategy are improved. First, a method of constructing the feature descriptor based on sector areas is proposed: by computing the gradient histograms of location bins partitioned into 6 sector areas, a descriptor with 48 dimensions is constituted, which reduces the dimension of the feature vector and decreases the complexity of constructing the descriptor. Second, a strategy is introduced that partitions the circular region into 6 identical sector areas starting from the dominant orientation; consequently, the computational complexity is reduced because no rotation operation is needed for the area. Experimental results indicate that, compared with the OpenCV SIFT implementation, the average matching speed of the new method increases by about 55.86%. The matching accuracy can be maintained even under some variation of viewpoint, illumination, rotation, scale, and defocus. The new method achieved satisfactory results in gun-bore flaw image matching. Keywords: Metrology, Flaw image matching, Gun bore, Feature descriptor
To improve the robustness and real-time performance of SIFT-based image registration algorithms, a new descriptor is proposed. We compute the new descriptor over a log-polar location grid with 3 bins in the radial direction (radii set to 3, 6, and 8) and 12 in the angular direction, which results in 36 location bins. Firstly, the 3×12 log-polar location grid is rotated to align its dominant orientation to a canonical direction, in a different way from SIFT and with less computational complexity: the keypoint's dominant orientation and its orthogonal direction are defined as the x and y axes of the descriptor's local coordinate system; a 3×3 grid is rotated to align with the dominant orientation and then translated to each pixel within the local 3×12 log-polar location grid, so the whole 3×12 grid is effectively rotated to the dominant orientation and rotation invariance is achieved. Secondly, the gradients in the x and y directions are computed using the rotated 3×3 grid for each pixel within the keypoint's local neighborhood, so each pixel contributes a gradient vector with 2 dimensions. Thirdly, the distance from each pixel to the keypoint and a Gaussian function are used to assign a weight to the gradients, and the gradient location histogram is computed for the 36 location bins of the 3×12 log-polar grid. Finally, a descriptor with 72 dimensions is constituted and normalized to achieve illumination invariance. Experimental results show that the computational complexity of the new descriptor is reduced by about 30% compared with the standard SIFT descriptor, while its performance compares favorably with that of the standard SIFT descriptor.
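The 3×12 log-polar binning described above can be sketched as follows, mapping a pixel offset from the keypoint to one of the 36 location bins (a sketch under assumed conventions; radii are taken relative to the keypoint in pixels):

```python
import numpy as np

def log_polar_bin(dx, dy, dominant=0.0, radii=(3.0, 6.0, 8.0), n_ang=12):
    """Map an offset (dx, dy) from the keypoint to one of the 3x12 = 36
    location bins; returns None if the pixel falls outside the region."""
    r = np.hypot(dx, dy)
    ring = next((i for i, rad in enumerate(radii) if r <= rad), None)
    if ring is None:
        return None                           # outside the descriptor support
    angle = (np.arctan2(dy, dx) - dominant) % (2 * np.pi)
    wedge = int(angle // (2 * np.pi / n_ang))
    return ring * n_ang + wedge

b_center = log_polar_bin(1.0, 0.0)   # inner ring (r<=3), wedge 0 -> bin 0
b_outer = log_polar_bin(0.0, 7.0)    # outer ring (6<r<=8), 90 deg -> wedge 3
```

Each pixel's 2D gradient vector is then accumulated, with its Gaussian distance weight, into the histogram entry for its bin, giving 36 bins × 2 gradient components = 72 descriptor dimensions as stated in the abstract.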