This paper proposes a polynomial-fitting-based calibration method for active 3D sensing using a dynamic light section method. In the dynamic light section method, the relative position of the line laser is dynamically changed at high speed to extend the measurement area at low computational cost. To conduct 3D sensing, the equation of the laser plane must be known. In the proposed calibration method, part of the line laser is projected onto a reference plane fixed in the 3D sensing system, and correspondences between the normal vectors of the laser plane and the image coordinates of the bright point on the reference plane are obtained. These correspondences are then regressed to a polynomial function. As a result, the plane equation of the line laser can be obtained at any given moment without modeling the complicated system geometry. Through a measurement accuracy evaluation of the dynamic light section method calibrated by polynomial fitting, we show that a target at a distance of 800 mm can be measured with a mean error of -5.94 mm and a standard deviation of 13.19 mm while rotating the line laser at 210 rpm.
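The regression step described above can be sketched as follows; the data values, the choice of independent variable (the bright point's image u-coordinate), and the polynomial degree are hypothetical placeholders for illustration, not the paper's actual calibration values:

```python
import numpy as np

# Hypothetical calibration data: image u-coordinates of the bright point on
# the fixed reference plane, paired with the laser-plane orientation (here a
# single angle, synthesized from an assumed quadratic relation for the sketch).
u = np.linspace(100.0, 500.0, 20)        # bright-point pixel coordinate
theta = 1e-3 * u + 2e-7 * u**2           # laser-plane normal angle [rad]

# Regress the correspondences to a polynomial (degree chosen by validation).
coeffs = np.polyfit(u, theta, deg=2)

# At measurement time, the observed bright-point coordinate yields the plane
# orientation directly, without an explicit geometric model of the system.
theta_est = np.polyval(coeffs, 300.0)
```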
This paper proposes an efficient imaging sonar simulation method based on 3D modeling. In underwater scenarios, a forward-looking sonar, also known as an acoustic camera, outperforms other sensors, including popular optical cameras, because it is resistant to the turbidity and weak illumination typical of underwater environments and can thus provide accurate information about those environments. For underwater tasks that are highly automated with artificial intelligence and computer vision, an acoustic image simulator can provide support by reproducing the environment and generating synthetic acoustic images. It can also help researchers tackle the scarcity of real underwater data in theoretical studies. In this paper, we use 3D modeling techniques to simulate underwater scenarios and to flexibly automate control of the acoustic camera and the objects in the scene. The simulation results and a comparison to real acoustic images demonstrate that the proposed simulator can generate accurate synthetic acoustic images efficiently and flexibly.
In this research, we propose a 3D measurement system that combines structured light and speckle-based pose estimation using two cameras in different configurations. The proposed system consists of two lasers (a spot laser and a line laser) and two cameras (one with a lens and one without), which capture focused and defocused images simultaneously. Local shapes are measured from the focused images by a structured light method: the 3D positions of the points projected by the laser are calculated by triangulation. Pose changes are estimated from speckle information in the defocused images: displacements of the speckle patterns are detected as optical flow by the Phase-Only Correlation (POC) method, and pose changes are estimated from these displacements by solving equations derived from the physical nature of speckle. The target shape as a whole is reconstructed by integrating the local shapes into common coordinates using the estimated pose changes. In the experiment, a texture-less flat board was measured while in motion. The experimental results confirm that the shape of the board was reconstructed correctly by the proposed 3D measurement system.
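A minimal sketch of phase-only correlation for detecting the displacement of a speckle pattern between two frames follows; it handles integer-pixel circular shifts only, and omits the sub-pixel peak fitting and the speckle-to-pose equations used in the actual system:

```python
import numpy as np

def phase_only_correlation(img1, img2):
    """Estimate the integer translation of img2 relative to img1 via POC.
    Normalizing the cross-power spectrum to unit magnitude keeps only the
    phase, whose inverse FFT is a sharp peak at the displacement."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12        # phase-only normalization
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative displacements.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Because only the spectral phase is retained, the correlation peak stays sharp even for low-contrast speckle patterns, which is why POC is well suited to this kind of displacement detection.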
With the advantage of a large field of view, fisheye cameras are widely used in many applications. To generate a precise view, calibration of fisheye cameras is very important. In this paper, we propose a method for calibrating the extrinsic parameters of multiple fisheye cameras operating in man-made structures. A Manhattan-world assumption is used, which models man-made structures as sets of planes that are either orthogonal or parallel to each other. The orientation of the cameras is obtained by extracting vanishing points that denote the orthogonal principal directions in images captured by the different cameras at the same time. With the proposed method, extrinsic calibration is very convenient and the system can be recalibrated remotely.
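Under a Manhattan-world assumption, camera orientation can be recovered from three orthogonal vanishing points. The following is a minimal sketch for an ideal pinhole model; a fisheye image would first be rectified (or the rays handled on the unit sphere), and the intrinsic matrix `K` and the sign convention of the back-projected directions are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def rotation_from_vanishing_points(K, vps):
    """Recover camera orientation from three vanishing points (homogeneous
    3-vectors), one per orthogonal principal direction of the scene.
    Back-project each point to a viewing ray, then snap the three rays to
    the nearest rotation matrix via SVD (absorbs noise in the estimates).
    Note: each vanishing point fixes its direction only up to sign."""
    dirs = np.linalg.inv(K) @ np.asarray(vps, dtype=float).T  # rays as columns
    dirs /= np.linalg.norm(dirs, axis=0)
    U, _, Vt = np.linalg.svd(dirs)
    R = U @ Vt
    if np.linalg.det(R) < 0:              # enforce a proper rotation
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R
```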
Few moving-object detection techniques deal with severely distorted video imagery, such as video captured from above a wavy water surface. In this paper, a method is proposed that identifies the image frames containing a moving object in a video taken from above a wavy water surface. Given the difficulty of applying common video processing techniques to video suffering from such severe distortion, the proposed method utilizes dynamic mode decomposition, a data-driven method for analyzing dynamical systems, to extract information about a moving object from a video stream. The experimental evaluation shows that the proposed method is able to identify image frames containing a moving object in a severely distorted video stream.
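The dynamic-mode-decomposition core can be sketched as follows. This is the standard exact-DMD formulation on snapshot matrices, not the paper's full moving-object detection pipeline, and the truncation rank `r` is an assumed parameter:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: given snapshot matrices X = [x_0 ... x_{m-1}] and
    Y = [x_1 ... x_m] (columns are flattened frames), estimate the
    eigenvalues and modes of the best-fit linear operator A with Y ~ A X,
    using a rank-r truncated SVD of X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    Atilde = (U.conj().T @ Y @ V) / s      # r x r projection of A
    eigvals, W = np.linalg.eig(Atilde)
    modes = ((Y @ V) / s) @ W              # exact DMD modes (columns)
    return eigvals, modes
```

In a video setting, the eigenvalues separate slowly varying background dynamics (near the unit circle) from transient content such as a moving object, which is the property such a detection algorithm can exploit.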
This paper proposes a novel approach that estimates the extrinsic parameters of a camera installed in a man-made environment from a single image. The extrinsic calibration problem is identical to the 6DoF (six-degrees-of-freedom) localization problem for the camera. We take advantage of the line information that is usually present in man-made environments such as the inside of a building. Our approach only requires a flat-surface map as the 3D environment model, which can be easily obtained from the blueprint of the artificial environment (e.g., CAD data). To manage the complicated 6DoF search problem, we propose a novel image descriptor, defined in a quantized Hough space, that performs the 3D-2D matching between line features from the 3D flat-surface model and the 2D single image. As we demonstrate experimentally, the proposed method robustly estimates the complete extrinsic parameters of the camera.
In this research, we propose a novel distortion-resistant visual odometry technique using a spherical camera, in order to provide localization for a UAV-based, bridge inspection support system. We take into account the distortion of the pixels during the calculation of the 2-frame essential matrix via feature-point correspondences. Then, we triangulate 3D points and use them for 3D registration of further frames in the sequence via a modified spherical error function. Via experiments conducted on a real bridge pillar, we demonstrate that the proposed approach greatly increases the accuracy of localization, resulting in an 8.6 times lower localization error.