KEYWORDS: Video, Cameras, Motion estimation, Calibration, 3D modeling, Algorithm development, 3D image processing, Video surveillance, 3D acquisition, Navigation systems
This paper presents experimental methods and results for 3D environment reconstruction from monocular video augmented with inertial data. One application targets sparsely furnished room interiors, using high quality handheld video with a normal field of view, and linear accelerations and angular velocities from an attached inertial measurement unit. A second application targets natural terrain with manmade structures, using heavily compressed aerial video with a narrow field of view, and position and orientation data from the aircraft navigation system. In both applications, the translational and rotational offsets between the camera and inertial reference frames are initially unknown, and only a
small fraction of the scene is visible in any one video frame. We start by estimating sparse structure and motion from 2D feature tracks using a Kalman filter and/or repeated, partial bundle adjustments requiring bounded time per video frame. The first application additionally incorporates a weak assumption of bounding perpendicular planes to minimize a tendency of the motion estimation to drift, while the second application requires tight integration of the navigational data to alleviate the poor conditioning caused by the narrow field of view. This is followed by dense structure recovery via graph-cut-based multi-view stereo, meshing, and optional mesh simplification. Finally, input images are texture-mapped onto the 3D surface for rendering. We show sample results from multiple, novel viewpoints.
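As a concrete illustration of the bounded-time, partial bundle adjustment step described above, the following Python sketch refines only a sliding window of recent cameras and points and caps the solver's work per frame. It assumes an ideal pinhole camera with known focal length f; the function names and data layout are hypothetical and are not the authors' implementation.

```python
# Minimal sketch of a bounded-time, sliding-window (partial) bundle adjustment.
# Assumes an ideal pinhole camera with known focal length `f` (illustrative only).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, rvec, tvec, f):
    """Project world points into a camera given axis-angle rvec and translation tvec."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points_3d @ R.T + tvec          # world frame -> camera frame
    return f * cam[:, :2] / cam[:, 2:3]   # pinhole projection to pixel offsets

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed_uv, f):
    """Reprojection residuals for every observation in the current window."""
    cams = params[:n_cams * 6].reshape(n_cams, 6)   # [rvec | tvec] per camera
    pts = params[n_cams * 6:].reshape(n_pts, 3)     # 3D points
    pred = np.vstack([
        project(pts[j:j + 1], cams[i, :3], cams[i, 3:], f)
        for i, j in zip(cam_idx, pt_idx)
    ])
    return (pred - observed_uv).ravel()

def partial_bundle_adjust(cams, pts, cam_idx, pt_idx, observed_uv, f, max_nfev=20):
    """Refine only the cameras/points in the sliding window, with a capped number
    of function evaluations so the cost per video frame stays roughly constant."""
    x0 = np.hstack([cams.ravel(), pts.ravel()])
    sol = least_squares(residuals, x0, max_nfev=max_nfev,
                        args=(len(cams), len(pts), cam_idx, pt_idx, observed_uv, f))
    n = len(cams) * 6
    return sol.x[:n].reshape(-1, 6), sol.x[n:].reshape(-1, 3)
```

Restricting the optimization to a window of recent frames and bounding the iteration count is one way to keep the per-frame cost bounded, as the abstract requires; the paper's actual formulation (and its Kalman-filter alternative) may differ in detail.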
High-quality transformer winding requires precise measurement and control of the gaps between adjacent wires. We take a vision-based approach to the measurement subtask of determining the gaps between copper wires wound onto an oval transformer core. The oval core shape, which can have an eccentricity as high as 2-to-1, leads to significant variations in surface normal and viewing distance. We use special lighting, a secondary mandrel shape sensor, and the specular reflection off the wires to build an accurate model of the experimental geometry. We further exploit the vertical symmetry of the viewed region to condense the 2D image to a simple 1D signal containing reflectance peaks. After using pattern recognition and some additional safety features to separate the wire peaks from background noise, we perform a least-squares curve fit to each peak to determine its subpixel maximum. The final algorithm is computationally fast and yields the desired wire gap in absolute metric units.
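To make the 1D reduction and subpixel peak fitting concrete, here is a minimal Python sketch, assuming the vertically symmetric image region is condensed by row averaging and each reflectance peak is refined with a least-squares parabola fit. The threshold, window size, and single mm_per_pixel scale are illustrative placeholders; the paper's geometric model corrects for the varying viewing distance around the oval core, which this sketch does not.

```python
# Illustrative sketch of 1D condensation and subpixel peak fitting (not the paper's code).
import numpy as np

def condense_to_1d(image):
    """Average the rows of the (vertically symmetric) region into a 1D reflectance signal."""
    return image.astype(float).mean(axis=0)

def subpixel_peaks(signal, min_height, half_window=2):
    """Find local maxima above min_height, least-squares fit a parabola to the
    samples around each one, and return the subpixel maximum locations."""
    peaks = []
    for i in range(half_window, len(signal) - half_window):
        window = signal[i - half_window:i + half_window + 1]
        if signal[i] >= min_height and signal[i] == window.max():
            x = np.arange(-half_window, half_window + 1, dtype=float)
            a, b, _ = np.polyfit(x, window, 2)     # y ~ a*x^2 + b*x + c
            if a < 0:                              # genuine maximum
                peaks.append(i - b / (2.0 * a))    # parabola vertex, in pixels
    return np.array(peaks)

def wire_gaps(peak_positions, mm_per_pixel):
    """Convert adjacent peak spacings from pixels to millimetres (simplified scale)."""
    return np.diff(np.sort(peak_positions)) * mm_per_pixel
```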
We report observations of frequency chirping and phase shifts in a free electron laser (FEL) amplifier operating in the Raman regime. The FEL is driven by a mildly relativistic electron beam (750 kV, 25 ns) subjected to a combined axial magnetic field and a helical wiggler field. The input to the FEL amplifier is provided by a high-power magnetron tuned to a frequency of 33.39 GHz. Phase and frequency shifts are measured both as a function of time and as a function of interaction length. In the Group I regime of FEL operation the output frequency is upshifted by approximately 100 MHz, while smaller upshifts are observed in either the Group II or the reversed-field configuration.
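For readers unfamiliar with how a frequency shift is recovered from a phase record, the sketch below applies the standard relation Δf(t) = (1/2π) dφ/dt to an unwrapped phase-versus-time signal. The numbers in the example are fabricated for illustration only and do not represent the measured values or the authors' analysis procedure.

```python
# Illustrative only: instantaneous frequency shift from a phase-vs-time record.
import numpy as np

def frequency_shift(time_s, phase_rad):
    """Differentiate the unwrapped phase with respect to time to get the shift in Hz."""
    phase = np.unwrap(phase_rad)
    return np.gradient(phase, time_s) / (2.0 * np.pi)

# Example: a linear phase ramp of 2*pi*1e8 rad/s corresponds to a 100 MHz upshift.
t = np.linspace(0.0, 25e-9, 251)      # 25 ns window, matching the pulse length above
phi = 2.0 * np.pi * 1.0e8 * t         # hypothetical phase record (not measured data)
print(frequency_shift(t, phi)[:3])    # ~ [1e8, 1e8, 1e8] Hz
```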