Precise and robust 3D model reconstruction is required in various outdoor scenarios such as aircraft inspection, rapid prototyping, and documentation of archaeological sites. This paper focuses on the development of a mobile structured-light 3D scanner for online reconstruction. Structured-light projection enables fast data acquisition, while deep-learning-based pattern matching improves the matching accuracy to 0.1 pixels. Our FringeMatchNet, based on the U-Net architecture, produces an estimated-shift heatmap with subpixel accuracy: each pixel (x, y) of the heatmap represents the probability that the shift between the two image patches equals (x, y). To train FringeMatchNet, we generated a large dataset combining synthetic and real image patches. We compare the accuracy of FringeMatchNet with other stereo matching algorithms, both hand-crafted (SGM, LSM) and modern deep-learning-based. The evaluation shows that our network outperforms the hand-crafted methods and is competitive with state-of-the-art deep-learning-based algorithms. Our scanner provides a measuring volume of 400 × 300 × 200 mm at an object distance of 600–700 mm and combines portability with an object-point resolution of 0.1–0.2 mm. FringeMatchNet and the training dataset are publicly available. Deep-learning-based stereo matching with FringeMatchNet facilitates subpixel registration and allows the scanner to achieve sub-millimeter accuracy in object space.
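The abstract states that each heatmap pixel encodes the probability of a particular inter-patch shift and that a subpixel estimate is derived from it. The paper does not specify the readout here; a common choice for this kind of probability map is a soft-argmax (expected value of the shift distribution), sketched below as an illustrative assumption, not the authors' exact method:

```python
import numpy as np

def subpixel_shift(heatmap):
    """Soft-argmax readout of a shift-probability heatmap.

    heatmap[y, x] holds the probability that the shift between the two
    image patches equals (x, y); the expectation over that distribution
    yields a subpixel shift estimate.
    """
    p = heatmap / heatmap.sum()  # normalize to a probability distribution
    ys, xs = np.mgrid[0:heatmap.shape[0], 0:heatmap.shape[1]]
    return float((p * xs).sum()), float((p * ys).sum())

# Toy heatmap with the probability mass split between two columns,
# so the estimated x-shift falls between integer positions.
h = np.zeros((8, 8))
h[3, 4] = 0.6
h[3, 5] = 0.4
print(subpixel_shift(h))  # → (4.4, 3.0)
```

Because the expectation averages over neighboring cells, the estimate is not locked to the integer grid, which is what makes the reported 0.1-pixel matching accuracy plausible from a discrete heatmap.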