A study on depth map generation using a light field camera and a monocular RGB camera based on deep learning
16 October 2019
Makoto Takamatsu, Makoto Hasegawa
Proceedings Volume 11205, Seventh International Conference on Optical and Photonic Engineering (icOPEN 2019); 112050T (2019) https://doi.org/10.1117/12.2542653
Event: Seventh International Conference on Optical and Photonic Engineering (icOPEN 2019), 2019, Phuket, Thailand
Abstract
A depth map and an RGB image captured by a light field camera are arranged as training-data pairs; the paired dataset is learned with pix2pix, a type of conditional generative adversarial network. With the trained model, our proposed method can generate depth maps from a monocular mobile camera alone, without the light field camera. Low depth accuracy is a technical issue for light field cameras; the proposed method improves depth accuracy owing to the generalization ability of neural networks.
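The training objective behind pix2pix combines an adversarial term with an L1 pixel reconstruction term. As a rough, hedged sketch (the paper's exact loss weighting and network details are not given in this abstract; the function names and the weight `lam=100.0` below are illustrative assumptions, not the authors' code), the generator loss can be written as:

```python
import math

# Illustrative sketch of the pix2pix generator objective:
#   L_G = BCE(D(x, G(x)), 1) + lambda * L1(G(x), y)
# where x is the monocular RGB input, G(x) the generated depth map,
# y the light-field depth map used as ground truth, and lambda a
# weighting constant (100 in the original pix2pix formulation).

def bce_real(d_out, eps=1e-12):
    """Binary cross-entropy pushing discriminator scores toward 'real' (1)."""
    return -sum(math.log(p + eps) for p in d_out) / len(d_out)

def l1_loss(fake_depth, real_depth):
    """Mean absolute per-pixel error between generated and measured depth."""
    return sum(abs(f - r) for f, r in zip(fake_depth, real_depth)) / len(fake_depth)

def pix2pix_generator_loss(d_out, fake_depth, real_depth, lam=100.0):
    """Combined adversarial + weighted L1 loss for the depth generator."""
    return bce_real(d_out) + lam * l1_loss(fake_depth, real_depth)
```

The L1 term encourages the generated depth map to match the light-field measurement pixel-wise, while the adversarial term sharpens structure; in practice this would be implemented with a deep-learning framework's tensor losses rather than scalar lists.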
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Makoto Takamatsu and Makoto Hasegawa "A study on depth map generation using a light field camera and a monocular RGB camera based on deep learning", Proc. SPIE 11205, Seventh International Conference on Optical and Photonic Engineering (icOPEN 2019), 112050T (16 October 2019); https://doi.org/10.1117/12.2542653
KEYWORDS: Cameras, RGB color model, Optical filters, Image filtering, Image sensors, Sensors