When an image is projected onto a non-white surface, the surface's complex luminance and chrominance characteristics distort the image and mix its colors, so the projected result differs from what the human eye would perceive on an ideal screen. The goal of projection image correction is to remove these effects. Traditional solutions estimate parameters from captured projection samples, compute an inverse model of the projection imaging process, and attempt to fit a correction function. In this paper, we design a deep neural network-based projection image correction network (PICN) that implicitly learns such complex correction functions. PICN consists of a U-shaped backbone network, a convolutional neural network that extracts projection-surface features, and a perceptual loss network that refines the correction results. This structure not only extracts deep features of the projected image and interference features of the surface, but also makes the corrected projection better match human visual perception. In addition, we built a projector-camera system under fixed global illumination for verification experiments, and demonstrated the effectiveness of the proposed method by computing evaluation metrics on projected images before and after correction.
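To make the contrast with PICN concrete, the traditional inverse-model correction mentioned above can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a simple per-pixel affine color-mixing model c = Vp + a, where V is a 3x3 matrix capturing how the non-white surface mixes the projector's RGB channels and a is the ambient-light contribution; the numeric values of V and a here are invented for illustration.

```python
import numpy as np

# Assumed surface color-mixing matrix and ambient term (illustrative values,
# not from the paper). In practice these would be estimated per pixel from
# captured projection samples.
V = np.array([[0.80, 0.05, 0.02],
              [0.06, 0.70, 0.04],
              [0.03, 0.05, 0.60]])
a = np.array([0.05, 0.04, 0.06])

def forward(p):
    """Simulated projection: projector input p -> camera-observed color."""
    return V @ p + a

def correct(desired):
    """Inverse model: find the projector input whose projection matches
    the desired appearance, then clamp to the projector's valid range."""
    p = np.linalg.solve(V, desired - a)
    return np.clip(p, 0.0, 1.0)

desired = np.array([0.5, 0.5, 0.5])   # target appearance on the surface
compensated = correct(desired)        # pre-distorted projector input
observed = forward(compensated)       # what the camera would then see
```

When the required compensation stays inside the projector's gamut (as here), the observed color matches the target; when clipping binds, the linear model fails, which is one motivation for learning the correction function with a network instead.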