Low-light images are typically produced by shooting in a dim environment or from a difficult angle; they not only degrade human perception but also hurt the performance of downstream computer-vision algorithms such as object detection and super-resolution. Low-light enhancement faces two main difficulties: first, applying image-processing algorithms independently to each low-light image often causes color distortion; second, the texture of extremely dark regions must be restored. To address these issues, we present two novel and general approaches. First, we propose a new loss function that constrains the ratio between corresponding RGB pixel values in the low-light and normal-light images. Second, we propose a new framework, GLNet, which uses dense residual connection blocks to extract deep features from low-light images and adds a grayscale-channel branch that guides texture restoration on the RGB channels by enhancing the grayscale image. Ablation experiments demonstrate the effectiveness of the proposed modules, and extensive quantitative and perceptual experiments show that our approach achieves state-of-the-art performance on public datasets.
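The abstract does not give the exact form of the ratio-constraint loss. As a minimal sketch only, one plausible reading is a term that penalizes the deviation of the per-pixel, per-channel ratio between the enhanced output and the reference image from 1; the function name, the L1 form, and the `eps` stabilizer below are all illustrative assumptions, not the paper's definition:

```python
import numpy as np

def rgb_ratio_loss(enhanced, reference, eps=1e-6):
    """Hypothetical sketch of a ratio-constraint loss (not the paper's
    exact formula): penalize the per-pixel RGB ratio enhanced/reference
    for deviating from 1. Inputs are float arrays of shape (H, W, 3)
    with values in [0, 1]; eps avoids division by zero in dark pixels."""
    ratio = (enhanced + eps) / (reference + eps)
    return float(np.mean(np.abs(ratio - 1.0)))

# A perfectly enhanced image (identical to the reference) gives zero loss,
# while a globally mis-scaled image is penalized in proportion to the scale error.
reference = np.full((2, 2, 3), 0.5)
print(rgb_ratio_loss(reference, reference))        # 0.0
print(rgb_ratio_loss(2.0 * reference, reference))  # ~1.0
```

Because the loss is defined on ratios rather than absolute differences, dark and bright regions contribute on a comparable scale, which is one way such a constraint could discourage the per-image color distortion the abstract mentions.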