Recent advances in integral imaging three-dimensional (3D) displays have been hindered by challenges such as low resolution and a narrow field of view. A promising way to overcome these limitations is multi-angle directional backlighting, which controls light direction by projecting it at specific angles onto the display panel. While this method can enhance display resolution through time multiplexing, the practicality of the system is compromised by the large size of the backlight module. To address this issue, we propose a design that integrates integral imaging technology with a compact directional backlight module. The design incorporates holographic optical elements (HOEs) to enable multi-angle light reuse within a time-multiplexed backlight module. These specially designed and fabricated HOEs ensure that the backlight source emits collimated light rays, achieving a more compact form factor. Furthermore, an HOE stitching structure is introduced to accommodate larger display screens. The new design reduces the volume of the backlight module by 90% while generating three angles of collimated backlight, effectively tripling the display resolution. Integrating the directional backlight module, display screen, and microlens array into a single, glasses-free 3D display system represents a significant advancement. By optimizing the design parameters of each component, this approach facilitates the development of a more compact and practical integral imaging 3D display system, significantly enhancing its portability and usability and paving the way for future innovations in 3D display technology.
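For intuition about the time-multiplexing claim above, the following minimal Python sketch works through the pixel-budget and frame-rate arithmetic for a three-angle directional backlight. All concrete numbers (panel resolution, 60 Hz target) are illustrative assumptions, not values from the paper.

# All numbers below are illustrative assumptions, not values from the paper.
N_ANGLES = 3            # collimated backlight directions per 3D frame
TARGET_3D_HZ = 60       # desired 3D refresh rate seen by the viewer
PANEL_PIXELS = 1920 * 1080

# Each backlight angle reuses the full panel, so the elemental-image
# budget per 3D frame scales with the number of angles.
effective_pixels = PANEL_PIXELS * N_ANGLES

# The panel must cycle through all angles within one 3D frame.
required_panel_hz = N_ANGLES * TARGET_3D_HZ

print(f"effective pixel budget per 3D frame: {effective_pixels:,}")  # 6,220,800
print(f"required panel refresh rate: {required_panel_hz} Hz")        # 180 Hz

The same arithmetic explains the practical constraint the paper works around: tripling resolution by time multiplexing demands a panel roughly three times faster than the target 3D refresh rate.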
Holographic displays are widely regarded as the pinnacle of three-dimensional (3D) visualization technology. In conventional pipelines, real objects must first be photographed or converted into 3D models and then processed through neural networks or sophisticated algorithms to generate 3D holograms, a computationally demanding workflow. To address this challenge, we propose an end-to-end 3D hologram generation strategy that integrates the Transport of Intensity Equation (TIE) phase retrieval technique with the Double Phase-Amplitude Coding (DPAC) method. Under coherent illumination, phase-only holograms containing depth information can be generated directly by using a camera to capture two out-of-focus amplitude maps of the object wave as it propagates to the hologram plane. The TIE module processes the two out-of-focus amplitude maps to retrieve the phase, and DPAC then encodes the result as a phase-only hologram. We further conduct simulations to validate the phase retrieval capability of the TIE on complex holograms and demonstrate the feasibility of the proposed strategy.
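As a rough illustration of the TIE-plus-DPAC pipeline described above, the Python sketch below pairs a simplified FFT-based Teague solver for the TIE with a basic checkerboard DPAC encoder. The function names, the uniform-intensity (Teague) simplification, and the checkerboard interleaving scheme are our assumptions for the sketch; the paper's actual implementation may differ.

import numpy as np

def tie_phase(i_minus, i_plus, dz, wavelength, pitch, eps=1e-6):
    # FFT-based Teague solver for the Transport of Intensity Equation.
    # i_minus / i_plus: defocused intensity (squared-amplitude) images
    # captured at -dz and +dz around the hologram plane. Assumes the
    # in-focus intensity is nearly uniform (Teague's simplification).
    k = 2 * np.pi / wavelength
    didz = (i_plus - i_minus) / (2 * dz)      # axial intensity derivative
    ny, nx = didz.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    fxx, fyy = np.meshgrid(fx, fy)
    lap = -4 * np.pi**2 * (fxx**2 + fyy**2)   # Fourier symbol of the Laplacian
    lap[0, 0] = 1.0                           # placeholder; DC is zeroed below
    spec = np.fft.fft2(-k * didz) / lap
    spec[0, 0] = 0.0                          # piston phase is arbitrary
    i0 = 0.5 * (i_plus + i_minus)
    return np.fft.ifft2(spec).real / np.maximum(i0, eps)

def dpac(field):
    # Double Phase-Amplitude Coding: write A*exp(i*phi), A <= 1, as the
    # average of two pure phases phi +/- arccos(A), interleaved on a
    # checkerboard so a phase-only SLM can display the result.
    amp = np.abs(field) / (np.abs(field).max() + 1e-12)
    phi = np.angle(field)
    theta = np.arccos(np.clip(amp, 0.0, 1.0))
    checker = np.indices(field.shape).sum(axis=0) % 2 == 0
    return np.mod(np.where(checker, phi + theta, phi - theta), 2 * np.pi)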
In holographic near-eye displays, expanding the eyebox without compromising the field of view (FOV) is crucial to the user experience. Current technologies are constrained by the conservation of optical étendue, making it difficult to achieve a large eyebox and a wide FOV simultaneously. This paper presents a portable augmented reality holographic near-eye display system that expands the exit pupil without reducing the FOV by using exit pupil scanning. The system replaces conventional eyepieces and beam splitters with holographic optical elements, employs point-source illumination instead of collimated illumination, and uses an off-axis angular spectrum diffraction propagation model between parallel planes tailored to human visual characteristics. This approach effectively mitigates the trade-off between FOV and eyebox. Compared to traditional systems, the proposed design resolves this trade-off in simulations and reduces the form factor, offering a promising new approach for practical holographic near-eye display applications.
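Off-axis angular spectrum propagation between parallel planes can be sketched in a few lines of NumPy. The following is a minimal illustration of a laterally shifted angular spectrum transfer function; the shift sign convention and the omission of band limiting are simplifying assumptions for the sketch, not details from the paper.

import numpy as np

def shifted_asm(field, wavelength, pitch, z, x0=0.0, y0=0.0):
    # Angular spectrum propagation over distance z between parallel planes,
    # with the destination window shifted laterally by (x0, y0). Band
    # limiting of the transfer function is omitted for brevity, so keep
    # shifts and distances modest to avoid aliasing.
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    fxx, fyy = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    h = np.exp(1j * kz * z)                         # on-axis transfer function
    shift = np.exp(1j * 2 * np.pi * (fxx * x0 + fyy * y0))  # off-axis window offset
    return np.fft.ifft2(np.fft.fft2(field) * h * shift)

The lateral offset in the Fourier domain is what lets the reconstruction window follow a scanned exit pupil without enlarging the simulated field.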
Learning-based computer-generated holography (CGH) has great potential for real-time, multi-depth holographic displays. However, most existing algorithms use only the amplitude of the target image as a dataset to simplify the training process and do not adequately incorporate the angular spectrum method (ASM), which can model multi-plane propagation, into the neural network. Here, we propose a multi-depth diffraction model-driven neural network (MD-Holo). MD-Holo uses the weights of a pre-trained ResNet34 to initialize the encoder stage of the complex-amplitude generating network so that it extracts more general features. Motion-blurred, Gaussian-filtered, lens-blurred, and low-pass-filtered images are added to the training data to accommodate a wider range of inputs. Compared to the super-resolution DIV2K dataset alone, the enhanced dataset enables both the generation of high-fidelity super-resolution images and better generalization to a wider variety of images. Simulations and optical experiments show that MD-Holo can reconstruct multi-depth images with high quality and fewer artifacts.
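The four degradations named in the abstract can be illustrated with a short NumPy/SciPy sketch. Kernel sizes, blur strengths, and the random-selection policy below are assumptions for the example; the paper's actual augmentation parameters are not specified here.

import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def motion_blur(img, length=9):
    # Horizontal streak kernel as a stand-in for camera motion blur.
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0 / length
    return convolve(img, k, mode="reflect")

def lens_blur(img, radius=4):
    # Uniform disk kernel approximating defocus (bokeh) blur.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return convolve(img, k / k.sum(), mode="reflect")

def low_pass(img, keep=0.25):
    # Keep only the central `keep` fraction of the spectrum.
    ny, nx = img.shape
    f = np.fft.fftshift(np.fft.fft2(img))
    my, mx = int(ny * keep / 2), int(nx * keep / 2)
    mask = np.zeros_like(f)
    mask[ny // 2 - my:ny // 2 + my, nx // 2 - mx:nx // 2 + mx] = 1.0
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real

def augment(img, rng):
    # Apply one randomly chosen degradation to a grayscale image in [0, 1].
    ops = [lambda u: gaussian_filter(u, sigma=2.0), motion_blur, lens_blur, low_pass]
    return ops[rng.integers(len(ops))](img)

For example, augment(patch, np.random.default_rng(0)) would degrade one DIV2K patch at random before it enters the training set.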