3D integral imaging sensing and visualization: an undergraduate project-based learning

Xin Shen and Takafumi Asaki

Open Access Paper, 28 June 2023
Proceedings Volume 12723, Seventeenth Conference on Education and Training in Optics and Photonics: ETOP 2023; 127230N (2023) https://doi.org/10.1117/12.2666978
Event: Seventeenth Conference on Education and Training in Optics and Photonics: ETOP 2023, 2023, Cocoa Beach, Florida, United States
Abstract
We have developed a project-based learning approach aimed at teaching and undergraduate research in optics and photonics. The proposed project-based learning process is focused on the development of hands-on experiments with 3D light field integral imaging technologies. The research projects enable undergraduate students in our engineering school, across different levels and majors, to gain a deep understanding of optics and photonics through early research experience and student-faculty engagement.

1. INTRODUCTION

Light field integral imaging is an auto-stereoscopic 3D imaging technology [1] that provides solutions for multi-perspective 3D sensing, optical/digital information processing, and visualization. It was originally proposed by Gabriel Lippmann in 1908 [1]. Owing to the rapid development of electronic devices and tools, this technology has advanced dramatically, and related research has been conducted worldwide [2]. Compared with other 3D imaging approaches, light field integral imaging can be implemented with simple, low-cost commercial devices, and it has been developed and integrated into a wide range of applications such as remote sensing, 3D TV, biomedical imaging for cell identification, as well as computer vision and machine vision applications [3][4][5].

The University of Hartford is a doctoral/professional university in which undergraduate education plays an important role, and students in our college come from various majors such as Electrical Engineering, Biomedical Engineering, and Mechanical Engineering, to name a few. We currently do not offer a specific major in optics; however, students in the Department of Electrical and Computer Engineering have opportunities to learn optics and photonics in their senior-year classes and design projects. In this paper, we present our approach to, and progress in, integrating light field integral imaging 3D research with student learning and project-oriented activities for undergraduate education in optics and photonics.

This paper is organized as follows: 3D light field integral imaging sensing and visualization are introduced in Sections 2 and 3, respectively, along with our related work and developments. Student work on light field integral imaging 3D information processing and applications is discussed in Section 4. Conclusions are presented in Section 5.

2. LIGHT FIELD INTEGRAL IMAGING BASED 3D SENSING

2.1 Optical 3D Sensing

The original idea of light field integral imaging 3D sensing proposed by Lippmann was to use an array of lenslets and film to record a 3D scene; the recorded 2D image contains multiple perspective images, one from each lenslet, and is called an elemental image array (EIA). Fig. 1(a) illustrates the concept of light field integral imaging 3D optical sensing. To overcome the limitations of the optical configuration, synthetic aperture integral imaging [6] was then proposed, which uses a moving camera to capture the 3D scene. In our lab, we have set up a moving camera on a manually controlled translation stage. The camera holder was designed and 3D printed to attach to the translation stage. The translation stage has three degrees of freedom and can be shifted along the x-y-z axes. We are able to conduct the 3D optical sensing process of light field integral imaging with adjustable system parameters such as the camera array configuration (x-y-z), pitch (distance between adjacent camera positions), focal length, and number of imaging perspectives. These parameters determine the performance of light field integral imaging 3D sensing, including the field of view, longitudinal (depth) resolution, and depth of focus. Fig. 2 shows the assembled camera array in the lab environment.
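To make these relationships concrete, the following minimal Python sketch (with hypothetical parameter values, not our lab's actual configuration) lays out a synthetic aperture capture grid and computes the per-camera field of view and the total aperture size:

```python
import numpy as np

# Hypothetical synthetic aperture integral imaging capture parameters.
num_x, num_y = 10, 10        # number of perspectives along x and y
pitch = 5.0                  # pitch (mm) between adjacent camera positions
focal_length = 50.0          # lens focal length (mm)
sensor_width = 36.0          # image sensor width (mm)

# Grid of camera positions on the x-y capture plane (z = 0).
xs = np.arange(num_x) * pitch
ys = np.arange(num_y) * pitch
positions = np.array([(x, y, 0.0) for y in ys for x in xs])

# Horizontal field of view of a single camera (degrees).
fov_x = 2 * np.degrees(np.arctan(sensor_width / (2 * focal_length)))

# Total synthetic aperture size, which influences depth (longitudinal) resolution.
aperture_x = (num_x - 1) * pitch
print(f"{len(positions)} perspectives, per-camera FOV {fov_x:.1f} deg, "
      f"synthetic aperture {aperture_x} mm")
```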

Figure 1. Concept of light field integral imaging (a) optical sensing, and (b) optical display. [5]

Figure 2. A camera array with a translation stage and 3D printed holder in the lab environment.

2.2 Computer Software-Based 3D Sensing

Integral imaging 3D sensing can also be implemented digitally using advanced 3D rendering software such as 3ds Max® and Blender®. The Blender® software is open source, and students are able to integrate the Python programming language to design their camera array distributions and the corresponding image sensor parameters. On the other hand, 3ds Max® is developed by Autodesk® and provides a comprehensive package of functions and toolboxes that allow detailed parameter settings and designs for specific purposes. Because the university has purchased licenses for the Autodesk® products, students were able to practice with both tools and perform experimental and comparative analyses.
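As a minimal illustration (a hypothetical 5 x 5 grid, 0.05 m pitch, and a 50 mm lens; not the students' actual script), the sketch below shows how Blender's Python API (bpy) can place a camera array and render one perspective image per position:

```python
import bpy

# Hypothetical capture grid: 5 x 5 cameras with 0.05 m pitch on the x-z plane.
num_x, num_y, pitch = 5, 5, 0.05

scene = bpy.context.scene
for j in range(num_y):
    for i in range(num_x):
        # Create one camera per grid position, rotated 90 degrees about x
        # so it looks along the +y axis toward the scene.
        bpy.ops.object.camera_add(
            location=(i * pitch, 0.0, j * pitch),
            rotation=(1.5708, 0.0, 0.0),
        )
        cam = bpy.context.object
        cam.name = f"cam_{i}_{j}"
        cam.data.lens = 50.0  # focal length in mm

# Render one perspective (elemental) image per camera.
for obj in [o for o in scene.objects if o.type == 'CAMERA']:
    scene.camera = obj
    # "//" makes the path relative to the saved .blend file.
    scene.render.filepath = f"//elemental_{obj.name}.png"
    bpy.ops.render.render(write_still=True)
```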

Examples of student-designed virtual image sensor arrays in Blender®, with 3D objects of (a) a road sign, (b) a skull, and (c) a donut, are shown in Fig. 3. For both the optical sensing and the computer-based 3D sensing, a series of 2D perspective images (one per camera position) is recorded or generated; together they form an elemental image array used for subsequent 3D information processing and visualization.

Figure 3. An example of computer software based light field integral imaging 3D sensing using the Blender® software.

3. LIGHT FIELD INTEGRAL IMAGING BASED 3D VISUALIZATION

Corresponding to Section 2, 3D visualization can be implemented either optically or computationally; both are discussed in this section.

3.1 Lenslet Based Optical Display

Fig. 1(b) illustrates the concept of 3D optical display. The captured elemental image array is shown on a display panel; light emitted from the panel passes through a lenslet array and is repropagated into 3D space to form a real 3D image that viewers can observe without any additional glasses. This property makes the technique a promising candidate for eye-fatigue-free 3D display systems [7]. Note that the system parameters of the display device and lenslet array used in the display stage do not need to match those of the image sensor and lenslet array used in the 3D optical sensing stage (see Section 2.1); a format matching [8] approach can reconcile the mismatched parameters between optical sensing and display, which enhances the flexibility of this technique.
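The full format-matching method of [8] involves multiple-planes pseudoscopic-to-orthoscopic conversion; as a much simpler illustration of one ingredient, the sketch below only resamples each elemental image so its pixel count matches the display's pixels per lenslet (the function name, tiling assumption, and nearest-neighbor choice are ours, not the method of [8]):

```python
import numpy as np

def match_pitch(eia, num_lenslets, display_px):
    """Resample each elemental image of a square EIA so its pixel count
    matches the number of display pixels behind one display lenslet.
    Nearest-neighbor only; a simplification, not the full method of [8]."""
    h, w = eia.shape[:2]
    th, tw = h // num_lenslets, w // num_lenslets
    # Nearest-neighbor index maps from display-tile pixels to source pixels.
    iy = np.arange(display_px) * th // display_px
    ix = np.arange(display_px) * tw // display_px
    rows = []
    for j in range(num_lenslets):
        row = []
        for i in range(num_lenslets):
            tile = eia[j * th:(j + 1) * th, i * tw:(i + 1) * tw]
            row.append(tile[iy][:, ix])  # resampled tile
        rows.append(np.hstack(row))
    return np.vstack(rows)
```

For example, an EIA captured with 100 x 100-pixel elemental images could be resampled for a phone display providing 60 pixels per lenslet with `match_pitch(eia, num_lenslets=10, display_px=60)`.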

In the project, student researchers were encouraged to use displays from their own devices (smartphone, iPad, laptop) for 3D optical display, and they conducted a comprehensive analysis relating their display device parameters to the format-matched optical display.

3.2 Computational Reconstruction

Besides optical visualization, an alternative approach is computational volumetric reconstruction, which uses the captured elemental images to computationally reconstruct the 3D scene at various depth positions. The algorithm has been proposed and developed previously [9]. In our approach, students were introduced to the reconstruction code, which was written as a Matlab script. Students then learn to run the code and obtain results, which helps them understand the whole process, including the inputs and outputs of the method. Fig. 4 shows the computational reconstruction results for the road sign object [see Fig. 3(a)] at various reconstruction depths, for both the in-focus 3D image [Fig. 4(c)] and the out-of-focus 3D images [Fig. 4(b) and (d)].
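The core idea behind computational integral imaging reconstruction is a shift-and-sum back-projection: each elemental image is shifted in proportion to its camera position and the chosen depth, then the shifted images are averaged, so objects at that depth add coherently while other depths blur. The following is a minimal Python sketch of that idea (the students' actual code is a Matlab script; the parameter names and square-image assumption here are ours):

```python
import numpy as np

def reconstruct(elemental, pitch, focal, sensor_w, depth):
    """Shift-and-sum volumetric reconstruction at one depth (mm).

    elemental: array of shape (K, L, H, W) holding the K x L grayscale
    perspective images; pitch, focal, sensor_w in mm. Assumes square
    pixels and identical cameras.
    """
    K, L, H, W = elemental.shape
    # Pixel shift between adjacent perspectives for this depth.
    shift = W * pitch * focal / (sensor_w * depth)
    acc = np.zeros((H + int(shift * (K - 1)) + 1,
                    W + int(shift * (L - 1)) + 1))
    cnt = np.zeros_like(acc)
    for k in range(K):
        for l in range(L):
            dy, dx = int(round(k * shift)), int(round(l * shift))
            acc[dy:dy + H, dx:dx + W] += elemental[k, l]
            cnt[dy:dy + H, dx:dx + W] += 1.0
    return acc / np.maximum(cnt, 1.0)  # average where images overlap
```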

Figure 4. 3D computational reconstruction results for the computer-generated road sign object. (a) Reference, and 3D computationally reconstructed images at (b) 500 mm, (c) 1000 mm, and (d) 2500 mm.

We then asked one group of students to focus on understanding the code and algorithm, so that we could continue the analysis and further development of the algorithm. The other group worked on reimplementing the reconstruction code in the Python and C++ programming languages using Object-Oriented Programming (OOP) concepts; their further research focuses on applications. We found that it is important to let students pursue their own interests, whether in theoretical analysis or applied tasks. Although the students may focus on different tasks, in carrying the research work forward they all learn the corresponding optics concepts that are fundamental and necessary for their tasks.

The group-1 students then applied the computational reconstruction algorithm and further developed a 3D object detection algorithm in Matlab; the group-2 students developed a graphical user interface aimed at commercializing the light field integral imaging 3D display.

4. 3D INFORMATION PROCESSING

In this section, we discuss two student projects based on light field 3D integral imaging.

4.1 Light Field Integral Imaging Based Object Detection

We presented a depth detection algorithm that compares the light field integral imaging computational reconstruction results with a reference 2D perspective image [10]. Fig. 5 shows a schematic of the 3D depth detection algorithm [10]. The 2D image is compared with 3D reconstructed images at various depth positions using the SSIM and PSNR image similarity and noise analysis metrics [11]. If the 3D image is reconstructed at the correct depth, it is in focus and appears as sharp as the reference 2D image. If it is reconstructed at a wrong depth position, it is out of focus and its pixels are blurred. By comparing the reference image with the 3D images over a reconstruction depth range, curves of SSIM and PSNR are obtained; each curve reaches a peak at the depth that best estimates the object location.
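A minimal sketch of this depth search, assuming the reconstruct() function from the Section 3.2 sketch and scikit-image's SSIM/PSNR metrics (a simplified illustration, not the students' actual code):

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def estimate_depth(reference, elemental, depths, reconstruct, **params):
    """Scan a depth range and return the depth whose reconstruction best
    matches the reference 2D perspective image, plus both metric curves."""
    reference = reference.astype(np.float64)
    ssim_curve, psnr_curve = [], []
    for z in depths:
        recon = reconstruct(elemental, depth=z, **params)
        # Crude alignment: crop the (larger) reconstruction to the
        # reference size before comparing.
        recon = recon[:reference.shape[0], :reference.shape[1]]
        rng = reference.max() - reference.min()
        ssim_curve.append(structural_similarity(reference, recon,
                                                data_range=rng))
        psnr_curve.append(peak_signal_noise_ratio(reference, recon,
                                                  data_range=rng))
    best = depths[int(np.argmax(ssim_curve))]  # peak = estimated depth
    return best, ssim_curve, psnr_curve

# Example scan mirroring the depths of Fig. 4 (hypothetical parameters):
# depth_mm, ssim, psnr = estimate_depth(ref, eia, np.arange(500, 2501, 50),
#                                       reconstruct, pitch=5.0, focal=50.0,
#                                       sensor_w=36.0)
```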

Figure 5. Light field integral imaging-based object detection algorithm [10].

4.2 Light Field Integral Imaging Based Graphical User Interface

The other student project is to develop a commercial-style light field integral imaging application and graphical user interface, so that users do not need to deal with technical details and the source code is protected from unintended edits. The students wrote the code using Object-Oriented Programming concepts and encapsulated the functions in classes and modules. This further promotes student learning through practical applications. Fig. 6 shows two examples of the graphical user interface design for light field integral imaging information processing. Fig. 6(a) is a page for light field integral imaging functions and the input of system parameters. Fig. 6(b) is a page for the reconstruction parameters and visualization results. In the upcoming semesters, the students will continue working on their projects and integrating these topics into their course projects and senior design; we also plan to recruit new student research members each semester.
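As an illustration of the encapsulation idea (the layout, names, and use of tkinter here are hypothetical; the students' actual interface in Fig. 6 differs), one GUI page might be sketched as a class that hides the reconstruction details behind a callback:

```python
import tkinter as tk
from tkinter import ttk

class ReconstructionPage(ttk.Frame):
    """A minimal sketch of one GUI page: collects reconstruction
    parameters and triggers a reconstruction callback."""

    def __init__(self, master, on_reconstruct):
        super().__init__(master, padding=10)
        self.on_reconstruct = on_reconstruct
        self.depth = tk.DoubleVar(value=1000.0)   # depth in mm
        self.pitch = tk.DoubleVar(value=5.0)      # camera pitch in mm
        for row, (label, var) in enumerate(
                [("Depth (mm)", self.depth), ("Pitch (mm)", self.pitch)]):
            ttk.Label(self, text=label).grid(row=row, column=0, sticky="w")
            ttk.Entry(self, textvariable=var).grid(row=row, column=1)
        ttk.Button(self, text="Reconstruct", command=self._run).grid(
            row=2, column=0, columnspan=2, pady=5)

    def _run(self):
        # Implementation details stay hidden behind the callback.
        self.on_reconstruct(depth=self.depth.get(), pitch=self.pitch.get())

if __name__ == "__main__":
    root = tk.Tk()
    root.title("Integral Imaging Reconstruction")
    ReconstructionPage(root, on_reconstruct=print).pack()
    root.mainloop()
```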

Figure 6. Prototype graphical user interface design for light field integral imaging information processing. (a) Page for functions and input of system parameters. (b) Page for reconstruction results and visualization.

5. CONCLUSION

In this paper, we present our recent progress on light field integral imaging-based research for student project-based learning in optics and photonics. The proposed process is focused on the development of hands-on experiments with 3D light field integral imaging technologies, including 3D sensing, 3D visualization, and information processing. Student researchers also applied the corresponding technologies and methods to their own projects and interests in 3D depth detection and human-computer interface design. This research experience enables our undergraduate engineering students with different backgrounds to gain deep knowledge of fundamental optics through early research experience and hands-on practice.

Acknowledgement

This work is supported in part by the NASA CT Space Grant and the CETA Faculty-Student Engagement Research Grant.

REFERENCES

[1] G. Lippmann, "Épreuves réversibles donnant la sensation du relief," J. Phys. Theor. Appl. 7(1), 821–825 (1908). https://doi.org/10.1051/jphystap:019080070082100

[2] B. Javidi et al., "Roadmap on 3D integral imaging: sensing, processing, and display," Opt. Express 28, 32266–32293 (2020). https://doi.org/10.1364/OE.402193

[3] H. Hua and B. Javidi, "A 3D integral imaging optical see-through head-mounted display," Opt. Express 22, 13484–13491 (2014). https://doi.org/10.1364/OE.22.013484

[4] A. Markman, X. Shen, H. Hua, and B. Javidi, "Augmented reality three-dimensional object visualization and recognition with axially distributed sensing," Opt. Lett. 41, 297–300 (2016). https://doi.org/10.1364/OL.41.000297

[5] B. Javidi, X. Shen, A. Markman, P. Latorre-Carmona, and A. Martinez-Uso, "Multidimensional optical sensing and imaging system (MOSIS): from macroscales to microscales," Proceedings of the IEEE, 850–875 (2017).

[6] J.-S. Jang and B. Javidi, "Three-dimensional synthetic aperture integral imaging," Opt. Lett. 27(13), 1144–1146 (2002). https://doi.org/10.1364/OL.27.001144

[7] M. Lambooij et al., "Visual discomfort and visual fatigue of stereoscopic displays: a review," Journal of Imaging Science and Technology 53(3) (2009). https://doi.org/10.2352/J.ImagingSci.Technol.2009.53.3.030201

[8] X. Shen, X. Xiao, M. Martinez-Corral, and B. Javidi, "Format matching using multiple-planes pseudoscopic-to-orthoscopic conversion for 3D integral imaging display," Proc. SPIE 9495, Three-Dimensional Imaging, Visualization, and Display, 188–192 (2015).

[9] D.-H. Shin and E.-S. Kim, "Computational integral imaging reconstruction of 3D object using a depth conversion technique," Journal of the Optical Society of Korea 12(3), 131–135 (2008). https://doi.org/10.3807/JOSK.2008.12.3.131

[10] N. Green, L. Slomski, and X. Shen, "Integral imaging based 3D light field sensing and depth estimation," presented at the MIT IEEE Undergraduate Research Technology Conference (2022).

[11] D. I. M. Setiadi, "PSNR vs SSIM: imperceptibility quality assessment for image steganography," Multimedia Tools and Applications 80, 8423–8444 (2021). https://doi.org/10.1007/s11042-020-10035-z
© (2023) Society of Photo-Optical Instrumentation Engineers (SPIE).

Xin Shen and Takafumi Asaki, "3D integral imaging sensing and visualization: an undergraduate project-based learning," Proc. SPIE 12723, Seventeenth Conference on Education and Training in Optics and Photonics: ETOP 2023, 127230N (28 June 2023); https://doi.org/10.1117/12.2666978