Figure: (a) Experimental setup; (b) Integral image obtained experimentally; (c) Collection of synthetic elemental images ready to produce a real orthoscopic image; (d) Reconstruction of the orthoscopic, floating 3-D image through an MP4 device.
Stereoscopic or auto-stereoscopic television monitors usually produce visual fatigue among viewers due to the convergence-accommodation conflict. An attractive alternative to these techniques is the so-called integral photography approach (also known as integral imaging, or InI), proposed by Lippmann in 1908:1 the notion that one can record in a 2-D matrix sensor many elemental images of a 3-D scene, each of which stores information from a different perspective. When this information is projected onto a display device placed in front of an array of microlenses, any pixel generates a conical ray bundle. The intersection of these bundles produces the local concentrations of light density that permit the 3-D reconstruction of the scene: without special goggles, with full parallax and without visual fatigue.2
The Lippmann concept was resurrected about two decades ago, thanks to the rapid development of optoelectronic image sensors and displays. One important challenge of the approach is addressing the structural differences between the capture setup and the display monitor. To tackle this, we developed an algorithm, called the smart pseudoscopic-to-orthoscopic conversion (SPOC), which computes new sets of synthetic integral images fully adapted to the characteristics of the display monitor. Specifically, this global pixel-mapping algorithm lets one select display parameters such as the pitch, focal length and size of the microlens array (MLA); the depth position and size of the reconstructed images; and even the geometry of the MLA.3
The algorithm is the result of applying three processes in cascade: a simulated display, a synthetic capture and a homogeneous scaling. To demonstrate its utility, we generated a synthetic integral image ready for display on a monitor whose parameters are very different from those used in the capture. To record the elemental images, we prepared the 3-D scene shown in (a) and picked them up with a single digital camera that was mechanically translated. In (b), we show the captured integral image, composed of 31 x 21 elemental images with 51 x 51 pixels each.
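In practice, the three-stage cascade amounts to a global remapping of pixels between the captured and the synthetic elemental images. The full SPOC mapping depends on the geometry of both setups;3 as a minimal illustration of the underlying idea only, the sketch below performs a view/elemental-image transposition, a simple special case of such pixel remapping. It assumes a square grid of microlenses for brevity; the function name and array layout are our own, not the authors' code.

```python
import numpy as np

def transpose_views(integral, n_lens, n_pix):
    """Simplified pixel remapping for an integral image stored as a 2-D
    array of n_lens x n_lens elemental images, each n_pix x n_pix pixels.

    Pixel (u, v) of elemental image (i, j) in the output takes the value
    of pixel (i, j) of elemental image (u, v) in the input, i.e. the lens
    and pixel indices are swapped. This is only a toy stand-in for the
    full SPOC mapping, which also accounts for pitch, focal length,
    depth position and MLA geometry.
    """
    # Split the flat image into (lens_row, pixel_row, lens_col, pixel_col).
    a = integral.reshape(n_lens, n_pix, n_lens, n_pix)
    # Swap lens and pixel axes -> (pixel_row, lens_row, pixel_col, lens_col).
    b = a.transpose(1, 0, 3, 2)
    # Flatten back: now n_pix x n_pix elemental images of n_lens x n_lens pixels.
    return b.reshape(n_pix * n_lens, n_pix * n_lens)
```

For instance, a 6 x 6 image holding 2 x 2 elemental images of 3 x 3 pixels is remapped into 3 x 3 elemental images of 2 x 2 pixels; the total pixel count is unchanged, only the roles of lens index and pixel index are exchanged.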
Our aim was to produce a synthetic integral image for display on an MP4 device with a matrix display of 900 x 600 pixels, each 79.0 µm in size. We calculated a matrix of 75 x 50 elemental images with 12 x 12 pixels each; the resulting integral image is shown in (c). Finally, we displayed the synthetic integral image on the matrix display of the MP4 device, over which we placed the microlens array. To avoid a braiding effect, we ensured that the distance between the screen and the microlenses was equal to the microlens focal length.4 As we show in (d), the display produced an orthoscopic, floating 3-D reconstruction of the scene that can be observed with full parallax.
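As a quick consistency check, the chosen synthetic parameters tile the MP4 screen exactly, and they fix the microlens pitch that the display implies. The 0.948 mm pitch below is simple arithmetic from the figures quoted above, not a measured parameter:

```python
# Display and synthetic-image parameters quoted in the text.
screen = (900, 600)        # MP4 display resolution (pixels)
lenses = (75, 50)          # synthetic elemental images (horizontal, vertical)
pix_per_lens = 12          # pixels behind each microlens
pixel_pitch_um = 79.0      # display pixel size (micrometers)

# Each screen dimension must be an exact multiple of the elemental-image size.
for n_lenses, n_screen in zip(lenses, screen):
    assert n_lenses * pix_per_lens == n_screen

# Microlens pitch implied by the display: 12 px x 79.0 um = 948 um.
lens_pitch_mm = pix_per_lens * pixel_pitch_um / 1000.0
print(f"{lens_pitch_mm:.3f} mm")
```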
This research was supported in part by grant FIS2009-9135 from the Ministerio de Ciencia e Innovación, Spain.
Manuel Martínez-Corral, Héctor Navarro, Raúl Martínez-Cuenca and Genaro Saavedra are with the Department of Optics, University of Valencia, Burjassot, Spain. Bahram Javidi is with the Department of Electrical and Computer Engineering at the University of Connecticut, Storrs, Conn., U.S.A.
References and Resources
1. M.G. Lippmann. J. Phys. (Paris) 7, 821 (1908).
2. J.-H. Park et al. Appl. Opt. 48, H77 (2009).
3. H. Navarro et al. Opt. Express 18, 25573 (2010).
4. H. Navarro et al. J. Disp. Technol. 6, 404 (2010).