Special Section Guest Editorial: Mobile Computational Photography
21 February 2013
Todor G. Georgiev, Andrew Lumsdaine, Sergio R. Goma
Abstract
For many photographers today, mobile photography is photography. According to statistics available on the photo-sharing site flickr.com, over the last year the most popular camera used to take photos on the site has been an Apple iPhone. In fact, during the summer of 2012, the iPhone 4 and the iPhone 4S occupied the number one and number two spots. The success of mobile photography perhaps goes hand in glove with the rise of social networking sites. Mobile photography has enabled any individual with a cell phone (the ubiquitous mobile camera) to quickly and easily share visual images of their lives. Mobile cameras are cheap, small, lightweight, and connected—pictures can be taken anytime and anywhere—and shared immediately.

For many photographers today, mobile photography is photography. According to statistics available on the photo-sharing site flickr.com, over the last year the most popular camera used to take photos on the site has been an Apple iPhone. In fact, during the summer of 2012, the iPhone 4 and the iPhone 4S occupied the number one and number two spots. The success of mobile photography perhaps goes hand in glove with the rise of social networking sites. Mobile photography has enabled any individual with a cell phone (the ubiquitous mobile camera) to quickly and easily share visual images of their lives. Mobile cameras are cheap, small, lightweight, and connected—pictures can be taken anytime and anywhere—and shared immediately.

Mobile photography has been enabled by the same miniaturization process that is responsible for the fantastic advances we have seen in computing in general, resulting in significantly smaller sensors, pixels, and camera optics relative to traditional cameras (the 35-mm film camera being the de facto reference standard for camera quality). However, many of the features of mobile photography that make it so popular also put pressure on the quality of the images that can be taken. The effect of this simultaneous miniaturization in mobile platforms has been a decrease in the technical and artistic quality of the pictures these devices can take. Small pixels are noisy and diffraction-limited. A camera with a small aperture has an almost infinite depth of field, precluding, for example, artistic effects based on shallow depth of field.

Yet, there is a growing expectation among users of mobile devices that if their cell phones are going to become their primary cameras, then those devices should provide the same quality and capabilities as the devices they are replacing. (One is reminded of the comparison Bob Thaves made between Fred Astaire and Ginger Rogers: “Ginger Rogers did everything [Fred Astaire] did backwards...and in high heels!” In the same way, we expect our mobile cameras to do the same things as traditional cameras, but also to fit in a shirt pocket, be able to upload photos immediately to a sharing site, and cost $10.)

The effects of miniaturization have caused some compromises and limitations to the capabilities of mobile cameras. However, the same miniaturization has resulted in staggering computational power in handheld devices—power that will only continue to increase. This power can be put to use via computational photography, whereby optics are replaced by computation that is not limited by (but rather is enhanced by) increasing miniaturization. With computational photography applied in a mobile setting—i.e., mobile computational photography—the capabilities of traditional cameras can be had in a mobile form factor.

Replacing optics with computation requires a computational representation of the light in a scene. Capturing the light in a scene, or taking its fingerprint, such that it is amenable to computational representation requires capturing the 4-D radiance, i.e., the intensity of all rays as a 4-D array. This representation of light is then manipulated and transformed in purely digital form. Focusing can now be done with a digital lens algorithmically rather than optically, and bulky camera optics can be completely eliminated. This model of computational photography has been realized recently in the form of plenoptic (or integral) cameras, although the ideas originated at the beginning of the 20th century. Plenoptic cameras have already demonstrated that optical camera settings such as focus and aperture can be applied computationally—after the original image has been captured—and in infinite variety. The power and capabilities of mobile computational photography thus depend on the power and capabilities of computing devices, which portends an exciting future for these devices as they become smaller yet more capable.
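To make the idea concrete, the following Python sketch (our illustration, not an algorithm from any paper in this section) shows the standard shift-and-sum approach to digital refocusing: each sub-aperture view of a 4-D lightfield L[u, v, s, t] is shifted in proportion to its aperture coordinate and a focal parameter alpha, and the shifted views are then averaged. The array layout and the helper routine from scipy are assumptions made for the example.

import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lightfield, alpha):
    """Shift-and-sum refocusing of a 4-D lightfield indexed as [u, v, s, t].

    alpha parameterizes the synthetic focal plane; alpha = 1 reproduces the
    nominal focus, and other values refocus the image purely digitally.
    """
    U, V, S, T = lightfield.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    rendered = np.zeros((S, T), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Each sub-aperture view is translated in proportion to its
            # offset from the aperture center and to (1 - 1/alpha).
            ds = (u - uc) * (1.0 - 1.0 / alpha)
            dt = (v - vc) * (1.0 - 1.0 / alpha)
            rendered += nd_shift(lightfield[u, v], (ds, dt),
                                 order=1, mode="nearest")
    return rendered / (U * V)

Varying alpha sweeps the synthetic focal plane through the scene after capture—the “digital lens” described above—and the synthetic aperture can likewise be varied by summing over only a subset of the (u, v) views.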

Advances in mobile computational photography will be fueled by the enormous market for cameras in mobile devices and will enable new and coordinated advances in technologies including optics, sensors, electronics, image processing, computational approaches, and more. Accordingly, the Conference on Mobile Computational Photography was initiated for the 2013 Electronic Imaging Symposium in order to publicly recognize the importance of this new field and to provide a forum for practitioners and researchers in the fields constituent to mobile computational photography to share their results. In addition to the usual conference presentations, the 2013 Mobile Computational Photography conference includes a “focal track” of peer-reviewed papers that appear in a special section of the Journal of Electronic Imaging.

Many of the capabilities of mobile computational photography will likely leverage plenoptic (i.e., lightfield) camera capabilities. In the mobile setting, these will need to be built using micro-optic techniques, either with arrays of miniaturized cameras or with arrays of microlenses. Wafer-level cameras, built using semiconductor processes, will become a key sensor technology. In their paper “Resolution and sensitivity of wafer-level multi-aperture cameras,” Oberdörster and Lensch1 present an analysis of some of the ensemble optical properties of wafer-level cameras, with particular attention to controlling aberrations.

Algorithmically, obtaining large-camera capabilities out of mobile computational platforms (particularly those based on plenoptic camera ideas) will require new processing approaches and algorithms. As advances in plenoptic rendering continue to be made, being able to effectively estimate depth (disparity) in a scene is emerging as a critical need. Krishnamurthy and Rastogi2 develop an approach to depth estimation that is particularly well-suited to plenoptic imagery in their paper “Refinement of depth maps by fusion of multiple estimates.”
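As a rough, hypothetical illustration of the general idea of fusing several depth estimates (it is not the method of Ref. 2), one can weight each per-pixel estimate by a confidence score and fall back to a robust statistic wherever no estimate is trusted:

import numpy as np

def fuse_depth_maps(depths, confidences, eps=1e-6):
    """Fuse K depth maps of shape (K, H, W) using per-pixel confidences."""
    weights = np.clip(confidences, 0.0, None)
    total = weights.sum(axis=0)
    fused = (weights * depths).sum(axis=0) / np.maximum(total, eps)
    # Where no estimate is trusted, fall back to the per-pixel median.
    fallback = np.median(depths, axis=0)
    return np.where(total > eps, fused, fallback)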

Mobile computational photography is, on one level, about cameras. But these devices are much more than simply cameras: they are multipurpose mobile computing platforms that include technological features such as GPS, accelerometers, touch screens, etc. Many of these technologies can be leveraged to provide higher-quality (and innovative) photographic capabilities. One such application is presented by Šindelář and Šroubek.3 Their paper “Image deblurring in smartphone devices using built-in inertial measurement sensors” uses the accelerometers and gyroscopes in a smartphone to determine the motion trajectory while a photo is taken, allowing the blur caused by that motion to be removed from the picture.
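A much-simplified sketch of this general idea (not the pipeline of Ref. 3) is the following: gyroscope samples recorded during the exposure are integrated into an image-space trajectory, rasterized into a blur kernel, and the kernel is then used for non-blind Wiener deconvolution. The assumptions of small pure rotations, constant scene depth, and a known focal length in pixels are ours, made to keep the example short.

import numpy as np

def blur_kernel_from_gyro(omega, dt, focal_px, ksize=31):
    """Turn angular-rate samples omega (N x 3, rad/s) sampled every dt seconds
    into a normalized blur kernel, assuming small pure rotations."""
    angles = np.cumsum(omega[:, :2] * dt, axis=0)   # integrated yaw/pitch (rad)
    traj = focal_px * np.tan(angles)                # image-space displacements (px)
    kernel = np.zeros((ksize, ksize))
    c = ksize // 2
    for dx, dy in traj:
        x, y = int(round(c + dx)), int(round(c + dy))
        if 0 <= x < ksize and 0 <= y < ksize:
            kernel[y, x] += 1.0
    return kernel / max(kernel.sum(), 1e-9)

def wiener_deblur(image, kernel, snr=100.0):
    """Non-blind Wiener deconvolution with the estimated kernel (the circular
    shift introduced by the kernel's off-origin center is ignored for brevity)."""
    H = np.fft.fft2(kernel, s=image.shape)
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * G))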

Finally, viewing a hand-held device as a powerful computational imaging platform, one can also consider other capabilities to add to the device to provide a more compelling user experience, such as a projector. In the paper “Compensating specular highlights for non-Lambertian projection surfaces,” Kao et al.4 describe a portable platform that includes both camera and projector. With these two devices in the same platform, the camera can be used in closed-loop fashion to correct (and augment) the projected image. In this paper, Kao et al. address the issue of compensating for specular highlights in particular.
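The following toy loop conveys the flavor of closed-loop projector-camera compensation (it is not the algorithm of Ref. 4): the camera observes what the projector actually produces on the surface, and the projector input is nudged toward the desired appearance. The observe callable is a hypothetical placeholder for a geometrically and photometrically calibrated projector-camera pair.

import numpy as np

def compensate(desired, observe, iters=5, gain=0.5):
    """observe(p) projects image p and returns the camera's (aligned)
    observation of the surface; all images are in [0, 1]."""
    p = desired.copy()
    for _ in range(iters):
        err = desired - observe(p)
        # Values clip at the projector's dynamic range; specular highlights
        # are exactly where this naive feedback saturates, which is the case
        # Ref. 4 sets out to handle.
        p = np.clip(p + gain * err, 0.0, 1.0)
    return p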

We began this editorial with the observation that today photography is mobile photography. We conclude by predicting that in a few years, photography will be mobile computational photography. The Mobile Computational Photography conference (and the corresponding special section in JEI) devoted to this important field will grow and flourish with it, and we can look forward to many exciting innovations in many different areas. As miniaturization of optics continues, there will need to be different approaches to dealing with noise and the diffraction limit, both optically and computationally. Increasing sensor density, sensor size, and wafer-level optics will allow new modes and mechanisms for lightfield capture. The ever-increasing availability of computational power on mobile platforms (in the forms of CPUs, GPUs, and FPGAs) will enable new modes of image processing. The mobile platform will also provide new opportunities for tighter integration of capture and processing. As increasingly sophisticated photographic capabilities are placed in the hands of ever more people, the creative impact of mobile computational photography will likely be even more profound than the technological impact. We look forward to future years of exciting new results in these (and many other) areas in mobile computational photography.

References

1. A. Oberdörster and H. P. A. Lensch, “Resolution and sensitivity of wafer-level multi-aperture cameras,” J. Electron. Imag. 22(1), 011001 (2013). http://dx.doi.org/10.1117/1.JEI.22.1.011001

2. B. Krishnamurthy and A. Rastogi, “Refinement of depth maps by fusion of multiple estimates,” J. Electron. Imag. 22(1), 011002 (2013). http://dx.doi.org/10.1117/1.JEI.22.1.011002

3. O. Šindelář and F. Šroubek, “Image deblurring in smartphone devices using built-in inertial measurement sensors,” J. Electron. Imag. 22(1), 011003 (2013). http://dx.doi.org/10.1117/1.JEI.22.1.011003

4. C.-T. Kao, T.-H. Huang, H. Lee, and H. H. Chen, “Compensating specular highlights for non-Lambertian projection surfaces,” J. Electron. Imag. 22(1), 011004 (2013). http://dx.doi.org/10.1117/1.JEI.22.1.011004
© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Todor G. Georgiev, Andrew Lumsdaine, and Sergio R. Goma "Special Section Guest Editorial: Mobile Computational Photography," Journal of Electronic Imaging 22(1), 010901 (21 February 2013). https://doi.org/10.1117/1.JEI.22.1.010901
Keywords: Cameras, Computational imaging, Photography, Sensors, Cell phones, Wafer-level optics, Mobile devices