Open Access
Special Section Guest Editorial: Digital Photography
1 April 2010

Digital cameras have revolutionized photography. Never before in the history of photography has it been so easy to capture, display, and share images. It is no surprise that consumer and professional photographers alike have quickly adopted digital camera systems, and that digital photography has seen explosive growth over the past decade. More than one billion digital cameras are now sold every year. Beyond the sheer volume of digital cameras in consumer and professional use, digital photography offers unique opportunities and challenges for imaging scientists and system designers.

An exciting development in recent years has been the integration of digital cameras into mobile-communication devices, such as cellular phones, portable electronic organizers, and laptops. Sales of cellular phone cameras already dwarf those of all other digital camera systems combined, and the integrated camera has become one of the most ubiquitous imaging devices. Most of us carry a cell phone at all times, and the presence of a camera throughout our daily routines is starting to change the way we think about photography. From a developer's point of view, integrated digital cameras present an exciting opportunity, but one that comes with challenges in terms of system footprint and computational-processing limitations.

Another development in digital photography is the emergence of computational photography. Computational photography records much more information about a scene than a conventional photograph and offers the possibility of processing this information after capture. In essence, it blurs the line between digital-image capture and subsequent image processing. Using modified digital-imaging systems, it enables features such as digital refocusing or extended depth of field.

This special section on digital photography highlights state-of-the-art research in imaging-component technologies, optical-imaging systems, and image-processing techniques. The papers in this special section are extended versions of papers presented at Digital Photography Conferences IV–V (2009–2010) at the IS&T/SPIE Electronic Imaging Symposium. This conference has been very successful in bringing together academic and industry experts in all the technical fields associated with digital photography, including optics, image-sensor design, color and image processing, and image quality.

At the center of any digital-photography system is a solid-state image sensor, the light-sensitive element of a digital camera; digital photography would not be possible without it. Charge-coupled device (CCD) technology was the long-time incumbent in image-sensor technology. More recently, complementary metal-oxide-semiconductor (CMOS) technology has been a great enabler and cost driver in solid-state imaging, especially for cell-phone cameras. The sensing and sampling methods in the image sensor have a large influence on subsequent image reconstruction and thus, ultimately, on image quality. All papers in this special section revolve around sensor design, sensor architecture, and initial image reconstruction. Research in this area is very active, and we are pleased to present five state-of-the-art articles on these important topics.

Color images are traditionally acquired through a color filter array (CFA), a mosaic of red (R), green (G), and blue (B) color filters affixed to the image sensor. Thus, only one color is captured at each spatial position. To reconstruct the missing colors and obtain full RGB values at every pixel location, an algorithm called demosaicking has to be applied. M. Guarnera, G. Messina, and V. Tomaselli propose a new adaptive demosaicking method for the Bayer CFA, which analyzes the local neighborhood and applies different interpolation strategies depending on the presence and orientation of gradients. The authors also propose a false-color removal algorithm that eliminates residual color errors as a postprocessing step.
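To make the reconstruction problem concrete, the sketch below implements the simplest non-adaptive baseline, bilinear demosaicking of an RGGB Bayer mosaic, via normalized convolution; the function name and the use of scipy are our own illustration, not code from the paper. Adaptive methods such as the one proposed here improve on exactly this baseline, which averages blindly across edges and thus produces the color fringes that the authors' postprocessing step targets.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaicking of an RGGB Bayer mosaic (illustrative baseline).

    raw: 2-D float array holding the single color sample recorded at each pixel.
    Returns an H x W x 3 RGB image.
    """
    h, w = raw.shape
    # Binary masks marking where each color was actually sampled.
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    kernel = np.ones((3, 3))

    def fill(mask):
        # Normalized convolution: average only the neighbors that were sampled.
        num = convolve(raw * mask, kernel, mode='mirror')
        den = convolve(mask, kernel, mode='mirror')
        est = num / den
        # Keep the original measurement wherever this color was sampled.
        return np.where(mask > 0, raw, est)

    return np.stack([fill(r_mask), fill(g_mask), fill(b_mask)], axis=-1)
```

Because every missing value is an unweighted average of its sampled neighbors, edges are blurred and zipper artifacts appear; gradient-adaptive interpolation avoids this by choosing which neighbors to trust.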

With the advent of cell-phone cameras that have very limited processing power, the computational complexity of imaging algorithms has become even more of an issue. Chung and Chan propose an efficient decision-based demosaicking method using a new edge-sensing algorithm. Their integrated-gradient method simultaneously extracts gradient information in both the color-intensity and color-difference domains. The algorithm thus avoids re-estimating local gradients from intermediate interpolation results.
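As an illustration of the decision step that such methods share, the fragment below estimates a missing green value at a red or blue site by comparing horizontal and vertical gradients and interpolating along the flatter direction; this is a generic, Hamilton-Adams-style sketch of our own, not the integrated-gradient measure of Chung and Chan, which fuses intensity and color-difference gradients into a single decision.

```python
def green_at_rb_site(raw, y, x):
    """Estimate the missing green sample at a red/blue Bayer site (y, x).

    raw: 2-D numpy array of the Bayer mosaic; (y, x) must lie at least
    two pixels from the border. Illustrative decision-based
    interpolation, not the paper's algorithm.
    """
    # Gradients combine green first differences with same-color second
    # differences (Hamilton-Adams style).
    grad_h = (abs(raw[y, x - 1] - raw[y, x + 1])
              + abs(2 * raw[y, x] - raw[y, x - 2] - raw[y, x + 2]))
    grad_v = (abs(raw[y - 1, x] - raw[y + 1, x])
              + abs(2 * raw[y, x] - raw[y - 2, x] - raw[y + 2, x]))

    if grad_h < grad_v:        # horizontal direction is flatter: use it
        return (raw[y, x - 1] + raw[y, x + 1]) / 2
    if grad_v < grad_h:        # vertical direction is flatter
        return (raw[y - 1, x] + raw[y + 1, x]) / 2
    # Flat or ambiguous neighborhood: average all four green neighbors.
    return (raw[y, x - 1] + raw[y, x + 1] + raw[y - 1, x] + raw[y + 1, x]) / 4
```

The cost Chung and Chan attack is precisely this repeated gradient computation: many decision-based pipelines re-estimate gradients on intermediate interpolations, whereas their integrated gradients are extracted once from the raw data.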

Tamburrino et al. present a new CMOS image-sensor design in which the blue and red filters of the RGB Bayer CFA are replaced by a magenta filter. Under each of these filters they place two stacked, pinned photodiodes: one absorbs mostly blue light, the other mostly red. To complement this sensor design, they implement a suitable demosaicking algorithm and show that their approach outperforms demosaicking of the Bayer pattern in terms of image quality.

Scientific and industrial applications often require image sensors with high sensitivity and high speed over a wide range of illumination conditions. In their contribution, Stern and Cole perform a detailed design study for a solid-state focal-plane array consisting of silicon avalanche photodiodes. Each detector in the array can operate with wide dynamic range in linear or in Geiger mode. Linear mode allows the sensor to operate with high quantum efficiency and speed; in Geiger mode, the sensor performs as a single-photon detector. In a noise analysis, the authors predict imaging performance at ultralow illuminance (10⁻⁴ lux) with a signal-to-noise ratio greater than seven at near room temperature.
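For orientation, a textbook photon-counting model (our simplification, not the authors' detailed noise analysis) puts the signal-to-noise ratio of a detector that accumulates $N_{\mathrm{sig}}$ signal counts against $N_{\mathrm{dark}}$ dark counts at

```latex
\mathrm{SNR} = \frac{N_{\mathrm{sig}}}{\sqrt{N_{\mathrm{sig}} + N_{\mathrm{dark}}}}
```

so reaching an SNR above seven at 10⁻⁴ lux demands both high photon-detection efficiency and a very low dark-count rate, which is what makes near-room-temperature operation notable.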

Conventional digital cameras capture only the spatial information (intensity) of a scene. Plenoptic cameras are capable of capturing both spatial and angular information (radiance). Such architectures enable, for example, refocusing of the image or extension of the depth of field after the image has been captured. This is achieved by employing an internal microlens array, which trades off spatial information (resolution) for angular information. To improve over current designs, Georgiev and Lumsdaine develop the focused plenoptic camera, in which the microlens array is used as an imaging system focused on the image plane of the main camera lens. It enables rendering of final images with significantly higher resolution. In their paper, they analyze the focused plenoptic camera in optical phase space; present basic, blended, and depth-based rendering algorithms that produce high-quality, high-resolution images; and demonstrate GPU-based implementations that render full-screen refocused images in real time.
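To make the trade-off tangible, here is a minimal shift-and-add refocusing sketch for a conventional plenoptic capture, assuming the light field has already been resampled into sub-aperture views; the function and its integer-shift approximation are our own illustration of the classic rendering baseline (in the style of Ng et al.), not the patch-based focused-plenoptic rendering the authors develop.

```python
import numpy as np

def refocus_shift_and_add(lightfield, alpha):
    """Synthetically refocus a 4-D light field by shift-and-add.

    lightfield: array of shape (U, V, H, W), one H x W sub-aperture
        view per angular sample (u, v).
    alpha: relative depth of the synthetic focal plane (1.0 keeps the
        original focus).
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    shift = 1.0 - 1.0 / alpha
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its angular offset from
            # the central view, then accumulate (integer-pixel shifts
            # for simplicity; real renderers interpolate sub-pixel).
            dy = int(round(shift * (u - (U - 1) / 2.0)))
            dx = int(round(shift * (v - (V - 1) / 2.0)))
            out += np.roll(np.roll(lightfield[u, v], dy, axis=0), dx, axis=1)
    return out / (U * V)
```

Averaging U x V views into a single H x W image is exactly the resolution sacrifice mentioned above; the focused plenoptic design recovers much of it by focusing the microlenses on the main lens's image plane and rendering from overlapping microimage patches.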

These five papers cover a broad spectrum of current hardware and software investigations in digital photography. They show how active research in this field remains, how new image-capture modalities give rise to novel algorithm development, and how these advances open up new directions in digital photography. With the above-mentioned (r)evolution of cell-phone imaging, the emergence of computational photography, and the broad and ever-growing dissemination of image-capture devices, we expect that digital-photography science and technology will continue to evolve rapidly in all application areas.

Biographies


Peter B. Catrysse is an engineering research associate in the E. L. Ginzton Laboratory at Stanford University. He received his MSc and PhD degrees in electrical engineering from Stanford University. In his doctoral research, he pioneered the integration of nanoscale metal optics in deep-submicron CMOS technology. In his current work, he aims at elucidating the physics of nanophotonic structures and at applying them in optical sensing devices. He has published over 75 refereed papers, holds four U.S. patents, and has given more than 20 invited talks. He has served on the program committee of the Digital Photography Conference at the IS&T/SPIE Electronic Imaging Symposium since 2008. He is a member of SPIE and OSA and a senior member of IEEE. He is a Brussels Hoover Fellow of the Belgian American Educational Foundation (1994) and the recipient of a Hewlett-Packard Labs Innovation Research Award (2008).


Sabine Süsstrunk has been a professor for images and visual representation in the School of Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, since 1999. Her main research areas are computational photography, color imaging, image-quality metrics, image indexing, and archiving. She has authored or coauthored over 80 peer-reviewed papers and holds 5 patents. She is an associate editor for the IEEE Transactions on Image Processing and has served as chair or committee member of many international conferences on color imaging, digital photography, and image-systems engineering. She was a cochair of the 2009 Digital Photography Conference at the IS&T/SPIE Electronic Imaging Symposium, the EI symposium's cochair in 2010, and its chair in 2011. She is a senior member of IS&T and IEEE.

© 2010 Society of Photo-Optical Instrumentation Engineers (SPIE)
Peter B. Catrysse and Sabine Süsstrunk "Special Section Guest Editorial: Digital Photography," Journal of Electronic Imaging 19(2), 021101 (1 April 2010). https://doi.org/10.1117/1.3459944
Published: 1 April 2010
KEYWORDS: Digital photography, Cameras, Digital cameras, Imaging systems, Image sensors, Computational imaging, Photography
