LiDAR sensors in applications such as autonomous driving, human-robot collaboration, or logistics have to be robust, low-cost, and reliable. While several LiDAR architectures and methods are currently being tested in the field, the improvement of individual system components, including photon detectors and laser sources, is part of ongoing scientific work.
The detector presented here is a CMOS-integrated SPAD (single-photon avalanche diode) array device employing a new, groundbreaking technology. Backside-illuminated SPADs are fabricated and bonded wafer-to-wafer onto a smart ROIC (read-out IC), combining state-of-the-art circuitry and algorithms in a single device.
With 64 × 48 pixels, the novel detector test vehicle paves the way for near-future LiDAR devices. The detector comprises a state-of-the-art time-to-digital converter (TDC) architecture for accurate time-of-flight (ToF) measurements, with one TDC shared among four pixels. The TDC provides a time resolution of 312.5 ps and a measurement range of up to 192 m. Furthermore, the sensor supports switching between timing, counting, and time-gating acquisition modes. The integrated background-light rejection algorithm, presented previously for an earlier device, enables about 66% higher maximum measurement range in environments with a high level of ambient light.
The large pixel pitch of 125 μm is limited by the ROIC, which is manufactured in a 350 nm CMOS process. Thus, with smaller CMOS feature sizes for the ROIC, the pixel count can be scaled up drastically in future devices without changing the detection principle or the architecture of the SPAD detector array.
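As a rough consistency check of the figures quoted above, the following minimal Python sketch converts the TDC bin width and the maximum measurement range into distance resolution and the implied number of TDC codes. The variable names are illustrative and not taken from the device documentation.

```python
# Sanity check of the quoted TDC figures (illustrative values, not device documentation).
C = 299_792_458.0          # speed of light in m/s

tdc_bin = 312.5e-12        # TDC time resolution in seconds (312.5 ps)
max_range = 192.0          # quoted maximum measurement range in meters

# One TDC bin corresponds to half the round-trip distance covered in that time.
distance_per_bin = C * tdc_bin / 2          # ~4.7 cm per bin
round_trip_time = 2 * max_range / C         # ~1.28 us full-scale measurement window
num_bins = round_trip_time / tdc_bin        # ~4100 bins, roughly a 12-bit TDC range

print(f"distance per TDC bin : {distance_per_bin * 100:.1f} cm")
print(f"full-scale ToF window: {round_trip_time * 1e6:.2f} us")
print(f"implied TDC codes    : {num_bins:.0f}")
```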
Advances in laser diode technology enable the generation of eye-safe laser pulses with short pulse duration and high peak power. This opens up new opportunities for Light Detection and Ranging (LiDAR) systems based on the direct time-of-flight (dTOF) principle, because their range performance is mainly limited by the requirement of eye-safe laser pulse energy. Another limiting factor for dTOF LiDAR is the sensitivity to background noise. A shorter pulse width enables better suppression of parasitic light inside the LiDAR system for improved performance in high-background-flux scenarios. Along with the improvements brought by short laser pulses, new challenges emerge. Shorter pulse durations and the limited achievable timing resolution of the time discrimination circuits inside dTOF detectors lead to histogram data distributions in which the laser-originated time stamps can fill only a few time bins. The time-stamp histogram of the detected and clocked laser photons shows a sharp exponential decline, whose slope depends strongly on the laser event rate occurring inside the system. In the extreme case, all laser-generated events fall into a single time bin. Because of the coarse, discrete arrangement of those laser-generated events, a need for new algorithmic approaches arises. This work illustrates the dependency between the laser photon rate occurring in the system and its distribution in the measurement data. The influence of the time discrimination circuit's time-bin width on the resulting histogram shapes is also discussed.
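A minimal Monte-Carlo sketch of the effect described above is given below, assuming a simple first-photon (pile-up) detection model with a rectangular pulse. The pulse width, bin width, and rates are assumptions chosen only to illustrate how higher laser event rates compress the histogram into fewer, earlier bins.

```python
# Minimal Monte-Carlo sketch (illustrative model, not the authors' simulation):
# first-photon ToF time stamps for a short laser pulse, histogrammed with a
# coarse TDC bin width, for different laser photon rates.
import numpy as np

rng = np.random.default_rng(0)

PULSE_WIDTH = 2e-9      # assumed laser pulse duration (2 ns)
BIN_WIDTH   = 312.5e-12 # assumed TDC bin width
N_SHOTS     = 20_000    # number of simulated laser shots

def first_photon_histogram(mean_detections_per_shot):
    """Histogram of the first detected laser photon per shot (pile-up model)."""
    # Number of detected laser photons in each shot (Poisson statistics).
    n_det = rng.poisson(mean_detections_per_shot, N_SHOTS)
    shots = n_det > 0
    # Detection times uniformly spread over the (rectangular) pulse; keep the earliest.
    first_times = np.array([
        rng.uniform(0.0, PULSE_WIDTH, n).min() for n in n_det[shots]
    ])
    bins = np.arange(0.0, PULSE_WIDTH + BIN_WIDTH, BIN_WIDTH)
    counts, _ = np.histogram(first_times, bins=bins)
    return counts

for rate in (0.1, 1.0, 10.0):   # mean detected laser photons per shot
    counts = first_photon_histogram(rate)
    print(f"rate {rate:5.1f}: bin occupancy = {counts[:6]} ...")
```

At low rates the occupied bins are roughly uniform; at high rates nearly all first-photon events pile into the first bin, matching the "sharp exponential decline" discussed in the abstract.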
A fast and reliable three-dimensional monitoring of the environment is indispensable for robotics, automation engineering, or autonomous driving. For these applications, LiDAR is a key sensor technology. Normally, a light source in the near-infrared range is used, which is invisible to the human eye. High ambient light compared to the laser source intensity is a major problem for these systems. Therefore, a measurement concept that reduces the impact of ambient light is necessary. In this paper we present a measurement concept in which the full distance range is scanned and the probability of detecting events from far objects is improved. The general problem is that a photon from the background illumination can be detected instead of the reflected laser signal, which stops the measurement. The concept allows the received laser pulse buried in the superimposed background light to be detected more easily and improves the measurement quality. This is possible due to the delayed start of the measurement and thus the selection of different measurement windows in which an earlier detection of the laser-generated events is accessible. As a consequence, the probability of first receiving an unwanted ambient photon is reduced. This technique requires no prior information about the object conditions or its rough distance and can be applied in all situations of the direct time-of-flight measurement to cope with high ambient light. Hence it allows a reliable distance measurement under various ambient and target conditions.
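The benefit of a delayed measurement start can be illustrated with a simple first-photon pile-up model, sketched below. The background rate, laser detection probability, and window convention are assumptions for illustration only, not the paper's exact analysis.

```python
# Illustrative sketch (simple single-stop pile-up model, not the authors' exact analysis):
# probability that the laser return from a target is detected before a background
# photon stops the measurement, with and without a delayed measurement start.
import math

C = 299_792_458.0        # speed of light in m/s
BG_RATE = 20e6           # assumed detected background photon rate (20 Mcps)
P_LASER = 0.3            # assumed probability of detecting the laser return itself

def detection_probability(target_distance_m, window_start_s=0.0):
    """P(laser return detected) for a single-stop measurement starting at window_start_s."""
    t_target = 2.0 * target_distance_m / C
    if t_target < window_start_s:
        return 0.0                       # target lies before the measurement window
    exposure = t_target - window_start_s # time during which background can stop the TDC
    p_no_bg_before_laser = math.exp(-BG_RATE * exposure)
    return p_no_bg_before_laser * P_LASER

for d in (10.0, 50.0, 100.0):
    p_full    = detection_probability(d)                                  # window opens at 0 m
    p_delayed = detection_probability(d, window_start_s=2 * (d - 10.0) / C)  # opens 10 m before target
    print(f"target {d:5.1f} m: P(full window) = {p_full:.3f}, "
          f"P(delayed start) = {p_delayed:.3f}")
```

For far targets, delaying the window start shortens the time during which an ambient photon can stop the measurement first, which is the mechanism exploited by the presented concept.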
LiDAR is a key sensor technology for future driving. For autonomous vehicles, fast and reliable three-dimensional monitoring of the environment is essential for managing a wide variety of common traffic situations. Since these kinds of systems typically use light in the near-infrared range, ambient light from the sun is a serious problem due to its high intensity compared to the laser source. Therefore, reducing the influence of ambient light on the distance measurement is very important. In this paper we present a 2 × 192 pixel SPAD-based direct time-of-flight line sensor for flash LiDAR applications with high ambient-light rejection, integrated in standard CMOS technology. Two commercially available 905 nm laser diodes emitting short pulses are employed for scene illumination. For time measurement, an in-pixel time-to-digital converter with a resolution of 312.5 ps and a full range of 1.28 μs has been implemented. Each pixel uses four vertically arranged single SPADs for background-light rejection based on the detection of temporally correlated photons. This technique allows the discrimination of the received laser pulse buried in the superimposed background light and, hence, improves the measurement quality. Additionally, different parameters of the coincidence detection circuit, such as coincidence depth and coincidence time, can be varied during operation to enable real-time adjustment to the present ambient-light condition, which is measured between laser shots by operating the sensor in photon-counting mode. Using this technique, the sensor allows a reliable distance measurement under various ambient and target conditions.
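The coincidence principle described above can be sketched in a few lines: an event is accepted only if enough SPADs of a pixel fire within the coincidence time. The window length, coincidence depth, and event rates below are assumptions for illustration, not values from the sensor's documentation.

```python
# Illustrative sketch of coincidence-based background rejection (parameters are
# assumptions): an event is accepted only if at least `coincidence_depth` of the
# pixel's four SPADs fire within one coincidence time window.
import numpy as np

rng = np.random.default_rng(1)

def coincidence_filter(timestamps, coincidence_time, coincidence_depth):
    """Return timestamps at which >= coincidence_depth SPAD events fall within
    one coincidence window. `timestamps` holds all SPAD firing times of one
    pixel (4 SPADs merged), in seconds."""
    t = np.sort(timestamps)
    accepted = []
    for i, t0 in enumerate(t):
        # count events (including t0) inside the window [t0, t0 + coincidence_time)
        n_in_window = np.searchsorted(t, t0 + coincidence_time) - i
        if n_in_window >= coincidence_depth:
            accepted.append(t0)
    return np.array(accepted)

# Example: uncorrelated background events plus a burst of correlated laser events.
background = rng.uniform(0.0, 1e-6, 50)           # 50 random background events in 1 us
laser      = 500e-9 + rng.normal(0.0, 0.1e-9, 3)  # 3 correlated photons near 500 ns
events     = np.concatenate([background, laser])

hits = coincidence_filter(events, coincidence_time=1e-9, coincidence_depth=3)
print("accepted event times (ns):", np.round(hits * 1e9, 2))
```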
The integration of silicon photomultiplier (SiPM) and front-end electronics in a suitable optoelectronic CMOS process is a promising approach to increase the versatility of single-photon avalanche diode (SPAD)-based single-photon detectors. By integrating readout amplifiers, the device output capacitance can be reduced to minimize the waveform tail, which is especially important for large-area detectors (>10 × 10 mm²). Possible architectures include a single readout amplifier for the whole detector, which reduces the output capacitance to 1.1 pF at minimal reduction in detector active area. On the other hand, including a readout amplifier in every SiPM cell would greatly improve the total output capacitance by minimizing the influence of parasitic capacitance from the metal routing, but would require a prohibitive amount of detector area. As a tradeoff, the proposed detector features one readout amplifier for each column of the detector matrix to allow for a moderate reduction in output capacitance while allowing the electronics to be placed in the periphery of the active detector area. The presented detector, with a total size of 1.7 × 1.0 mm², features 400 cells with a 50 μm pitch, where the signal of each column of 20 SiPM cells is summed in a readout channel. The 20 readout channels are subsequently summed into one output channel to allow the device to be used as a drop-in replacement for commonly used analog SiPMs.
Optical inspection systems require fast image acquisition at significantly enhanced resolution when utilized for advanced machine vision tasks. Examples are quality assurance in print inspection, printed circuit board inspection, wafer inspection, real-time surveillance of railroad tracks, and in-line monitoring in flat panel fabrication lines. Ultra-high speed is an often-demanded feature in modern industrial production facilities, especially where it comes to high-volume production. A novel technology in this context is the high-speed sensor for line-scan camera applications presented in this paper, with line rates of up to 200 kHz (tri-linear RGB) and 600 kHz (b/w). At this speed, the multi-line-scan sensor provides full color images with, e.g., a spatial resolution of 50 μm at a transport speed of 10 m/s. In contrast to conventional Bayer-pattern or three-chip approaches, the sensor presented here utilizes the tri-linear principle, where the color filters are organized line-wise on the chip. With almost 100% fill factor, the tri-linear technology assures high image quality because of its robustness against aliasing and Moiré effects, leading to improved inspection quality, fewer false positives and thus less waste in the production lines.
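The quoted resolution and speed figures can be cross-checked with a one-line calculation, sketched below; the numbers are taken directly from the abstract.

```python
# Quick consistency check of the quoted line-scan figures (illustrative only).
transport_speed = 10.0       # object transport speed in m/s
line_rate_rgb   = 200_000    # tri-linear RGB line rate in lines/s
line_rate_bw    = 600_000    # monochrome line rate in lines/s

# Spatial sampling along the transport direction is speed divided by line rate.
res_rgb = transport_speed / line_rate_rgb * 1e6   # in micrometers
res_bw  = transport_speed / line_rate_bw  * 1e6

print(f"RGB sampling pitch: {res_rgb:.0f} um per line")   # ~50 um, matching the text
print(f"B/W sampling pitch: {res_bw:.1f} um per line")    # ~16.7 um
```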
We present our latest results concerning CMOS Single-Photon Avalanche Diode (SPAD) arrays for high-throughput parallel single-photon counting. We exploited a high-voltage 0.35 μm CMOS technology in order to develop low-noise CMOS SPADs. The Dark Count Rate is 30 cps at room temperature for 30 μm devices, increases to 2 kcps for 100 μm SPADs, and to just 100 kcps for 500 μm ones. Afterpulsing is less than 1% for hold-off times longer than 50 ns, thus allowing high count rates to be reached. Photon Detection Efficiency is >50% at 420 nm, >40% below 500 nm, and is still 5% at 850 nm. Timing jitter is less than 100 ps (FWHM) in SPADs with active area diameters up to 50 μm.
We developed CMOS SPAD imagers with 150 μm pixel pitch and 30 μm SPADs. A 64 × 32 SPAD array is based on pixels including three 9-bit counters for smart phase-resolved photon counting at up to 100 kfps. A 32 × 32 SPAD array includes 1024 10-bit Time-to-Digital Converters (TDCs) with 300 ps resolution and 450 ps single-shot precision, for 3D ranging and FLIM. We also developed linear arrays with up to 60 pixels (with 100 μm SPADs, 150 μm pitch and in-pixel 250 ps TDCs) for time-resolved parallel spectroscopy with high fill factor.
The performance of a fabricated CMOS line sensor based on the lateral drift-field photodiode (LDPD) concept [1] is described. A new pixel structure was designed to decrease the charge transfer time across the photoactive area. Synopsys TCAD simulations were performed to design a proper intrinsic lateral drift field within the pixel. The line sensor was fabricated in a 0.35 μm CMOS technology and further characterized using a tailored photon-transfer method [2] and the EMVA 1288 standard [3]. Basic parameters such as spectral responsivity, photo-response non-uniformity, and dark current were measured on fabricated sensor samples. Special attention was paid to the characterization of the charge transfer time [4] and to the evaluation of crosstalk between neighboring pixels, two major concerns addressed during the development. It is shown that the electro-optical characteristics of the developed line sensor are comparable to those delivered by CCD line sensors available on the market, which are normally superior in performance to their CMOS-based counterparts, while the CMOS sensor offers additional features such as time gating, non-destructive readout, and charge accumulation over several cycles: approaches used to enhance the signal-to-noise ratio (SNR) of the sensor output.
We designed and characterized Silicon Single-Photon Avalanche Diodes (SPADs) fabricated in a high-voltage 0.35 μm CMOS technology, achieving state-of-the-art low Dark Count Rate (DCR), very large diameter, and extended Photon Detection Efficiency (PDE) in the near ultraviolet. So far, different groups have fabricated CMOS SPADs in scaled technologies, but with many drawbacks in active area dimensions (just a few micrometers), excess bias (just a few volts), DCR (many hundreds of counts per second, cps, for small 10 μm devices), and PDE (just a few tens of percent in the visible range). The novel CMOS SPAD structures with 50 μm, 100 μm, 200 μm, and 500 μm diameters can be operated at room temperature and show DCR of 100 cps, 2 kcps, 20 kcps, and 100 kcps, respectively, even when operated at 6 V excess bias. Thanks to this excellent performance, these large CMOS SPADs are exploitable in monolithic SPAD-based arrays with on-chip CMOS electronics, e.g. for time-resolved spectrometers with no need for microlenses (thanks to the high fill factor). The smaller CMOS SPADs, in contrast, e.g. the 10 μm devices with just 3 cps at room temperature and 6 V excess bias, are viable candidates for dense 2D CMOS SPAD imagers and 3D time-of-flight ranging chips.
Combined 2D imaging and 3D ranging sensors provide useful information at both long (a few kilometers) and short (a few tens of meters) distances in security applications. To this aim, we designed two different monolithic imagers in a cost-effective 0.35 μm CMOS technology, based on Single-Photon Avalanche Diodes (SPADs), for long-range time-of-flight (TOF) and short-range phase-resolved depth ranging. The single pixel consists of a SPAD (30 μm diameter), a quenching circuit, and a Time-to-Digital Converter (TDC) for TOF measurements or three up/down synchronized counters for phase-resolved depth assessments. Such smart pixels operate in two different modalities: single-photon counting for 2D "intensity" images, and either photon timing or phase-resolved photon counting for 3D "depth" images. In 2D imaging, each pixel has a counter that accumulates the number of photons detected by the SPAD, thus providing single-photon-level sensitivity and a high (100 kframe/s) frame rate. In the TOF 3D imager, each pixel measures the photon arrival time with 312 ps resolution, thanks to a two-stage TDC (with a 6-bit coarse counter plus a 4-bit fine interpolator) with a 320 ns full-scale range. The resulting spatial resolution is 9 cm within a 50 m range, centered at any user-selectable distance (e.g. 100 m – 5 km), with linearity of DNLrms = 4.9% LSB and INLrms = 11.7% LSB, and 175 ps precision. In the phase-resolved 3D imager, the in-pixel electronics measures the phase difference between the modulated light emitted by a laser and the back-reflected light, with both continuous-wave and pulsed-light modulation techniques.
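The two-stage TDC figures quoted above can be checked with a short calculation, sketched below, assuming a nominal 312.5 ps LSB; the values are illustrative and not taken from the chip documentation.

```python
# Quick check of the two-stage TDC figures (assuming a nominal 312.5 ps LSB).
C = 299_792_458.0

coarse_bits, fine_bits = 6, 4
lsb = 312.5e-12                               # fine interpolator resolution

total_codes  = 2 ** (coarse_bits + fine_bits) # 1024 TDC codes
full_scale   = total_codes * lsb              # ~320 ns, matching the quoted range
depth_window = full_scale * C / 2             # ~48 m unambiguous depth window
lsb_depth    = lsb * C / 2                    # ~4.7 cm per TDC code

print(f"TDC codes          : {total_codes}")
print(f"full-scale range   : {full_scale * 1e9:.0f} ns")
print(f"depth window       : {depth_window:.1f} m")
print(f"depth per TDC code : {lsb_depth * 100:.1f} cm")
```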
The growing interest in fast, compact, and cost-effective 3D ranging imagers for automotive applications has prompted the exploration of many different techniques for 3D imaging and the development of new systems for this purpose. CMOS imagers that exploit phase-resolved techniques provide accurate 3D ranging with no complex optics and are rugged and cost-effective. Phase-resolved techniques indirectly measure the round-trip time of the light emitted by a laser and backscattered from a distant target by computing the phase delay between the modulated light and the detected signal. Single-photon detectors, with their high sensitivity, allow the scene to be actively illuminated with low-power excitation (less than 10 W under diffuse daylight illumination). We report on a 4 × 4 array of CMOS SPADs (Single-Photon Avalanche Diodes) designed in a high-voltage 0.35 μm CMOS technology for pulsed modulation, in which each pixel computes the phase difference between the laser pulse and the reflected pulse. Each pixel comprises a high-performance 30 μm diameter SPAD, an analog quenching circuit, two 9-bit up/down counters, and memories to store data during the readout. The first counter counts the photons detected by the SPAD in a time window synchronous with the laser pulse and integrates the whole echoed signal. The second counter accumulates the number of photons detected in a window shifted with respect to the laser pulse, and acquires only a portion of the reflected signal. The array is read out with a global-shutter architecture, using a 100 MHz clock; the maximum frame rate is 3 Mframe/s. One way such a pair of counter values can be combined into a distance estimate is sketched after this abstract.
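The sketch below follows a common pulsed indirect time-of-flight formulation; the abstract does not specify the on-chip computation, so the pulse width, the window convention, and all names here are assumptions.

```python
# Illustrative sketch of how a two-counter gated scheme can yield a distance
# estimate (common pulsed indirect-ToF formulation; names and window convention
# are assumptions, not the chip's documented processing).
C = 299_792_458.0

PULSE_WIDTH = 10e-9     # assumed laser pulse width (10 ns)

def distance_from_counts(n_total, n_shifted, n_background, window_shift):
    """Estimate target distance from the full-echo counter (n_total) and the
    shifted-window counter (n_shifted), given a per-window background estimate.
    The shifted window is assumed to open `window_shift` seconds after the laser
    pulse and to capture only the trailing part of the echo."""
    sig_total   = n_total   - n_background
    sig_shifted = n_shifted - n_background
    if sig_total <= 0:
        return None
    fraction = min(max(sig_shifted / sig_total, 0.0), 1.0)
    # The captured fraction encodes where the echo sits relative to the window
    # edge: fraction = 1 means the echo arrived entirely after the window opened.
    delay = window_shift - (1.0 - fraction) * PULSE_WIDTH
    return C * delay / 2.0

# Example: 2/3 of the echo photons fall into a window opening 20 ns after emission.
d = distance_from_counts(n_total=1000, n_shifted=700, n_background=100,
                         window_shift=20e-9)
print(f"estimated distance: {d:.2f} m")   # ~2.5 m under these assumptions
```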
A 3D CMOS imager based on time-of-flight (TOF) has been developed and successfully tested. It uses an active pulsed class 1 laser operating at 905 nm to illuminate a 3D scene. The scene depth is determined by measuring the travel time of reflected pulses using a fast on-chip synchronous shutter. A so-called "Multiple Double Short Time Integration" (MDSI) scheme enables suppression of the background illumination and correction for reflectivity variations of the scene objects. The sensor chip contains two pixel lines, each pixel containing twin photodiodes, so the chip comprises 4×64 sensors. The chip allows two operating modes: the first is the binning mode (mode0 and mode1 are activated), where the twin pixels are short-circuited (two lines on the die) and the average signal is measured. The second is the high-resolution mode (either mode0 or mode1 is activated), in which the pixels operate separately (four lines on the die). The chip has been realized in a 0.5 μm n-well standard CMOS process. The pixel pitch is 130 μm. To achieve a good fill factor, the readout circuitry is located at the sides of the chip.
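To illustrate how a double short-time integration can both suppress background and cancel reflectivity, the Python sketch below shows one common formulation of such shutter-based measurements; the abstract does not detail the actual MDSI processing, so the pulse width, window convention, and variable names are assumptions.

```python
# Illustrative sketch of a double short-time integration distance estimate
# (one common formulation; not the documented MDSI processing).
C = 299_792_458.0

PULSE_WIDTH = 30e-9   # assumed laser pulse width

def mdsi_distance(u_short, u_long, u_bg_short, u_bg_long):
    """Estimate distance from two shutter integrations of the echo.
    u_short: signal in a short window closing one pulse width after emission,
    u_long : signal in a long window covering the full echo,
    u_bg_* : corresponding integrations with the laser off (background)."""
    sig_short = u_short - u_bg_short   # background suppression
    sig_long  = u_long  - u_bg_long
    if sig_long <= 0:
        return None
    # The ratio is independent of target reflectivity: a late-arriving echo
    # contributes less to the short window than to the long one.
    ratio = min(max(sig_short / sig_long, 0.0), 1.0)
    delay = (1.0 - ratio) * PULSE_WIDTH
    return C * delay / 2.0

# Example with arbitrary digital numbers from the two shutters.
print(f"estimated distance: {mdsi_distance(700, 1200, 200, 200):.2f} m")  # ~2.25 m
```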
All optoelectronic detectors today are based on either integrating or demodulating readout approaches. In this paper we introduce an optical detector that combines these two readout methods in situ, thus enabling simultaneous demodulation and image acquisition of the impinging light signal. Measurement results of a realized chip for optical storage systems with six demodulating paths and a 5×5 matrix of integrating pixels are presented. Furthermore, approaches for system integration of the novel architecture in the field of optical storage and fibre transmission systems are described.