KEYWORDS: Light sources and illumination, Dark current, Pulse signals, Sensors, Interference (communication), Photocurrent, Logic, Linear filtering, Information visualization, Contour extraction
Biologically inspired event-based vision sensors (EVS) are growing in popularity due to performance benefits including ultra-low power consumption, high dynamic range, data sparsity, and fast temporal response. They efficiently encode dynamic information from a visual scene through pixels that respond autonomously and asynchronously when the per-pixel illumination level changes by a user-selectable contrast threshold ratio, θ. Due to their unique sensing paradigm and complex analog pixel circuitry, characterizing an EVS is non-trivial. The step-response probability curve (S-curve) is a key measurement technique that has emerged as the standard for measuring θ. Though the general concept is straightforward, a thorough understanding of the pixel circuitry and its non-idealities is required to correctly obtain and interpret results. Furthermore, the precise measurement procedure has not been standardized across the field, and resulting parameter estimates depend strongly on methodology, measurement conditions, and biasing, factors that are not generally discussed. In this work, we detail the method for generating accurate S-curves by applying an appropriate stimulus and sensor configuration to decouple second-order effects from the parameter being studied. We use an EVS pixel simulation to demonstrate how noise and other physical constraints can lead to error in the measurement, and develop two techniques that are robust enough to obtain accurate estimates. We then apply best practices derived from our simulation to generate S-curves for the latest-generation Sony IMX636 and interpret the resulting family of curves to correct the apparently anomalous result of previous reports suggesting that θ changes with illumination. Further, we demonstrate that with correct interpretation, fundamental physical parameters such as dark current and RMS noise can be accurately inferred from a collection of S-curves, leading to more accurate parameterization for high-fidelity EVS simulations.
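To make the S-curve idea concrete, the sketch below simulates repeated step stimuli against an idealized pixel model with a Gaussian-noisy threshold and fits a normal cumulative distribution to the measured event probabilities. The threshold and noise values, and the Gaussian noise model itself, are illustrative assumptions rather than the paper's measured parameters or procedure.

```python
# Minimal sketch of S-curve generation against an idealized EVS pixel model.
# Assumed (not from the paper): Gaussian comparator noise with RMS sigma,
# a nominal ON threshold theta, and independent repeated step trials.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(0)
theta_true, sigma_true = 0.25, 0.05    # hypothetical threshold / RMS noise (log units)
contrasts = np.linspace(0.0, 0.5, 26)  # log-intensity step amplitudes
n_trials = 500

# Fraction of trials in which a step of amplitude c exceeds the noisy
# threshold theta + n, n ~ N(0, sigma^2), i.e. fires at least one ON event.
event_prob = np.array([
    np.mean(c > theta_true + sigma_true * rng.standard_normal(n_trials))
    for c in contrasts
])

# Fit a normal CDF to the S-curve: the 50% point estimates theta and the
# spread estimates the RMS noise.
def s_curve(c, theta, sigma):
    return norm.cdf((c - theta) / sigma)

(theta_hat, sigma_hat), _ = curve_fit(s_curve, contrasts, event_prob, p0=[0.2, 0.1])
print(f"estimated theta = {theta_hat:.3f}, RMS noise = {sigma_hat:.3f}")
```

Under this simple model, the 50% crossing of the fitted curve recovers θ and the curve's width recovers the RMS noise, which is why a family of S-curves can be mined for physical parameters as the abstract describes.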
Neuromorphic cameras, or Event-based Vision Sensors (EVS), operate in a fundamentally different way than conventional frame-based cameras. Their unique operational paradigm results in a sparse stream of high temporal resolution output events which encode pixel-level brightness changes with low latency and wide dynamic range. Recently, interest has grown in exploiting these capabilities for scientific studies; however, accurately reconstructing signals from the output event stream presents a challenge due to physical limitations of the analog circuits that implement logarithmic change detection. In this paper, we present simultaneous recordings of lightning strikes using both an event camera and a frame-based high-speed camera. To our knowledge, this is the first side-by-side recording using these two sensor types in a real-world scene with challenging dynamics that include very fast and bright illumination changes. Our goal in this work is to accurately map the illumination to EVS output in order to better inform modeling and reconstruction of events from a real scene. We first combine lab measurements of key performance metrics to inform an existing pixel model. We then use the high-speed frames as signal ground truth to simulate an event stream and refine parameter estimates to optimally match the event-based sensor response for several dozen pixels representing different regions of the scene. These results will be used to predict sensor response and develop methods to more precisely reconstruct lightning and sprite signals for Falcon ODIN, our upcoming International Space Station neuromorphic sensing mission.
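A minimal illustration of the frame-to-event simulation described above might look like the following. The logarithmic change detection with a first-order low-pass photoreceptor stage follows common EVS pixel models; the threshold, cutoff frequency, and the function name frames_to_events are hypothetical choices for the sketch, not the authors' calibrated pipeline.

```python
# Hedged sketch: convert high-speed frames into a simulated event stream
# using log change detection behind a first-order low-pass photoreceptor.
# theta (threshold) and f_c (cutoff, Hz) are illustrative assumptions.
import numpy as np

def frames_to_events(frames, t, theta=0.2, f_c=300.0):
    """frames: (N, H, W) linear intensity; t: (N,) timestamps in seconds.
    Returns a list of (timestamp, y, x, polarity) tuples."""
    eps = 1e-6
    log_i = np.log(frames.astype(np.float64) + eps)
    state = log_i[0].copy()  # low-pass filtered log intensity
    ref = log_i[0].copy()    # per-pixel level memorized at the last event
    events = []
    for k in range(1, len(frames)):
        dt = t[k] - t[k - 1]
        # Discrete first-order low-pass approximation of the photoreceptor.
        alpha = 1.0 - np.exp(-2.0 * np.pi * f_c * dt)
        state += alpha * (log_i[k] - state)
        # Emit one event per full threshold crossing since the last event;
        # a large change within one frame interval can yield several events.
        delta = state - ref
        n_ev = np.floor(np.abs(delta) / theta).astype(int)
        for y, x in zip(*np.nonzero(n_ev)):
            pol = 1 if delta[y, x] > 0 else -1
            for _ in range(n_ev[y, x]):
                events.append((t[k], y, x, pol))
                ref[y, x] += pol * theta
    return events
```

Refining parameters against real sensor output, as the abstract describes, would then amount to adjusting theta, f_c, and similar model terms until the simulated stream best matches the recorded events per pixel.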
Dynamic vision sensors (DVS) represent a promising new technology, offering low power consumption, sparse output, high temporal resolution, and wide dynamic range. These features make DVS attractive for new research areas including scientific and space-based applications; however, more precise understanding of how sensor input maps to output under real-world constraints is needed. Often, metrics used to characterize DVS report baseline performance by measuring observable limits but fail to characterize the physical processes at the root of those limits. To address this limitation, we describe step-by-step procedures to measure three important performance parameters: (1) temporal contrast threshold, (2) cutoff frequency, and (3) refractory period. Each procedure draws inspiration from previous work, but links measurements sequentially to infer physical phenomena at the root of measured behavior. Results are reported over a range of brightness levels and user-defined biases. The threshold measurement technique is validated with test-pixel node voltages, and a first-order low-pass approximation of photoreceptor response is shown to predict event cutoff temporal frequency to within 9% accuracy. The proposed method generates lab-measured parameters compatible with the event camera simulator v2e, allowing more accurate generation of synthetic datasets for innovative applications.
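The first-order low-pass reasoning mentioned above admits a simple closed form: a sinusoidal log-contrast of amplitude C attenuated by |H(f)| = 1/sqrt(1 + (f/f_c)^2) stops producing events once C * |H(f)| < θ, giving a predicted event cutoff of f_c * sqrt((C/θ)^2 - 1). The snippet below evaluates this relation under assumed values; it is a sketch of the approximation, not the paper's measurement procedure.

```python
# Event cutoff frequency predicted by a first-order low-pass photoreceptor
# model: events cease when the attenuated contrast falls below threshold.
# C, theta, and f_c values here are illustrative assumptions.
import numpy as np

def event_cutoff_frequency(C, theta, f_c):
    """Stimulus log-contrast amplitude C, contrast threshold theta, and
    photoreceptor cutoff f_c in Hz. Solves C / sqrt(1 + (f/f_c)^2) = theta."""
    if C <= theta:
        return 0.0  # stimulus never crosses threshold, even at DC
    return f_c * np.sqrt((C / theta) ** 2 - 1.0)

print(event_cutoff_frequency(C=0.5, theta=0.25, f_c=200.0))  # ~346 Hz
```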