Neuromorphic cameras, or Event-based Vision Sensors (EVS), operate in a fundamentally different way from conventional frame-based cameras. Their unique operational paradigm results in a sparse stream of high temporal resolution output events which encode pixel-level brightness changes with low latency and wide dynamic range. Recently, interest has grown in exploiting these capabilities for scientific studies; however, accurately reconstructing signals from the output event stream presents a challenge due to physical limitations of the analog circuits that implement logarithmic change detection. In this paper, we present simultaneous recordings of lightning strikes using both an event camera and a frame-based high-speed camera. To our knowledge, this is the first side-by-side recording using these two sensor types in a real-world scene with challenging dynamics that include very fast and bright illumination changes. Our goal in this work is to accurately map the illumination to EVS output in order to better inform modeling and reconstruction of events from a real scene. We first combine lab measurements of key performance metrics to inform an existing pixel model. We then use the high-speed frames as signal ground truth to simulate an event stream and refine parameter estimates to optimally match the event-based sensor response for several dozen pixels representing different regions of the scene. These results will be used to predict sensor response and develop methods to more precisely reconstruct lightning and sprite signals for Falcon ODIN, our upcoming International Space Station neuromorphic sensing mission.
Dynamic vision sensors (DVS) represent a promising new technology, offering low power consumption, sparse output, high temporal resolution, and wide dynamic range. These features make DVS attractive for new research areas including scientific and space-based applications; however, more precise understanding of how sensor input maps to output under real-world constraints is needed. Often, metrics used to characterize DVS report baseline performance by measuring observable limits but fail to characterize the physical processes at the root of those limits. To address this limitation, we describe step-by-step procedures to measure three important performance parameters: (1) temporal contrast threshold, (2) cutoff frequency, and (3) refractory period. Each procedure draws inspiration from previous work, but links measurements sequentially to infer physical phenomena at the root of measured behavior. Results are reported over a range of brightness levels and user-defined biases. The threshold measurement technique is validated with test-pixel node voltages, and a first-order low-pass approximation of photoreceptor response is shown to predict event cutoff temporal frequency to within 9% accuracy. The proposed method generates lab-measured parameters compatible with the event camera simulator v2e, allowing more accurate generation of synthetic datasets for innovative applications.
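The pixel pipeline the abstract describes (logarithmic photoreceptor, first-order low-pass response, temporal contrast threshold, refractory period) can be illustrated with a minimal simulation sketch. This is not the authors' code or the v2e implementation; all parameter names (`theta`, `f3db`, `t_ref`) and defaults are illustrative assumptions.

```python
import math

def simulate_events(times, intensities, theta=0.2, f3db=300.0, t_ref=1e-4):
    """Toy DVS pixel model: log intensity -> first-order low-pass
    (cutoff f3db, Hz) -> contrast-threshold crossings (theta, log units)
    gated by a refractory period (t_ref, s). Illustrative only."""
    tau = 1.0 / (2.0 * math.pi * f3db)   # low-pass time constant
    filt = mem = math.log(intensities[0])
    last_t = float("-inf")
    events = []
    for t_prev, t, inten in zip(times, times[1:], intensities[1:]):
        dt = t - t_prev
        alpha = dt / (dt + tau)          # first-order IIR coefficient
        filt += alpha * (math.log(inten) - filt)
        # emit at most one event per sample, respecting the refractory period
        if t - last_t >= t_ref and abs(filt - mem) >= theta:
            pol = 1 if filt > mem else -1
            events.append((t, pol))
            mem += pol * theta           # step the reference level by one threshold
            last_t = t
    return events
```

Driving this model with a sinusoidal log-intensity input of increasing frequency would show the event rate rolling off near `f3db`, which is the behavior the cutoff-frequency measurement procedure exploits.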