This PDF file contains the front matter associated with SPIE Proceedings Volume 13045, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
This invited talk surveys the author's experiences over 40 years of imaging in low-light environments, starting with image converter tubes and image intensifiers and transitioning to silicon sensors. The talk gives an overview of best practices and lessons learned in system design, sensor comparison, data collection, active target illumination, scene irradiance measurement, and the generation of ambient lighting environments in controlled laboratory spaces.
In 2023, Richards and Hübner proposed the silux as a new standard unit of irradiance for the full 350-1100 nm band, specifically addressing the mismatch between the photopic response of the human eye and the spectral sensitivity of new low-light silicon CMOS sensors with enhanced NIR response. This spectral mismatch between the response of the human eye and the spectral sensitivity of the sensor can lead to significant errors when the traditional lux unit is used to quantify the signal available to a different camera system. In this correspondence, we demonstrate a per-pixel calibration of a camera to create the first imaging siluxmeter. To do this, we developed a comprehensive per-pixel model as well as the experimental and data reduction methods to estimate its parameters. These parameters are then combined into an updated NVIPM measured-system component that provides the conversion factor from device units of DN to silux, lux, and other radiometric units. Additionally, the accuracy of the measurements and modeling is assessed through comparisons to field observations and by validating/transferring the calibration from one low-light camera to another. Following this process, other low-light cameras can be calibrated and applied to scenes such that they may be accurately characterized using silux as the standard unit.
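As a concrete illustration of the DN-to-silux conversion described above, the sketch below applies a linear per-pixel response model to a raw frame. The function name, the gain/offset form, and all numerical values are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Minimal sketch of a per-pixel DN-to-silux conversion, assuming a linear
# response: DN = gain * E_silux * t_int + offset. gain[i,j] and offset[i,j]
# would come from the per-pixel calibration; values here are placeholders.

def frame_to_silux(frame_dn, gain, offset, t_int):
    """Convert a raw frame (DN) to per-pixel silux estimates."""
    return (frame_dn.astype(np.float64) - offset) / (gain * t_int)

rng = np.random.default_rng(0)
frame = rng.integers(200, 4000, size=(4, 4))   # toy raw frame in DN
gain = np.full((4, 4), 1.5e3)                  # DN / (silux * s), assumed
offset = np.full((4, 4), 180.0)                # dark + bias level in DN, assumed
print(frame_to_silux(frame, gain, offset, t_int=0.01))
```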
Interest in the eSWIR band is growing due to focal plane array technology advancements with mercury cadmium telluride and type-II superlattice materials. As design and fabrication processes improve, eSWIR detector size, weight, and power can now be optimized. For some applications, a smaller detector size is desirable. However, reduced solar illumination in the 2 to 2.5 μm spectral range imposes a fundamental limit on passive imaging performance in the eSWIR band, where the resolution benefit of small detectors cannot outweigh the reduced SNR in photon-starved environments. This research explores the underlying theory using signal-to-noise ratio radiometry and modeled target discrimination performance to assess the optimal detector size for eSWIR as a function of illumination conditions. Finally, we model continuous-wave laser illumination in the eSWIR band to compare the effect of detector size on active and passive imaging for long-range object discrimination.
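The underlying trade can be summarized with a first-order, shot-noise-limited argument. Collected signal scales with detector area, so for in-band photon irradiance $E_q$, detector size $d$, and integration time $t_{int}$:

$$ S \propto E_q\, d^{2}\, t_{int}, \qquad \mathrm{SNR} \approx \frac{S}{\sqrt{S}} = \sqrt{S} \propto d. $$

Halving the detector size thus halves the photon-limited SNR while improving sampling; the balance between these two effects under a given illumination level is what the paper's Fλ/d analysis quantifies.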
Time-limited search modeling has been an important aspect of sensor design for over two decades. In past work, we introduced a model that incorporated camera matrix theory into a pre-existing time-limited model for moving-sensor scenarios, for the purpose of optimizing sensor orientation for a given platform speed and height. During the introduction of this model, it was established that optimization in this way requires balancing sensor range to target against time on target. In this study, we further explore the capabilities of this new model by optimizing sensor configuration in a few selected scenarios, with a focus on how sensor orientation, platform speed, and platform height interact with one another.
One of the primary activities in emissive infrared imager design is the trade between midwave infrared (MWIR) and longwave infrared (LWIR) for the application. Applications include target acquisition (both target search and target identification), threat warning, aircraft detection, and pilotage. There has been a great deal of work on the characterization of MWIR versus LWIR target signatures; there has been much less work on the characterization of scene (sometimes called background) contrast. The scene contrast of the background can be just as important to the performance of the sensor in an application. A few examples: 1) high scene contrast with high clutter contrast can make target search much more difficult, 2) high scene contrast can enhance location estimation in image-based navigation, and 3) high scene contrast with mobility sensors can enhance the flying performance of a rotorcraft pilotage system.
This paper discusses the differences observed in scene contrast between the MWIR and LWIR bands, providing a scene contrast characterization for emissive infrared applications. Radiometrically calibrated imagery is acquired with MWIR and LWIR cameras in various environments, and the measured MWIR and LWIR scene contrast is compared. The radiometric comparison is performed in terms of the standard deviation of the scene's equivalent blackbody temperature. Comparisons are provided under different conditions such as rural versus urban and day versus night. This comparison provides the infrared system designer with the means to perform detailed engineering trades.
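A minimal sketch of the contrast figure named above, the standard deviation of the scene's equivalent blackbody temperature, is shown below. The function name, optional masking step, and synthetic frames are illustrative; the paper works from radiometrically calibrated imagery.

```python
import numpy as np

# Scene contrast = standard deviation of the equivalent blackbody
# temperature map (K). temp_map is assumed to be a calibrated frame
# already converted to apparent temperature in kelvin.

def scene_contrast(temp_map, valid_mask=None):
    t = temp_map if valid_mask is None else temp_map[valid_mask]
    return float(np.std(t))

# Toy MWIR/LWIR frames with different contrast levels (assumed values).
mwir = 300.0 + 2.0 * np.random.default_rng(1).standard_normal((480, 640))
lwir = 300.0 + 3.5 * np.random.default_rng(2).standard_normal((480, 640))
print(f"MWIR contrast: {scene_contrast(mwir):.2f} K")
print(f"LWIR contrast: {scene_contrast(lwir):.2f} K")
```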
This manuscript presents a systematic approach for developing new measurement and evaluation techniques through modeling and simulation. A proposed sequence of steps is outlined, starting with defining the desired measurable(s), going through model development and exploration, conducting experiments, and publishing results. This framework, based on the scientific method, provides a structured process for creating robust, well-defined measurement procedures before experiments are performed. The approach is demonstrated through a case study on measuring camera-to-display system latency. A simulation tool is described that enables exploration of how different experimental parameters, such as camera temporal response, display properties, and source characteristics, impact the measurement and associated uncertainties. Several examples illustrate using the tool to establish notional guidelines for optimizing the experimental design. The simulation-driven process aims to increase confidence in new measurement techniques by incrementally refining models, identifying assumptions, and evaluating potential error sources prior to costly physical implementation. In support of the reproducible-research effort, the tools developed for this effort are available on the MathWorks File Exchange.
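The case study lends itself to a compact simulation. The toy sketch below is an independent Python illustration in the spirit of the described tool (the published tool is on the MathWorks File Exchange): a step change from a source is propagated through first-order camera and display temporal responses, and latency is read off a threshold crossing. The time constants and the 50% threshold are assumptions.

```python
import numpy as np

# First-order (RC-style) temporal response applied to a sampled signal.
def first_order_response(signal, tau, dt):
    out = np.zeros_like(signal)
    alpha = dt / (tau + dt)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

dt = 1e-4                            # 0.1 ms simulation step
t = np.arange(0.0, 0.2, dt)
source = (t >= 0.05).astype(float)   # source turns on at 50 ms
# Camera (4 ms) then display (8 ms) time constants, both assumed.
chain = first_order_response(first_order_response(source, 4e-3, dt), 8e-3, dt)
latency = t[np.argmax(chain >= 0.5)] - 0.05
print(f"Estimated camera-to-display latency: {latency*1e3:.1f} ms")
```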
Coherent illumination of an optically rough surface creates random phase variations in the reflected electric field. Free-space propagation converts these phase variations into irradiance variations in both the pupil and image planes, known as pupil- and image-plane speckle. Infrared imaging systems are often parameterized by the quantity Fλ/d, which relates the cutoff frequency passed by the optical diffraction MTF to the frequencies passed by the detector MTF. We present both analytical expressions and Monte-Carlo wave-optics simulations to determine the relationship between image-plane speckle contrast and the first-order system parameters underlying Fλ/d (focal length, aperture size, wavelength, and detector size). For designers of active imaging systems, this paper provides guidance on using Fλ/d for speckle mitigation in system design.
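For reference, the first-order relation at play can be stated compactly. For fully developed speckle integrated over a detector spanning $M$ independent speckle correlation cells (the image-plane speckle size scaling with $F\lambda$), the speckle contrast is approximately

$$ C \equiv \frac{\sigma_I}{\langle I \rangle} \approx \frac{1}{\sqrt{M}}, \qquad M \sim \max\!\left(1,\ \left(\frac{d}{F\lambda}\right)^{2}\right), $$

so smaller $F\lambda/d$ (more speckle cells averaged per detector) reduces speckle contrast. This is the standard aperture-averaging approximation; the paper's analytical expressions and simulations refine it.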
Computer vision has become crucial to autonomous systems, helping them navigate complex environments. Combining it with geospatial data further provides the capability to geolocate the system when GPS is not available or trusted. A test bed was built to characterize the visibility of radio and cellular towers from a ground vehicle across all atmospheric transmission bands. These targets are exemplary features because of their visibility over long distances and their surveyed geolocations. Contrast measurements of the targets were characterized and compared in each spectral window under different environmental conditions. Human-perception experiments were used to build NVIPM models that provide predicted range performance for each band.
The use of a synthetic observer model has shown promise for range performance analysis of novel imaging systems. This has many advantages over traditional analytical range models, chiefly stemming from the fact that it determines performance from (real or simulated) imagery directly, rather than from a pre-specified list of parameters. Our synthetic observer approach operates over a Triangle Orientation Discrimination (TOD) target and observer task, using a template correlator for target identification. The synthetic observer performance is taken as a proxy for human target identification performance, enabling expedient evaluation of image processing pipelines, sensor configurations, environmental conditions, etc. In prior work we have explored how the template-correlator-based synthetic observer performs on flat background, flat target imagery. In this work, we apply the same synthetic observer design to natural backgrounds. Performance is compared to that of human observers on the same perception task.
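A schematic version of such a template correlator is sketched below: a TOD chip is classified as the orientation whose zero-mean template correlation score is highest. The wedge-shaped "triangles" and all parameters are simplifications for illustration; the actual synthetic observer operates on calibrated TOD imagery.

```python
import numpy as np

# Schematic TOD template correlator: four orientation templates, pick the
# one with the highest zero-mean correlation against the presented chip.

def make_triangle(orientation, size=16):
    """Binary wedge standing in for a TOD triangle chip."""
    tri = np.tril(np.ones((size, size)))
    return {"up": tri, "down": tri[::-1],
            "left": tri.T, "right": tri.T[::-1]}[orientation]

def classify(chip, templates):
    chip = chip - chip.mean()
    scores = {k: float(np.sum(chip * (t - t.mean())))
              for k, t in templates.items()}
    return max(scores, key=scores.get)

templates = {o: make_triangle(o) for o in ("up", "down", "left", "right")}
noisy = make_triangle("left") + 0.5 * np.random.default_rng(3).standard_normal((16, 16))
print(classify(noisy, templates))   # 'left' in most noise draws
```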
This paper provides an overview of the development of different models for determining the range performance of infrared imaging systems. It begins with the original motivation for these models: comparing the detection, recognition, and identification ranges of different infrared imaging systems. As these imaging systems developed, the performance models had to progress as well, and this progression is described. The rapidly evolving complexity of imaging systems leads to a more diverse approach to the comparison of these new systems. I will provide some examples of how to tackle the new challenges posed by the development of image enhancement procedures.
With the improvement of infrared detector technology over the last several decades, traditional design trades regarding the use of different wavebands in optical systems are becoming less and less applicable. New detector technology is allowing for extended and even bridged wavebands, where previous detectors were limited to only a single waveband. These bridged-waveband cameras, or superband cameras, contain detectors with response over large spectral spans, allowing them to take advantage of the unique properties of multiple wavebands. This type of system is of particular interest when the superband contains both the short-wave infrared (SWIR) waveband (where most of the signal comes from reflected light) and the midwave infrared (MWIR) waveband (where most of the signal comes from emitted light). Such a superband system allows the combination of reflected and emitted light on a single detector, opening new system-level optical design trades across many fields and disciplines. Presented is a comparison of reflected and emitted radiometric signal levels for four filtered wavebands using a 1.5 μm to 5.4 μm superband imaging system: (1) with a 1.9 μm SWIR shortpass filter, (2) with a 2 μm to 2.5 μm extended SWIR (eSWIR) bandpass filter, (3) with a 3 μm MWIR longpass filter, and (4) with no filter (i.e., full superband response). The comparison in each of the four wavebands is repeated under four solar illumination conditions: full daylight, clouds, dusk, and night.
For various applications, such as handheld imaging systems and cameras mounted on vehicles or flying platforms, unavoidable motion and vibrations of the camera may result in smeared or blurred images or shaky videos.
This study aims to evaluate metrics for the measurement of camera vibrations in image sequences, considering triangles and bars as test patterns. The focus is on objective metrics for video stabilization, which are designed to objectively evaluate whether video stabilization was able to eliminate objectionable visual movement.
The metrics are evaluated for simulated image sequences captured by an artificially moved camera. The sequences vary in properties such as the sensor noise of the camera and the temporal frequency of the camera vibrations. We analyze the effect of these properties on the metrics' behavior. First results using recorded data from thermal imagers are presented as well. The findings provide insights into the efficacy of different video stabilization metrics on simulated sequences with varying properties.
Traditional targeting tasks consist of detection, recognition, and identification (DRI). Increasingly, sensing systems are being asked to go beyond these traditional categories for the purpose of distinguishing targets from decoys. The difficulty of this task is dependent both on the sensing system used and the fidelity of the decoy. In this paper we examine how the task of distinguishing target from decoy with imaging sensors fits within the traditional task difficulty description in models such as the Night Vision Integrated Performance Model (NVIPM). We discuss the types of decoys an imaging sensor might encounter. We introduce the idea of interrogation as a task. Using NVIPM and the tracked vehicle target identification task as a baseline, we examine the space of task difficulty for possible insights into the task difficulty of interrogation for imaging sensors. Examining several sensors spanning visible through thermal infrared, we calculate the performance as a function of task difficulty. From this, we discuss the implications and possible limitations of using imaging sensors for interrogation.
Models for triangle orientation discrimination (TOD) have been proposed for performance evaluation of thermal imaging devices. For thermal imager assessment, human visual system models for TOD have been rigorously validated for a wide variety of image distortions through observer studies. As the conduct of observer trials is time-consuming and costly, AI-based TOD models for imager assessment have also been presented. Recently, camera systems with embedded automatic target recognition (ATR) have become increasingly important. So far, it is an open question whether the simple TOD task, a classification problem with 4 classes, is suitable for providing similar evaluations and rankings for these thermal imaging devices as algorithms for more complex and slower tasks like object detection, e.g. for ATR. A widely used framework for object detection is "You Only Look Once" (YOLO).
In this work, performance assessments for TOD models and YOLO-based models are compared. Known image databases as well as synthetic images with triangles and natural backgrounds are degraded according to a unified device description with blur and image noise. The blur caused by optical diffraction and detector footprint is varied by multiple aperture diameters and detector sizes through the application of modulation transfer functions, while the image noise is varied by multiple noise error levels as Gaussian sensor noise. The TOD models are evaluated for the degraded images with triangles, while the YOLO models are applied to the degraded variants of the image databases. For different degradation parameters, the model precisions of the TOD models are compared to figures of merit of the YOLO models such as the mean average precision (mAP). Statistical uncertainties of the performance ranking for different degradation parameters of cameras and both TOD and YOLO models are investigated.
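A minimal sketch of such a unified degradation step is given below, using a Gaussian kernel as a stand-in for the cascaded diffraction/detector MTF product plus additive Gaussian sensor noise; the study applies the actual system MTFs, so both the kernel and all values here are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Apply blur (MTF surrogate) followed by additive Gaussian sensor noise,
# the same degradation being applied to TOD chips and detection databases.

def degrade(image, blur_sigma_px, noise_sigma, seed=0):
    blurred = gaussian_filter(image.astype(np.float64), blur_sigma_px)
    noise = noise_sigma * np.random.default_rng(seed).standard_normal(image.shape)
    return np.clip(blurred + noise, 0.0, 255.0)

img = np.full((64, 64), 128.0)
img[24:40, 24:40] = 200.0          # simple bright square target
for sigma_b, sigma_n in [(0.5, 2.0), (1.5, 8.0), (3.0, 16.0)]:
    out = degrade(img, sigma_b, sigma_n)
    print(f"blur={sigma_b}, noise={sigma_n}: target mean {out[24:40, 24:40].mean():.1f}")
```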
The Jones detectivity metric, denoted D*, is commonly used to compare thermal camera focal plane arrays. D* projects the thermal noise back to a reference time (1 second) and area (1 cm^2), thereby normalizing its bandwidth. This makes it easier to compare the sensitivity of different thermal detectors. Here we extend the basic idea of this bandwidth normalization to low-light cameras, using a signal-to-noise ratio (SNR) metric denoted SNR_D*. One goal of SNR_D* is to compare the performance of the low-light sensor in the darkest of conditions, and therefore a dark version is defined using the absolute noise floor of the camera. The signal and noise are normalized by projecting them back to the scene (through the optics) into angular space. It is argued that projecting the SNR back to the scene makes the metric capable of comparing complete low-light camera systems, including the lens. We also explore the SNR defined and specified for image intensifier tubes, and show why it is not a good predictor of the performance of low-light cameras.
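For context, the Jones detectivity referenced above is

$$ D^{*} = \frac{\sqrt{A_d\,\Delta f}}{\mathrm{NEP}} \qquad \left[\mathrm{cm}\,\sqrt{\mathrm{Hz}}\,\mathrm{W}^{-1}\right], $$

where $A_d$ is the detector area, $\Delta f$ the noise-equivalent bandwidth, and NEP the noise-equivalent power. The normalization to 1 cm² and 1 Hz is exactly the bandwidth normalization the paper carries over to SNR_D*.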
This paper takes an initial look at the effect of variations in a sensor's Fλ/d metric value (FLD) on the performance of the Yolo_v3 (You Only Look Once) algorithm for object classification. The Yolo_v3 algorithm is initially trained using static imagery provided in the commonly available Advanced Driver Assist System (ADAS) dataset. Image processing techniques are then used to degrade the image quality of the test data set, simulating detector-limited to optics-limited imagery. The degraded test set is then used to evaluate the performance of Yolo_v3 for object classification. Results of Yolo_v3 are presented for the varying levels of image degradation. An initial summary of the results is discussed along with recommendations for evaluating an algorithm's performance using a sensor's FLD metric value.
Optical turbulence in the atmosphere causes defocus, blur, and wander of images captured over long distances, which can significantly degrade their quality. Turbulence is a manifestation of variations in the index of refraction, which are caused by local variations in air temperature, pressure, humidity, gas content, and other factors. Turbulence can be quantified by the refractive index structure parameter Cn^2. Simulation of images after propagation through an atmosphere of a specific Cn^2, along with measurement of the observed Cn^2 from images, is thus of interest for a variety of agricultural, environmental, and defense applications. We discuss the generation of simulated imagery after propagation through an atmosphere of a defined Cn^2 using various algorithms, then examine methods to determine the observed Cn^2 from the generated images. Finally, we choose and test an algorithm to generate images and another to estimate Cn^2, then compare and contrast the observed Cn^2 to the defined Cn^2 in each case to observe how the simulation method and measurement method perform.
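One standard link between Cn^2 and image quality, useful when checking simulated against defined turbulence strength, is the plane-wave Fried parameter for a constant-Cn^2 path of length $L$:

$$ r_0 = \left(0.423\, k^{2}\, C_n^{2}\, L\right)^{-3/5}, \qquad k = \frac{2\pi}{\lambda}. $$

Smaller $r_0$ (stronger turbulence or a longer path) corresponds to stronger blur and wander in the generated imagery.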
Traditional helmet-mounted devices (HMDs), such as Night Vision Goggles, are direct-view systems where parallax, or image offset, is only along the line of sight and the impact on user performance is minimal. As HMDs transition to adding digital cameras while maintaining direct-view capabilities, the sensor must be placed outside of the user's line of sight. These offsets create more significant parallax and can greatly impact a user's ability to navigate and to interact with objects at close distances. Parallax error can be easily corrected for a fixed distance to an object, but the error progressively increases when viewing objects that are closer or farther than the selected distance. More complicated methods can be employed, such as ground-plane re-projection or correction based on a depth sensor, but those methods each have their own disadvantages. Factors such as alignment accuracy across the field of view and nauseating effects must also be considered. This paper describes the development of an image simulation representing parallax error in a virtual reality headset, with the ability to apply different correction techniques with varying parameters. This simulation was used with a group of observers who were asked to move around a scene and qualitatively evaluate the effectiveness of each correction method with different combinations of sensors. Questions focused on their ability to complete certain tasks and their subjective experiences while using each method. Results from this evaluation are presented and recommendations are made for optimal settings and future studies.
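The fixed-distance correction and its residual error can be captured in one small-angle relation. For a sensor offset (baseline) $b$ from the user's line of sight and a correction calibrated at distance $R_0$, the residual angular parallax when viewing an object at range $R$ is approximately

$$ \theta_{err}(R) \approx b\,\left|\frac{1}{R} - \frac{1}{R_0}\right|. $$

For example, with an assumed $b = 5$ cm and $R_0 = 5$ m, an object at 0.5 m is displaced by roughly 0.09 rad (about 5 degrees), which is why close-range interaction suffers most.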
Continuous Wave (CW) and Laser Range Gated (LRG) are two widely used and effective system design techniques for active imaging systems aimed at long-range target discrimination and acquisition. Two more recent system design methods are Continuous Wave Time-of-Flight (CWToF) and Laser Range Gated with Range Resolve (LRGRR) active imaging systems. While these two techniques involve a higher degree of complexity in terms of system design, they also aim to provide the user with higher resolution and more sensitive imaging capabilities. In this study, we quantify the sensitivity and range resolution benefits of these more complex methods in comparison to their more fundamental counterparts (CW and LRG). We provide a performance model comparing these methods and discuss some environmental and situational circumstances in which any of these approaches would prove superior to the others.
Infrared pilotage sensors enhance pilots' situational awareness, aiding in obstacle avoidance and providing visibility during night flights or in degraded visual environments. Pilotage with a rotorcraft tends to be more difficult than with fixed-wing aircraft, as rotorcraft generally fly at lower altitude, which increases the angular velocities of the ground below and other nearby features with respect to the pilotage sensor. This increase in the scene's angular velocity increases motion blur in the imagery. This increased blur due to integration time, or time constant, lowers the system's Modulation Transfer Function (MTF), thus reducing the Target Task Performance (TTP) metric. This lower TTP corresponds to a decrease in the system's pilotage performance. There has not been a straightforward method for including motion blur in the degradation of the TTP metric for pilotage performance. In this study, data are collected from a helicopter using well-characterized cameras to capture two levels of motion blur in pilotage imagery from two different look angles. The imagery and Inertial Measurement Unit (IMU) data are then used to calculate the amount of angular motion and jitter in milliradians, which is then used to calculate a new degraded MTF and TTP metric. The addition of motion blur to the TTP metric for pilotage is essential for accurately predicting and evaluating the pilotage performance of systems on platforms that move quickly and have significant sensor time-constant blur.
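The degradation in question follows the standard linear-motion MTF. For a scene angular velocity $\omega$ and integration time $t_{int}$, the blur extent is $\theta_b = \omega\, t_{int}$ and

$$ \mathrm{MTF}_{motion}(\xi) = \left|\frac{\sin(\pi\,\xi\,\omega\,t_{int})}{\pi\,\xi\,\omega\,t_{int}}\right|, $$

with $\xi$ the angular spatial frequency. A term of this form, with $\omega$ taken from the measured IMU angular rates, is the kind of factor that can be multiplied into the system MTF before recomputing the TTP metric.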
An efficient means to determine a camera's location in a volume is to incorporate fiducials at known locations in the volume and triangulate from their detected locations. ArUco markers are an efficient choice in this model because there are many pre-defined means to locate ArUco markers through open-source software like OpenCV. The algorithms used to determine whether an ArUco marker is present are not always well characterized, yet the algorithm always states definitively either that no ArUco target was found or that one was found at a specific location. Many parameters affect the results of an accurate detection and calculated pose estimation, including system blur, image entropy, input illumination, additional camera attributes, the size of the marker, its orientation, and its distance. Because each of these variables impacts the detection algorithm, each variable space must be tested to determine the operating bounds for a given set of ArUco markers. This correspondence demonstrates a method to quantify ArUco detection performance based upon a simulation that separates each of the previously defined variables. Using virtually constructed imagery that simulates these effects, it is possible to create a sufficiently large data set to give a definitive performance for ArUco target detection as a function of the OpenCV algorithm.
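The detection step being characterized is, at its core, a single OpenCV call. The sketch below uses the ArucoDetector API from OpenCV 4.7+ (earlier versions expose the equivalent cv2.aruco.detectMarkers function); the synthetic frame and Gaussian blur stand in for one point in the simulated variable sweep described above.

```python
import cv2
import numpy as np

# Build an ArUco detector with default parameters (OpenCV >= 4.7 API).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Render marker id 7 into a blank frame, then blur to emulate system blur.
marker = cv2.aruco.generateImageMarker(dictionary, 7, 120)
frame = np.full((480, 640), 255, dtype=np.uint8)
frame[100:220, 200:320] = marker
frame = cv2.GaussianBlur(frame, (5, 5), 1.2)   # one point in the blur sweep

corners, ids, rejected = detector.detectMarkers(frame)
print("found:", None if ids is None else ids.ravel(), "| rejected:", len(rejected))
```

Sweeping the blur sigma, illumination scaling, marker size, and pose over synthetic frames like this one, and logging found/not-found against ground truth, yields the detection-performance surfaces the study targets.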
This author wrote a paper in 2003 for the SPIE conference summarizing the development of the procedures used to test and verify the performance of Forward-Looking Infrared (FLIR) systems. In the 20 years since the presentation of that paper, FLIR technology has significantly changed, including many of the key premises and rationale for testing and verification, along with the development of new techniques.
This paper will review the early development of the test techniques that coincided with historic modeling and field testing. Some of the basic theories that served as the cornerstone for early testing and modeling (Johnson Criteria, for instance) have been lost and are not familiar to the test engineers of today. Over the years, new testing techniques have been developed, and new FLIR technologies have emerged (e.g. sampled systems, image processing, etc.). This paper will review the early days of testing and their relation to modeling and field testing data, and will examine new techniques that are paving the way for providing advanced understanding of how the systems of today will work in the field.
Neuromorphic sensors (also known as event-based cameras) behave differently than traditional imaging sensors: they respond only to changes in stimuli as they occur. They typically offer higher dynamic range and higher effective frame rates than traditional imaging systems while using less power, because a pixel only outputs data when a stimulus occurs at that pixel. Neuromorphic sensors have a variety of uses, from temporal anomaly detection to autonomous driving. While the information in the output of a neuromorphic sensor correlates to a change in stimulus, there has not been a defined means to characterize neuromorphic sensors in order to predict performance for a given stimulus. This study focuses on the measurement of the temporal and spatial response of a neuromorphic sensor, with additional discussion of modeling performance based upon these measurements.
This report explores how various mechanisms affect the response time of event-based cameras (EBCs). EBCs are unconventional electro-optical/IR vision sensors that are only sensitive to changing light. Because their operation is essentially "frameless," their response time does not depend on a frame rate or readout time, but rather on the number of activated pixels, the magnitude of background light, local fabrication defects, and the analog configuration of the pixel. A test apparatus was devised using a commercial off-the-shelf EBC to extract the sensor latency's dependence on each parameter. Under various illumination levels, results show that mean latency and temporal jitter can increase by a factor of 10 depending on the configured bias parameters. Furthermore, worst-case latency can exceed 1-2 ms even when only 0.005% of the array is activated simultaneously. These and many other findings in the report are intended to inform the use of event-based sensing technology when latency is a critical component of a successful application.
Latency in augmented vision systems can be defined as the total delay imposed on information propagating through a device with respect to a direct path. Latency is critically important in vision systems because it imposes a delay on reaction time. With the emergence of head-borne augmented vision systems for dismounted soldiers and the widespread use of embedded digital processing in vision systems, latency becomes most critical in dynamic operational scenarios. As a consequence, latency has been characterized in recent years for various technologies, including AR headsets, VR headsets, and pilot helmets with integrated symbology overlay and night vision. These efforts have led to latency requirements that vary according to the application. However, as there is no standardized definition and testing methodology for latency in vision devices, it is difficult to compare latency values across devices and as stated by different manufacturers. We propose that latency be characterized as a set and not as a single value.
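Characterizing latency as a set could be as simple as reporting several distribution statistics under a common convention rather than a single number. The sample data and the particular statistics below are illustrative, not a proposed standard.

```python
import numpy as np

# Summarize a measured latency distribution by a set of statistics so
# devices can be compared consistently. Gamma-distributed samples stand
# in for per-frame latency measurements (assumed, for illustration).
samples_ms = np.random.default_rng(4).gamma(shape=9.0, scale=2.0, size=10_000)
summary = {
    "mean": np.mean(samples_ms),
    "median": np.median(samples_ms),
    "p95": np.percentile(samples_ms, 95),
    "p99.9": np.percentile(samples_ms, 99.9),
    "max": np.max(samples_ms),
}
print({k: f"{v:.1f} ms" for k, v in summary.items()})
```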
In recent decades, wildfires have become increasingly widespread and hazardous. Drier, hotter weather combined with more frequent heat waves leaves forest areas susceptible to sudden, intense, and fast-growing forest fires. To protect private property and mitigate the damage, Hot Shot firefighters are deployed into these dangerous situations. Extensive satellite and aerial platforms possess optical techniques for monitoring wildfire risks and tracking fire boundaries. sUAS (small unmanned aerial system) based EO/IR systems provide a solution for real-time, high-resolution, targeted response to acquire information critical to the safety and efficacy of wildfire mitigation. Real-time sUAS imagery of the position of the Hot Shots and the progression of the fire boundary would be easily obtained and would offer a method of ensuring safe deployment. An ideal sensor system for situational awareness in this environment would image the ambient terrain and firefighters with good contrast while also detecting fire signatures and imaging through the smoke. The longer-wavelength infrared bands have demonstrated imaging through the smoke of forest fires. However, near the wildfire where the Hot Shots work, these bands also receive a strong radiometric signal from the temperature of the smoke. The emitted signal of the smoke can obscure the line of sight much as the scattering of wildfire smoke does in the visible spectrum. The reflective and emissive components of a wildfire scene are studied and compared in the visible (VIS, 0.4-0.7 μm), shortwave infrared (SWIR, 1.0-1.7 μm), extended SWIR (eSWIR, 2.0-2.5 μm), and longwave infrared (LWIR, 8-14 μm). Both a radiometric model and calibrated field measurements are used to find the band with the highest probability of a continuous line of sight to terrain, firefighters, and fire signatures in a wildfire scene.
System level modeling and testing of SWIR, MWIR and LWIR infrared cameras are presented. Automated camera testing is performed by Santa Barbara Infrared's IRWindows software. Some of the standard tests are NETD, MTF, 3D noise, and Ensquared Energy. An Excel spreadsheet has been developed to model camera system performance which allows relatively easy addition of new camera types. Each camera type includes information on the ROIC (number and format of pixels, pixel readout rate, pixel size, well size, read noise), sensor material (spectral QE, fill factor, dark current vs temperature), lens, window and warm filter spectral transmission, cold filter temperature and spectral transmission, and cold aperture configuration (diameter and distance from sensor). Atmospheric temperature, turbulence, spectral absorption and spectral emission are included in the Excel model.
Direct measurement of F-number presents a known challenge in characterizing electro-optical and infrared imaging systems. Conventional methods typically require the sensor to be evaluated separately from the lens, indirectly calculating F-number from measurements of effective focal length and entrance-pupil diameter. When a focal plane array is positioned behind the optics and cannot be removed, some potential options could be to quantify signal-to-noise ratio or depth of field using incoherent light. In either of these cases, the result is subject to extraneous camera parameters and sensitive to noise, aberrations, etc. To address these issues, we propose an alternative measurement routine that utilizes a coherent point source at the focus of an off-axis Newtonian collimator to generate collimated light. This allows us to place the system under test at optical infinity, where retroreflections from its focal plane depend solely on angle of incidence, wavelength of illumination and F-number. Thus by measuring retroreflected power as a function of incidence angle, we can back out the system’s F-number with a high degree of confidence. We demonstrate this concept through numerical simulation and laboratory testing, along with an unconventional knife-edge technique for gauging the entrance-pupil diameter in situ. Together these two measurements enable us to calculate effective focal length (and in turn pixel pitch by measuring instantaneous field of view) for a comprehensive system description. We further show that a working F-number and effective image distance are attainable through this method for finite-conjugate systems. These tools improve our ability to update existing system models with objective measurements.
Testing of Night Vision Devices (NVDs) and I2 tubes is regulated by a long series of US defense standards (often called military standards or MIL standards). These standards set mandatory testing conditions that must be fulfilled. Among others, the radiation source used in the tests shall be a tungsten filament lamp operated at a color temperature of 2856 kelvins (K), ±50 K. In recent years, we have noticed that tungsten filament lamps with sufficient spectral-shape accuracy and stability have become harder to procure. In this paper, we present our characterization efforts to determine whether a commercially available LED-based light source is suitable to replace a tungsten filament lamp for NVD and I2 tube testing. An LED-based light source is compared to a 2856 K filament lamp in terms of spectral shape, output power linearity, dynamic range, and relative intensity noise (RIN). We also present the pros and cons of the two sources from the perspective of evaluating NVD performance in a controlled environment emulating different representative night-sky irradiances in support of military and law enforcement operations.
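The spectral-shape half of such a comparison is straightforward to set up: the MIL-standard reference is simply a 2856 K Planck curve, against which a measured LED spectral power distribution (from a spectroradiometer) would be judged. Only the blackbody side is computed in this sketch; the wavelength grid and print format are illustrative.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(wl_m, temp_k):
    """Spectral radiance of a blackbody (W per sr per m^3)."""
    return (2 * H * C**2 / wl_m**5) / (np.exp(H * C / (wl_m * KB * temp_k)) - 1.0)

wl = np.linspace(400e-9, 1700e-9, 6)   # a few points across the NVD band
ref = planck(wl, 2856.0)
ref /= ref.max()                       # normalize for shape comparison
for w, r in zip(wl, ref):
    print(f"{w*1e9:7.1f} nm  relative radiance {r:.3f}")
```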
Modern electro-optical systems increasingly incorporate multiple electro-optical sensors, each adding unique wavebands, alignment considerations, and distortion effects. Laboratory testing of these systems traditionally requires multiple measurement setups to determine metrics such as inter-sensor alignment/distortion, near/far focus performance, latency, etc.; a multi-spectral scene has been created to support many simultaneous, objective measurements from a single mounting position. In some cases, a multi-spectral scene is the only way to test new system-of-systems type units, because traditional tests do not engage with or exercise their built-in algorithms (e.g., fusion). In 2023, Parker et al. developed a multi-band scene with a diverse target set for testing camera systems. In this correspondence, we describe a comprehensive and precise calibration of that scene. Among the methods used was a pair of reference cameras (reflective and emissive, with a fixed extrinsic relationship) translated across the entire field of view. Transformation matrices were determined to map pixel locations to angle; subsequent imaging of the target scene will yield precise locations of each feature, and comparisons between modeled and recorded images from varied camera positions will validate the success of the calibration. This process will allow various measurements, across multiple wavebands, to be taken simultaneously and efficiently for a wide range of modern electro-optical systems.
Electro-Optical (EO) systems are designed for purposes such as detection/recognition/identification and tracking of objects. To design these systems in an optimum manner, the many processes involved in generating the images of targets and background at the system detector output should be carefully examined for various conditions. The image chain starts with a ray originating in target space (object space); after propagating through the atmosphere and the EO system, the final position of this wave is the focal plane (image plane), where the detector is placed. EO system design requires optimization of many different system parameters for a given task; therefore, there is a need for an end-to-end imaging system simulator that models the cascaded image chain blocks from object space to the detector output. An image-based system performance prediction tool has been developed for generating synthetic data to be used for estimation and design/optimization of EO system performance. This paper introduces this image-based performance prediction tool/scene generator from a system designer's point of view and demonstrates some properties of the tool that may be useful to system analysts/designers for optimization. The synthetic scenes can be generated via parametric models and/or radiometric measurements of the EO system, environment, and object signature. The tool has a user-friendly graphical user interface (GUI) which takes measurements and/or system/environmental/object-space parameters as inputs. The user can observe the output raw images/videos together with various system design parameters as well as image-degrading effects such as modulation transfer function (MTF) and noise. In addition, the tool can be used to generate synthetic data for constructing large data sets for traditional and learning-based algorithms.
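The cascading referenced above is multiplicative in the spatial-frequency domain: each image-chain block contributes its own transfer function, so the end-to-end blur applied to the synthetic scene is (schematically)

$$ \mathrm{MTF}_{sys}(\xi) = \mathrm{MTF}_{atm}(\xi)\cdot\mathrm{MTF}_{optics}(\xi)\cdot\mathrm{MTF}_{det}(\xi)\cdot\mathrm{MTF}_{proc}(\xi)\cdots $$

with noise injected at the appropriate point in the chain. The exact set of blocks is whatever the simulator's parametric models define; the product form shown here is the generic cascade assumption.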
Terahertz (THz) imaging systems use active sources, specialized optics, and detectors in order to penetrate certain materials. Each of these components has design and manufacturing characteristics (e.g. coherence for sources, aberrations for optics, and dynamic range and noise in detectors) that can lead to nonideal performance of the overall imaging system. Thus, system designers are frequently challenged to design systems that approach theoretical performance, making quantitative measurement of imaging performance a key feedback element of system design. Quantitative evaluation of actual THz system performance will be performed using many of the same figures of merit that have been developed for imaging at other wavelengths (e.g. infrared imaging systems nominally operating in the shorter 3-12 μm wavelength range). The suitability and limitations of these evaluation criteria will be analyzed as part of the process of improving the modeling and design of high-performance THz imaging systems.
Characterizing Longwave Infrared Focal Plane Arrays (LWIR FPAs) is a fundamental task with significant implications for broad applications in industrial and research and development (R&D) domains. Prior research has deliberated upon the selection of reference temperatures, integration time, and frame rate. This work builds on that knowledge and centers on analyzing overall FPA performance as a function of its internal temperature. In this context, the operability, uniformity, signal transfer function (SiTF), and noise equivalent temperature difference (NETD) were evaluated. The results indicated a degradation of LWIR FPA performance with an increasing bad-pixel count as temperature rises, causing operability to drop abruptly from approximately 99.5% to 39.4% and uniformity to fall markedly from 95.9% to 66.2%. Similarly, under an f/1 aperture, the NETD results oscillated with an increasing trend, ranging from 32 to 41 mK. In summation, this work exposes the need for meticulous parameter selection in the characterization of LWIR FPAs, which can impact both industrial and R&D contexts. Our results affirm our previous research, which indicated that the best results were obtained with reference temperatures of 25°C and 60°C. This effort significantly enhances our understanding of LWIR FPAs, offering insightful guidance for future characterization practices.
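As a reference point for the operability figure quoted above, a minimal computation is sketched below: the fraction of pixels whose response falls within a tolerance band around the array mean. The ±20% band and the synthetic response map are assumptions for illustration, not the authors' criterion.

```python
import numpy as np

# Operability = percentage of pixels within a tolerance band of the mean.
def operability(response, band=0.20):
    mean = response.mean()
    good = np.abs(response - mean) <= band * mean
    return 100.0 * good.mean()

resp = np.random.default_rng(5).normal(1.0, 0.05, size=(512, 640))
resp.ravel()[::97] = 0.0   # sprinkle in dead pixels to mimic degradation
print(f"operability: {operability(resp):.2f} %")
```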
Modulation Transfer Functions (MTFs) describe how a sensor system transfers the spatial frequencies of a scene through an imaging system. For infrared systems, MTF measurements are performed in a laboratory setting with a collimated source and a tilted-edge target; this is the standard way to measure this sensor performance metric. When these sensors are used for practical applications in the field, factors such as focus, atmospheric turbulence, and path radiance limit the performance of the system. These environmentally induced blurs need to be considered when designing sensor systems to ensure the required performance is met. The effects of these factors on the sensor's performance can be quantified by measuring an MTF in the field. By matching laboratory and static field MTFs, the effects of other blurs, such as platform dynamics, vibration, and atmospheric turbulence, can be isolated. To obtain a field MTF that matches one measured in the laboratory, the variable field conditions need to be well controlled. The effects of MTF target nonuniformity, tilt angle, illumination spectra, integration time, dynamic range, and number of pixels on target were explored as possible environmental factors affecting the quality of field MTF measurements.
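For reference, the tilted-edge measurement reduces to a short processing chain, sketched below on a synthetic edge: differentiate the edge spread function (ESF) to get the line spread function (LSF), window it, and take the FFT magnitude. Real field processing would first project the tilted edge to build an oversampled ESF; the synthetic edge and window choice here are illustrative.

```python
import numpy as np

def mtf_from_esf(esf):
    """Estimate MTF from a 1-D edge spread function."""
    lsf = np.gradient(esf)
    lsf *= np.hanning(lsf.size)        # suppress noise at the tails
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

x = np.arange(-32, 32)
esf = 0.5 * (1 + np.tanh(x / 2.0))     # synthetic blurred edge
mtf = mtf_from_esf(esf)
freqs = np.fft.rfftfreq(esf.size)      # cycles per pixel
print(f"MTF at Nyquist (0.5 cyc/px): {np.interp(0.5, freqs, mtf):.3f}")
```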
This paper presents a comprehensive study on the measurement, modeling, and simulation of the optical properties of wet surface paints. Low-observable paints are designed to camouflage the optical signature of a system by imitating the background thermal signature and scattering incident light (visible and IR). These properties are well studied for pristine conditions, but the optical properties in real conditions, wet and at cold temperatures, are less well known. Herein, we present in-situ measurements of dry, wet, and icy paint samples commonly used for thermal signature management. The collected data is analyzed for input to ShipIR based on a derived nominal (diffuse) emissivity and specular reflectivity versus incidence angle using the Sanford-Robertson approximation, in which the angular and spectral properties of surface reflectance are separable. A current and a modified version of the ShipIR wetted-surface reflectance model are compared against the optical properties obtained with the SOC reflectometers.
In the last half-decade, the extended shortwave infrared (eSWIR) atmospheric band has become a focus of investigation for its potential to provide better object discrimination at range than the visible, as well as the near, shortwave, midwave, and longwave infrared bands, particularly in degraded visual environments such as smoke, dust, and smog. However, any detection band is only as useful as the best available detector, and thus an investigation into the design of detectors for use in the eSWIR band is necessary before standards are established and applications put into practice. This study examines the relationship between detector parameters and targeting performance in the eSWIR band for both passive and active detection. The effects of pixel pitch, dark current, read noise, frame rate, quantum efficiency, and well depth are examined and ranked in importance to an eSWIR system’s performance.
This study examines topological dependence of diffuse reflectance for IR absorbing materials. A theoretical foundation for this functional dependence is described, elucidating physical processes underlying topological dependence of IR diffuse reflectance of composite-material layers on substrates. The dependence is examined by case studies using composite-material layers that include IR absorbing dyes on fabric substrates. Understanding the topological dependence of diffuse reflectance can assist in determining optimal composite-material configurations for specific reflectance specifications, which can include UV protecting materials.
Collecting sufficient photon flux to clearly observe a target against the background and sensor noise is critical for long-range target imaging. The sensitivity can be characterized by the signal-to-noise ratio, which can be derived from radiometry. Many factors affect the radiometry of an imaging system, including the type of illumination source, atmospheric effects, and the operating wavelength or band. This paper compares passive and active imaging of long-range targets in the near infrared (NIR) versus shortwave infrared (SWIR) bands. Passive imaging uses direct sunlight as the illumination source. For active imaging, we investigate continuous wave (CW) and pulsed laser-range-gated (LRG) illumination during both day and night operations. LRG illumination provides temporal control to reduce atmospheric backscatter and distant background in order to maximize the contrast-to-noise ratio (CNR). This study compares experimental data collected over propagation distances up to 1 km against radiometric models implemented analytically and numerical modeling implemented in the Night Vision Integrated Performance Model (NV-IPM). The comparison is performed for each illumination mode in both the NIR and SWIR bands.
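The temporal control that range gating provides rests on simple gate timing. For a target at range $R$, the receiver gate opens at

$$ t_{gate} = \frac{2R}{c}, \qquad \Delta R = \frac{c\,\tau}{2}, $$

where $\tau$ is the gate width and $\Delta R$ the depth of the imaged range slab. At $R = 1$ km the gate delay is about 6.7 μs, and a 100 ns gate images only a 15 m slab, which is how backscatter and distant background are rejected.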
In this work, we share the main results achieved with a Long Wave Infrared (LWIR) Light Field (LF) imaging system with two novel capabilities relevant to IR image science applications: the capability to digitally refocus onto any nearby object plane with a high Signal to Noise Ratio (SNR), that is, to produce refocused object planes nearly free of Fixed-Pattern Noise (FPN) and blur artifacts; and the capability to achieve multispectral LWIR imaging for the global scene and for every refocused nearby object plane required, that is, an LWIR radiometric refocusing capability. The LWIR LF imaging system is built with an LWIR microbolometer Xenics camera (8-12 micrometer spectral band) and a high-precision scanning system (Newport). LWIR multispectral capability is achieved with an array of narrow-band LWIR interference optical filters.