This PDF file contains the front matter associated with SPIE Proceedings Volume 12271, including the Title Page, Copyright information, Table of Contents, and Conference Committee Page.
High performance infrared (IR) sensing and imaging systems require IR optoelectronic detectors that have a high signal-to-noise ratio (SNR) and a fast response time, and that can be readily hybridised to CMOS read-out integrated circuits (ROICs). From a device point of view, this translates to p-n junction photovoltaic detectors based on narrow bandgap semiconductors with a high quantum efficiency (signal) and low dark current (noise). These requirements limit the choice of possible semiconductors to those having an appropriate bandgap that matches the wavelength band of interest combined with a high optical absorption coefficient and a long minority carrier diffusion length, which corresponds to a large mobility-lifetime product for photogenerated minority carriers. Technological constraints and modern clean-room fabrication processes necessitate that IR detector technologies are generally based on thin-film narrow bandgap semiconductors that have been epitaxially grown on lattice-matched wider bandgap IR-transparent substrates. The basic semiconductor material properties have led to InGaAs (in the SWIR up to 1.7 microns), InSb (in the MWIR up to 5 microns), and HgCdTe (in the eSWIR, MWIR and LWIR wavelength bands) being the dominant IR detector technologies for high performance applications. In this paper, the current technological limitations of HgCdTe-based technologies will be discussed with a view towards developing pathways to next-generation IR imaging arrays featuring larger imaging array formats and smaller pixel pitch, higher pixel yield and operability, higher quantum efficiency (QE), higher operating temperature (HOT), and dramatically lower per-unit cost.
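The quantum-efficiency (signal) and dark-current (noise) trade-off described in the abstract above can be illustrated with a simple shot-noise-limited SNR estimate. The photon flux, QE, dark current, pixel size, and integration time below are illustrative values chosen for the sketch, not figures from the paper.

```python
# Sketch: shot-noise-limited SNR of a photovoltaic IR detector pixel.
# All parameter values are illustrative assumptions.
import math

Q = 1.602e-19  # electron charge [C]

def detector_snr(photon_flux, qe, dark_current, area, t_int):
    """SNR with Poisson shot noise from photo- and dark electrons.
    photon_flux [photons/s/cm^2], qe [-], dark_current [A],
    area [cm^2], t_int [s]."""
    signal_e = photon_flux * qe * area * t_int  # collected signal electrons
    dark_e = dark_current * t_int / Q           # collected dark electrons
    noise_e = math.sqrt(signal_e + dark_e)      # shot noise [electrons rms]
    return signal_e / noise_e

# Example: MWIR scene flux on a 15 um pixel, 1 ms integration time
snr = detector_snr(photon_flux=1e15, qe=0.7,
                   dark_current=1e-12, area=(15e-4)**2, t_int=1e-3)
```

Lowering the dark current or raising the QE in this expression directly raises the SNR, which is the device-level motivation stated in the abstract.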
We report a nonlinear optical upconversion 3D imaging system for infrared radiation enabled by zinc-indiffused MgO:PPLN waveguides. While raster-scanning a scene with an 1800 nm pulsed-laser source, we record time-of-flight information, thus probing the 3D structure of various objects in the scene of interest. Through upconversion, the 3D information is transferred from 1800 nm to 795 nm, a wavelength accessible to single-photon avalanche diodes (SPADs).
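As a rough illustration of the time-of-flight principle used in the abstract above, the recorded round-trip delay maps to range via r = c·Δt/2; the delay value below is illustrative.

```python
# Sketch: converting a recorded round-trip time-of-flight to range.
C = 2.998e8  # speed of light [m/s]

def tof_to_range(delta_t_s):
    """Round-trip delay -> one-way target distance [m]."""
    return C * delta_t_s / 2.0

# A photon returning after ~66.7 ns corresponds to roughly 10 m range
r = tof_to_range(66.7e-9)
```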
Type-II superlattice (T2SL) based semiconductors have emerged as a rival to well-established HgCdTe-based IR detectors, promising comparable performance at significantly lower cost. T2SLs are complex nanostructures that exhibit multiple-carrier and highly anisotropic electronic transport properties, which renders them exceedingly challenging to study experimentally. The lack of reliable experimental data has limited optimisation and modelling efforts, and thus hampered progress. This paper will present a systematic experimental study of electronic transport in InAs/InGaSb T2SLs, employing world-leading mobility spectrum techniques developed at UWA and state-of-the-art T2SL structures from three leading research groups developing infrared detector technologies based on T2SLs.
We aim to measure the glucose concentration in the body through passive mid-infrared spectroscopy using a palm-sized imaging two-dimensional Fourier spectrometer. Radiation in the mid-infrared region (at a wavelength of approximately 10 µm) is emitted from the object surface, with the intensity of the radiated light corresponding to the object temperature. Passive spectroscopy acquires component information from the spectral intensity of the light radiated by the object itself, without a light source. Intrinsic vibrations of molecules in the object are detected, and the spectral characteristics are thus emission spectra with intrinsic vibration peaks. In contrast, conventional active spectroscopy irradiates the measurement target with light and acquires spectral characteristics from the reflected light. Molecular vibrations excited by the light source are measured, and the spectral characteristics are thus absorption spectra of the energy absorbed at the eigenfrequencies of the molecules. Wavelengths that appear as absorption wavelengths in active spectroscopy appear as emission wavelengths in passive spectroscopy; active and passive spectroscopy thus have a negative-positive relationship. The imaging two-dimensional Fourier spectrometer (7 to 14 µm) used in our past measurements has transmission optics, with three Ge lenses serving as the front, objective, and imaging lenses. In this work, we constructed reflective optics, replacing the objective and imaging lenses with reflective mirrors. The reflective mirrors guarantee flatness and high spectral reflectance over a wide bandwidth (3 to 20 µm), and the measurement bandwidth is thus extended.
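The passive approach in the abstract above rests on thermal emission: the radiated spectral intensity follows the Planck law at the object temperature. A minimal sketch of that relationship at 10 µm, with illustrative temperatures:

```python
# Sketch: blackbody spectral radiance underlying passive mid-IR
# spectroscopy -- radiated intensity tracks object temperature.
import math

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance [W / (m^2 sr m)] of an ideal blackbody."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return a / b

# Skin at ~310 K radiates more strongly at 10 um than a 300 K background
skin = planck_radiance(10e-6, 310.0)
background = planck_radiance(10e-6, 300.0)
```

Real objects scale this by their emissivity; the molecular emission peaks the abstract describes ride on this thermal continuum.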
Automatic Target Recognition (ATR) and target tracking are fundamental functions in many military systems and so have a significant impact on a sensor system’s performance. In response to the demand for increased capability, ATR designs have evolved from relatively simple filters to increasingly complex algorithms, using techniques such as artificial intelligence. Assessing the performance of image processing algorithms is a significant challenge, particularly with their increased design complexity. Increasing the complexity of a processing design tends to result in greater sensitivity to variations in scene conditions, rendering the system performance more nuanced with respect to the image content. There is a need to develop modelling and simulation design tools that better reflect the impact of image processing on the overall system performance when subjected to a wide variation in input scene and sensor platform characteristics. In this paper, the development and design of the System Performance Model (SPM) is described, which provides modelling and simulation of different image processing algorithms such as ATR. The approach taken within the SPM is to use real imagery and convert it to the imagery that would be generated by the modelled camera. This process, which is described in the paper, is critical to the design of the SPM and underpins its effectiveness and accuracy. Example results are given that illustrate the design of the SPM’s image conversion process.
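A minimal sketch of the kind of image-conversion step described above: degrading high-quality reference imagery toward a modelled camera via blur/downsampling and added noise. The pooling factor and noise level here are assumptions for illustration, not the SPM's actual processing chain.

```python
# Sketch: converting real reference imagery to modelled-camera imagery
# via average pooling (optics/detector blur + sampling) plus noise.
# Factor and noise sigma are illustrative assumptions.
import numpy as np

def convert_to_modelled_camera(img, factor, noise_sigma, rng):
    """Average-pool by `factor`, then add Gaussian sensor noise."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    pooled = img[:h2 * factor, :w2 * factor] \
        .reshape(h2, factor, w2, factor).mean(axis=(1, 3))
    return pooled + rng.normal(0.0, noise_sigma, pooled.shape)

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, (64, 64))          # stand-in "real" image
out = convert_to_modelled_camera(scene, factor=4, noise_sigma=0.01, rng=rng)
```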
Describing a useful performance evaluation method for object tracking algorithms is difficult. Algorithms that are very successful with respect to general-purpose performance metrics may perform poorly in a specific scenario. Additionally, algorithm developers frequently face an unanswerable question: will the algorithm satisfy the needs of a system that is currently in the design phase? Even when special time and resources can be allocated to collect reasonably representative data for the scenarios of interest, the answer usually remains ambiguous. Often, during field tests or usage, the user experiences insufficient performance and the algorithm needs to be revised. In this study, we propose an approach to address this problem, based on iterative improvement of the evaluation process. The performance requirements are determined by the field experts or the system designers. Standard questions are asked of the user or system developer, and the test dataset is determined in cooperation. Each video segment in the dataset is assigned several tags for scenario type, difficulty, and importance. For any novel failure case, representative videos are added to the dataset. In this way, quantitative results can be organized to be more informative for the user, and improvements to the algorithms can be evaluated more systematically.
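The tag-and-importance scheme described above lends itself to weighted aggregation of per-segment results. The tags, weights, and success rates below are hypothetical, purely to show the bookkeeping:

```python
# Sketch: aggregating per-segment tracker results by scenario tags,
# weighting each segment by the importance of its most critical tag.
# Tags, weights, and success rates are illustrative.
def weighted_score(results, weights):
    """results: list of (tag_set, success_rate); weights: tag -> importance.
    Returns an importance-weighted average success rate."""
    num = den = 0.0
    for tags, success in results:
        w = max(weights.get(t, 1.0) for t in tags)  # most critical tag wins
        num += w * success
        den += w
    return num / den

results = [({"clutter", "small-target"}, 0.60),
           ({"occlusion"}, 0.80),
           ({"nominal"}, 0.95)]
score = weighted_score(results, {"small-target": 3.0, "occlusion": 2.0})
```

A plain average of the three rates would be 0.78; the importance weighting pulls the score toward the hard, high-priority scenarios instead.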
Image processing design plays an important role in the overall performance of modern EO/IR cameras. Incorporating these functions into models and simulators presents a major design challenge, particularly for those cases where complex processing functions are required, such as those found in autonomous ATD/R systems. An established technique involves the use of image-based simulations which process pre-recorded imagery using representative image processing algorithms. However, such an approach requires extensive run times and a large volume of image data, both of which can be prohibitive. An alternative approach is presented here whereby a limited number of images are processed and then used to generate statistically based performance transfer functions utilising an appropriate interpolation scheme. These transfer functions are then used to represent the output response of the processing chain when the received imagery is subjected to different levels of degradation such as distortion and blurring. Such transfer functions can then be stored in multidimensional look-up tables which can be rapidly accessed by a system-level Monte Carlo performance simulation. The ability to represent and extract the performance-related transfer functions depends upon the image quality metrics used, and the accuracy of the corresponding parametric model requires careful validation. An example simulation is presented based on an autonomous ATD/R sensor system mounted on an airborne platform. The importance of validation is demonstrated, and the increased run-time benefits are described. The proposed parametric image modelling approach provides sensor system designers with increased confidence in their design and compliance, and this helps reduce the early design risk.
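A minimal sketch of the look-up-table idea described above, assuming a 2-D table of detection probability over blur and noise levels that is queried by interpolation instead of re-running the processing chain. All table values and axes are illustrative.

```python
# Sketch: a performance transfer function stored as a look-up table
# over (blur, noise) and queried by bilinear interpolation, as a
# fast stand-in for running the full processing chain inside a
# Monte Carlo simulation. Values are illustrative.
import numpy as np

blur_ax = np.array([0.0, 1.0, 2.0])     # blur sigma [px]
noise_ax = np.array([0.0, 0.05, 0.10])  # noise sigma [normalized]
# P(detection) measured offline on processed imagery
pd_lut = np.array([[0.99, 0.95, 0.85],
                   [0.95, 0.90, 0.75],
                   [0.85, 0.70, 0.50]])

def lookup_pd(blur, noise):
    """Bilinear interpolation into the (blur, noise) LUT."""
    # interpolate along the noise axis for each blur row, then along blur
    rows = [np.interp(noise, noise_ax, row) for row in pd_lut]
    return float(np.interp(blur, blur_ax, np.array(rows)))

pd = lookup_pd(blur=0.5, noise=0.025)
```

A Monte Carlo driver can then call `lookup_pd` thousands of times per second, which is the run-time benefit the abstract claims over full image-based simulation.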
The AFIT Sensor and Scene Emulation Tool (ASSET) is a physics-based model used to generate synthetic data sets of wide field-of-view (WFOV) electro-optical and infrared (EO/IR) sensors with realistic radiometric properties, noise characteristics, and sensor artifacts. This effort evaluates the use of Convolutional Neural Networks (CNNs) trained on samples of real space-based hyperspectral data paired with panchromatic imagery as a method of generating synthetic hyperspectral reflectance data from wide-band imagery inputs to improve the radiometric accuracy of ASSET. Further, the effort demonstrates how these updates will improve ASSET’s radiometric accuracy through comparisons to NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS). In order to place the development of synthetic hyperspectral reflectance data in context, the scene generation process implemented in ASSET is also presented in detail.
The accurate quantification of scattered laser radiation is crucial for estimating threats to humans from outdoor applications of high-power lasers. In addition to the hazard due to intrabeam viewing, specular and diffuse reflections from irradiated targets can potentially harm the human eye and skin. Measurement techniques for the detection of scattered laser radiation have been developed to provide quantitative information for advanced analyses as well as for the verification and further improvement of laser safety calculations. The detection system has to be calibrated, characterized, and validated in order to obtain reliable results. A compact and wavelength-specific detection system is presented, whose technical specifications (i.e. entrance aperture and angular field of view) follow German laser safety regulations.
The mobile detection system is deployed at the DLR laser test range in Lampoldshausen (Germany). Field measurements are performed by irradiating metallic targets with high-power laser radiation at 1030 nm and simultaneously measuring the scattered laser radiation with the calibrated detection system. The field of view of the detection system is oriented towards the target using an alignment laser integrated into the detection system. Target samples with different surface roughness are examined in the experiments to analyze specularly and diffusely scattered laser radiation. The dependence on the scattering angle and on the distance from the metallic targets to the detection system is investigated. The results are compared to the threshold limits of laser safety standards.
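For context, the irradiance from an ideal Lambertian diffuse reflection is the kind of quantity such measurements are compared against exposure limits. The sketch below uses illustrative power, reflectance, and geometry values, and is of course no substitute for the applicable laser safety standards.

```python
# Sketch: irradiance at an observer from an ideal Lambertian diffuse
# reflection of a laser spot: E = rho * P * cos(theta) / (pi * r^2).
# All values are illustrative assumptions.
import math

def diffuse_irradiance(power_w, reflectance, distance_m, angle_rad=0.0):
    """Irradiance [W/m^2] at distance r and viewing angle theta
    from a small Lambertian-scattering laser spot."""
    return (reflectance * power_w * math.cos(angle_rad)
            / (math.pi * distance_m**2))

# 1 kW beam, 40% diffuse reflectance, viewed on-axis from 100 m
e = diffuse_irradiance(1000.0, 0.4, 100.0)
```

The 1/r^2 fall-off and cos(theta) dependence are exactly the distance and scattering-angle dependences the field measurements investigate.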
Target detection is a crucial task in defense applications such as surveillance, infrared search and track, and missile approach warning systems. Typically, the target image extends over only a few sensor pixels of the imaging system, and detection is performed by appropriate algorithms.
In order to study the impact of imaging system design parameters and environmental conditions on the detection performance, a simulation tool is developed. Apart from computing detection ranges based on the expected signal-to-noise ratio that the algorithm requires for detection, the tool is also meant for simulating image sequences of engaging targets. It therefore provides means to investigate the interplay between system design parameters and algorithms for target detection.
The simulation is based on a rigorous calculation of the target image in the focal plane, taking into account the optical transfer functions of the imaging chain components. Integrating the target image over the active pixel areas yields the additional signal of the detector pixels caused by the target. From these values and the average background noise, the signal-to-noise ratio (SNR) is obtained as a function of the target distance. Image data are generated by overlaying the additional signals on a background image.
We exemplify the application of the simulation tool by studying the effect of various system parameters and environmental conditions on the resulting SNR and detection range. Corresponding simulated image data is presented as well.
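The SNR-versus-range calculation described above can be sketched in simplified form: here the focal-plane and OTF computation is collapsed into a point-target power budget with atmospheric extinction, and the target intensity, extinction coefficient, aperture area, and noise-equivalent power are illustrative assumptions.

```python
# Sketch: SNR of an unresolved point target versus range, and the
# resulting detection range for a required SNR. A simplification of
# the tool's full focal-plane calculation; values are illustrative.
import math

def target_snr(intensity_w_sr, extinction_per_m, aperture_m2, nep_w, range_m):
    """Received power = I * A / R^2 * exp(-k R); SNR = P_rx / NEP."""
    p_rx = (intensity_w_sr * aperture_m2 / range_m**2
            * math.exp(-extinction_per_m * range_m))
    return p_rx / nep_w

def detection_range(snr_required, **kw):
    """Largest range (1 m steps) still meeting the SNR requirement;
    SNR decreases monotonically with range here."""
    r = 100.0
    while target_snr(range_m=r, **kw) >= snr_required:
        r += 1.0
    return r - 1.0

rd = detection_range(5.0, intensity_w_sr=100.0, extinction_per_m=1e-4,
                     aperture_m2=0.01, nep_w=1e-9)
```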
Near-eye displays, i.e. displays positioned in close proximity to the observer’s eye, are a technology of growing significance in industrial and defense applications, e.g. for augmented reality and digital night vision. Fraunhofer IOSB has recently developed a specialized measurement setup for assessing the display capabilities of such devices as part of the optoelectronic imaging chain, with the primary focus on the Modulation Transfer Function (MTF).
The setup consists of an imaging system with a high-resolution CMOS camera and a motorized positioning system. It is intended to run different measurement procedures semi-automatically, performing the desired measurements at specified points on the display.
This paper presents the extended work on near-eye display imaging quality assessment following the initial publication. Using a commercial virtual reality headset as a sample display, we further refined the previously described MTF measurement procedures, with one method being based on bar pattern images and another method using a slanted edge image. Refinements include improvements to the processing of the camera images as well as to the method of extracting contrast measurements. Furthermore, we implemented an additional, line-image-based method for determining the device’s MTF.
The impact of the refinements is examined, and the results of the different methods are discussed with the goal of finding the most suitable measurement procedures for our setup and of highlighting the individual merits of the different measurement methods.
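The principle behind the line-image-based method mentioned above is that the MTF is the normalized magnitude of the Fourier transform of the measured line spread function (LSF). The sketch below uses a synthetic Gaussian LSF as stand-in data, not measurements from the described setup.

```python
# Sketch: MTF from a line spread function via FFT. The Gaussian LSF
# is illustrative stand-in data for a measured line image profile.
import numpy as np

def mtf_from_lsf(lsf):
    """MTF = |FFT(LSF)| normalized to unity at zero frequency."""
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

x = np.arange(-32, 32)
lsf = np.exp(-x**2 / (2 * 2.0**2))  # sigma = 2 px stand-in LSF
mtf = mtf_from_lsf(lsf)
```

Taking the magnitude makes the result insensitive to where the line sits in the image window, which is one practical attraction of the method.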
Mid-wave and long-wave infrared (IR) are two bands of interest for uncooled infrared imaging cameras. While long-wave infrared detectors are sensitive to human body temperature, mid-wave infrared detectors are useful for detecting “hot sources”. In addition, various gases have absorption bands in the mid-wave IR range, making environmental monitoring and gas detection further applications. To realize multispectral uncooled thermal imaging detectors, Fraunhofer IMS investigated the absorption properties of plasmonic metamaterial absorbers made of metal-insulator-metal (MIM) structures. High and multispectral absorption is particularly desirable for various microengineering applications, including microbolometers. The MIM absorbers are developed to be adaptable to the Fraunhofer IMS nanotube microbolometer technology.
We report here the first results of simulation and experimental characterization of MIM test structures for multispectral absorption. The test structures consist of an upper periodic metal structure, a middle dielectric layer, and a lower metal reflector layer to produce surface plasmon resonance at the desired absorption wavelengths. For a CMOS-compatible MIM absorber, various materials and thicknesses are being studied to realize selective absorption. We present the optical characterization of the test structures by Fourier transform infrared (FTIR) measurements and the influence of the size, thickness, and materials of the MIM structures on achieving high, selective absorption in a narrow wavelength range.
A current approach for performance assessment of imagers is triangle orientation discrimination (TOD). This approach requires observers or human visual system (HVS) models to recognize equilateral triangles pointing in four different directions. Imagers may apply embedded advanced digital signal processing (ADSP) for contrast enhancement, noise reduction, edge sharpening, etc. Unfortunately, the applied methods are in general not documented and hence unknown. Within the last decades, a vast number of techniques for contrast enhancement has been proposed. There are some comparisons of such algorithms for a few images and figures of merit. However, many of these figures of merit cannot assess the usability of altered image content for specific tasks such as object recognition. In this work, different algorithms for contrast enhancement are compared in terms of TOD assessments by convolutional neural networks (CNNs) as models. These models are trained on artificial images with single triangles. Many methods for contrast enhancement depend strongly on the content of the entire image. Therefore, the images are superimposed on natural backgrounds with varying standard deviations to provide different signal-to-background ratios. These images are then degraded by Gaussian blur and noise representing degrading camera effects and sensor noise. Different algorithms are applied, such as contrast-limited adaptive histogram equalization or local range modification. The accuracies of the trained models on these images are then compared for the different ADSP algorithms. Accuracy gains are found for low signal-to-background ratios and sufficiently large triangles, while impairments are found for high signal-to-background ratios and small triangles. Finally, implications of replacing triangles by real target signatures when using such ADSP algorithms are discussed. The results can be a step towards the assessment of such algorithms for generic target recognition.
Blurry images are not only visually unappealing, but they also dramatically degrade the performance of computer vision applications. As a result, motion deblurring plays a critical role in thermal infrared imaging systems. In recent years, convolutional neural network-based image deblurring methods have yielded promising performance with remarkable results and low computational cost. Inspired by these works, in this paper we investigate an end-to-end deblurring model for single blurred thermal IR images by adopting a multi-input approach. Our model achieves PSNR and SSIM scores of 31.83 and 0.6435 when evaluated on our blur-sharp thermal infrared image pair dataset. Furthermore, the lightweight nature of our model allows it to operate at 140 FPS when running inference on a Tesla V100 GPU.
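For reference, the PSNR metric quoted above can be computed as follows; the arrays here are random stand-ins, not the paper's dataset.

```python
# Sketch: peak signal-to-noise ratio (PSNR) between a reference and
# a degraded/restored image. Test arrays are illustrative stand-ins.
import numpy as np

def psnr(ref, test, max_val=255.0):
    """PSNR in dB: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = ref + rng.normal(0.0, 4.0, ref.shape)  # sigma-4 degradation
value = psnr(ref, noisy)
```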
Anomaly detection in large-scale time-series data acquired by fiber optic Distributed Acoustic Sensors (DAS) used for perimeter security and pipeline monitoring is a critical problem in machine learning. However, because of the vast amounts of data to process, it can be time and energy intensive. This study looks at how to reduce detection time and computing costs for this use case. In order to distinguish the acoustic event of interest from the noise and establish a binary detection threshold, we employ a Maximum Eigenvalue Detection (MED) approach in conjunction with a Random Matrix Theory (RMT) precept, namely the Tracy-Widom limit. A pipeline of signal processing techniques assists the algorithm, beginning with a Moving Average (MA) filter to remove amplitude swings on the signal, which is represented by a data matrix, followed by subsampling to obtain uncorrelated signals among the subsequent columns and thus reduce the amount of data processed. As a result, we can detect events of interest in less time. Next, low-pass filtering is employed to eliminate low-frequency coefficients induced by various sorts of environmental and seismic events. Following normalization, the MED method is applied to each of the Wishart matrices, which are generated by segmenting the data stream into equal small sub-matrices. RMT is used to set a threshold with a false alarm rate (FAR) of 0.01. The data columns corresponding to the selected MED values are then fed into a Convolutional Neural Network (CNN) to capture and detect the event of interest. Compared to using solely the CNN, the best results from our approach, MED followed by a CNN anomaly detection process, demonstrate a faster detection rate for events in security applications.
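The maximum-eigenvalue screening step described above can be sketched as follows. Under noise only, the top eigenvalue of the sample covariance of a segment stays near the Marchenko-Pastur upper edge, with Tracy-Widom fluctuations setting the false-alarm threshold; the threshold scaling used here is an illustrative stand-in for a properly calibrated Tracy-Widom quantile.

```python
# Sketch: maximum-eigenvalue detection (MED) on one data segment.
# The threshold constant is illustrative, not a calibrated
# Tracy-Widom quantile for FAR = 0.01.
import numpy as np

def med_detect(segment, threshold_scale=1.1):
    """segment: (n_samples, n_channels) array of zero-mean,
    unit-variance data. Returns True if the top covariance
    eigenvalue exceeds the scaled Marchenko-Pastur upper edge."""
    n, p = segment.shape
    cov = segment.T @ segment / n
    lam_max = np.linalg.eigvalsh(cov)[-1]     # ascending order -> take last
    mp_edge = (1 + np.sqrt(p / n)) ** 2       # noise-only upper edge
    return bool(lam_max > threshold_scale * mp_edge)

rng = np.random.default_rng(1)
noise = rng.normal(size=(500, 20))                 # noise-only segment
event = noise + 0.8 * rng.normal(size=(500, 1))    # burst common to channels
```

Only segments flagged by `med_detect` would then be forwarded to the CNN, which is where the run-time saving comes from.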
Linear array imaging systems are widely used in remote sensing satellites. The data of these sensors can only be used when they are geometrically, radiometrically, and spectrally calibrated. Therefore, calibration procedures before and after launch must be carefully planned at the first stages of sensor design, in such a way that the payload data measured during the satellite lifetime remain valid. Since in push-broom imaging each line of data has its own independent geometry, each image contains many sets of independent data that can be used for accurate geometric calibration. In this paper, focusing on optical distortion, a step-by-step procedure for pre-launch geometric calibration of a high resolution push-broom payload is investigated, and the mathematical approaches for the calibration coefficients are presented. Finally, an error-budget analysis is performed to investigate the sensitivity of the methodology. The simulation results show that the nonlinear effects of distortion can be minimized and that the geometric position accuracy of this method on the image plane can be improved to the order of sub-pixels.
A portable short-wave infrared (SWIR) sensor system was developed aiming at vision enhancement through fog and smoke in support of emergency forces such as fire fighters or the police. In these environments, wavelengths in the SWIR regime have superior transmission and less backscatter in comparison to the visible spectral range received by the human eye or RGB cameras. On the emitter side, the active SWIR sensor system features a light-emitting diode (LED) array consisting of 55 SWIR LEDs with a total optical power output of 280 mW, emitting at wavelengths around λ = 1568 nm with a full width at half maximum (FWHM) of 137 nm, which is more eye-safe than visible-range illumination. The receiver consists of an InGaAs camera equipped with a lens whose field of view slightly exceeds the angle of radiation of the LED array. For convenient use as a portable device, a display for live video from the SWIR camera is embedded within the system. The dimensions of the system are 270 x 190 x 110 mm and the overall weight is 3470 g. The superior potential of SWIR over visible wavelengths in scattering environments is first estimated theoretically using Mie scattering theory, followed by an introduction of the SWIR sensor system including a detailed description of its assembly and a characterisation of the illuminator regarding optical power, spatial emission profile, heat dissipation, and spectral emission. The performance of the system is then estimated by design calculations based on the lidar equation. First field experiments using a fog machine show improved performance compared to a camera in the visible range (VIS), as a result of less backscatter from the illumination and lower extinction, thus producing a clearer image.
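A minimal sketch of the kind of lidar-equation power budget mentioned above, for a Lambertian target filling the illuminated field; the target reflectance, aperture area, range, and extinction coefficient are illustrative assumptions, not the system's measured values (only the 280 mW transmit power comes from the abstract).

```python
# Sketch: lidar-equation received power for an extended Lambertian
# target: P_rx = P_tx * rho/pi * A/R^2 * exp(-2 k R) * eta.
# All parameters except the 280 mW transmit power are illustrative.
import math

def lidar_power(p_tx, target_reflectance, aperture_m2, range_m,
                extinction_per_m, efficiency=1.0):
    """Received optical power [W]; exp(-2kR) covers the two-way path."""
    return (p_tx * target_reflectance / math.pi
            * aperture_m2 / range_m**2
            * math.exp(-2 * extinction_per_m * range_m) * efficiency)

# 280 mW illuminator, 30% target, 5 cm^2 aperture, 20 m in fog (k = 0.02 /m)
p_rx = lidar_power(0.28, 0.3, 5e-4, 20.0, 0.02)
```

Comparing `p_rx` against the camera's noise floor at candidate wavelengths is how such a design calculation quantifies the SWIR advantage in fog.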
In plasma-electrolytic synthesis, a new physical surface is formed, consisting of a metal oxide layer on the modified surface together with elements synthesized from the electrolyte plasma, whose principal sources are the components of the electrodes (the electrolyte and the metal surface). In this regard, the classification of plasma electro-discharge processes by analyzing optical and electrical sensor data using machine learning methods is a relevant task. It can be used in intelligent control algorithms for the sensor-layer operations and to conduct analytical and quantitative studies of the properties of the principal substances. The paper presents an experimental analysis of video and electrical parameters of the oxygen process, automated processing of the basic features of images of plasma-electrolyte discharges, and a segmentation approach for the electric-discharge machining. This approach can help in creating microsensor elements, materials, and systems for intelligent modeling and control of electrochemical methods for creating an electrolyte plasma and for the directed synthesis of substances. To test the performance of the proposed algorithm, the STANKIN database is used.
The article proposes a fusion technique and an algorithm for combining images recorded in the IR and visible spectra, aimed at the problem of product handling by robotic complexes in dust and fog. Primary data processing is based on multi-criteria processing with complex data analysis and cross-adjustment of the filtration coefficient for the different data types. The search for base points relies on reducing the range of clusters (image simplification) and on finding transition boundaries by determining the slope of the function in local areas. To evaluate the effectiveness of the approach, pairs of test images are used, obtained by sensors with resolutions of 1024×768 (8-bit, color, visible range) and 640×480 (8-bit, color, IR). Images of simple shapes serve as the analyzed objects.
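The two steps named in the abstract, cluster-range reduction and slope-based boundary detection, can be sketched in one dimension as follows. This is a minimal illustration under assumed parameters (number of quantization levels, slope threshold), not the authors' actual algorithm.

```python
def quantize(row, levels, maxval=255):
    """Reduce the range of intensity clusters (image simplification)
    by mapping pixel values onto `levels` discrete bins."""
    step = (maxval + 1) / levels
    return [int(v // step) for v in row]

def base_points(row, min_slope=1):
    """Find transition boundaries along a scan line: a base point is
    placed wherever the local slope (difference between neighbouring
    quantized pixels) meets or exceeds `min_slope`."""
    return [i for i in range(1, len(row)) if abs(row[i] - row[i - 1]) >= min_slope]

# Scan line with two sharp transitions (dark -> bright -> mid-gray).
scan = [10, 12, 11, 200, 205, 203, 40, 42]
q = quantize(scan, levels=8)   # coarse intensity clusters
points = base_points(q)        # indices of the transitions
```

Quantization first suppresses small intensity fluctuations, so that the slope test fires only at genuine cluster boundaries; the same base points found independently in the visible and IR images could then serve as correspondences for registration.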
Additional sources of information (such as depth sensors and thermal sensors) make it possible to extract more informative features and thus increase the reliability and stability of recognition. In this research, we focus on how to combine visible and thermal information through multi-level deep fusion. We present an algorithm that combines information from visible cameras and thermal sensors based on deep learning and a parameterized model of logarithmic image processing (PLIP). The proposed neural network is based on the autoencoder principle: an encoder extracts image features, and the fused image is obtained by a decoding network. The encoder consists of a convolutional layer and a dense block, which itself consists of convolutional layers. Images are fused in the decoder, with the fusion layer operating on the PLIP principle, which is close to the perception of the human visual system. The fusion approach is applied to a surveillance application. Experimental results show the effectiveness of the proposed algorithm.
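The PLIP arithmetic underlying the fusion layer can be shown without the learned network. In the logarithmic image processing model, a pixel value f (0..M−1) is represented by its graytone g = M − f, and addition is defined as g1 ⊕ g2 = g1 + g2 − g1·g2/γ, where γ is the model's parameter. The sketch below uses plain PLIP addition as a pixel-wise stand-in for the paper's learned fusion layer; the parameter values are assumptions.

```python
def plip_add(g1, g2, gamma=256.0):
    """PLIP addition of two graytone values. In the parameterized
    logarithmic image processing model the graytone of a pixel f
    (0..M-1) is g = M - f, and addition is defined as
    g1 (+) g2 = g1 + g2 - g1*g2/gamma."""
    return g1 + g2 - g1 * g2 / gamma

def fuse_plip(img_vis, img_ir, m=256.0, gamma=256.0):
    """Pixel-wise fusion of a visible and a thermal image using PLIP
    addition as the fusion rule (a simple stand-in for the learned
    fusion layer of the autoencoder described above)."""
    fused = []
    for row_v, row_t in zip(img_vis, img_ir):
        fused.append([m - plip_add(m - v, m - t, gamma)
                      for v, t in zip(row_v, row_t)])
    return fused

vis = [[200, 100], [50, 0]]
ir  = [[100, 100], [50, 255]]
out = fuse_plip(vis, ir)
```

Because the operation is logarithmic rather than linear, combining two graytones saturates gracefully instead of clipping, which is one reason PLIP is considered closer to human visual perception than ordinary pixel addition.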
We developed a mid-infrared passive spectroscopic imaging apparatus that uses an uncooled microbolometer array sensor as the light-receiving device. The apparatus makes it possible to acquire component information, such as that of glucose, emitted directly from the skin without a light source. However, it is difficult to obtain a background reference for spectroscopic measurements inside the human body. In this paper, we propose a background correction method for calculating the spectral characteristics from the acquired spectral emission intensity. The proposed method estimates the emitted light through a fitting calculation that uses Planck's law as a basis function.
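The core of such a fitting calculation is Planck's law evaluated over the measured band. The sketch below fits a temperature to a synthetic emission spectrum with a brute-force least-squares search; the wavelength band, search range, and synthetic data are illustrative assumptions, not the paper's procedure.

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck(lam, t):
    """Planck spectral radiance B(lambda, T) [W sr^-1 m^-3]
    at wavelength lam [m] and temperature t [K]."""
    return (2.0 * H * C ** 2 / lam ** 5) / (math.exp(H * C / (lam * KB * t)) - 1.0)

def fit_temperature(lams, radiances, t_range=(280.0, 320.0), step=0.1):
    """Estimate the emitter temperature by fitting Planck's law to a
    measured emission spectrum with a brute-force least-squares search
    (a simple stand-in for the paper's fitting calculation)."""
    best_t, best_err = None, float("inf")
    t = t_range[0]
    while t <= t_range[1]:
        err = sum((planck(l, t) - r) ** 2 for l, r in zip(lams, radiances))
        if err < best_err:
            best_t, best_err = t, err
        t += step
    return best_t

# Synthetic skin-temperature spectrum over the 8-14 um band of a bolometer.
lams = [l * 1e-6 for l in (8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0)]
meas = [planck(l, 305.0) for l in lams]
t_est = fit_temperature(lams, meas)
```

Once the best-fitting blackbody curve is known, it can serve as the background estimate, and the residual between the measured spectrum and the fitted curve carries the component-specific emission features of interest.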