Infrared imaging sensors can nowadays be regarded as a viable alternative to radar guidance: they are stealthier and more capable of naval target classification, decoy discrimination and aimpoint selection. In view of this, the design of naval platforms, their sensors, weapon systems and countermeasure deployment strategies needs to be adapted accordingly. For this, tooling capable of simulating engagements by IR-guided threats is essential. This paper presents a recently developed physics-based, GPU-accelerated model chain that generates realistic and radiometrically correct image sequences representative of those seen by an IR threat of varying levels of intelligence as it approaches a naval vessel. A description of the scientific and computational aspects of the model and its modules is provided, along with examples of modelling output.
Infrared imaging of the sea surface is used for many purposes, such as remote sensing of large oceanographic structures, environmental monitoring, surveillance applications and platform signature research. Many of these studies rely on determining the contrast of a target feature with its background and therefore benefit from accurately predicting the signature of the underlying sea surface background. Here we present a model that synthesizes infrared spectral images of sea surfaces. The model explicitly traces the behaviour of the sea wave structure and the propagation of light. To treat spatial and temporal correlations of the clutter self-consistently, geometrical realizations of sea surfaces are built from realistic sea wave spectra and their temporal behaviour is subsequently followed. A camera model and a ray tracer are used to determine which parts of the sea surface are observable by individual camera pixels. The atmospheric input elements of the model, namely sky dome, path radiance and transmission, are computed with MODTRAN for a chosen atmosphere.
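As an illustration of the first step of such a model, the sketch below builds a one-dimensional sea-surface realization from a Pierson-Moskowitz wave spectrum with random phases and evolves it with the deep-water dispersion relation. The spectrum choice, wind speed and grid are assumptions for illustration, not the paper's actual wave model.

```python
# Minimal sketch: synthesize a 1-D sea-surface elevation profile from a
# Pierson-Moskowitz wave spectrum and evolve it in time with the deep-water
# dispersion relation. Illustrative only; spectrum, wind speed and grid size
# are assumed values, not the paper's actual wave model.
import numpy as np

g = 9.81          # gravitational acceleration [m/s^2]
U10 = 7.0         # assumed wind speed at 10 m height [m/s]
N = 1024          # number of grid points
L = 200.0         # domain length [m]

k = np.fft.rfftfreq(N, d=L / N) * 2 * np.pi      # wavenumbers [rad/m]
k[0] = 1e-6                                      # avoid division by zero

# Pierson-Moskowitz elevation spectrum S(omega), converted to S(k) via
# omega = sqrt(g k) and d(omega)/dk = 0.5 sqrt(g/k).
omega = np.sqrt(g * k)
alpha, beta = 8.1e-3, 0.74
S_omega = alpha * g**2 / omega**5 * np.exp(-beta * (g / (U10 * omega))**4)
S_k = S_omega * 0.5 * np.sqrt(g / k)

# Random realization: amplitudes from the spectrum, uniform random phases.
rng = np.random.default_rng(0)
dk = k[1] - k[0]
amp = np.sqrt(2 * S_k * dk)
phase = rng.uniform(0, 2 * np.pi, size=k.size)

def surface(t):
    """Sea-surface elevation at time t [s] on the x-grid."""
    spec = amp * np.exp(1j * (phase - omega * t)) * N / 2
    return np.fft.irfft(spec, n=N)

eta0, eta1 = surface(0.0), surface(1.0)
print("rms elevation: %.2f m, change at x=0 after 1 s: %.3f m"
      % (eta0.std(), eta1[0] - eta0[0]))
```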
Most models that predict the infrared signature of an object are based on steady-state equilibrium conditions and do not model the dynamic nature of the real world. To gain more understanding of dynamic infrared signatures, several outdoor experiments were performed using a CUBI and a small vessel as objects. Dynamic changes were (intentionally) made to the object, while the facet temperatures, the meteorological parameters and the infrared signature were being monitored. The influence of environmental parameters on the dynamic infrared signature of an object is discussed in this paper. A first attempt to model the decrease in object temperature is made.
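As a minimal illustration of such a temperature-decrease model, the sketch below uses first-order (Newtonian) cooling of a facet towards ambient temperature; the initial temperature, ambient temperature and time constant are assumed values, not fitted CUBI parameters.

```python
# Minimal sketch: first-order (Newtonian) cooling of a facet towards the
# ambient temperature, the simplest form of a temperature-decrease model.
# The time constant tau is an assumed value, not a fitted CUBI parameter.
import numpy as np

def facet_temperature(t, T0, T_ambient, tau):
    """Facet temperature [deg C] at time t [s] after the heat input stops."""
    return T_ambient + (T0 - T_ambient) * np.exp(-t / tau)

t = np.arange(0, 3600, 60)                 # one hour, 1-minute steps
T = facet_temperature(t, T0=45.0, T_ambient=20.0, tau=900.0)
print("temperature after 30 min: %.1f deg C" % T[30])
```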
A CUBI, a simple geometric metal test object, is placed in an outdoor environment to monitor its infrared signature. The (daily) temperature evolution of the individual facets is monitored as a function of environmental parameters, such as solar irradiance and ambient temperature. This provides insight into the parameters that have the strongest effect on the thermal signature of the CUBI. The CUBI is also imaged by infrared cameras, and these recordings are used to estimate the temperature of the CUBI. The recorded images are also used to provide insight into the amount of air turbulence generated by the hot CUBI facets. This amount of turbulence is compared with the ambient turbulence as calculated by standard bulk theories.
For users of Electro-Optical (EO) sensors at sea, knowledge of their resolution is of key operational importance for predicting the obtainable classification ranges. Small targets may be located at ranges of 20 km and more, and present-day sensor pixel sizes may be as small as 10 μrad. In this type of scenario, sensor resolution will be limited by blur generated by atmospheric turbulence, which can easily exceed 30 μrad (at 20 km range). Predictions of the blur size are generally based upon the theory developed by Fried [1]. In this theory, the turbulence strength is characterized by the refractive-index structure parameter Cn2, for which data are assumed to be available from secondary instruments. The theory predicts the atmospheric Modulation Transfer Function (MTF), which can be incorporated into the total system MTF used in range performance predictions, as described by Holst [2]. Validation of blur predictions by measurements is a complex effort due to the rapid variations of the blur with time and the problems associated with the simultaneous acquisition of proper Cn2 data. During the FATMOSE trial, carried out over a range of 15.7 km in the False Bay near Simon’s Town (South Africa) from November 2009 to October 2010, these data were collected in a large variety of atmospheric conditions [3]. Instead of the atmospheric MTF, the horizontal and vertical line spread functions (LSF) were measured with a camera with 5 μrad resolution. Various methods for the determination of the LSF and the associated problems are discussed in the paper. The width of the LSF is directly related to the MTF via its Fourier transform. Cn2 data were collected with a standard BLS scintillometer over a nearby range. Additional Cn2 data were obtained by converting the scintillation data from the same camera and from a high-speed transmissometer collecting data over the same range. Comparisons between blur and beam wander predictions and measurements from the FATMOSE campaign are discussed in the paper, as well as their impact on the range performance of present-day sensors at sea.
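As an illustration of the Fried theory referred to above, the sketch below computes the long-exposure atmospheric MTF from an assumed Cn2 value over a homogeneous horizontal path (plane-wave approximation). The wavelength, range and Cn2 value are assumptions for illustration, not FATMOSE parameters.

```python
# Minimal sketch: long-exposure atmospheric MTF from the Fried coherence
# length r0, assuming a horizontal path with constant Cn^2 (plane wave).
# Wavelength, range and Cn^2 are assumed values.
import numpy as np

lam = 4.0e-6        # wavelength [m] (MWIR, assumed)
L = 20e3            # path length [m]
Cn2 = 1e-15         # refractive-index structure parameter [m^(-2/3)], assumed
k = 2 * np.pi / lam

# Fried parameter for a plane wave over a homogeneous path.
r0 = (0.423 * k**2 * Cn2 * L) ** (-3.0 / 5.0)

def mtf_atm(f_ang):
    """Long-exposure atmospheric MTF at angular frequency f_ang [cycles/rad]."""
    return np.exp(-3.44 * (lam * f_ang / r0) ** (5.0 / 3.0))

blur = lam / r0     # rough long-exposure blur angle [rad]
print("r0 = %.3f m, blur ~ %.1f microrad" % (r0, blur * 1e6))
print("MTF at 10 cycles/mrad: %.2f" % mtf_atm(10e3))
```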
For maritime situational awareness, it is important to identify currently observed ships as earlier encounters. For example, past location and behavior analysis are useful to determine whether a ship is of interest in cases of piracy and smuggling. It is beneficial to verify this with cameras at a distance, to avoid the cost of bringing one's own asset closer to the ship. The focus of this paper is on ship recognition from electro-optical imagery. The main contribution is an analysis of the effect of combining descriptor localization with compact representations. An evaluation is performed to assess the usefulness for persistent tracking, especially over larger intervals (i.e. re-identification of ships). From the evaluation on recorded imagery, it is estimated how well the system discriminates between different ships.
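As a rough illustration of descriptor-based ship matching of the kind evaluated here, the sketch below matches SIFT descriptors between two images with a ratio test, assuming OpenCV (cv2.SIFT_create, available from OpenCV 4.4). The file names are placeholders; the localization scheme and compact representations studied in the paper are not reproduced.

```python
# Minimal sketch: SIFT keypoint matching between two ship images with a
# ratio test. File names are placeholders; this is not the paper's pipeline.
import cv2

def match_ships(path_a, path_b, ratio=0.75):
    """Return the number of ratio-test matches between two ship images."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, desc_a = sift.detectAndCompute(img_a, None)
    _, desc_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)

# A higher match count suggests the two images show the same ship.
# print(match_ships("ship_sighting_1.png", "ship_sighting_2.png"))
```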
A simple model has been developed and implemented in Matlab code to predict the over-exposed pixel area of cameras caused by laser dazzling. Inputs of this model are the laser irradiance on the front optics of the camera, the Point Spread Function (PSF) of the optics used, the integration time of the camera, and camera sensor specifications such as pixel size, quantum efficiency and full-well capacity. Effects of the read-out circuit of the camera are not incorporated. The model was evaluated with laser dazzle experiments on CCD cameras using a 532 nm CW laser dazzler and shows good agreement. For relatively low laser irradiance, the model predicts the over-exposed laser spot area quite accurately and reproduces the cube-root dependency of spot diameter on laser irradiance caused by the PSF, as demonstrated before for IR cameras. For higher laser power levels, the laser-induced spot diameter increases more rapidly than predicted, which can probably be attributed to scatter effects in the camera. First attempts to model scatter contributions, using a simple scatter power function f(θ), show good agreement with experiments. This model thus provides a tool to assess the performance of observation sensor systems subjected to laser countermeasures.
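The sketch below illustrates the kind of saturation estimate such a model makes: spread the collected laser power over the focal plane with a PSF, convert it to photo-electrons per integration time, and count pixels exceeding the full-well capacity. The PSF shape (power-law wings) and all camera and laser numbers are assumed values, not those of the cameras used in the experiments.

```python
# Minimal sketch of a saturated-spot estimate: laser power spread by an
# assumed PSF, converted to photo-electrons per pixel per integration time,
# and compared with the full-well capacity. All numbers are assumptions.
import numpy as np

# Assumed camera and laser parameters
pixel_pitch = 7.4e-6        # [m]
t_int = 10e-3               # integration time [s]
qe = 0.5                    # quantum efficiency at 532 nm
full_well = 20e3            # [electrons]
aperture_d = 25e-3          # optics aperture diameter [m]
focal_length = 100e-3       # [m]
E_laser = 1e-3              # laser irradiance on the front optics [W/m^2]

h, c, lam = 6.626e-34, 3e8, 532e-9
P_in = E_laser * np.pi * (aperture_d / 2) ** 2        # power entering the optics [W]

# Assumed PSF: power-law wings ~ theta^-3, sampled on the pixel grid
# (theta = angle from the spot centre, clipped at half a pixel).
n = 512
ij = np.arange(n) - n // 2
xx, yy = np.meshgrid(ij, ij)
theta = np.hypot(xx, yy) * pixel_pitch / focal_length
theta = np.maximum(theta, 0.5 * pixel_pitch / focal_length)
psf = theta ** -3.0
psf /= psf.sum()                                      # fraction of power per pixel

photons = P_in * psf * t_int / (h * c / lam)          # photons per pixel
electrons = photons * qe
saturated = electrons > full_well
d_pix = 2 * np.sqrt(saturated.sum() / np.pi)          # equivalent spot diameter [pixels]
print("saturated pixels: %d, equivalent diameter: %.1f px" % (saturated.sum(), d_pix))
```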
Long-term tracking is important for maritime situational awareness, to identify currently observed ships as earlier encounters. In cases of, for example, piracy and smuggling, past location and behavior analysis are useful to determine whether a ship is of interest. Furthermore, it is beneficial to make this assessment with sensors (such as cameras) at a distance, to avoid the costs of bringing one's own asset closer to the ship for verification. The emphasis of the research presented in this paper is on the use of several feature extraction and matching methods for recognizing ships from electro-optical imagery within different categories of vessels. We compared central moments, SIFT with localization and SIFT with Fisher Vectors. The evaluation on imagery of ships gives an indication of the discriminative power between and within different categories of ships. This is used to assess the usefulness for persistent tracking, from short intervals (track improvement) to larger intervals (re-identifying ships). The result of this assessment on real data is used in a simulation environment to determine how track continuity is improved. The simulations showed that even limited recognition will improve tracking, connecting tracks both at short intervals and over several days.
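As an illustration of the simplest descriptor compared here, the sketch below computes scale-normalized central moments of (toy, synthetic) ship silhouettes and matches them with a Euclidean distance; SIFT localization and Fisher Vectors are not reproduced.

```python
# Minimal sketch: normalized central image moments as a compact descriptor,
# matched with a Euclidean distance. The silhouettes are toy assumptions.
import numpy as np

def central_moments(img, order=3):
    """Scale-normalized central moments eta_pq of a grayscale image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    feats = []
    for p in range(order + 1):
        for q in range(order + 1):
            if 2 <= p + q <= order:
                mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
                feats.append(mu / m00 ** (1 + (p + q) / 2.0))   # scale invariance
    return np.array(feats)

def match_score(img_a, img_b):
    """Smaller is more similar."""
    return np.linalg.norm(central_moments(img_a) - central_moments(img_b))

# Toy 'silhouettes': two similar blobs and one different blob.
a = np.zeros((64, 64)); a[28:36, 10:54] = 1.0
b = np.zeros((64, 64)); b[30:38, 12:56] = 1.0
c = np.zeros((64, 64)); c[10:54, 28:36] = 1.0
print("same ship?  score a-b: %.4f" % match_score(a, b))
print("other ship? score a-c: %.4f" % match_score(a, c))
```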
KEYWORDS: Sensors, Signal processing, Data communications, Image processing, Data processing, Telecommunications, Data fusion, Infrared sensors, 3D image processing, Unmanned aerial vehicles
In the Electro-Optical Sensors and Processing in Urban Operations (ESUO) study, we pave the way for the European Defence Agency (EDA) group of Electro-Optics experts (IAP03) towards a common understanding of the optimal distribution of processing functions between the different platforms. Combinations of local, distributed and centralized processing are proposed. In this way, processing functionality can be matched to the required power and the available communication data rates, to obtain the desired reaction times. In the study, three priority scenarios were defined: camp protection, patrol and house search. For these scenarios, present-day and future sensors and signal processing technologies were studied. A method for analyzing information quality in single- and multi-sensor systems has been applied. A method for estimating reaction times for transmission of data through the chain of command has been proposed and used. These methods are documented and can be used to modify the scenarios or be applied to other scenarios. Present-day data processing is organized mainly locally. There is very limited exchange of information with other platforms, and it takes place mainly at a high information level. The main issues that arose from the analysis of present-day systems and methodology are the slow reaction time due to the limited field of view of present-day sensors and the lack of robust automated processing. Efficient handover schemes between wide and narrow field-of-view sensors may, however, reduce the delay times. The main effort in the study was in forecasting the signal processing of EO sensors in the next ten to twenty years. Distributed processing is proposed between hand-held and vehicle-based sensors. This can be accompanied by cloud processing on board several vehicles. Additionally, to perform sensor fusion on sensor data originating from different platforms, and to make full use of UAV imagery, a combination of distributed and centralized processing is essential. Sensor fusion of heterogeneous sensors plays a central role in future processing. The changes these new technologies bring to future urban operations will be improved quality of information, shorter reaction times and a lower operator load.
There are many weapon systems in which a human operator acquires a target, tracks it and designates it. Optical countermeasures against this type of system deny the operator the possibility to fulfill this visual task. We describe the different effects that result from stimulation of the human visual system with high-intensity (visible) light, and the associated potential operational impact. Of practical use are flash blindness, where an intense flash of light produces a temporary “blind spot” in (part of) the visual field; flicker distraction, where strong intensity and/or color changes at an uncomfortable frequency are produced; and disability glare, where a source of light leads to contrast reduction. Hence there are three ways to disrupt the visual task of an operator with optical countermeasures such as flares or lasers, or a combination of these: by an intense flash of light, by an annoying light flicker, or by a glare source. A variety of flares for this purpose is now available or under development: high-intensity flash flares, continuously burning flares, and strobe flares with an oscillating intensity. The use of flare arrays seems particularly promising as an optical countermeasure. Lasers are particularly suited to interfere with human vision, because their intensity, color and size can easily be varied, but they have to be directed at the (human) target, and issues such as pointing and eye safety have to be taken into account. Here we discuss the design issues and the operational impact of optical countermeasures against human operators.
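As an illustration of the disability-glare mechanism, the sketch below applies the Stiles-Holladay approximation for veiling luminance and shows the resulting contrast reduction. The glare illuminance, angle and luminance values are assumed numbers, not measurements from this work.

```python
# Minimal sketch: Stiles-Holladay veiling luminance and the resulting Weber
# contrast reduction. All numbers are assumed values for illustration.
def veiling_luminance(E_glare_lux, theta_deg):
    """Stiles-Holladay veiling luminance [cd/m^2] for a glare source producing
    E_glare_lux at the eye, at theta_deg from the line of sight."""
    return 10.0 * E_glare_lux / theta_deg ** 2

def effective_contrast(L_target, L_background, L_veil):
    """Weber contrast after adding the veiling luminance to both terms."""
    return (L_target - L_background) / (L_background + L_veil)

L_t, L_b = 120.0, 100.0                 # target / background luminance [cd/m^2]
print("contrast without glare: %.3f" % effective_contrast(L_t, L_b, 0.0))
for E in (10.0, 100.0, 1000.0):         # glare illuminance at the eye [lux]
    Lv = veiling_luminance(E, theta_deg=2.0)
    print("E=%6.0f lx -> L_veil=%7.1f cd/m^2, contrast=%.3f"
          % (E, Lv, effective_contrast(L_t, L_b, Lv)))
```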
Seven countries within the European Defence Agency (EDA) framework are joining efforts in a four-year project (2009-2013) on Detection in Urban scenario using Combined Airborne imaging Sensors (DUCAS). Data have been collected in a joint field trial, with instrumentation for 3D mapping, hyperspectral and high-resolution imagery, together with in situ instrumentation for target, background and atmospheric characterization. Extensive analysis with respect to detection and classification has been performed. Progress in performance has been shown using combinations of hyperspectral and high-spatial-resolution sensors.
This paper studies change detection in LWIR (Long Wave Infrared) hyperspectral imagery. The goal is to improve target acquisition and situational awareness in urban areas with respect to conventional techniques. Hyperspectral and conventional broadband high-spatial-resolution data were collected during the DUCAS trials in Zeebrugge, Belgium, in June 2011. LWIR data were acquired using the ITRES Thermal Airborne Spectrographic Imager TASI-600, which operates in the spectral range of 8.0-11.5 μm (32-band configuration). Broadband data were acquired using two aeroplane-mounted FLIR SC7000 MWIR cameras. The images were acquired around noon. To limit the number of false alarms due to atmospheric changes, the time interval between the images is less than 2 hours. Local co-registration adjustment was applied to compensate for misregistration errors on the order of a few pixels. The targets analysed in this paper are different kinds of vehicles. The change detection algorithms that were applied and evaluated are Euclidean distance, Mahalanobis distance, Chronochrome (CC), Covariance Equalisation (CE), and Hyperbolic Anomalous Change Detection (HACD). Based on Receiver Operating Characteristics (ROC), we conclude that LWIR hyperspectral change detection has an advantage over MWIR broadband change detection. The best hyperspectral detector is HACD, because it is most robust to noise. The MWIR high-spatial-resolution broadband results show that applying a false-alarm reduction strategy based on spatial processing is beneficial.
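To illustrate one of the evaluated detectors, the sketch below implements a basic Chronochrome change statistic (linear prediction of the second cube from the first, followed by a Mahalanobis distance on the residual) on random data standing in for the TASI imagery; cube size and band count are assumptions.

```python
# Minimal sketch of the Chronochrome (CC) change detector applied to two
# co-registered hyperspectral cubes, using random data in place of real imagery.
import numpy as np

def chronochrome(cube_x, cube_y):
    """Per-pixel CC change statistic for two (rows, cols, bands) cubes."""
    rows, cols, bands = cube_x.shape
    X = cube_x.reshape(-1, bands).T           # bands x pixels
    Y = cube_y.reshape(-1, bands).T
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    n = Xc.shape[1]
    Cxx = Xc @ Xc.T / n
    Cyx = Yc @ Xc.T / n
    L = Cyx @ np.linalg.inv(Cxx)              # linear predictor of Y from X
    resid = Yc - L @ Xc                       # prediction residual per pixel
    Ce = resid @ resid.T / n                  # residual covariance
    stat = np.einsum('ij,ji->i', resid.T @ np.linalg.inv(Ce), resid)
    return stat.reshape(rows, cols)           # high values = likely change

# Toy example: 32 bands, a small inserted 'target' in the second cube.
rng = np.random.default_rng(1)
x = rng.normal(size=(64, 64, 32))
y = x + 0.1 * rng.normal(size=x.shape)
y[30:34, 30:34, :] += 3.0                     # the 'changed' pixels
stat = chronochrome(x, y)
print("mean stat: %.1f, stat at changed pixel: %.1f" % (stat.mean(), stat[31, 31]))
```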
The EDA project "Detection in Urban scenario using Combined Airborne imaging Sensors" (DUCAS) is in progress.
The aim of the project is to investigate the potential benefit of combined high spatial and spectral resolution airborne
imagery for several defense applications in the urban area. The project is taking advantage of the combined resources
from 7 contributing nations within the EDA framework. An extensive field trial has been carried out in the city of
Zeebrugge at the Belgian coast in June 2011. The Belgian armed forces contributed with platforms, weapons, personnel
(soldiers) and logistics for the trial. Ground truth measurements with respect to geometrical characteristics, optical
material properties and weather conditions were obtained in addition to hyperspectral, multispectral and high-spatial-resolution imagery.
High spectral/spatial resolution sensor data are used for detection, classification, identification and tracking.
During the FATMOSE trial, held over the False Bay (South Africa) from November 2009 until October 2010, day and
night (24/7) high resolution images were collected of point sources at a range of 15.7 km. Simultaneously, data were
collected on atmospheric parameters relevant for the turbulence conditions: air and sea temperature, wind speed,
relative humidity and the structure parameter for refractive index: Cn2. The data provide statistical information on the
mean value and the variance of the atmospheric point spread function and the associated modulation transfer function
during series of consecutive frames. This information allows the prediction of the range performance for a given sensor,
target and atmospheric condition, which is of great importance for the user of optical sensors in related operational areas
and for the developers of image processing algorithms. In addition the occurrence of "lucky shots" in series of frames is
investigated: occasional frames with locally small blur spots. The measured short-exposure blur and beam wander are compared with simultaneously collected scintillation data along the same path and with the Cn2 data from a
locally installed scintillometer. By using two vertically separated sources, the correlation is determined between the
beam wander in their images, providing information on the spatial extension of the atmospheric turbulence (eddy size).
Examples are shown of the appearance of the blur spot, including skewness and astigmatism effects, which manifest
themselves in the third moment of the spot and its distortion. An example is given of an experiment for determining the
range performance for a given camera and a bar target on an outgoing boat in the False Bay.
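The sketch below illustrates, on synthetic frames, the per-frame statistics discussed above: blur width and centroid (beam wander) of a point-source image, selection of the sharpest ("lucky") frames, and the correlation between the wander of two sources. Frame sizes, noise levels and the shared-wander model are assumptions for illustration.

```python
# Minimal sketch: per-frame blur width and centroid of a point-source spot,
# 'lucky' frame selection, and beam-wander correlation between two sources.
# Frames are synthetic Gaussian spots; all parameters are assumed values.
import numpy as np

rng = np.random.default_rng(2)

def spot_stats(frame):
    """Centroid (x, y) and rms radius of a background-subtracted spot image."""
    frame = np.clip(frame - np.median(frame), 0, None)
    y, x = np.mgrid[:frame.shape[0], :frame.shape[1]]
    s = frame.sum()
    xc, yc = (x * frame).sum() / s, (y * frame).sum() / s
    r2 = (((x - xc) ** 2 + (y - yc) ** 2) * frame).sum() / s
    return xc, yc, np.sqrt(r2)

def synth_frame(cx, cy, sigma, n=32):
    y, x = np.mgrid[:n, :n]
    img = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return img + 0.01 * rng.normal(size=img.shape)

# Two vertically separated sources seen through partly common turbulence.
common = rng.normal(0, 1.0, size=200)                  # shared wander component
wander1 = common + rng.normal(0, 0.5, size=200)
wander2 = common + rng.normal(0, 0.5, size=200)
sigmas = rng.uniform(1.0, 3.0, size=200)               # frame-to-frame blur

stats1 = np.array([spot_stats(synth_frame(16 + w, 16, s)) for w, s in zip(wander1, sigmas)])
lucky = stats1[:, 2] < np.percentile(stats1[:, 2], 10)  # 10% sharpest frames
corr = np.corrcoef(wander1, wander2)[0, 1]
print("mean blur: %.2f px, lucky-frame blur: %.2f px, wander correlation: %.2f"
      % (stats1[:, 2].mean(), stats1[lucky, 2].mean(), corr))
```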
Knowledge of the marine boundary layer is important for the prediction of the optical image quality obtained from long-range targets. One property of the boundary layer that can be studied rather easily by means of optical refraction measurements is the vertical temperature profile. This profile can be compared with the profile predicted by the generally accepted Monin-Obukhov (M-O) similarity theory, as applied in the EOSTAR model developed at TNO. This model also predicts the atmospheric turbulence profile, which can be validated by means of scintillation measurements. Along these lines we explored the data from the year-round FATMOSE experiment, arranged over the False Bay (South Africa). Because of the large amount of refraction and scintillation data, supported by extensive data from various local weather stations, we could select the conditions for which the M-O theory is valid and determine the particular conditions in which this theory fails. In the paper, model predictions (including Angle-of-Arrival calculations in non-homogeneous conditions along the 15.7 km path) and the associated refraction and scintillation measurements are shown for a representative variety of conditions.
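For reference, the sketch below evaluates a Monin-Obukhov temperature profile with standard Businger-Dyer stability functions, the type of profile against which the refraction data are compared. The surface parameters are assumed values and the formulation is the textbook one, not necessarily the EOSTAR implementation or the FATMOSE conditions.

```python
# Minimal sketch: Monin-Obukhov temperature profile with Businger-Dyer
# stability functions. theta_star, z0 and L are assumed values.
import numpy as np

KAPPA = 0.4   # von Karman constant

def psi_h(zeta):
    """Integrated stability function for heat (Businger-Dyer forms)."""
    zeta = np.asarray(zeta, dtype=float)
    psi = np.where(zeta >= 0, -5.0 * zeta, 0.0)               # stable branch
    x2 = np.sqrt(np.maximum(1.0 - 16.0 * zeta, 0.0))          # (1 - 16 z/L)^(1/2)
    psi = np.where(zeta < 0, 2.0 * np.log((1.0 + x2) / 2.0), psi)   # unstable branch
    return psi

def air_temperature(z, T_surface, theta_star, z0=1e-4, L_mo=-20.0):
    """Air temperature [deg C] at height z [m] above the sea surface."""
    return T_surface + (theta_star / KAPPA) * (
        np.log(z / z0) - psi_h(z / L_mo) + psi_h(z0 / L_mo))

z = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
T = air_temperature(z, T_surface=16.0, theta_star=-0.08)   # unstable (warm sea)
for zi, Ti in zip(z, T):
    print("z = %5.1f m  T = %.2f deg C" % (zi, Ti))
```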
Potential asymmetric threats at short range in complex environments need to be identified quickly during coastal operations. Laser range profiling is a technology that has the potential to shorten the OODA loop (Observe, Orient, Decide, Act) by performing automatic characterisation of targets at large distances. The advantages of non-cooperative target recognition with range profiles are: (a) a relatively short time on target is required, (b) the detection range is longer than for passive observation technologies such as IRST, and (c) characterisation of range profiles is possible at any aspect angle. However, the shape of a range profile depends strongly on the aspect angle. This means that a large data set of reference profiles of all expected targets, on a very dense aspect-angle grid, is necessary. Analysis of laser range profiles can be done by comparing the measured profile with a database of laser range profiles obtained from 3D models of possible targets. An alternative is the use of a profile database from one or several measurement campaigns. A prerequisite for this is the availability of enough measured profiles of the appropriate targets, for many aspect angles. Comparison of measured laser range profiles with a reference database can be performed using, e.g., formal statistical correlation techniques or histogram dissimilarity techniques.
In this work, a field trial has been conducted to validate the concept of identification by using a laser range profiling
system with a high bandwidth receiver and short laser pulses. The field trial aimed at characterization of sea-surface
targets in a coastal/harbour environment. The targets ranged from pleasure boats like sailing boats, jet skis, and speed
boats to professional vessels like barges, cabin boats, and military vessels, all ranging from 3 to 30 meters in length. We
focus on (a) the use of a reference database generated via 3D target models, and (b) the use of a reference database of
measured laser range profiles. A variety of histogram dissimilarity measures was examined in order to enable fast and
reliable classification algorithms.
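The sketch below illustrates range-profile matching with two common histogram dissimilarity measures (chi-square and Bhattacharyya) and a nearest-neighbour decision. The reference profiles and the measured profile are toy vectors, not data from the trial or from 3D models.

```python
# Minimal sketch: comparing a range profile with a small reference database
# using chi-square and Bhattacharyya dissimilarities. Profiles are toy data.
import numpy as np

def normalize(profile):
    p = np.asarray(profile, dtype=float)
    return p / p.sum()

def chi_square(p, q):
    eps = 1e-12
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def bhattacharyya(p, q):
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

def classify(profile, reference_db, measure=chi_square):
    """Return the reference label with the smallest dissimilarity."""
    p = normalize(profile)
    scores = {label: measure(p, normalize(ref)) for label, ref in reference_db.items()}
    return min(scores, key=scores.get), scores

# Toy reference database: two 'targets' with different range extent.
reference_db = {
    "small boat": np.array([0, 2, 8, 3, 1, 0, 0, 0, 0, 0]),
    "barge":      np.array([0, 1, 4, 6, 6, 5, 4, 2, 1, 0]),
}
measured = np.array([0, 1, 7, 4, 1, 0, 0, 0, 0, 0])
label, scores = classify(measured, reference_db)
print("best match:", label, {k: round(v, 3) for k, v in scores.items()})
```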
The FATMOSE trial (FAlse-bay ATMOSpheric Experiment) running over a period from November 2009 to July 2010,
was a continuation of the cooperation between TNO and IMT on atmospheric propagation and point target detection and
identification in a maritime environment. Instruments were installed for measuring scintillation, blurring and refraction
effects over a 15.7 km path over sea. Simultaneously, a set of instruments was installed on a mid-path lighthouse for
collecting local meteorological data, including scintillation, sea surface temperature and visibility. The measurements
covered summer and winter conditions with a prevailing high wind speed from the South East, bringing in maritime air
masses. The weather conditions included variations in the Air-Sea Temperature Difference (ASTD), that may affect the
vertical temperature gradient in the atmospheric boundary layer, causing refraction effects in the light path. This was measured with a theodolite camera, providing absolute Angles of Arrival (AOA). Blur data were collected with a high-resolution camera system with 10-bit dynamic range. Specially designed image analysis software allows determination
of the atmospheric blur, while simultaneously providing information on the Scintillation Index (S.I.). This S.I. was also
measured by using the Multiband Spectral Radiometer Transmissometer (MSRT). The ratio of the transmission levels of
this instrument contains information on the size distribution of the aerosols along the path. In the paper, experimental
details on the set-up and the instrumentation are given as well as the methods of analysis. Preliminary results are shown,
including a comparison of measured blur and scintillation data with Cn2 data from the scintillometer, correlation between AOA and ASTD, and a comparison of transmission data with data from the visibility meter. Blur and scintillation data are compared with predictions from standard turbulence models, using Cn2. In future studies the data will be used
for validation of propagation models such as EOSTAR.
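As an illustration of the scintillation processing mentioned above, the sketch below computes the Scintillation Index from an intensity time series and converts it to Cn2 with the weak-fluctuation plane-wave (Rytov) relation. The intensity series is synthetic and the wavelength is an assumed value.

```python
# Minimal sketch: Scintillation Index (S.I.) from a point-source intensity
# time series, converted to Cn^2 with the weak-turbulence plane-wave (Rytov)
# relation. The intensity series is synthetic; wavelength is an assumed value.
import numpy as np

lam = 0.85e-6                      # wavelength [m], assumed
L = 15.7e3                         # path length [m]
k = 2 * np.pi / lam

rng = np.random.default_rng(3)
I = rng.lognormal(mean=0.0, sigma=0.3, size=5000)   # synthetic intensity samples

si = I.var() / I.mean() ** 2                        # S.I. = <I^2>/<I>^2 - 1
cn2 = si / (1.23 * k ** (7.0 / 6.0) * L ** (11.0 / 6.0))
print("S.I. = %.3f  ->  Cn2 ~ %.2e m^(-2/3) (weak-fluctuation estimate)" % (si, cn2))
```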
Small maritime targets, e.g., periscope tubes, jet skis, swimmers and small boats, are potential threats for naval ships under many conditions, but are difficult to detect with current radar systems due to their limited radar cross section and the presence of sea clutter. On the other hand, applications of lidar systems have shown that the reflections from small targets are significantly stronger than reflections from the sea surface. As a result, dedicated lidar systems are potential tools for the detection of small maritime targets. A geometric approach is used to compare the diffuse reflection properties of cylinders and spheres with flat surfaces, which is used to estimate the maximum detectable range of such objects for a given lidar system. Experimental results using lasers operating at 1.06 μm and 1.57 μm confirm this theory and are discussed. Small buoys near Scheveningen harbor could be detected in adverse weather at ranges of more than 9 km. Extrapolation of these results indicates that small targets can be detected out to ranges of approximately 20 km.
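The sketch below illustrates the geometric comparison on which such a range estimate can be based: Lambertian backscatter factors for a flat plate, a cylinder and a sphere of equal projected area (1, π/4 and 2/3 respectively), inserted into a simple lidar range equation. All system parameters (pulse power, divergence, receiver area, extinction, detection threshold) are assumed values, not those of the 1.06 μm and 1.57 μm systems used in the experiments.

```python
# Minimal sketch: backscattered power from Lambertian flat, cylindrical and
# spherical targets of equal projected area in a simple lidar range equation.
# All system parameters are assumed values.
import numpy as np

# Retro-reflection factors relative to a flat Lambertian plate of equal
# projected area (flat = 1, cylinder = pi/4, sphere = 2/3).
GEOMETRY = {"flat plate": 1.0, "cylinder": np.pi / 4.0, "sphere": 2.0 / 3.0}

def received_power(R, A_proj, shape, P_t=1e6, rho=0.3, A_rx=8e-3, alpha=0.2e-3):
    """Peak received power [W] from a diffuse target at range R [m].

    Assumes the target is smaller than the beam footprint, so the irradiance
    on the target falls off as 1/R^2 as well.
    """
    T2 = np.exp(-2.0 * alpha * R)                 # two-way atmospheric transmission
    beam_solid_angle = np.pi * (0.5e-3 / 2) ** 2  # assumed 0.5 mrad divergence
    E_target = P_t / (beam_solid_angle * R ** 2)  # irradiance on target [W/m^2]
    P_refl = E_target * A_proj * rho * GEOMETRY[shape]
    return P_refl / np.pi * (A_rx / R ** 2) * T2  # Lambertian spread to receiver

P_min = 1e-8                                      # assumed detection threshold [W]
A_proj = 0.5                                      # projected target area [m^2]
R = np.linspace(1e3, 30e3, 2000)
for shape in GEOMETRY:
    P = received_power(R, A_proj, shape)
    detectable = R[P > P_min]
    print("%-11s max range ~ %.1f km" % (shape, detectable.max() / 1e3))
```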
In this submission, we report on the successful field demonstration of the LOTUS landmine detection system that took place in August 2002 near the village of Vidovice, in the Northeast of Bosnia and Herzegovina.
KEYWORDS: Sensors, Sensor fusion, Land mines, General packet radio service, Probability theory, Data fusion, Metals, Infrared radiation, Infrared sensors, Infrared imaging
In this paper we introduce the concept of depth fusion for anti-personnel landmine detection. Depth fusion is an extension of common sensor-fusion techniques for landmine detection. The difference lies in the fact that fusion of sensor data is performed in different physical depth layers. In order to do so, it requires a sensor that provides depth information for object detections. Our ground-penetrating radar (GPR) fulfills this requirement. Depth fusion is then taken as the combination of the sensor-fusion output of all layers. The underlying idea is that sensor fusion for the surface layer should weigh the sensors differently than sensor fusion in the deep layers, because of the apparent sensor characteristics. For example, a thermal IR (TIR) sensor hardly adds information to the sensor fusion in the deep layers. Furthermore, GPR has difficulty suppressing clutter in the surface layer. As such, the surface fusion should emphasize the TIR sensor, whereas sensor fusion in the deep layers should give a higher weight to the GPR. This a priori information can be made explicit by choosing a depth-fusion approach. Experimental results from measurements at the TNO-FEL test facility are presented that validate our depth-fusion concept.
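The sketch below illustrates the depth-fusion idea in its simplest form: a weighted combination of sensor confidences with depth-dependent weights, followed by a combination over layers. The weights, layer definitions and confidence values are illustrative assumptions, not the parameters of the actual fusion system.

```python
# Minimal sketch: per-layer weighted combination of sensor confidences with
# depth-dependent weights, combined over layers with a noisy-OR. All weights
# and confidence values are illustrative assumptions.
import numpy as np

LAYERS = ["surface", "shallow", "deep"]
SENSORS = ["TIR", "MD", "GPR"]

# Assumed depth-dependent sensor weights: TIR dominates at the surface,
# GPR dominates in the deeper layers (weights sum to 1 per layer).
WEIGHTS = {
    "surface": {"TIR": 0.5, "MD": 0.3, "GPR": 0.2},
    "shallow": {"TIR": 0.2, "MD": 0.4, "GPR": 0.4},
    "deep":    {"TIR": 0.0, "MD": 0.4, "GPR": 0.6},
}

def fuse_layer(layer, confidences):
    """Weighted sum of sensor confidences (0..1) for one depth layer."""
    return sum(WEIGHTS[layer][s] * confidences.get(s, 0.0) for s in SENSORS)

def depth_fusion(per_layer_confidences):
    """Combine layer scores with a noisy-OR: object present in any layer."""
    layer_scores = {l: fuse_layer(l, per_layer_confidences.get(l, {})) for l in LAYERS}
    p_detect = 1.0 - np.prod([1.0 - s for s in layer_scores.values()])
    return p_detect, layer_scores

# Example: GPR sees something deep, TIR sees little at the surface.
obs = {"surface": {"TIR": 0.1, "MD": 0.2, "GPR": 0.3},
       "deep":    {"TIR": 0.0, "MD": 0.3, "GPR": 0.8}}
p, per_layer = depth_fusion(obs)
print("fused detection confidence: %.2f" % p, per_layer)
```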