Object tracking is a key issue, directly or indirectly, in many military applications such as visual surveillance, automatic visual closed-loop control of UAVs (unmanned aerial vehicles) and PTZ cameras, or crowd evaluation aimed at detecting or analysing an emerging riot. High robustness is of course the most important property of the underlying tracker, but it becomes significantly harder to achieve the lower the permitted calculation time. In the UAV application introduced in this paper the tracker has to be extraordinarily quick.
To jointly optimize calculation time and robustness as far as possible, a highly efficient tracking procedure is presented for the application fields mentioned above. It relies on well-known color histograms but uses them in a novel manner: it is based on the calculation of a color weighting vector representing the significance of the object's colors, a kind of color fingerprint of the object. Several examples from the military applications mentioned above demonstrate the practical relevance and performance of the presented tracking approach.
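The abstract does not give the weighting formula, so the following is only a minimal sketch of the idea under one plausible assumption: the weight of a color bin is the fraction of its mass that belongs to the object rather than the surrounding background, and back-projecting these weights onto a new frame yields a likelihood map for tracking. All function names and the formula are illustrative, not taken from the paper.

```python
import numpy as np

def color_weight_vector(obj_pixels, bg_pixels, bins=16):
    # Histogram the gray/color values of object and nearby background pixels.
    obj_hist, edges = np.histogram(obj_pixels, bins=bins, range=(0, 256))
    bg_hist, _ = np.histogram(bg_pixels, bins=bins, range=(0, 256))
    total = obj_hist + bg_hist
    # Hypothetical weighting: fraction of each bin's mass that is object.
    weights = np.where(total > 0, obj_hist / np.maximum(total, 1), 0.0)
    return weights, edges

def back_project(frame, weights, edges):
    # Replace every pixel by the weight of its color bin -> likelihood map.
    idx = np.clip(np.digitize(frame, edges) - 1, 0, len(weights) - 1)
    return weights[idx]
```

Colors frequent in the object but rare in its surroundings get weights near 1, so the back-projected map highlights exactly the "fingerprint" colors, which is cheap enough to evaluate at frame rate.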
In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was developed for the camouflage assessment of objects in multispectral image sequences (see the contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises semi-automatic marking of target objects (ground-truth generation), including their propagation over the image sequence, evaluation via user-defined feature extractors, and methods to assess an object's movement conspicuity.
As the fifth part of an annual series at the SPIE conference in Orlando, this paper presents the enhancements of the past year and addresses the camouflage assessment of static and moving objects in multispectral image data that may exhibit noise or image artefacts. The presented methods explore the correlations between image processing and camouflage assessment. A novel algorithm based on template matching is presented to assess the structural inconspicuity of an object objectively and quantitatively. Its results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement on the camouflage effect in different environments. As the results show, the presented methods provide a significant benefit in the field of camouflage assessment.
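The template matching algorithm itself is not detailed in the abstract. One plausible reading, sketched below under that assumption, is to correlate an object template against surrounding background patches with zero-mean normalized cross-correlation: a high best match means the object's structure blends in. The function names and the scoring are hypothetical.

```python
import numpy as np

def ncc(a, b):
    # Zero-mean normalized cross-correlation of two equal-sized patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def structural_inconspicuity(template, background, step=4):
    # Best correlation of the object template against background patches:
    # near 1 -> the object's structure blends in, low -> it stands out.
    th, tw = template.shape
    best = -1.0
    for y in range(0, background.shape[0] - th + 1, step):
        for x in range(0, background.shape[1] - tw + 1, step):
            best = max(best, ncc(template, background[y:y + th, x:x + tw]))
    return best
```

Because the correlation is zero-mean and normalized, the score reacts to structure rather than to absolute brightness, which matches the goal of an objective, quantitative structural measure.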
KEYWORDS: Camouflage, Image sensors, Image filtering, Image transmission, Signal to noise ratio, Image enhancement, Cameras, Detection and tracking algorithms, Digital filtering, Computing systems
In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was developed for the camouflage assessment of objects in multispectral image sequences (see the contributions to SPIE 2007, SPIE 2008 and SPIE 2009 [1], [2], [3]). It comprises semi-automatic marking of target objects (ground-truth generation), including their propagation over the image sequence, and evaluation via user-defined feature extractors. The conspicuity of camouflaged objects due to their movement can be assessed with a purpose-built processing method named the MTI snail track algorithm.
This paper presents the enhancements of the past year and addresses procedures that assist the camouflage assessment of moving objects in image data with strong noise or image artefacts. This significantly extends the evaluation methods to a broader application range: for example, some noisy infrared image data can be evaluated for the first time by applying the presented methods, which explore the correlations between camouflage assessment, MTI (moving target indication) and dedicated noise filtering.
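The dedicated noise filtering is not specified in the abstract. As an illustration of how filtering and MTI interact, the sketch below uses a temporal median as a hypothetical stand-in: impulsive sensor noise is suppressed before the newest frame is differenced against the background estimate.

```python
import numpy as np

def temporal_median(frames):
    # Pixelwise median over a short frame window: suppresses impulsive
    # sensor noise while keeping the static background.
    return np.median(np.stack(frames), axis=0)

def mti_mask(frames, thresh=20.0):
    # Hypothetical MTI step: difference the newest frame against the
    # temporal median of the preceding window (camera assumed compensated).
    background = temporal_median(frames[:-1])
    return np.abs(frames[-1].astype(float) - background) > thresh
```

Without the median step, a single noise spike would survive into the difference image and be reported as a mover; with it, only genuinely changed pixels remain, which is the kind of correlation between noise filtering and MTI the paper investigates.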
In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was developed for the camouflage assessment of objects in image sequences (see the contributions to SPIE 2007 and SPIE 2008 [1], [2]). It works with visual-optical, infrared and SAR image sequences. The system comprises a semi-automatic annotation functionality for marking target objects (ground-truth generation), including propagation of those markings over the image sequence for static as well as moving scene objects, where the recording camera may itself be static or moving. The marked image regions are evaluated by applying user-defined feature extractors, which can easily be defined and integrated into the system via a generic software interface.
This article presents further systematic enhancements made in the past year and particularly addresses the detection of moving vehicles by recent image exploitation methods for objective camouflage assessment in such cases. As a main topic, the loop was closed between the two natural opposites of reconnaissance and camouflage by incorporating ATD (Automatic Target Detection) algorithms into the computer-aided camouflage assessment. Since object (and sensor) movement is an important feature for many applications, different image-based MTI (Moving Target Indication) algorithms were included in the CART system; they rely on changes in the image plane from one image to the next (after camera movements have automatically been compensated). Additionally, the MTI outputs over time are combined by a method we call the "snail track" algorithm. The results show that its output provides a valuable measurement of the conspicuity of moving objects and is therefore an ideal component of the camouflage assessment. It is shown that image-based MTI improvements lead to improvements in the camouflage assessment process.
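The internals of the snail track algorithm are not disclosed in the abstract. The sketch below is one plausible reading under the assumption that per-frame MTI masks are accumulated into a slowly fading trail image whose extent measures movement conspicuity; the decay factor and the scalar score are illustrative, not taken from the papers.

```python
import numpy as np

def snail_track(mti_masks, decay=0.9):
    # Accumulate per-frame MTI masks into a slowly fading trail image;
    # a fast, conspicuous mover leaves a long bright track.
    track = np.zeros(mti_masks[0].shape, dtype=float)
    for mask in mti_masks:
        track = np.maximum(track * decay, mask.astype(float))
    return track

def movement_conspicuity(track, thresh=0.5):
    # Hypothetical scalar score: area of the still-visible trail.
    return int((track > thresh).sum())
```

A target that traverses the image leaves a larger above-threshold trail than one that idles in place, so the score grows with exactly the kind of movement that makes a camouflaged object conspicuous.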
In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was developed for the camouflage assessment of objects in image sequences. Since camouflage success is directly correlated with the detection range of target objects, the system supports the evaluation of image sequences with moving cameras. The main features of CART comprise a semi-automatic annotation functionality for marking target objects (ground-truth generation), including propagation of those markings over the image sequence, as well as real-time evaluation of the marked image regions by applying individually selected feature extractors. The system works with visual-optical, infrared and SAR image data, which can be used separately or simultaneously. The software is designed as a generic integration platform, which can be extended to further sensors, measurements, feature extractors, methods, and tasks.
Besides the demand for moving cameras, it is important to also support moving objects in the scene (CACAMO - Computer Aided Camouflage Assessment of Moving Objects). Since moving objects are more likely to be discovered than static ones, the state of movement is obviously a significant factor when designing camouflage methods and should explicitly be incorporated into the assessment process. For this, the software provides auto-annotation tools as well as a specific movement measurement component in order to capture the conspicuity depending on different movement states. The auto-annotation assistance for moving objects works with the aid of tracking algorithms incorporating color information, optical flow and change detection using Kalman and particle filters. The challenge is to handle partially or fully camouflaged objects, a circumstance which naturally hinders computer vision algorithms.
Current army operations demand continuous improvements of camouflage and concealment. This requires systematic, objective assessment methods, which is a very time-consuming task with present software systems; the interactive composition of ground truth is also cumbersome. We present a system for camouflage assessment using image sequences in real-time. The image sequences may stem from any imaging sensor, e.g. visual-optical (VIS), infrared (IR), and SAR. Flexible navigation in image sequences and a semi-automatic generation of ground truth, along with several functional enhancements, form the base of the system, whereas the main issue is the camouflage assessment function with its generic interface for including individually defined feature extractors. For semi-automatic annotation and ground-truth construction the user defines interesting areas with polygons in some starting frame. After that, the system estimates the transformation parameters for successive images in real-time and applies them to the previously defined polygons in order to warp the ground-truth polygons onto the new frames. Various classes of polygons (target 1 … n, background area 1 … m, etc.) can be defined and colorized. Defined ground-truth areas can be evaluated in real-time by applying individually selected feature extractors, while the results are displayed graphically and as a chart. For the evaluation, new measurements can be integrated by the user and applied via a generic interface. The system is built as a generic integration platform offering plenty of extension potential in order to further enable or improve camouflage assessment methods. Thanks to the generic interfaces, ATR and ATD methods for automatic and semi-automatic camouflage assessment can also be integrated.
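As a minimal sketch of the ground-truth propagation step described above, assuming the estimated transformation is the usual 3x3 projective matrix, warping a polygon onto the next frame amounts to applying that matrix to its vertices in homogeneous coordinates:

```python
import numpy as np

def warp_polygon(poly, H):
    # Apply the estimated 3x3 projective transform H to the polygon
    # vertices (homogeneous coordinates) to propagate the ground truth.
    pts = np.hstack([poly, np.ones((len(poly), 1))])  # (x, y) -> (x, y, 1)
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # dehomogenize
```

Only the vertices need to be transformed per frame, not the pixels, which is what makes the real-time propagation of many polygon classes feasible.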
A dual-band infrared camera system based on a dual-band quantum well infrared photodetector (QWIP) has been developed for acquiring images in both the mid-wavelength (MWIR) and long-wavelength (LWIR) infrared spectral bands. The system delivers exactly pixel-registered, simultaneously acquired images. It has the advantage that appropriate signal and image processing can exploit the differences in the characteristics of those bands; thus, the camera reveals more information than a single-band camera. It helps to distinguish between targets and decoys and can defeat many IR countermeasures such as smoke, camouflage and flares. Furthermore, the system permits identifying materials (e.g. glass, asphalt, slate, etc.), distinguishing sun reflections from hot objects, and visualizing hot exhaust gases.
Furthermore, dedicated software for real-time processing and exploitation extends the application domain of the camera system. One component corrects the images and allows for overlays in complementary colors such that differences become apparent. Another software component aims at a robust estimation of the transformation parameters of consecutive images in the image stream for image registration purposes. This feature stabilizes the images even under rugged conditions and allows the image stream to be automatically stitched into large mosaic images. Mosaic images facilitate the inspection of large objects and scenarios and give human observers a better overview. In addition, image-based MTI (moving target indication), also for the case of a moving camera, is under development. This component aims at surveillance applications and could also be used for the camouflage assessment of moving targets.
Technological progress in the fields of computing hardware and efficient algorithms makes it possible to set up real-time exploitation systems for a huge number of applications (e.g. assessment of camouflage effectiveness, various surveillance applications, UAVs, as well as image sequence data reduction, indexing, archiving, and retrieval). The system in question has been developed to cope with highly dynamic situations. Such situations may be characterized by moving targets acquired by a static, trembling, or moving sensor system. The image sequences may stem from a visual-optical (VIS) or a forward looking infrared (FLIR) sensor. Except for wide-angle lenses (due to their optical distortions), neither sensor nor calibration parameters have to be known to the automatic exploitation system, and no human interaction is required. The algorithmic approach digitally stabilizes the movement of the sensor system. To accomplish this task the algorithm extracts 40-60 tie points from the static, non-moving background, then robustly matches the tie point constellations frame to frame in order to calculate the 8 parameters of a projective mapping. This is the basis for a form of background stabilization. The difference image of two consecutive, matched image frames reveals the moving targets. After the segmentation of the (moving) target signatures, additionally attached tracking and classification components have been tested.
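The robust matching itself is beyond a short sketch, but the 8-parameter projective fit from already-matched tie points can be illustrated with a plain least-squares (DLT-style) solve that fixes h33 = 1. This is a simplified stand-in for the system's method, with hypothetical names; the real system would additionally reject outlier matches.

```python
import numpy as np

def fit_projective(src, dst):
    # Least-squares fit of the 8 parameters of a projective mapping
    # (h33 fixed to 1) from already-matched tie points; needs >= 4 pairs.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With the previous frame warped by the fitted mapping, the per-pixel difference against the current frame cancels the camera motion and leaves the movers, as described above.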
An international multisensor measurement campaign called "MUSTAFA" yielded many infrared image sequences of differently camouflaged targets. The image sequences were acquired by a helicopter sensor platform approaching the targets. The effectiveness of the various camouflage methods still has to be evaluated. Apart from observer experiments, FGAN/FOM and IITB pursue an ATR (Automatic Target Recognition) based method for the automatic evaluation of the camouflage variants. The ATR approach basically consists of the detection component of an ATR for reconnaissance purposes in forward-looking infrared (FLIR) image sequences. Given some flight and sensor parameters, the algorithm reports detection hypotheses together with a measure of confidence and the detection range for each hypothesis. Proceeding on the assumption that better camouflage yields later automatic detection of the corresponding target in approaching image sequences, the detection range output of the algorithm could be an additional criterion for camouflage evaluation. The paper presents some aspects of the reconnaissance detection algorithm, detection ranges for exemplary image sequences of the MUSTAFA data set, and future options, e.g. real-time operation in the sensor platform during the measurement campaign.
KEYWORDS: Image fusion, Data fusion, Geographic information systems, Detection and tracking algorithms, Sensors, Automatic target recognition, Target detection, Image registration, 3D modeling, Algorithm development
It is well known that background characteristics have an impact on target signature characteristics. There are many types of backgrounds that are relevant for military applications, e.g. wood, grass, urban, or water areas. Current algorithms for automatic target detection and recognition (ATR) usually do not distinguish between these types of background; at most they have some sort of adaptive behavior. An important first step in our approach is the automatic geo-coding of the images. An accurate geo-reference is necessary for using a GIS to define Regions of Expectation (ROEs, i.e. image background regions with geographical semantics and known signature characteristics) in the image and for fusing the (multiple) sensor data. These ROEs could be road surfaces, forest areas or forest edge areas, water areas, and others. The knowledge about the background characteristics allows the development of a method base of dedicated algorithms. According to the sensor and the defined ROEs, the most suitable algorithms can be selected from the method base and applied during operation. The detection and recognition results of the various algorithms can be fused thanks to the registered sensor data.
A diagnostic method to detect differences between diseased and normal tissue from bladder carcinoma by FTIR microspectroscopy is described. Regions of interest on 10 micrometer thin tissue sections were mapped in transmission mode. After IR mapping, the samples were analyzed with common pathological techniques. Quadratic discriminant analysis as well as correlation analysis was applied to the obtained IR maps, allowing differentiation between cancerous and normal tissue. In the case of the correlation analyses it is further possible to distinguish between different types of tissue.
Up to now, most approaches to target and background characterization (and exploitation) concentrate solely on the information given by pixels. In many cases this is a complex and unprofitable task. During the development of automatic exploitation algorithms, the main goal is the optimization of certain performance parameters. These parameters are measured during test runs in which one algorithm with one parameter set is applied to images consisting of image domains with very different characteristics (targets and various types of background clutter). Model-based geocoding and registration approaches provide means for utilizing the information stored in GIS (Geographical Information Systems). The geographical information stored in the various GIS layers can define ROEs (Regions of Expectation) and may allow for dedicated algorithm parametrization and development. ROI (Region of Interest) detection algorithms (in most cases MMO (Man-Made Object) detection) use implicit target and/or background models. The ROI detection algorithms utilize gradient direction models that have to be matched with transformed image domain data; in most cases, simple threshold calculations on the match results discriminate target object signatures from the background. The geocoding approaches extract line-like structures (street signatures) from the image domain and match the graph constellation against a vector model extracted from a GIS database. Apart from geo-coding, the algorithms can also be used for image-to-image registration (multi-sensor and data fusion) and for the creation and validation of geographical maps.
The paper presents approaches for the characterization of saliency with respect to MMO (Man-Made Object) detection, using vehicle detection in infrared (IR) images as an example. The methodology is based on an extended evaluation of the gradient direction histograms presented at earlier AeroSense symposia (1996, 1997). The detection of conspicuous image domains (ROIs, Regions of Interest) is an early, signal-near operation in the process of automated detection and recognition of MMOs used in ATR (Automatic Target Recognition) algorithm chains. For this purpose, the ROI detection has to be fast and reliable. It can be used as an efficient data reduction device to speed up subsequent exploitation phases without loss of relevant information. Usually two complementary error classes are distinguished: class α (an interesting image domain was not detected) and class β (an irrelevant image domain, i.e. clutter, has been labeled). β errors lead to an increased analysis workload in subsequent processing phases; in unfavorable cases far too many image domains are labeled and hence the ROI detection is ineffective. α errors are even more problematic, since it is hard to compensate omissions in subsequent evaluation phases. The quality (efficiency and effectiveness) of the MMO detection restricts the ultimately achievable system performance and hence determines the possible application fields (e.g. on-board or ground-based ATR). The optimization trade-off between α and β demands application-specific solutions.
Automatic target detection generally refers to the localization of potential targets by computer processing of data from a variety of sensors. Automatic detection is applicable for data reduction purposes in the reconnaissance domain and is therefore aimed at reducing the workload on human operators. It covers activities such as the localization of individual objects in large areas or volumes for assessing the battlefield situation; an increase of reliability and efficiency of the overall reconnaissance process is expected. The results of the automatic image evaluation are offered to the image analyst as hypotheses. In this paper, cluttered images from an infrared sensor are analyzed with the aim of finding Regions of Interest (ROIs) in which hints for man-made objects are to be found. This analysis uses collateral data from acquisition time and location (e.g. time of day, weather conditions, resolution, sensor specification and orientation, etc.). The assumed target size in the image is also estimated using collateral data. Based on the collateral data, the algorithm adjusts its parameters in order to find ROIs and to detect targets. Low-contrast conditions can be successfully tackled if the directions of the grey value gradient are considered, which are nearly independent of the contrast. Blobs are generated by applying adaptive thresholds in the ROIs; here the evaluation of histograms is very important for the extraction of structured features. The height, aspect angle, and camera parameters are approximately known from the collateral data, allowing an estimation of target sizes in the image domain.
Computer-augmented target detection generally refers to the localization of potential targets by computer processing of data from a variety of sensors. Automatic detection is applicable for data reduction purposes in the reconnaissance domain and is therefore aimed at reducing the workload of human operators with respect to activities such as localizing individual targets in large areas or volumes for assessing the battlefield/battlespace situation. An increase of reliability and efficiency is expected. The results of the automatic image evaluation are offered to the image analyst as hypotheses. In this paper, image sequences from an infrared sensor (spectral range 3-5 micrometers) are analyzed with the aim of finding Regions of Interest (ROIs), where the target-background segmentation is performed by means of blob evaluation. Low-contrast conditions can also be successfully tackled if the directions of the gray value gradient are considered, which are nearly independent of the contrast. Blobs are generated by applying adaptive thresholds in the ROIs; here the evaluation of histograms is very important for the extraction of structured features. It is assumed that the height, aspect angle, and camera parameters are approximately known for an estimation of target sizes in the image domain. This estimation yields important parameters for the target/clutter discrimination.
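The abstracts above exploit the near contrast-independence of gradient directions. A minimal sketch of that idea, with a hypothetical "peakedness" score standing in for the papers' feature extraction, could look like this: man-made structures concentrate their gradient directions in a few dominant orientations, and the histogram shape is unchanged when the contrast is scaled down.

```python
import numpy as np

def direction_histogram(patch, bins=8, mag_thresh=1e-3):
    # Histogram of gray-value gradient directions; the directions are
    # nearly contrast-independent, so faint targets still register.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)[mag > mag_thresh]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def structuredness(patch):
    # Hypothetical peakedness score: a high maximum bin means the
    # gradient directions cluster around dominant orientations.
    return float(direction_histogram(patch).max())
```

A straight-edged target scores high even at very low contrast, while unstructured clutter spreads its directions over all bins, which is exactly why the direction-based features remain usable where gray-value thresholds fail.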