1. INTRODUCTION

Our world is constantly changing, and this affects military operations worldwide. Conventional warfare is shifting towards a domain that also contains asymmetric threats. The availability of high-quality imaging information from electro-optical (EO) sensors is therefore of great importance, for instance for the timely detection and identification of small threatening vessels within a large amount of neutral vessels. Furthermore, Rules of Engagement often require visual identification before action is allowed. The Dutch Defence Research and Development Programme “Multifunctional Electro-Optical Sensor Suite (MEOSS)” aims at developing the knowledge necessary for optimal employment of electro-optical systems on board current and future ships of the Royal Netherlands Navy, in order to carry out present and future maritime operations in various environments and weather conditions. This includes operations on the open sea as well as littoral and riverine operations (blue-water and brown-water ops), in all possible climate and weather conditions. The focus of this programme is the replacement of the existing multipurpose frigates used by the Royal Netherlands Navy. The operational challenge is to detect, classify and identify a target, by human operators or by dedicated algorithms, at a reasonable range, while avoiding too many false alarms or missed detections. The quality of signal processing in current systems is under-developed compared to the developments in EO sensor hardware. At present, sensor performance is limited by the effects of sun glints and spray, which are not yet well modelled in the detection filters. For identification, the signal-to-noise ratio and the resolution on the target are still limiting. In addition, the influence of the environment can reduce the sensor range to such an extent that the operational task becomes challenging or even impossible.
For this reason, Tactical Decision Aids will become an important factor in future operations. In addition, developments in both hardware and signal processing demand new research on the military use of EO sensors, to improve current systems (smart user) and to specify future needs (smart specifier). Operational improvements such as a better all-weather capability and increased operational sensor ranges are expected from:
This paper is organized as follows. In section 2 we describe the platforms, systems and scenarios studied in this programme. In section 3 we discuss the challenges for performance improvement of current and future systems. Sections 4 and 5 focus on image processing: section 4 on image enhancement and section 5 on information extraction. In section 6 we focus on evaluation methods for these systems. In section 7 the challenges for environment modelling are presented. The paper ends with future trends in section 8.

2. PLATFORMS, SYSTEMS AND SCENARIOS

MEOSS is focused on the replacement of the Dutch multi-purpose M-frigate. This is a multi-functional ship: it shall be capable of operating under different circumstances, with different tasks and at different locations. Examples of other Dutch platforms are the Joint Support Ship, the patrol ships and the Walrus-class submarine. The replacement of the M-frigate will be designed to operate across the entire force spectrum, in high- and low-violence situations and in asymmetric as well as symmetric warfare. There are two operational roles: 1) the combat role and 2) the non-combat role. The combat role includes sea-control missions such as naval presence and ISTAR. The non-combat role consists of constabulary and humanitarian missions. The M-frigate replacement shall be able to operate world-wide. The climate around the world varies strongly, from arctic to tropical. A particular climate to consider is that around the North Sea. The climate also sets the global conditions in which the different sensor systems must perform. The EO systems should work both daytime and night-time, thus 24/7. There are also different types of environment to consider. The main environments are: open sea (blue-water ops), littoral/choke point (brown-water ops) and a harbour environment.
For observation with EO systems, these environments strongly differ in line-of-sight, in the amount of background clutter and in the number of possible threats. The last variable we consider is the set of threats that can be encountered. Threatening targets can be airborne, surface or land targets. Typical surface objects to detect are FIACs, water scooters, patrol boats, corvettes, frigates and various civilian vessels. Air objects can be fixed-wing or rotary-wing, and manned or unmanned. Other types of threats are missiles, land-based systems and swimmers. Typical camera systems on the current Dutch ships are surveillance cameras such as the Gatekeeper and the Mirador, and weapon cameras such as the Marlin and the Hitrole.
3. EO SENSOR SYSTEM CONCEPTS

The goal of this topic is to study the performance of the EO sensor systems on board the ships in their different tasks, and the hardware possibilities for performance improvement. Typical tasks performed with EO/IR systems are detection, classification and identification of both air and surface threats [1, 2]. These tasks can be fully automated, be done by humans interpreting the images, or by humans aided by the system. Turbulence is addressed as a separate item in this section because it can have a drastic impact on the potential solutions for performance improvement.

3.1 Detection

For the detection task the challenge is to find relevant objects in the environment. In order to do so in a horizon search, a large Field of Regard (FOR) is required [3]. Such a large FOR can be obtained with a large-FOV camera scanned along the horizon, or with multiple cameras. With multiple cameras, a lower resolution per sensor suffices to obtain the same FOR than with a single sensor that must cover the full FOR on its own. An example of this is the Thales Gatekeeper system. Present-day WFOV electro-optical sensors have a resolution limitation. Sufficient resolution is needed to:
Furthermore, detection is limited by the line-of-sight (LOS). Different objects can limit the LOS, such as waves, harbour structures, vegetation and buildings. A limited LOS leads to a shorter reaction time, and hence faster detection and improved recognition are required. For the detection task, range can be improved by using higher-resolution sensors, possibly in combination with more sensitive sensors. Detection also depends on the target contrast with the background after propagation through the atmosphere. The combination of atmospheric and system blur (dependent, among others, on the optical quality of the sensor) limits the sensitivity. The present trend is to use more, and multiple, uncooled WFOV IR cameras because of their lower acquisition and maintenance price. Resolution enhancement in these cameras can be seen as a promising development in the coming years, because their resolution is relatively poor compared to COTS TV cameras [1].

3.2 Classification and Identification

For classification and identification the challenge is to recognise relevant objects in the environment [1]. We should discriminate objects involved in smuggling, piracy or suspect behaviour, and perform identification friend-or-foe. Here we describe three ways to distinguish between threats and non-threats:
As stated here, recognition can be improved with higher resolution. Recognition range depends on the physical size of the target and on the camera specifications (basically the Instantaneous Field Of View, IFOV). The camera perception specifications can be measured properly with the Triangle Orientation Discrimination (TOD) method, see section 6. For a WFOV IR camera as applied in the Gatekeeper, the recognition range is much smaller than for the applied TV camera, mainly due to the resolution difference. In ideal circumstances (high contrast and total blur smaller than the IFOV) the recognition range is proportional to the reciprocal of the IFOV.

3.3 Turbulence

For long-range recognition (over several kilometres), turbulence blur is the limiting factor [1, 4]. A promising trend is to assess the turbulence level and to compensate for the turbulence effects. For the assessment, the effects of observed turbulence are compared to the performance degradation of the specific available sensors. For example, when a classification camera is applied in heavy turbulence conditions, the effects on the NFOV IR image are normally smaller than on the NFOV TV image, because of its lower resolution. With this quantitative knowledge, adaptive sensor management can be applied: selecting the best-performing camera, or combination of cameras, for the situation. Additionally, the best-performing algorithm can then be applied.

4. IMAGE ENHANCEMENT

The goal of signal processing methods is to improve the detection and the visual recognition and identification of targets and threats [5]. This can be done by enhancing the image quality (this section) or by improving the automatic information extraction for the different camera systems (next section). Signal processing techniques are widely used for image enhancement. Using these techniques it is possible to enhance the details and to reduce the noise in the image [5].
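One of the simplest such noise-reduction techniques, averaging a set of aligned frames, can be sketched as follows. This is a minimal NumPy illustration on synthetic data, not the implementation used in the TNO Signal Conditioning Suite:

```python
import numpy as np

def temporal_noise_reduction(frames):
    """Average a stack of pre-aligned frames.

    For independent zero-mean noise, averaging N frames reduces the
    noise standard deviation by a factor of sqrt(N), at the cost of
    blurring any scene content that moves between frames.
    """
    return np.stack(frames, axis=0).mean(axis=0)

# Synthetic example: a static scene observed through heavy sensor noise.
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(16)]
denoised = temporal_noise_reduction(frames)
residual_noise = np.std(denoised - scene)   # roughly 0.1 / sqrt(16)
```

With 16 frames the residual noise drops to about a quarter of the single-frame level, which is why accurate frame alignment (stabilization) pays off directly in image quality.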
This section focuses on image enhancement techniques: resolution improvement, noise reduction (temporal and non-uniformity), contrast enhancement and stabilization. These techniques are implemented within the TNO Signal Conditioning Suite. When multiple frames of a scene are available, temporal filtering can be applied to improve the image. When the camera and scene are stationary, or the image data can be aligned accurately, temporal algorithms such as noise reduction or super-resolution (SR) can be applied. Temporal noise reduction can be performed by averaging the aligned frames, or by using more complex techniques such as super-resolution. Multi-frame SR reconstruction is the process of combining a set of under-sampled (aliased) low-resolution (LR) images to construct a high-resolution (HR) image or image sequence. During the previous decade numerous SR methods have been reported in the literature; reviews can be found in van Eekeren [6], Farsiu [7] and Park [8]. Schutte et al. [9] presented the Dynamic Super Resolution (DSR) algorithm. In super-resolution, both the number of pixels and the information within these pixels are increased. DSR2 indicates that the resolution is up-sampled by a factor of 2, and DSR3 by a factor of 3. For noise reduction the DSR algorithm is used without increasing the number of pixels, i.e. DSR1. One of the challenges of temporal noise reduction and super-resolution is the preservation of moving elements in the scene. The super-resolution algorithm also reduces the noise and enhances the edges. Images taken with an infrared camera may be degraded by a fixed intensity pattern that does not change with scene or camera movement. This pattern is caused by non-uniform pixel responses in the focal plane array. The fixed pattern can be corrected with a factory non-uniformity correction (NUC), or by using a calibration target.
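Calibration-based correction of such a fixed pattern can be illustrated with a standard two-point NUC, in which a per-pixel gain and offset are derived from two uniform reference frames (e.g. a cold and a hot calibration target). The simulated sensor and all numbers below are illustrative, not a description of any specific camera's procedure:

```python
import numpy as np

def two_point_nuc(resp_low, resp_high, level_low=0.0, level_high=1.0):
    """Derive per-pixel gain and offset from two uniform reference
    frames and return a correction function for subsequent imagery."""
    gain = (level_high - level_low) / (resp_high - resp_low)
    offset = level_low - gain * resp_low
    return lambda frame: gain * frame + offset

# Simulated focal plane array with per-pixel gain/offset errors
# (the fixed pattern).
rng = np.random.default_rng(1)
pixel_gain = 1.0 + 0.1 * rng.standard_normal((8, 8))
pixel_offset = 0.05 * rng.standard_normal((8, 8))
sensor = lambda scene: pixel_gain * scene + pixel_offset

correct = two_point_nuc(sensor(np.zeros((8, 8))), sensor(np.ones((8, 8))))
raw = sensor(np.full((8, 8), 0.5))   # uniform scene, pattern visible
flat = correct(raw)                  # pattern removed after correction
```

Because the pixel response drifts with detector temperature and time, such a factory or calibration-target correction leaves a residual pattern, which motivates the scene-based correction discussed next.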
The remaining non-uniformity can be estimated by using the scene motion to find the stationary fixed pattern. This is called scene-based non-uniformity correction. When the aim is not to restore the image but to observe more details, the image can also be improved using contrast enhancement. Several global and local contrast enhancement methods are described in the literature, for instance in [10, 11, 12]. Typical global contrast enhancement methods are contrast stretching, gamma manipulation and histogram stretching. To enhance local contrasts, local adaptive contrast enhancement can be used. The idea of local contrast enhancement is that the processing depends on the features of a local region. Narendra and Fitch [11] propose a contrast enhancement method in which the local statistics are described by the local mean and variance. Using a local region means that the contrast is enhanced at a specific spatial scale in the image, for instance only on small details. In our evaluation we use a multi-scale approach, the LACE (Local Adaptive Contrast Enhancement) algorithm, as described by Schutte [12]. We show the added value of image enhancement methods applied to maritime scenarios in Figure 1 and Figure 2. The tested algorithms are part of the TNO Signal Conditioning Suite, a software toolbox that has formed the basis of TNO's software development on image enhancement for the past decade. These algorithms can run in real time. It can be seen that the resulting images are sharper and show more detail. In Figure 1, DSR2 and LACE are applied to a colour camera image. The name of the ship becomes better readable, the details on the ship are better visible, and the waves on the sea are blurred out. In Figure 2, DSR2 and LACE are applied to an IR harbour scene. In this image both the static background and the moving objects are improved. The structures are better visible, but the amount of noise is also increased.
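The local mean/variance idea behind such methods can be sketched at a single scale as follows. This is a simplified illustration, not the multi-scale LACE algorithm itself, and the parameter names and values are assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_contrast_enhance(img, win=15, gain_limit=3.0, target_std=0.2):
    """Amplify deviations from the local mean by a gain inversely
    proportional to the local standard deviation, clipped to a maximum
    gain so that noise in flat regions is not amplified indefinitely."""
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = sliding_window_view(padded, (win, win))
    local_mean = windows.mean(axis=(-2, -1))
    local_std = windows.std(axis=(-2, -1))
    gain = np.clip(target_std / (local_std + 1e-6), 1.0, gain_limit)
    return local_mean + gain * (img - local_mean)

# Low-contrast detail is amplified up to the gain limit:
rng = np.random.default_rng(0)
img = 0.5 + 0.01 * rng.standard_normal((32, 32))
out = local_contrast_enhance(img)
```

The clipped gain is also the reason local enhancement can raise the visible noise level, consistent with the observation in Figure 2.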
5. INFORMATION EXTRACTION

Besides image enhancement, another option to improve the detection and the visual recognition and identification of targets and threats is to perform information extraction on the imagery. These algorithms (automatic detection, tracking, classification and identification) should be applicable to the large range of targets mentioned in section 2, and should increase the operational added value of the EO sensors within a (multi-)sensor suite. This can be done by taking over part of the operators' tasks, or by increasing the value of the information available to an operator. Operational challenges for information extraction are, e.g. [13]:
Object detection and tracking are described in the next section, followed by a section on classification. Threat estimation is a higher-level information process that relies on information about an object over a longer time, for which persistent tracking, as described in section 5.3, is important.

5.1 Object detection and tracking

Automated detection processing can be used as an operator aid. Scanning the edges of the areas of interest (detection horizon, forest edge, access roads) provides the first information on potential threats. For automated detection to be successful, the false alarm rate must be low. Sufficiently accurate detection outputs can be used to cue high-resolution cameras for the recognition process. Tracking is used to monitor the position of an object through time. Detection and tracking are closely related, as the detections are used as input for the tracking process, and the tracking can be used to improve the detection process. Object detection and tracking (ODT) can have different levels of complexity. A simple scenario consists of a static camera setup, with fixed lighting conditions and a single moving object, moving at a set distance, with a rigid shape, and with an appearance that differs from the background. A complex scenario might be the detection and tracking of multiple camouflaged objects in Dutch weather conditions while the sensor platform is moving. For object detection and tracking we consider the scheme in Figure 3; the break-down structure of these challenges is shown as blocks. For instance, a camera on board a ship suffers from the influence of the waves, for which the ODT scheme can compensate using image stabilization. Image stabilization can assist other algorithms, such as super-resolution, turbulence mitigation, and shadow and sun-glint removal, by providing sequential images in which the (static) background is aligned. With the background aligned, the creation of a background model becomes possible.
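Once the background is aligned, even a simple running-average background model with intensity thresholding yields a basic blob detector. The sketch below assumes a stabilized image sequence; the parameter names and thresholds are illustrative, not those of the scheme in Figure 3:

```python
import numpy as np

class BackgroundModel:
    """Running-average background with threshold-based blob detection,
    assuming a stabilized (aligned) image sequence."""

    def __init__(self, first_frame, alpha=0.05, thresh=0.25):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha      # adaptation rate to gradual changes
        self.thresh = thresh    # intensity difference marking foreground

    def update(self, frame):
        frame = frame.astype(np.float64)
        mask = np.abs(frame - self.bg) > self.thresh   # candidate blobs
        # Slowly blend the frame in, so gradual illumination changes are
        # absorbed while fast-moving objects keep standing out.
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return mask

# An object appearing against an empty, static background:
model = BackgroundModel(np.zeros((32, 32)))
for _ in range(10):                       # empty frames: no detections
    empty_mask = model.update(np.zeros((32, 32)))
frame = np.zeros((32, 32))
frame[10:15, 10:15] = 1.0                 # bright 5x5 object enters
mask = model.update(frame)
```

The adaptation rate trades robustness to slow environmental change against the risk of absorbing slow-moving objects into the background, which is one reason per-image, statistics-based background estimates are also used.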
Such a background model can be used to detect moving objects (blobs) by analyzing large intensity differences between the current image and the background model. Robustness to changing environmental conditions is important here. A background model may also be estimated for each image separately, based on locally obtained statistics such as intensity changes with distance [14, 15], allowing for the detection of more stationary objects. For object recognition, objects are found by comparing image parts to pre-learned visual appearances, e.g. from examples of appearances of ships, or from a description of a currently tracked unknown ship. Both the blob detector and the object recognition produce false alarms. The goal of clutter reduction is to suppress these false alarms. Depending on the memory consumption of the tracking, the clutter reduction can be more or less strict; i.e., if computational effort allows, it may be better to filter less, missing fewer relevant objects, and to remove unwanted objects after tracking, based on the whole obtained track. The tracker keeps track of the currently observed objects, associating current detections to existing tracks using a prediction step, a gating step, an association step and an update step. Multiple detections of a large ship should be handled by associating all of them to the correct track. In many scenarios, not only relevant targets pass the tracker but possibly also irrelevant objects. In maritime scenarios these can be waves, vehicles (in harbour), pedestrians or waving flags. Unwanted tracks such as waves and flags can be suppressed by using their temporal behaviour. By prioritizing the remaining tracks, the operator can select certain types, for instance targets that are approaching, targets that are within a certain range or targets that have arrived recently.

5.2 Classification and identification

For naval operations in a coastal environment, detection and tracking of boats may not be sufficient.
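The predict/gate/associate/update loop of the tracker described above can be sketched minimally as follows. This is a toy nearest-neighbour scheme under assumed parameter names, not an operational multi-target tracker:

```python
import numpy as np

def associate(tracks, detections, gate=2.0):
    """One step of a minimal predict/gate/associate/update loop.

    tracks: dict label -> (position, velocity), both 1-D numpy arrays.
    detections: list of 1-D position arrays for the current frame.
    Unassociated detections start new tracks; unmatched tracks coast.
    """
    used, new_tracks = set(), {}
    for label, (pos, vel) in tracks.items():
        pred = pos + vel                        # predict
        best, best_d = None, gate               # gating threshold
        for i, det in enumerate(detections):
            d = np.linalg.norm(det - pred)
            if i not in used and d < best_d:    # gate + nearest neighbour
                best, best_d = i, d
        if best is not None:                    # update with detection
            used.add(best)
            det = detections[best]
            new_tracks[label] = (det, det - pos)
        else:                                   # coast on the prediction
            new_tracks[label] = (pred, vel)
    for i, det in enumerate(detections):        # unmatched -> new tracks
        if i not in used:
            new_tracks[f"new{i}"] = (det, np.zeros_like(det))
    return new_tracks

# One existing track, one matching detection and one new object:
tracks = {"a": (np.array([0.0, 0.0]), np.array([1.0, 0.0]))}
dets = [np.array([1.2, 0.1]), np.array([10.0, 10.0])]
tracks = associate(tracks, dets)
```

The gate size controls the trade-off mentioned above: a tight gate suppresses clutter early, while a looser gate keeps more candidates and defers the decision to track-level filtering.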
It is also important to find the one threat among many other friendly and/or neutral objects. To do so, one needs more information about the target. Typical information that one needs to know is:
Operators on the ships can be aided in gathering this information. Automatic algorithms that can be used for this are e.g. classification, recognition and identification. Classification is defined as systematic placement in categories. Recognition is defined as an awareness that something perceived has been perceived before. This can be done in two different ways: identification, which establishes that the object is in fact a specific person or other specific object, and re-recognition, which only establishes that the specific object was seen before, but not exactly which object it is. Before classification, recognition or identification can be done, the object first has to be detected, as described in the previous section. Classification and recognition are both based on comparing a description of a current observation with a stored description. For classification this description is a generalization over a number of objects; for recognition it describes a specific object. An example of automatic classification is described in [16, 17]. A detection algorithm provides bounding boxes in the image encapsulating a ship. For such a region of interest, the image part is extracted. Ships are detected at various sizes, distances and viewing angles. First the image is scaled to a pre-defined number of pixels to achieve scale invariance. Because a relatively large range of angles only causes limited affine transformations, viewing-angle dependence can be reduced by stretching the images to a fixed width. The ship image is subsequently described by computing features, based on shape or texture. These features can be matched against the features of known objects, selecting the category that matches best.

5.3 Persistent Tracking

In order to determine the intent of a target, as part of threat assessment, knowing the history of an object is important, as it allows for a description of its long-term behaviour [18].
This can for example be that a boat has been present in a fishing area for a longer time, or that a ship was seen earlier in a suspect harbour. As a ship is not observed all the time, traditional tracking is not possible. Kinematics alone are not enough to associate current observations with historical ones, as several objects might have travelled to the current location in the time they were not observed. To enable this re-recognition, descriptions of the current object in visual features such as contour, shape or visually distinct appearance can be used to exclude impossible matches and find likely ones [19]. For shorter periods without observation, such a general indication may be enough to follow objects over time. For long periods, more specific descriptions are needed, requiring more detailed observations, e.g. by using high-resolution cameras or close observations, for example with airborne assets.

6. TESTING OF INFRARED AND TV SENSOR PERFORMANCE

A variety of approaches exist to test IR and TV sensor performance and to predict Target Acquisition (TA) range. In this section we explain the Triangle Orientation Discrimination (TOD) test method [20, 21]. The TOD methodology includes a sensor test [20, 21], a sensor performance model [22] and a TA range prediction model [21]. The method has been applied to many types of conventional and advanced image-forming systems. So far the validity of the test method has held up in many validation studies, covering the effects of target contrast, aspect angle, amount of under-sampling, spatial and temporal noise, motion, dynamic super-resolution, local adaptive contrast enhancement, smear and combinations of these [21]. The relationship with the US TTP metric and the underlying tactical vehicle perception dataset has been assessed in several studies, see e.g. [23]. For the prediction of TA range from the TOD, a target-specific parameter set is required, consisting of i) the target characteristic size √A (i.e. the square root of the target area), ii) the target characteristic contrast CRSS or ΔTRSS (i.e. the root sum square of internal and external visual or thermal contrast, see [21]) and iii) a scaling factor M75, characterizing task difficulty, between the target and the TOD test pattern characteristic size. The TOD method is recommended in ITU-T G.1070 [24] for videophony and is a candidate for updates of STANAGs 4347-4351 [25-29] for the performance measurement and modelling of thermal imagers and image intensifiers. The sensor performance and range prediction models are currently being implemented in the EDA ECOMOS computer model. The TOD test method was used to characterize both the Gatekeeper IR and TV camera performance. A complicating factor with the Gatekeeper is its advanced image processing, which i) uses sensor input from all three sensors in a sensor head and ii) optimizes for a cluttered sea environment. For the test this means that a 120-degree background was required, and that it needed spatial structure instead of the regular uniform TOD background, in order to obtain results that are representative of operational performance. Details of the TOD test study on the Gatekeeper system are published elsewhere [30]. Example results are plotted in Figure 4. In addition, target-set characteristic parameters √A, CRSS, ΔTRSS and M75 were derived or collected from the literature for a number of relevant targets (not published). Together they can be used to predict the operational TA ranges of the Gatekeeper EO and IR system for these targets.

7. ENVIRONMENT MODELLING

The mission of the Royal Netherlands Navy (RNLN) to be deployable world-wide and in all weather conditions places stringent requirements on the capability to perform rapid environmental assessment (REA).
In this process, it is vital to have insight into the actual performance of the on-board sensors (i.e., for the environmental conditions in the operational area) to gauge detection and classification ranges of possible threats, and likewise to estimate the performance of threat sensors to gauge the platform's covertness and vulnerability. The performance of a sensor does not depend solely on its technical capabilities: the environment (weather) and the characteristics of the target play an important (and often determining) role. Comprehensive model approaches that address all elements of the observation chain are thus required to evaluate the probability that a sensor can accomplish a particular task in the actual environment in which it operates, and against a particular threat in that environment. The goal of environment modelling is to predict the actual EO sensor performance by using environmental data and ship signatures. TNO developed a software tool, EOSTAR (Electro-Optical System Transmission And Ranging), an advanced EO-sensor performance prediction tool that reports the detection range and detection probability for a specific target and selected EO sensors in a well-defined operational scenario [32]. The tool takes into account the complete observation chain, from background, target signature and optical propagation to the sensor, and calculates the detection range and detection probability of a target. This is relevant not only for observations by the own platform's (ship's) sensors; it also provides information about the own platform's vulnerability (visibility) to attacks by missiles that use EO sensors to aim at their target. Based on these modelling capabilities, the EOSTAR tool can play an important role in mission planning, mission rehearsal and training. There are several operational applications of sensor performance modelling:
Apart from operational analysis, such results can also be used in the specification and design phases of a platform. The most important outputs of EOSTAR PRO, shown in Figure 5, are the coverage diagram, in which the detection probability as a function of range is visualized using red/yellow/green colours, and the synthetic sensor image, which shows the operator how the target will appear on the sensor under the defined environmental and meteorological conditions. In the current research programme, the focus is on the performance of the sensor in the detection and classification of non-stationary, asymmetric targets, which have relatively much dynamic interaction with the sea surface.

8. FUTURE TRENDS

Several promising future trends can be identified to support the operator in the execution of his task [1]. The first trend is the application of image enhancement technology, such as stabilisation, in which the image is presented to the operator at higher quality (higher resolution and fewer vibrations). This eases the operator's detection and recognition task. For long-range recognition, this also implies estimating the effect of turbulence on the image and compensating for it using turbulence mitigation techniques. A second trend is the application of automatic object detection and recognition techniques. This allows the system to automatically detect objects in the imagery and automatically classify them, so that the operator can focus on the objects of most interest and urgency to him. The third trend is persistent surveillance: maintaining tracks of objects over longer periods of time (hours, days, weeks). Objects such as humans and small vessels are tracked persistently and their behaviour is classified. To achieve this, different observations of the same objects need to be coupled in a re-recognition process.
This must be done in highly cluttered environments. This is the approach taken in the programme Maritime Situational Awareness. Important challenges for EO sensors for maritime applications are:
ACKNOWLEDGEMENT

The work for this paper was partly carried out within the programme MEOSS, Multi-functional Electro-Optical Sensor Suites.

REFERENCES

[1] Schwering, P.B.W., Benoist, K.W. and Dijk, J., "V1012 Technology assessment," TNO report TNO 2014 R10708 (2014).
[2] Schwering, P.B.W., van den Broek, S.P., Kemp, R.A.W. and Lensen, H.A., "Task-specific sensor settings for electro-optical systems in a marine environment," Proc. SPIE 7666, 27 (2010).
[3] de Jong, A.N. and Schwering, P.B., in Infrared Technology and Applications XXIX, Proc. SPIE 5074, 658-668, Orlando, Florida (USA), April 21-25 (2003). https://doi.org/10.1117/12.488788
[4] Benoist, K.W., "Impact of atmospheric turbulence on imaging sensor systems" (2013).
[5] in 4th International Symposium on Optronics in Defense and Security, OPTRO 2010 (2010).
[6] van Eekeren, A.W.M., Schutte, K., Oudegeest, O.R. and van Vliet, L.J., "Performance evaluation of super-resolution reconstruction methods on real-world data," EURASIP Journal on Advances in Signal Processing 2007, 43953 (2007). https://doi.org/10.1155/2007/43953
[7] Farsiu, S., Robinson, D., Elad, M. and Milanfar, P., "Advances and Challenges in Super-Resolution," International Journal of Imaging Systems and Technology 14, 47-57 (2004). https://doi.org/10.1002/(ISSN)1098-1098
[8] Park, S.C., Park, M.K. and Kang, M.G., "Super-Resolution Image Reconstruction: A Technical Overview," IEEE Signal Processing Magazine 20(3), 21-36 (2003). https://doi.org/10.1109/MSP.2003.1203207
[9] Schutte, K., de Lange, D.J.J. and van den Broek, S., "Signal conditioning algorithms for enhanced tactical sensor imagery," Proc. SPIE 5076, 92-100 (2003). https://doi.org/10.1117/12.487720
[10] Young, I.T., Gerbrands, J.J. and van Vliet, L.J., in The Digital Signal Processing Handbook, 51, 1-81, IEEE Press and CRC Press (1998).
[11] Narendra, P.M. and Fitch, R.C., "Real-time adaptive contrast enhancement," IEEE Transactions on Pattern Analysis and Machine Intelligence 3(6), 655-661 (1981). https://doi.org/10.1109/TPAMI.1981.4767166
[12] Schutte, K., "Multi-scale adaptive gain control of IR images," Proc. SPIE 3061, 906-914 (1997).
[13] Schwering, P.B.W. and Schutte, K., in 4th International Symposium on Optronics in Defense and Security (2010).
[14] van den Broek, S.P., Bakker, E.J., de Lange, D.J.J. and Theil, A., "Detection and classification of infrared decoys and small targets in a sea background," Proc. SPIE 4029, 70-80 (2000).
[15] Bouma, H., de Lange, D.J., van den Broek, S.P., Kemp, R. and Schwering, P., "Automatic detection of small surface targets with electro-optical sensors in a harbor environment," Proc. SPIE 7114 (2008).
[16] van den Broek, S.P., Bouma, H. and Degache, M., "Discriminating small extended targets at sea from clutter and other classes of boats in infrared and visual light imagery," Proc. SPIE 6969 (2008).
[17] van den Broek, S.P., Bouma, H., Degache, M. and Burghouts, G., "Discrimination of classes of ships for aided recognition in a coastal environment," Proc. SPIE 7335 (2009).
[18] Schwering, P.B.W., van den Broek, S.P., Liem, K.D. and Schleijpen, H.M.A., in Symposium SET-183/IST-112 Joint Meeting on Persistent Surveillance: Networks, Sensors, Architecture (2012).
[19] van den Broek, S.P., Bouma, H., den Hollander, R.J.M., Veerman, H.E.T., Benoist, K.W. and Schwering, P.B.W., "Ship recognition for improved persistent tracking with descriptor localization and compact representations," Proc. SPIE 9249-23 (2014).
[20] Bijl, P. and Valeton, J.M., "TOD, the alternative to MRTD and MRC," Optical Engineering 37(7), 1976-1983 (1998). https://doi.org/10.1117/1.601904
[21] Bijl, P. and Hogervorst, M.A., in Proceedings OPTRO, Paris, France (2012).
[22] Bijl, P., Valeton, J.M. and de Jong, A.N., "TOD predicts target acquisition performance for staring and scanning thermal imagers," Proc. SPIE 4030, 96-103 (2000). https://doi.org/10.1117/12.391771
[23] Bijl, P., Reynolds, J.P., Vos, W., Hogervorst, M.A. and Fanning, J.D., "TOD to TTP calibration," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXII, Proc. SPIE 8014, 80140L (2011). https://doi.org/10.1117/12.887219
[24] ITU-T Recommendation G.1070, "Opinion model for videophony" (2007).
[25] NATO STANAG 4347.
[26] NATO STANAG 4349.
[27] NATO STANAG 4350.
[28] NATO STANAG 4348.
[29] NATO STANAG 4351.
[30] Gosselink, G.A.B., Anbeek, H., Bijl, P. and Hogervorst, M.A., "TOD Characterization of the Gatekeeper Electro Optical Security System," in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXIV, Proc. SPIE 8706, 87060I (2013). https://doi.org/10.1117/12.2016589
[31] Bijl, P., "The ideal Target Acquisition model: where on earth do I find it," keynote, OPTRO-2014-2967244, France (2014).
[32] van Eijk, A.M.J., Degache, M.A.C., Tsintikidis, D. and Hammel, S., in Optics in Atmospheric Propagation and Adaptive Systems XIII, Proc. SPIE 7828, Toulouse, France (2010).