Object-based image analysis approach for vessel detection on optical and radar images
Abstract
Commercial satellites for Earth observation can complement conventional positioning and tracking systems for monitoring legal and illegal activities at sea, in order to effectively detect and prevent events threatening human life and the environment. This study describes an object-oriented approach to detect vessels combining high- and medium-resolution optical and radar images. Once a vessel is detected, the algorithm estimates its position, length, and heading and assigns a speed range. Tests are done using WorldView-2, QuickBird, GeoEye-1, Sentinel-2A, COSMO-SkyMed, and Sentinel-1 data imaged in several test sites including China, Australia, Italy, Hong Kong, and the western Mediterranean Sea. Validation of results with data from the automatic identification system shows that the estimates for length and heading have R2 = 0.85 and R2 = 0.92, respectively. Tests for evaluating speed from Sentinel-2 time-lag image displacement show encouraging results, with 70% of the estimates' residuals within ±2 m/s. Finally, our method is compared to the state-of-the-art search for unidentified maritime objects (SUMO) software, provided by the European Commission's Joint Research Centre. Tests with Sentinel-1 data show similar results in terms of correct detections; nevertheless, our method returns a smaller number of false alarms compared to SUMO.

1. Introduction

1.1. Maritime Surveillance

Maritime activities over the world's seas and oceans constitute a considerable portion of human commercial trade. In addition to legal business, piracy, drug trafficking, illegal fishing, marine pollution, and human smuggling have nowadays become ordinary activities and need to be controlled and limited. Therefore, efficient maritime surveillance and awareness are required to timely detect and prevent events threatening human life and the environment.1

Today, vessels’ monitoring is performed mainly through land radars, sea radars, or ship-to-ship visual information exchange. Nevertheless, this approach has technical limitations that constrain the maximum acquisition range (about 100, 60, and 20 km, respectively) and practical limitations related to territorial waters boundaries, daylight, or adverse weather conditions.

Satellites can easily extend visual and instrumental horizon limits, are not constrained by national boundaries, and can operate even in adverse weather conditions, providing quick access to global imagery with frequent revisits.2 Their improved performance in terms of spatial, spectral, and temporal resolution has fostered their employment in monitoring and reconnaissance tasks, particularly in dynamic contexts extended over wide areas such as maritime environments.

Therefore, monitoring of ship routes from space, together with conventional positioning and tracking methods (global positioning systems), land-based systems, and vessel information repositories [Automatic Identification System (AIS)], can rapidly intercept vessels travelling by sea, thus broadening the surveillance information provided by navies, coast guards, or collaborative ships.

1.2. Vessel Detection from Space

Vessel detection has been a topic of interest since the beginning of the 19th century. Initially, the goal was to avoid collisions between ships; nowadays, it is a well-known application for monitoring illegal activities.

Operational algorithms and existing applications mainly rely on the use of either optical or radar data alone, and rarely foresee a combination of both sources.

Nevertheless, the use of high- and medium-resolution optical and radar images allows for regular observation, unrestricted by lighting and atmospheric conditions, and complementarity in terms of geographic coverage and geometric detail. As an example, the areas to be analyzed could be very large, but at the same time it may be necessary to zoom to high spatial resolution to get detailed information about specific small targets. Moreover, the need to detect vessels made of different materials (e.g., wooden, rubber, or metallic) makes the joint use of multispectral and microwave technologies synergistic, as the use of a single technology may lead to missed identifications or misclassifications.

Synthetic aperture radars (SARs) are able to operate in all weather and lighting conditions. In addition, thanks to their different imaging modes (i.e., stripmap, spotlight, or wide-swath), SAR sensors can be programmed according to application needs3 to collect higher resolution images with a smaller swath (50-cm spatial resolution at 4-km coverage4) or lower resolution images over a wider area (70-m spatial resolution at 500-km coverage4). However, on the sea, SAR amplitude images usually have high noise and are sensitive to sea roughness caused by winds and waves, thus producing high clutter, which tends to obscure smaller vessels and create false alarms.2,5 In addition to the incidence angle, the polarization and the orientation of the vessel with respect to the sensor can influence detection rates.6

A typical issue of radar images is the occurrence of azimuth ambiguity patterns. These are image artefacts that appear as weaker repetitions ("ghosts") of the targets, shifted at fixed distances in azimuth. Although azimuth ambiguities usually generate lower reflections, under strong-target and limited-clutter conditions they can be erroneously detected and mistaken for real ships.4,7 In addition, bright targets on land or near the coast may produce azimuth ambiguities on the sea (e.g., small islands or reefs outside the land mask, off-shore constructions, and strong scatterers such as cities or harbours), but they can be classified as recurrent targets if repeat-pass images are available and, thus, be labelled as false alarms.4 All these factors make visual interpretation of SAR images difficult6,8 and limit the amount of information obtainable from SAR data to geographic location, length, width, and heading.4

On the contrary, sunlight and the absence of clouds are required for observations with optical sensors. In addition, the sea surface is characterized by areas of higher reflectance, which compromise accurate detection of bright targets. Consequently, high-reflectance sea roughness and waves, whose position is difficult to predict, could easily be misclassified as vessels.9 Nevertheless, current spatial resolutions allow the detection of very small targets,8 granting a more accurate feature estimation.10 This makes it possible to recognize and discriminate between many different ships.11 However, while high-resolution multispectral images (e.g., 2-m spatial resolution at 20-km swath width) can be very useful for detailed studies over small areas, they are not appropriate for monitoring activities over wide regions because they are too expensive in terms of collection time, processing time, and cost. On the other hand, medium-resolution multispectral images (e.g., 20-m spatial resolution at 290-km swath width12) can be a compromise for monitoring tasks over wide areas. In addition, the high revisit frequency of Sentinel-2 (with the addition of Landsat-8) makes near real-time observation of large portions of the sea possible.

Vessel detection is not a new theme, and there is plenty of material available in the literature on this topic, particularly for SAR images.6 The most popular category is that of adaptive thresholding [constant false alarm rate (CFAR)] detectors,13 proposed with various distributions of background statistics,4,14 owing to their simple implementation and reliable statistical approach. A method based on variational Bayesian inference for multitarget situations and very complex backgrounds is proposed in Ref. 15. Some more sophisticated methods rely on multichannel information, such as polarimetric detectors16 or along-track interferometry.17 These approaches are less affected by inhomogeneities of the sea surface,4 but are less straightforward than the CFAR.

Recently, vessel detection with multispectral imaging systems has attracted increasing attention.2 Some works deal with ship detection in harbor areas, where, differently from ship detection on the open sea, the similarity between ships and port structures can be an issue for detection. Existing methods can be roughly grouped into three categories: (i) sea-state analysis combined with threshold-based methods8,18,19 are fast algorithms, but illumination changes complicate automatic thresholding;20,21 (ii) genetic algorithms and neural networks22,23 provide better differentiation of ships with respect to the background, with the drawback of high computational complexity;24 and (iii) textural and geometrical descriptors25–29 convey quick results, given a high level of operator knowledge.20

In contrast to detection, classification of vessels is much more developed with optical imagery,30 as the analysis of multispectral information offers a valuable opportunity to discriminate ships.2

Thus, the integration of optical and radar data within a ship detection system turns out to be a significant task for extensive maritime surveillance. To this aim, in this work we describe a parallel approach to identify moving vessels and estimate their movement properties (position, length, heading, and speed), combining various data collected by existing satellites for Earth observation in the optical and microwave domains.

2. Data

2.1. Satellite Data

In our research, we used satellite images collected at different spatial resolutions, under different weather and sea conditions, in different locations, and imaging vessels of different shapes, sizes, and speeds (Table 1 shows a summary).

Table 1

Satellite imagery used in this study.

Satellite | Acquisition date | Location | Multispectral band (m) | Panchromatic band (m)
WorldView-2 | 31/12/2010 | Xiapu (China) | 2.0 | 0.5
WorldView-2 | 03/04/2011 | Sydney (Australia) | 2.0 | 0.5
QuickBird | 16/05/2001 | Venice (Italy) | 2.4 | 0.6
GeoEye-1 | 12/02/2009 | Venice (Italy) | 1.6 | 0.4
Sentinel-2A | Various | Various | 10.0 | n.a.
COSMO-SkyMed | 07/01/2012 | Hong Kong area | 3.0 | n.a.
COSMO-SkyMed | 19/09/2012 | Hong Kong area | 3.0 | n.a.
Sentinel-1 | Various | Western Mediterranean Sea | 23.0 | n.a.
Notes: Spatial resolution (pixel size) given in meters; n.a., not available.

The multispectral data set includes high spatial resolution images acquired by WorldView-2, QuickBird, and GeoEye-1 and medium-resolution images acquired by Sentinel-2. The SAR data set includes X-band stripmap images acquired by COSMO-SkyMed in single-look complex format and HH polarization31 and C-band stripmap ground range detected high-resolution images acquired by Sentinel-1A in HV+HH or VH+VV dual polarizations.32

2.2. Automatic Identification System

Over the years, several cooperative shipping systems were introduced to guarantee maritime safety, security, and sustainable use of natural resources. The AIS belongs to this category and is an automatic tracking system used on ships, which receives and transmits information with other nearby ships (ship to ship) and with ground-based stations (ship to shore). The information exchanged between AIS devices can be grouped into: (i) static data (ship name and type, length, maritime mobile service identity and IMO numbers (which uniquely identify each vessel), load and type of cargo, etc.) and (ii) dynamic data (ship position, speed, heading, estimated time of arrival, departure harbour, next port of call, etc.), which are of particular interest for large-area ocean surveillance. Duties of vessels in transmitting their AIS information are regulated by legislation structured at international, European, and national levels. Restrictions are mainly related to vessel size and tonnage; as an example, vessels smaller than 15 m or with a gross tonnage lower than 300 tons are not obliged to transmit AIS data.

Since 2010, satellite AIS data have been available from the AISSat system, which overcomes the constraints of VHF range and provides global coverage. AISSat-1 was successfully launched in 201033 and was followed by AISSat-2 in 2014.

AIS data are useful to retrieve known ship routes at the time of image acquisition, thus helping in the identification of unknown, possibly illegal vessels by cross-checking estimates with real data.34–36 Within this research, we used AIS data as ground truth for validation of results.

3. Methods

The ship detection and characterization method proposed within this work is based on subsequent phases, which are essentially parallel for optical and radar images, except for some preprocessing steps that are specific to the nature of the optical and radar sensors.

Concerning preprocessing, a land masking phase has been applied to both optical and radar data in order to constrain the analysis to the sea surface, so that biases resulting from misclassification of vessel-like objects outside water bodies are avoided, and to reduce the computing time for image segmentation and classification.

The core and novelty of the methodology lie in object-based image analysis (OBIA), which has been applied through dedicated software to both optical and radar data in order to combine them in the parallel approach. This phase is the actual detection of moving vessels (i.e., ships and their wakes).

A subsequent spatial analysis is applied to the detected targets in order to estimate their position, length, heading, and a speed range. An approach for precise speed estimation based on multispectral band time lags has been tested on vessels detected exclusively in optical data.

The whole workflow is represented in Fig. 1 and is detailed in the next sections.

Fig. 1

Processing chain adopted for optical and radar images to identify and spatially characterize vessels.


3.1. Vessel Detection in Optical Remotely Sensed Images

Standard radiometric calibration and atmospheric corrections have been applied to all the optical data. Radiometric calibration has been performed by applying to each spectral band of each satellite its proper gain and offset values derived from the metadata. Satellite data have then been corrected for atmospheric effects with the flat terrain module of the ATCOR software.37 The aerosol type has been chosen between maritime and urban types depending on the image location, land cover (if present), and expected aerosol composition, whereas the water vapor category has been defined in relation to the image center latitude and the acquisition date (mid-latitude summer, mid-latitude winter, and fall/spring).

Vessel-like objects have been detected on the multispectral bands, whereas wakes are extracted from the panchromatic bands (a synthetic panchromatic band has been generated for Sentinel-2 data). We used the minimum noise fraction (MNF) transform38 to select the most suitable band for ship detection from the multispectral data cube. The MNF consists of a principal component analysis (PCA) rotation that decorrelates and rescales the noise within the data through the principal components of the noise covariance matrix (noise whitening), followed by a standard PCA of the noise-whitened data. This technique is frequently used for noise removal from hyperspectral data. However, a side effect is the ordering of the set of decorrelated components (the MNF components) according to image quality (the covariance structure of the noise). Consequently, the MNF can also be used as a preprocessing technique for highlighting specific features.
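As an illustration, the following is a minimal numpy sketch of the MNF transform as described above. The noise covariance is estimated with the shift-difference method (differences of horizontally adjacent pixels), which is a common choice but our assumption here, as the paper does not specify the noise estimator; function and variable names are illustrative.

```python
import numpy as np

def mnf(cube):
    """Minimum noise fraction: noise whitening followed by a standard PCA.

    cube: (rows, cols, bands) array of reflectances.
    Returns the MNF components, ordered from highest to lowest image quality.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)

    # Noise covariance estimated from differences of horizontally adjacent
    # pixels (shift-difference method, an assumption): the signal is spatially
    # correlated, so the differences are dominated by noise.
    diff = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)
    noise_cov = np.cov(diff, rowvar=False) / 2.0

    # Noise whitening: rotate and rescale so that the noise covariance
    # becomes the identity matrix.
    w_vals, w_vecs = np.linalg.eigh(noise_cov)
    W = w_vecs / np.sqrt(np.maximum(w_vals, 1e-12))
    Xw = X @ W

    # Standard PCA of the noise-whitened data; the eigenvalue order gives
    # the ordering of the MNF components by signal-to-noise ratio.
    p_vals, p_vecs = np.linalg.eigh(np.cov(Xw, rowvar=False))
    order = np.argsort(p_vals)[::-1]
    return (Xw @ p_vecs[:, order]).reshape(rows, cols, bands)
```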

The selected MNF component and the panchromatic band have been used as input data for the OBIA processing to extract ships and wakes, respectively. OBIA consists of a two-step process: segmentation and classification. Through a region-growing algorithm, segmentation groups adjacent image pixels into self-contained objects with spectral and geometric similarities, so that textural and contextual/relational characteristics among objects can be exploited.39 Identified objects are then classified through a decision-tree algorithm based on spectral and geometric properties. An important requirement for the applicability of this methodology to various data is simplicity and robustness. Thus, a significant task has been to define not only a unique set of parameters, but also a common variability range of their values for all the images. The segmentation phase relies on scale, shape, and compactness properties of the image and of the objects that are generated. For the classification phase, the choice of parameters has fallen on amplitude, area, and length-to-width ratio (for ship objects) or border index (for wake objects), in order to guarantee a proper discrimination of targets from the background by exploiting their spectral and geometric peculiarities. Parameter values have been set through a trial-and-error procedure and have been scaled according to the input data spatial resolution where necessary. With respect to segmentation, the shape and compactness parameters have not been rescaled, since they are related to the fractured nature of image objects and not to their spatial detail. Concerning classification, length/width and border index did not change, being ratios. Segmentation and classification parameters and their values are summarized in Tables 2 and 3, respectively. Examples of processing results over high- and medium-resolution optical images are represented in Fig. 2.
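As an illustration of the two-step process, the sketch below segments a single input band and then applies a rule set on spectral and geometric object properties. The paper uses a region-growing segmenter within dedicated OBIA software; the Felzenszwalb graph-based segmenter from scikit-image serves here as a stand-in, and the threshold values (after Table 3) are illustrative.

```python
import numpy as np
from skimage.segmentation import felzenszwalb
from skimage.measure import regionprops

def detect_ship_objects(band, refl_range=(0.035, 0.650),
                        area_range=(1, 2300), lw_range=(0.9, 8.9)):
    """Two-step OBIA sketch: segment a band into objects, then keep only
    segments whose mean value, area, and length/width ratio fall within
    ship-like ranges (values illustrative, after Table 3)."""
    # Step 1: segmentation into spectrally homogeneous objects (the paper's
    # region-growing segmenter is replaced by a readily available stand-in).
    segments = felzenszwalb(band, scale=50, sigma=0.5, min_size=2)

    # Step 2: rule-based classification on spectral and geometric properties.
    ships = []
    for obj in regionprops(segments + 1, intensity_image=band):
        lw = obj.major_axis_length / max(obj.minor_axis_length, 1e-6)
        if (refl_range[0] <= obj.mean_intensity <= refl_range[1]
                and area_range[0] <= obj.area <= area_range[1]
                and lw_range[0] <= lw <= lw_range[1]):
            ships.append(obj)
    return ships
```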

Table 2

Segmentation parameters applied to the input layer for high- and medium-resolution optical images.

Parameter | High resolution, PAN segmentation | High resolution, MS selected band segmentation | Medium resolution, PAN segmentation | Medium resolution, MS selected band segmentation
Input layer: PAN band | Yes | No | Yes | No
Input layer: MNF component's values | No | Yes | No | Yes
Scale | 400 | 10 | 50 | 2
Shape | 0.1 | 0.1 | 0.1 | 0.1
Compactness | 0.5 | 0.5 | 0.5 | 0.5

Table 3

Features and values used for high- and medium-resolution optical images classification.

Feature | High resolution, PAN classification | High resolution, MS selected band classification | Medium resolution, PAN classification | Medium resolution, MS selected band classification
Reflectance | 0.035÷0.650 | — | 0.030÷0.300 | —
MNF component's values | — | 23÷35 | — | 5÷150
Area (pixel) | 1÷21000 | 1÷2300 | 1÷2000 | 1÷550
Length/width | — | 0.9÷8.9 | — | 0.9÷8.9
Border index | 0÷7 | — | 0÷7 | —

Fig. 2

Example of vessel detection on high and medium spatial resolution optical images: (a) tile of a QuickBird-2 image (spatial resolution 2.4 m) collected over Venice (Italy) showing a 20-m long vessel; (b) detected ship object (dark pink) and wake object (green); (c) tile of a Sentinel-2 image (spatial resolution 10 m) collected over the Mediterranean Sea, showing a 60-m long vessel; and (d) detected ship object (dark pink) and wake object (green).


Processing of the optical component has also been optimized to reduce computational time through a statistical index, as detailed in Ref. 40.

3.2. Vessel Detection in SAR Remotely Sensed Images

Potential target detection has been performed over SAR images through the widely known adaptive threshold CFAR algorithm,13 as its simple approach and satisfying results have been considered crucial for the analysis of radar images acquired from various sources.

This algorithm searches amplitude SAR images for pixels brighter than the background through a 2-D moving window, assuming that a ship can be distinguished from the background if its radar reflection generates pixel values exceeding the mean background plus noise.4

The pixel (or group of pixels) under test, which is called the cell under test (CUT), is surrounded by a guard area and by a background window [Fig. 3(a)]. The CFAR approach assumes that all sea clutter pixels have values following a model probability density function (PDF) fitted to the local background, as the level of sea clutter in the image is influenced by variations in incidence angle, wind, and other meteorological and oceanographic effects.4 Thus, only pixels belonging to the background window are used to locally estimate the background statistics. The purpose of the guard window is to ensure that pixels belonging to the target do not bias the background statistics estimation. A reasonable rule is often to choose the dimension of the guard window as large as the biggest target expected in the detection scenario. The windows are usually square, as the target orientation is not known a priori. The background window should be chosen as large as possible in order to have sufficient pixels to accurately estimate the sea statistics. At the same time, the window should not be too large, in order not to include nuisance pixels (e.g., nearby vessels, which is, however, an unlikely case in an open-waters detection scenario). Within this work, the guard and background windows have been set to 100×100 and 200×200 pixels, respectively, on the full-resolution SAR images (Table 4).
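The background statistics over the ring between the guard and background windows can be computed efficiently with box filters, subtracting the sums over the guard window from those over the background window. The following scipy-based sketch is illustrative (the paper does not prescribe an implementation); the window sizes follow the values above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_background_stats(img, guard=100, bg=200):
    """Local mean/std of the sea background around each pixel, computed from
    the ring between the guard window and the background window (100x100 and
    200x200 pixels on full-resolution SAR data, as in the paper)."""
    img = img.astype(np.float64)
    # Box sums over each window, obtained as local mean * window area.
    s_bg  = uniform_filter(img,    bg)    * bg**2
    s_gd  = uniform_filter(img,    guard) * guard**2
    s2_bg = uniform_filter(img**2, bg)    * bg**2
    s2_gd = uniform_filter(img**2, guard) * guard**2
    n = bg**2 - guard**2                  # pixels in the background ring
    mu = (s_bg - s_gd) / n
    var = (s2_bg - s2_gd) / n - mu**2
    return mu, np.sqrt(np.maximum(var, 0.0))
```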

Fig. 3

CFAR detection rationale. A moving window is swept over the whole image. (a) The CUT, the guard window, and the background window are represented in red, blue, and green, respectively. (b) The result of the detection is postprocessed so as to eliminate spurious points and cluster vessel points. The centroid of the detected vessel is computed and superposed on the detection mask as a red circle indicating the vessel position.


Table 4

Guard and background windows dimensions set for the CFAR algorithm and features and values used for the segmentation and classification steps for original resolution and resampled SAR images.

CFAR window | Original resolution | Factor 5 | Factor 10 | Factor 20
Guard window | 100×100 | 20×20 | 10×10 | 5×5
Background window | 200×200 | 40×40 | 20×20 | 10×10

OBIA feature | Original resolution | Factor 5 | Factor 10 | Factor 20
Scale (segmentation) | 50 | 20 | 10 | 5
Shape (segmentation) | 0.1 | 0.1 | 0.1 | 0.1
Compactness (segmentation) | 0.5 | 0.5 | 0.5 | 0.5
Band value (classification) | 1 | 1 | 1 | 1
Area (pixel) (classification) | 0÷350 | 40÷300 | 15÷150 | 3÷75
Length/width (classification) | 0.9÷6 | 1÷5 | 1÷5 | 1÷5

A threshold is determined according to the estimated background statistics in order to ensure a given probability of false alarm (PFA). This value strongly influences the number of false alarms; low values help in minimizing false alarms, but weak targets could be missed. Conversely, high values generate many false alarms, but weak targets can be detected.4 Within this work, the PFA has been set to 10⁻⁵, as it guaranteed satisfying target detection without a predominance of false alarms. The CUT is then tested against this threshold, and a detection is declared if the CUT amplitude is greater than the threshold. The result of the CFAR is a binary image of the detected targets [Fig. 3(b)].

In this work, a simple Gaussian model of the sea statistics has been chosen. This is not the common approach for SAR data processing; the standard PDFs used for the CFAR are the K-distribution or the F-distribution, because the Gaussian model is usually deemed not to be the most faithful choice for sea clutter statistics unless a sufficient averaging of pixels is previously performed.13 Nevertheless, both the K-distribution and the F-distribution lack a strong theoretical background and are not generally valid.14,41 Further, the estimation of the K-distribution involves nonlinear transcendental functions, which makes its computation numerically challenging and time consuming.14 For all these reasons, when simulating real operations with full-size COSMO-SkyMed images, we found that the Gaussian PDF was a good trade-off between speed (much faster than the K-distribution PDF) and accuracy (not significantly lower than with the K-distribution PDF). In addition, the CFAR algorithm with a Gaussian PDF proved to be an efficient solution for integrating the CFAR detector in our OBIA workflow. The fast implementation is due to the closed-form relation between the PFA and the testing threshold,13 namely

Eq. (1)

$\mathrm{PFA} = \frac{1}{2} - \frac{1}{2}\,\operatorname{erf}\!\left(\frac{t}{\sqrt{2}}\right) \in [0;1],$
where t is the threshold and erf is the error function. The threshold t is then locally adapted on the basis of local sea clutter statistics,13 according to the following equation:

Eq. (2)

$\mathrm{CUT} \gtrless \mu_b + \sigma_b\, t,$
where μb and σb are local background mean and standard deviation, respectively.
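Inverting Eq. (1) yields the normalized threshold in closed form, t = √2 · erfinv(1 − 2 · PFA), and the test of Eq. (2) is then applied pixelwise. A minimal sketch follows (function names illustrative; mu and sigma as estimated by the box-filter sketch above):

```python
import numpy as np
from scipy.special import erfinv

def cfar_detect(amplitude, mu, sigma, pfa=1e-5):
    """Gaussian CFAR test: invert Eq. (1) for the normalized threshold t,
    then flag every pixel whose amplitude exceeds mu + sigma * t, Eq. (2)."""
    t = np.sqrt(2.0) * erfinv(1.0 - 2.0 * pfa)  # t ~ 4.26 for PFA = 1e-5
    return amplitude > mu + sigma * t           # binary detection mask
```

For example, `cfar_detect(amplitude, *local_background_stats(amplitude))` would yield a binary target image like the one in Fig. 3(b), before postprocessing.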

The CFAR alone only provides an indication of the presence of potential targets, but sea clutter conditions and the typical noise of radar images produce several persistent isolated detections and possible false alarms. Thus, the choice of retaining a Gaussian clutter model is coupled with ad hoc image postprocessing in order to properly remove false alarms (e.g., spurious points). A majority filter42 coupled with the morphological operators of dilation and erosion has been applied to the resulting binary images. Then OBIA has been applied to cluster and extract ship objects, as described for optical data. The same set of segmentation and classification parameters used for optical data has been adopted also for SAR data, as it has been considered representative for delineating ship objects. Parameter values selected for the optical component have been tuned to work on SAR data, in order to properly discriminate false alarms from real vessels (Table 4).
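A minimal sketch of this postprocessing, assuming a 3×3 majority neighborhood and a 3×3 structuring element (the paper does not state the filter sizes):

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion, uniform_filter

def clean_detection_mask(mask, majority_size=3):
    """Postprocess the CFAR binary mask: a majority filter removes isolated
    spurious detections, then dilation followed by erosion consolidates
    pixels belonging to the same vessel."""
    # Majority filter: a pixel stays set only if most of its neighborhood is set.
    mask = uniform_filter(mask.astype(np.float64), majority_size) > 0.5
    structure = np.ones((3, 3), dtype=bool)   # assumed structuring element
    return binary_erosion(binary_dilation(mask, structure), structure)
```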

The processed SAR dataset mainly includes big vessels (i.e., typically of length >100 m) isolated from nearby ships. Smaller ships are available in port areas near the coast, where the concentration of vessels and of land scatterers constitutes adverse conditions for CFAR detection, representing a difficult scenario for the previous methodology. Smaller isolated vessels have thus been simulated starting from big vessels. Full-resolution SAR data have been low-pass filtered in order to generate three additional datasets, in which the resolution is worsened by factors 5, 10, and 20 (Fig. 4). As an example, a ship of 150-m length is composed of 50 pixels along the major axis in the full-resolution SAR image; the same vessel is composed of 10 pixels in the first additional dataset and 5 pixels in the second one, as would ships of 30- and 15-m length, respectively, at the full resolution. It is worth remarking here that this simulation approach concerns only the geometric detection/estimation performance. The scattering properties of the simulated smaller vessels remain those of big vessels imaged at a coarser resolution. In the absence of real data, the simulation of backscatter from smaller vessels would require complex electromagnetic modeling, which, however, is beyond the aim of this paper.
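The degraded datasets can be reproduced, for instance, by low-pass filtering at the target scale and then decimating by the same factor; the decimation step is our assumption about how the coarser grids were generated.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def degrade_resolution(img, factor):
    """Simulate smaller isolated vessels by worsening the image resolution:
    low-pass filter at the target scale, then decimate by the same factor
    (factors 5, 10, and 20 in the paper)."""
    smoothed = uniform_filter(img.astype(np.float64), size=factor)
    return smoothed[::factor, ::factor]
```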

Fig. 4

(Left) Portion of a COSMO-SkyMed image acquired over Hong Kong at (a) the full resolution and resampled by factors (b) 5 (length/5), (c) 10 (length/10), and (d) 20 (length/20), and (right) the respective vessel detections.


The processing flow applied to full-resolution SAR images has been applied to the simulated datasets, properly scaling both the CFAR and OBIA parameters (Tables 1 and 4).

3.3. Vessels' Movement Parameters Estimate

Each detected target has been characterized by its movement parameters (position, length, heading, and speed or speed range) by means of a spatial analysis performed in a Geographic Information System environment. In order to remove false targets on optical images, only object pairs (ships and their wakes) with intersecting boundaries have been selected. Then each ship object has been fitted with an ellipse and a rectangle to extract position and length. The ellipse enclosing the ship object on SAR data has also been used to retrieve its direction, while on optical images, where the wake is visible, the ellipse encompassing the wake object has been used for the heading estimate. A more detailed description of route parameter extraction can be found in Ref. 40.
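The geometric part of this spatial analysis can be sketched with scikit-image region properties (the paper performs it in a GIS environment; the input is assumed to be an integer-labeled mask of ship objects, and the conversion to a heading is illustrative, since a compass heading also depends on the image georeferencing):

```python
import numpy as np
from skimage.measure import regionprops

def route_parameters(labeled_ships, pixel_size_m):
    """Position, length, and heading of each detected ship object from the
    best-fitting ellipse (a sketch; names and conventions illustrative)."""
    out = []
    for obj in regionprops(labeled_ships):
        row, col = obj.centroid                        # position (pixels)
        length = obj.major_axis_length * pixel_size_m  # ellipse major axis
        # regionprops orientation is the angle between the row axis and the
        # ellipse major axis (radians); reduce it to a 0-180 deg direction.
        # The wake object resolves the bow/stern ambiguity on optical data.
        heading = (90.0 - np.degrees(obj.orientation)) % 180.0
        out.append({"row": row, "col": col,
                    "length_m": length, "heading_deg": heading})
    return out
```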

Within this work, we propose a semiautomatic processing applied to multispectral data to refine the vessel speed estimate based on inter-band time lags. Some sensors are characterized by asynchronous band recording times; this time lag, which amounts to a few hundred milliseconds, causes a displacement of moving objects in the final scene.43 Once the displacement of the object within the time lag has been determined, a speed estimate for each moving object can easily be derived. This approach has proved to be efficient for fast moving objects, such as cars.43–46 However, in the case of slow moving objects, like vessels, the time lag between bands could be insufficient to clearly detect the corresponding displacement of the object. Another issue is the difficulty of isolating the two point clusters representing the ship's positions, due to the presence of the wake and foam generated by the ship's movement.

Within this work, this approach has been tested on Sentinel-2A images, for which corresponding AIS data were available for band time lag calibration and estimated speed validation. Sentinel-2 bands B2 and B4 have been used to estimate the time lag, as they show the largest displacement between ships' positions among the available images. This is represented in Fig. 5(a) in a false color composition of the two spectral bands. The available Sentinel-2 images have been grouped into two samples, one for time lag calibration and one for speed estimation.

Fig. 5

(a) False color composition of a Sentinel-2 image highlighting the vessel displacement and (b) estimated speed residuals with respect to real AIS speed values.


Vessels with a length between 100 and 200 m (equivalent to 10- to 20-m vessels in high spatial resolution images) have been selected, and the corresponding tiles have been extracted from the first group of images, to make the computations completely independent of each other. An image matching technique between the positions occupied by each ship in the B2 image and in the B4 image has been used to compute the time lag between the bands. Under the reliable hypothesis that the transformation between the two images is a shift without rotations (i.e., the ship is only translating), two tie points are enough to estimate the transformation parameters (zero-order polynomial transformation) and to have redundancy. Thus, tie points have been manually placed at the bow and the stern of each ship, and the total displacement computed. The final time lag, determined as the average of the single time lags computed for a sample of 28 vessels with known AIS speed, has been estimated at 350 ms, a value consistent with other studies made on different sensors.44,46 For all the analyzed cases, the displacement due to the band time lag is higher than the band-to-band co-registration error, which, for bands B2 and B4, has been estimated at 0.19 pixel.47

The same methodology has been used for the speed estimation on the second group of Sentinel-2 images. This time, the time lag is known from the previous calibration, the displacement results from image matching, and only the speed has to be derived. AIS speed data have been used for validation. Results are shown in Fig. 5(b). The average error, computed as the difference between estimated and observed speed, is equal to 0.8 m/s, the standard deviation is 1.8 m/s, and almost 70% of the residuals lie within ±2 m/s.
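Both the calibration and the estimation reduce to simple arithmetic once the inter-band displacement has been measured. A minimal sketch, with illustrative names and the 0.35-s lag calibrated above:

```python
import numpy as np

def calibrate_time_lag(displacements_m, ais_speeds_ms):
    """Band time lag (s), averaged over a calibration sample of ships with
    known AIS speed (28 vessels in the paper, yielding ~0.35 s)."""
    return float(np.mean(np.asarray(displacements_m) /
                         np.asarray(ais_speeds_ms)))

def speed_from_displacement(displacement_m, time_lag_s=0.35):
    """Ship speed (m/s) from the B2-B4 displacement of its position."""
    return displacement_m / time_lag_s

# Illustrative example: a ship displaced by 4.2 m between bands B2 and B4
# travels at roughly 4.2 / 0.35 = 12 m/s.
```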

A similar approach could be viable also for radar data. Due to its coherent nature, SAR records information about the travel path of the radiation emitted, scattered back, and received from the targets during the acquisition time (i.e., at each instant). This peculiarity of the instrument potentially allows measuring the ship along-track velocity from the ship displacement through the acquisition time history and temporal lag. The longer the acquisition time, the more precise the velocity measurement. Nevertheless, this approach has not been exploited within this work, and only predefined speed ranges based on size have been assigned to vessels detected on SAR data.40

4. Results

We used AIS information as ground truth for evaluating the performance of the multispectral (Sentinel-2) and SAR (Sentinel-1 and COSMO-SkyMed) data processing. However, AIS data were recorded in a 4-min window around the satellite observations and could have a small temporal shift with respect to the image acquisition. Thus, we supposed no speed and heading changes in this time span and shifted the positional information of each vessel to the image acquisition time (supposing a uniform rectilinear motion). In addition, we excluded all vessels known from AIS to be anchored in harbors, as they were not the target of our research. We also excluded from the validation all vessels having null values of length or heading in the AIS data, these being errors of the reporting system.
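A minimal sketch of this dead-reckoning shift, assuming positions in a metric map projection and headings measured clockwise from north (the coordinate conventions are our assumption):

```python
import numpy as np

def shift_ais_position(x_m, y_m, speed_ms, heading_deg, dt_s):
    """Propagate an AIS position to the image acquisition time assuming
    uniform rectilinear motion, as in the validation step.  Coordinates in
    a metric projection; heading clockwise from north (assumed)."""
    theta = np.radians(heading_deg)
    return (x_m + speed_ms * dt_s * np.sin(theta),   # east displacement
            y_m + speed_ms * dt_s * np.cos(theta))   # north displacement
```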

For SAR images only, we corrected the AIS positional information to account for the "boat-off-the-wake" distortion.13,48 According to the following equation, this effect displaces vessels (Δx) from their wakes in the sensor along-track direction due to a velocity component in the radar line-of-sight (LOS) direction (VLOS).

Eq. (3)

$\Delta x \simeq \frac{V_{\mathrm{LOS}}}{V_s}\, R_0,$
where Vs is the sensor velocity and R0 is the slant-range distance from the sensor to the vessel. This correction was needed to match AIS data with the detected vessels.
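Equation (3) is a one-line computation; the worked example below uses illustrative Sentinel-1-like values for the platform velocity and slant range:

```python
def azimuth_displacement(v_los_ms, v_sensor_ms, slant_range_m):
    """Along-track displacement of a moving vessel from its wake, Eq. (3)."""
    return v_los_ms / v_sensor_ms * slant_range_m

# Illustrative values (not from the paper): a 5 m/s line-of-sight velocity,
# ~7000 m/s platform velocity, and 850 km slant range displace the ship by
# about 5 / 7000 * 850000 = ~607 m in azimuth.
print(azimuth_displacement(5.0, 7000.0, 850e3))
```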

Finally, AIS records were not available for the times and locations corresponding to the WorldView-2, QuickBird, and GeoEye-1 surveys. In these cases, we made a manual check, retrieving ship length and heading through manual image measurement.

Results, summarized in Table 5, have been retrieved through a regression analysis between estimated and AIS lengths and headings and are represented with confidence intervals at 95% and 99% of the estimates.

Table 5

Summary of results of the ship detection processing over high-resolution optical data, medium-resolution optical data, and radar data.

Dataset | MS HR | MS MR | SAR
Number of observations | 50 | 337 | 16
R2, length | 0.87 | 0.70 | 0.94
R2, heading | 0.99 | 0.73 | 0.99
Class 1, correctly classified (%) | 76 | 3 | 27
Class 1, misclassified (%) | 24 | 42 | 27
Class 1, not classified (%) | 0 | 55 | 46
Class 2, correctly classified (%) | 78 | 22 | 67
Class 2, misclassified (%) | 22 | 19 | 22
Class 2, not classified (%) | 0 | 59 | 11
Class 3, correctly classified (%) | 100 | 81 | 87
Class 3, misclassified (%) | 0 | 1 | 8
Class 3, not classified (%) | 0 | 18 | 5

In addition, the percentage of correctly classified (estimated length belongs to the same class as the measured/AIS length), misclassified (estimated class is different from the measured/AIS class), and not classified ships (missed detections by the algorithm) has been determined by grouping the available targets into three length classes (class 1: 0 to 15 m; class 2: 16 to 30 m; class 3: >31 m).

Ship position estimates are strictly dependent on the image geolocation accuracy declared in the data sheets of each sensor (5 m for WorldView-2, 23 m for QuickBird-2, 5 m for GeoEye-1, 20 m for Sentinel-2, and 25 m for COSMO-SkyMed).

4.1. Optical Remotely Sensed Images

Despite the high detail and accuracy achievable with high-resolution images, some errors in the estimation of ship length and heading can occur when using optical data. Fast moving vessels are often followed by extremely bright wakes. Being similar in reflectivity to the vessels themselves, wakes can be partially misclassified, thus leading to an overestimation of ship length. On the other hand, very slow moving vessels can be followed by weak wakes. Being sometimes hardly distinguishable from the sea background, their misclassification could compromise an accurate heading estimate. In addition, small bright clouds, sea waves, and crests usually occurring in the open sea could be misclassified as vessels, thus generating some false positive detections.

4.1.1. High spatial resolution images

Results for the detection in high spatial resolution images (WorldView-2, GeoEye-1, and QuickBird-2), referred to a sample of 50 ships, have shown an R2 of 0.87 for lengths [Fig. 6(1a)]; estimates for ships smaller than 20 m are more dispersed than those for bigger ships, which are almost all underestimated. Nevertheless, the approach is reliable for detecting ships <10 m long. Nearly 85% of the estimate residuals with respect to measured data lie within ±5 m [Fig. 6(1c)]. In addition, headings are estimated with a nearly perfect correlation to the measured ones (R2=0.99) [Fig. 6(1b)], with almost 90% of the residuals with respect to measured data within ±10 deg [Fig. 6(1d)].

Fig. 6

Scatterplots of measured and estimated ship (left) length and (right) heading for high spatial resolution optical images (1a, 1b), medium resolution optical images (2a, 2b), and full spatial resolution SAR images (3a, 3b). The gray areas represent upper and lower confidence bounds for the regression line at 95% and 99%. Graphs on the lower side show residuals of estimated (left) lengths and (right) headings with respect to measured values for high spatial resolution optical images (1c, 1d), medium resolution optical images (2c, 2d), and full spatial resolution SAR images (3c, 3d). Histograms in Figs. 6(1e)–6(3e) show the percentages of correctly classified, misclassified, and not classified vessels according to their length for three length classes (class 1: 0 to 15 m; class 2: 16 to 30 m; and class 3: >31 m), respectively, for high-resolution optical images, medium-resolution optical images, and SAR images at the full spatial resolution and at the resolution degraded by factors 5, 10, and 20.


Approximately 80% of class 1 and class 2 vessels and 100% of the biggest vessels belonging to class 3 have been correctly detected in high spatial resolution images. Less than 25% of class 1 and class 2 vessels have been misclassified; however, even this information can be important for maritime surveillance purposes, as it provides knowledge of an unknown vessel, regardless of its size. Dealing with very high-resolution data, no missed classifications occurred for any of the considered classes [Fig. 6(1e)].

4.1.2. Medium spatial resolution images

Slightly worse results with respect to high-resolution optical images were obtained for the medium resolution (Sentinel-2), where the vessel length estimate shows R2=0.70 [Fig. 6(2a)], whereas the heading estimate shows R2=0.73 [Fig. 6(2b)]. The residual graphs on the lower side of Figs. 6(2c) and 6(2d) show that nearly 85% and 80% of the residuals lie within ±50 m and ±45 deg for the length and heading estimates, respectively.

These statistics have been computed from a larger sample of 337 vessels. The dataset available for the Sentinel-2 images comprises vessels of various sizes (from 10 to 400 m in length), detected with a unique set of parameters, and the worse results compared to high resolution are probably due to the presence of outliers. In addition, among all the available data, only the Sentinel-2 images have been acquired in open waters (Mediterranean Sea), where the effects of glint, wind, and waves are much more evident than in coastal areas, thus generating additional false alarms.

Being represented by almost a single pixel in the image, class 1 vessels have been almost completely undetected (55%) or misclassified (42%), but this can be reliable information anyway, as stated in Sec. 4.1.1. The percentage of undetected vessels remains similar (59%) for class 2 vessels, but in this case the percentage of correctly detected vessels increases to 22%, at the expense of the misclassified vessel percentage, which decreases to 19%. The majority of the vessels (81%) belonging to class 3 have been correctly classified, and only 1% of the detected vessels have been misclassified [Fig. 6(2e)]. The missed detections for class 3 vessels could be due to areas of the image affected by high sea clutter, waves, or glint, but this is expected to mainly concern vessels with a length closer to the lower limit of the class (30 m) than bigger vessels.

4.2. SAR Remotely Sensed Images

Results over SAR images at the original full spatial resolution are shown in Fig. 6 for a sample of 16 vessels. Both length and heading are estimated with very high accuracy, resulting in R2 values of 0.94 [Fig. 6(3a)] and 0.99 [Fig. 6(3b)], respectively, exceeding those retrieved for optical images. Good results are proved also by the residuals [Figs. 6(3c) and 6(3d)]; 94% of the vessels have been detected with length residuals within ±20 m, and 100% of the heading residuals lie within ±10 deg. However, the dataset available for SAR images includes all big ships, resulting in an easier detection than for optical images. Some errors in vessel size and heading estimates can be due to the use of a Gaussian model of the sea statistics, which is less suited to dealing with sea clutter than other more traditional distributions (i.e., the K-distribution). As the available full-resolution SAR images include only very large vessels, the percentages of correctly classified, misclassified, and not classified vessels have been retrieved considering the resampled images besides the full-resolution images, in order to account also for smaller vessels. As in optical images, the percentage of misclassified and not classified vessels decreases as ship length increases [Fig. 6(3e)]. Almost 55% and 90% of class 1 and class 2 vessels, respectively, have been detected in the available images, confirming that the present method is valuable for detecting small vessels (10÷40 m) on SAR images. Almost 95% of class 3 vessels have been detected, and the few missed detections could be due to high-clutter areas.

5. Discussion

5.1. Performances

The method proposed in this work relies on a parallel approach for object discrimination and movement parameter extraction applied to both optical and radar images. The approach is mainly based on OBIA, which has been extensively used for a wide number of applications. Nevertheless, it is rarely applied to both optical and radar data in marine environments. On optical data, OBIA has been used to detect and classify vessels in Ref. 49, focusing on harbor areas, which are characterized by much different sea-state conditions with respect to open seas. On SAR data, OBIA has never been applied in conjunction with the CFAR, although the CFAR approach is one of the most popular methods for ship detection.

Although some more work should be done to refine results (e.g., balancing the number of detected targets for validation purposes and refining the ship speed estimate) and the proposed algorithm still has to be optimized to work automatically, only a few parameters are required in the segmentation and classification phases, making the method suitable for potential adoption within maritime surveillance or emergency scenarios. In these contexts, the primary factors limiting accurate detection are sea surface and atmospheric conditions, which can easily cause erroneous detections. Within this work, three strategies have been developed to reduce the number of false alarms and raise the detection accuracy: (i) the denoising phase applied to optical images to separate targets from the background (i.e., sea and ship wakes); (ii) the object-based analysis applied to SAR data, to remove a considerable number of spurious objects identified by the CFAR detector; and (iii) the employment of a threshold-based rule set founded on geometric characteristics of common real ships to isolate only potential targets.

5.2. Comparison with the SUMO Software

The performance of the developed algorithm, called from here on the "Space Shepherd algorithm," and its applicability to extensive monitoring of maritime areas have been compared with a recently released software package for ship detection. Search for unidentified maritime objects (SUMO) is an experimental software package for semi- or fully automatic ship detection in satellite SAR images, developed for experimental use at the European Commission's Joint Research Centre (JRC) over the last 15 years. The software has been developed for maritime surveillance through satellite radar images, granting semiautomatic processing with reduced human operator intervention, particularly in scenarios of high sea surface inhomogeneity, which raise the complexity of ship detection tasks on radar images. Thus, the presence of a trained operator is required to analyze results and to discard many of the false alarms deriving from the sea surface, either natural, such as small islands, reefs, and breaking waves, or man-made, such as piers or port-related constructions.4

The SUMO software works on images from most recent and contemporary SAR satellites (Sentinel-1, Radarsat-2, TerraSAR-X, COSMO-SkyMed, ALOS-2 PALSAR-2, ERS-1, ERS-2, Radarsat-1, and ENVISAT ASAR). It receives as input one amplitude radar image and its metadata and produces as main output a list of detections together with their attributes in an XML file. A detailed description of the software is available in Ref. 4.

Like the Space Shepherd algorithm, SUMO performs a CFAR detection on each polarimetric channel independently. Thus, pixels having values exceeding a threshold computed from the local distribution are detected, and neighboring pixels are grouped into targets or ships.50 The Space Shepherd algorithm and the SUMO software differ mainly in one methodological aspect: while the former is based on a Gaussian distribution of the sea statistics, the latter assumes that the sea clutter follows a K-distribution.4 Other differences concerning the processing are primarily related to the input data and the output target attributes.

SUMO is designed to work mainly with SAR data; tests have shown that it can perform with optical images under favourable circumstances, but no results are presented.4 Output attributes are related to the targets' position and geometric properties. In addition, based on the target's attributes, SUMO assigns a reliability level (very likely to be a false alarm, probably a false alarm, probably a true ship, very likely to be a true ship). Detections deemed to be azimuth ambiguities are automatically flagged and assigned the lowest reliability.4,50 Differently from SUMO, the Space Shepherd algorithm can potentially provide a speed value for vessels identified on optical images.

Finally, with respect to the SUMO software, which has a fully developed user interface, the Space Shepherd algorithm lacks an implementation in a single environment, as currently different phases require the use of different software.

The main dissimilarities between the two approaches are summarized in Table 6.

Table 6

Summary of the main differences between the SUMO software and the Space Shepherd algorithm in terms of input received data, target detection method, and extracted target attributes.

Aspect | SUMO software | Space Shepherd algorithm
Data | SAR | Optical and SAR
Algorithm | CFAR | CFAR + OBIA
Sea clutter distribution | K-distribution | Gaussian distribution
Land masking | Yes | Yes
Extracted target attributes | Row and pixel number; geographic location; length; heading w.r.t. range; number of detected pixels in the target signature; maximum pixel value and detection significance | Geographic location; length; heading w.r.t. north; speed
Developed user interface | Yes | No

A quantitative comparison has been performed by processing five Sentinel-1A images (Table 7), whose main characteristics have already been described in Sec. 2, with both the SUMO software and the Space Shepherd algorithm. Results have been validated against AIS data (Table 7).

Table 7

Sentinel-1 images used within this work for comparing results of the Space Shepherd algorithm to those of the SUMO software. Results are validated with AIS data available for each image.

Acquisition date | Cross-pol | Co-pol | Number of available AIS data
03/12/2014 | HV | HH | 16
04/12/2014 | VH | VV | 18
29/12/2014 | HV | HH | 24
10/12/2014 | VH | VV | 27
11/12/2014 | VH | VV | 5

The SUMO software allows tuning the PFA and the detection threshold adjustment parameters for the K-distribution. The values of the other parameters can remain the same for all SAR sensors tested, radar bands (X, C, L), incidence angles, polarizations, wind speeds, sea states, geographic areas, and ship types.4 Within this comparison, the default parameters defined by the JRC have been used, as, according to the authors, they have been determined to work best from experience on a large set of images.4 Thus, the PFA has been set to 10⁻⁷, and the detection thresholds for the co-pol and cross-pol channels have been set to 1.2 and 1.5, respectively.

On the contrary, the Space Shepherd algorithm parameters have been scaled for the Sentinel-1 images, starting from the parameters previously defined for full-resolution COSMO-SkyMed. The PFA has been set to 10⁻⁵, whereas the guard and background windows have been set to 20×20 and 50×50 pixels, respectively. The parameters used for the OBIA processing over SAR images are listed in Table 8.

Table 8

Features and values used for the segmentation and classification steps.

Feature | Step | Value
Scale | Segmentation | 50
Shape | Segmentation | 0.1
Compactness | Segmentation | 0.5
Band value | Classification | 1
Area (pixel) | Classification | 0÷500
Length/width | Classification | 1÷5

The results of the comparison are hereby described in terms of: (i) number of detections, i.e., the total number of detections resulting from the processing; (ii) number of correct detections, i.e., the number of correctly identified ships with respect to AIS data; (iii) number of false alarms, computed as the difference between the number of detections and the number of correct detections; (iv) number of missed detections, computed as the difference between the number of ground truth data and the number of correct detections; (v) number of ground truth targets, corresponding to the available AIS data for each analyzed image; and (vi) probability of detection, defined as the ratio between the number of correct detections and the number of ground truth targets (a sketch of this matching is given below).
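These statistics can be derived by matching detections to AIS positions with a nearest-neighbor rule, as in the following sketch (the 500-m matching tolerance is illustrative; the paper does not state its matching criterion):

```python
import numpy as np

def detection_statistics(detections_xy, ais_xy, max_dist_m=500.0):
    """Match detections to AIS ground truth by nearest neighbor within a
    tolerance (value illustrative) and derive the comparison statistics."""
    det = np.asarray(detections_xy, dtype=float)
    ais = np.asarray(ais_xy, dtype=float)
    matched = set()
    for a in ais:
        if det.size == 0:
            break
        d = np.hypot(*(det - a).T)          # distances to all detections
        j = int(np.argmin(d))
        if d[j] <= max_dist_m and j not in matched:
            matched.add(j)
    correct = len(matched)
    return {
        "detections": len(det),
        "correct": correct,
        "false_alarms": len(det) - correct,
        "missed": len(ais) - correct,
        "ground_truth": len(ais),
        "probability_of_detection": correct / len(ais) if len(ais) else np.nan,
    }
```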

These statistics have been determined for each image, with both polarizations merged, respectively for cross-polarization and co-polarization images, and are represented in the histograms in Fig. 7.

Fig. 7

Results of the comparison between the SUMO software and the Space Shepherd algorithm in terms of total number of detections, number of correct detections, number of false alarms, number of missed detections, and number of ground truth targets for (a) cross-polarization images and (b) co-polarization images.


Results have shown that, while the number of correct detections is almost equal for all tests, images acquired in cross polarization are characterized by a lower number of detections and, consequently, a lower number of false alarms. On the contrary, co-polarization images show a considerable number of false alarms, particularly if processed with SUMO, reaching up to one order of magnitude more than the results from the Space Shepherd algorithm. This disagreement could be due to: (i) the default parameter settings within SUMO and (ii) the application of OBIA after the CFAR within the Space Shepherd algorithm, which helps in removing isolated pixels and possible false alarms.

The average probability of detection is high for all images (Sentinel-1 and COSMO-SkyMed); it reaches 0.89 and 0.80 for cross-polarization images and 0.83 and 0.79 for co-polarization images, respectively, for the SUMO software and the Space Shepherd algorithm. Consequently, the percentage of missed detections is low and stays below 20% averaged over all tests.

In addition, AIS data have been used to validate the results of the processing in terms of length and heading estimates, following the same methodology described in Sec. 3 (Fig. 8). The results of the tests on these specific Sentinel-1 images have not shown a tendency toward cross polarization or co-polarization, as the R2 values are almost comparable for all cases. In the tests with the Space Shepherd algorithm, results have shown a better estimate of heading than of length, in line with the results obtained from the processing of the other images, as described in the previous sections. On the contrary, results obtained from the SUMO software have shown higher accuracies in length estimates than in heading. This could derive from the corrections applied to the heading estimates with respect to range, which have been converted into headings with respect to north in order to compare them with AIS data. An example of the detection results of the Space Shepherd algorithm and of the SUMO software applied over the Sentinel-1 images used for these tests is represented in Fig. 9.

Fig. 8

Scatterplots of measured and estimated ship (a), (b) length and (c), (d) heading for (left) cross-polarization images and (right) co-polarization images acquired by Sentinel-1. The gray line represents the ideal perfect correspondence between real data and estimates (R2=1).


Fig. 9

Detection results of the Space Shepherd algorithm (in orange) and of the SUMO software (light blue) over a Sentinel-1 image (named E1EA) in both its (a) cross polarization and (b) co-polarization. AIS data used for validation are represented as yellow dots.


A more detailed analysis has been performed on the false alarms in the Sentinel-1 results of both the SUMO software and the Space Shepherd algorithm. The ratio between the number of false alarms and the portion of the image occupied by sea (km2) is lower for cross polarization (0.003, averaged over all tests) than for co-polarization (0.110, averaged over all tests), denoting that, within these experiments, cross polarization is less sensitive to sea conditions or other causes of false alarms than co-polarization.

As AIS data can sometimes be characterized by poor quality or technical errors,51,52 and smaller ships are not obliged to exchange AIS information, some of these false alarms could instead imply the presence of small ships or a lack of received AIS data. To better understand the distribution of false alarms in the images and to correlate them to potential small ships or sea clutter, the percentage of false alarms with respect to the total number of false alarms in the whole image has been determined within equally spaced 10-km distance zones from the coast (Table 9). Results have shown that on cross-polarization images the percentage of false alarms decreases when leaving the coastline, while on co-polarization images false alarms seem to be more equally distributed over all distance zones. The higher concentration of false alarms near the coast could be attributed to the presence of small ships, which do not send AIS data. On the other hand, false alarms found far from the coast could be due to: (i) small or big ships that do not send AIS data or are located in areas with poor AIS coverage; (ii) azimuth ambiguities generated by the SAR acquisition system; and (iii) sea clutter generated by adverse atmospheric and wind conditions.

Table 9

Results of the false alarm analysis with respect to distance zones equally spaced 10 km from the coast, in terms of percentage of false alarms and average detection length. Results are given for the tests with both the SUMO software and the Space Shepherd algorithm (hereby denoted SS), applied to cross-polarization and co-polarization images, and are determined as averages over the analyses on all Sentinel-1 images.

Distance from the coast (km) | Cross polarization: SUMO | Cross polarization: SS | Co-polarization: SUMO | Co-polarization: SS
10 | 0.61 | 0.36 | 0.11 | 0.22
20 | 0.05 | 0.11 | 0.16 | 0.07
30 | 0.01 | 0.03 | 0.15 | 0.07
40 | 0.10 | 0.12 | 0.12 | 0.15
50 | 0.10 | 0.26 | 0.14 | 0.33
60 | 0.08 | 0.08 | 0.11 | 0.12
70 | 0.03 | 0.01 | 0.11 | 0.03
80 | 0.01 | 0.03 | 0.07 | 0.00
90 | 0.00 | 0.00 | 0.06 | 0.00

Evidence from the experiments has shown that targets with a strong radar signal are detected in both cross polarization and co-polarization. Consequently, detections corresponding in geographic position on both co- and cross-polarization images should represent vessels misinterpreted as false alarms. In particular, detections situated within 10 km from the coast could actually represent small vessels lacking AIS data. Within these experiments, the percentage of false alarms detected in co-polarization images and corresponding to false alarms detected in cross-polarization images is equal on average to 1.6% and 29% of the total number of false alarms detected in co-polarization images, for the tests with SUMO and the Space Shepherd algorithm, respectively. In addition, the average length of targets flagged as false alarms is in many cases not negligible. This implies that, if these detections represent ships rather than false alarms, AIS data are not available for a considerable number of medium/big size vessels, and supplementary monitoring systems could be required. Nevertheless, the subtraction of potential vessels lacking AIS data from the number of false alarms does not influence the relative order of magnitude for co-polarization and cross polarization. Thus, the tests performed within this work have shown that, while cross polarization and co-polarization give an equal contribution in terms of correct detections, what changes most between the two polarizations is the number of false alarms. Different authors4,51 have indeed reported that cross polarization is better than co-polarization for ship surveillance, as: (i) ship signatures are better defined; (ii) backscatter from the ocean is smaller; and (iii) the ship/sea contrast is higher for all incidence angles. The HV polarization, in particular, is optimal for detecting ships when SAR incidence angles are below 45 deg,16 as is the case for the Sentinel-1A images employed within these tests.

In addition, a more detailed comparison of the two algorithms should include an assessment of SUMO performances with respect to parameter tuning, particularly on co-pol images, which seem to be more sensitive to sea clutter than cross-pol images. As an example, to obtain 1% of false alarms among the total number of detections within their tests, Santamaria et al.50 set the detection threshold adjustments to 10.0 and 2.0 for the co-pol and cross-pol channels, respectively. Applying this setting in our tests has actually led to a reduction of false alarms of about 20%, averaged over all cross-polarization images, and of about 65%, averaged over co-polarization images. Nevertheless, the number of correct detections has also slightly decreased (nearly 12% for cross-pol and 16% for co-pol), as a consequence of the lower overall number of detections.

5.3.

Issues and Possible Improvements

Experiments carried out within this work, together with the literature, have highlighted some issues in detecting vessels on both optical and radar images. Spatial resolution limits the minimum detectable ship size, despite the high detail and accuracy achievable with high-resolution optical images. On SAR data, the minimum detectable ship size is difficult to quantify, as it does not depend only on the acquisition system but can also be influenced by target characteristics and environmental conditions (e.g., large nonmetallic ships have a very small radar echo, while objects smaller than the image resolution but fitted with strong radar reflectors can be easily detected).4

In addition, false alarms generated by natural environmental conditions (wind and waves), combined with acquisition factors inherent to the targets (ship size and material), the sensor (resolution and polarization), and the imaging geometry (incidence and aspect angles), are difficult to overcome, particularly on radar images. Thus, the results of ship detection algorithms could be improved by the removal of false alarms (e.g., azimuth ambiguities, strong waves, unmasked islands, or coastal infrastructures) by experienced human operators.4 Consequently, detection and false alarm rates should be quantified as a function of all the above-cited variables, taking into account the complexity of the acquisition system and of the boundary conditions.4

This work can be improved by refining the estimate of ship speed and by formalizing a completely automatic processing chain. The image matching approach between the two single bands used to derive ship speed (Sec. 3.3) needs to be converted into an automatic process, selecting a proper operator (e.g., Förstner or Moravec) capable of identifying tie points along object borders through border or gradient estimation. Once the displacement is computed, the speed can be derived automatically. In addition, from a maritime surveillance or emergency monitoring perspective, it would be fundamental to automate the whole processing chain, from image download and preprocessing to the estimation of vessels' movement parameters, within a unique system, and to reduce the time interval between data acquisition and the availability of results to end users.
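
As one possible direction, and not the tie-point matching described above, the band-to-band displacement could also be estimated with phase correlation, as in this sketch using scikit-image; the 10-m ground sampling distance and the 1-s band lag are placeholder assumptions, since the actual lag depends on the band pair and must be read from the image metadata.

import numpy as np
from skimage.registration import phase_cross_correlation

def speed_from_band_shift(chip_a, chip_b, gsd_m=10.0, lag_s=1.0):
    """Speed (m/s) of a moving target from the sub-pixel shift between
    two spectral-band chips of the same vessel (assumed GSD and lag)."""
    shift, _, _ = phase_cross_correlation(chip_a, chip_b, upsample_factor=10)
    displacement_m = np.hypot(shift[0], shift[1]) * gsd_m
    return displacement_m / lag_s

# Synthetic check: a 2-pixel shift at 10 m/pixel over 1 s gives 20 m/s
rng = np.random.default_rng(0)
chip = rng.random((64, 64))
print(speed_from_band_shift(chip, np.roll(chip, 2, axis=0)))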

6.

Conclusion

In this paper, optical and radar data have been combined within the common framework of OBIA to detect vessels moving on open seas. The complete processing chain (i.e., vessel detection and extraction of movement properties) is simple and robust and has been applied in parallel to optical and radar images. OBIA has been selected as the core of the processing because of its adaptability to different conditions (sensor, acquisition place and date, atmospheric conditions, and sea surface roughness). In addition, OBIA helped to efficiently remove false alarms on both optical and radar data, as the classification phase is mainly based on the geometric properties of real vessels (area and length/width), as sketched below.
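
The geometric screening mentioned here can be written as a simple acceptance rule; all thresholds in the following sketch are illustrative placeholders and not the values adopted in this work.

def is_plausible_vessel(area_m2, length_m, width_m,
                        min_area=25.0, max_area=1.2e5,
                        min_ratio=1.5, max_ratio=12.0):
    """Accept a segmented object only if its area and length/width
    elongation are compatible with a real vessel (placeholder bounds)."""
    ratio = length_m / width_m if width_m > 0 else float("inf")
    return min_area <= area_m2 <= max_area and min_ratio <= ratio <= max_ratio

# Example: a 120 m x 20 m object with 2400 m2 area passes the screen
print(is_plausible_vessel(2400.0, 120.0, 20.0))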

The proposed method allows identifying with satisfying accuracy even small/medium size vessels in complex sea environments and under different environmental conditions. Length and heading estimates, validated through manual measurements and available AIS data, have proved the reliability of the method. Results have revealed R2 = 0.78 and R2 = 0.86 on optical data and R2 = 0.94 and R2 = 0.99 on SAR data, for length and heading, respectively. A feature common to all the experiments is the better estimate of heading compared to length, even for very small ships, as wakes are characterized by bigger objects than the vessel objects themselves. Even though some missed detections have been identified in all images, the majority of the imaged vessels have been detected.

In addition, results have shown that detection accuracy can be influenced by image spatial resolution and sea surface characteristics, the latter being the main source of false alarms on both optical and radar data.

Although requiring different preprocessing operations and filtering techniques, optical and radar data turn out to be complementary for this and similar applications. Thanks to their specific sensor technologies and characteristics, they can be properly combined to ensure continuous and reliable vessel detection. In addition, high- and medium-resolution images can be conveniently adopted to reach high detail or to cover wide areas, according to specific needs.

Findings of this work confirm that an effective monitoring system for maritime areas should rely on a proper combination of optical and radar data, together with conventional tracking systems.

Although some more work should be done to remove the remaining false alarms and to automate the whole processing, in order to make it efficiently exploitable in different scenarios, the retrieved results, compared to existing operational systems, are encouraging. The proposed method could have practical applications within programs such as the European Copernicus, which aims at promoting effective actions toward water environment monitoring, maritime security, surveillance, and humanitarian aid.

Acknowledgments

This work has been supported by the Politecnico di Milano through the Polisocial Award, Grant No. T5E4RIST00. We would like to thank Francesco Topputo, Mauro Massari, Stefano Tebaldini, Francesco Banda, Riccardo Lombardi, and Andrea Marchesi for their work on the “Space Shepherd” project. This paper is an expansion of the work presented at the SPIE Remote Sensing Conference 2017 (10428-9). The authors have no relevant financial interests in the manuscript and no other potential conflicts of interest to disclose.

References

1. 

Y. Fischer and A. Bauer, “Object-oriented sensor data fusion for wide maritime surveillance,” in Proc. IEEE Int. Waterside Security Conf. (WSS), 1 –6 (2010). https://doi.org/10.1109/WSSC.2010.5730244 Google Scholar

2. 

P. A. Mallas and H. C. Graber, “Imaging ships from satellites,” Oceanography, 26 (2), 150 –155 (2013). https://doi.org/10.5670/oceanog.2013.71 1042-8275 Google Scholar

3. 

S. Brusch et al., “Ship surveillance with TerraSAR-X,” IEEE Trans. Geosci. Remote Sens., 49 (3), 1092 –1103 (2011). https://doi.org/10.1109/TGRS.2010.2071879 IGRSD2 0196-2892 Google Scholar

4. 

H. Greidanus et al., “The SUMO ship detector algorithm for satellite radar images,” Remote Sens., 9 (3), 246 (2017). https://doi.org/10.3390/rs9030246 Google Scholar

5. 

N. Kourti et al., “Integrating remote sensing in fisheries control,” Fish. Manage. Ecol., 12 (5), 295 –307 (2005). https://doi.org/10.1111/fme.2005.12.issue-5 FMAEEL 1365-2400 Google Scholar

6. 

H. Greidanus et al., “Benchmarking operational SAR ship detection,” in Proc. IEEE Int. Geoscience and Remote Sensing Symp. IGARSS’04, 4215 –4218 (2004). https://doi.org/10.1109/IGARSS.2004.1370065 Google Scholar

7. 

D. Werle, “Radarsat SAR azimuth ambiguity patterns-the ghost fleet of Halifax Harbour and implications for applications,” in Int. Symp. Geomatics in the Era of RADARSAT (GER’97), 25 –30 (1997). Google Scholar

8. 

N. Proia and V. Pagé, “Characterization of a Bayesian ship detection method in optical satellite images,” IEEE Geosci. Remote Sens. Lett., 7 (2), 226 –230 (2010). https://doi.org/10.1109/LGRS.2009.2031826 Google Scholar

9. 

J. K. Roskovensky, “Technique for ship/wake detection,” U.S. Patent No. 8,170,282, U.S. Patent and Trademark Office, Washington, DC (2012).

10. 

H. Bouma et al., “Segmentation and wake removal of seafaring vessels in optical satellite images,” Proc. SPIE, 8897 88970B (2013). https://doi.org/10.1117/12.2029791 PSISDG 0277-786X Google Scholar

11. 

H. Greidanus and N. Kourti, “A detailed comparison between radar and optical vessel signatures,” in Proc. Geoscience and Remote Sensing Symp. IGARSS’06, (2006). https://doi.org/10.1109/IGARSS.2006.839 Google Scholar

13. 

D. J. Crisp, “The state-of-the-art in ship detection in synthetic aperture radar imagery,” (2004). Google Scholar

14. 

C. H. Gierull and I. Sikaneta, “A compound-plus-noise model for improved vessel detection in non-Gaussian SAR imagery,” IEEE Trans. Geosci. Remote Sens., 56 (3), 1444 –1453 (2018). https://doi.org/10.1109/TGRS.2017.2763089 IGRSD2 0196-2892 Google Scholar

15. 

S. Song et al., “Ship detection in SAR imagery via variational Bayesian inference,” IEEE Geosci. Remote Sens. Lett., 13 (3), 319 –323 (2016). https://doi.org/10.1109/LGRS.2015.2510378 Google Scholar

16. 

M. Yeremy, “Ocean surveillance with polarimetric SAR,” Can. J. Remote Sens., 27 (4), 328 –344 (2001). https://doi.org/10.1080/07038992.2001.10854875 CJRSDP 0703-8992 Google Scholar

17. 

J. W. M. Campbell, “Ocean surface feature detection with the CCRS along-track InSAR,” Can. J. Remote Sens., 23 (1), 24 –37 (1997). https://doi.org/10.1080/07038992.1997.10874675 CJRSDP 0703-8992 Google Scholar

18. 

P. Partsinevelos and G. Miliaresis, “Ship extraction and categorization from ASTER imagery,” in Proc. 2nd Int. Conf. Remote Sensing and Geoinformation of Environment, (2014). Google Scholar

19. 

G. Yang et al., “Ship detection from optical satellite images based on sea surface analysis,” IEEE Geosci. Remote Sens. Lett., 11 (3), 641 –645 (2014). https://doi.org/10.1109/LGRS.2013.2273552 Google Scholar

20. 

U. Kanjir, H. Greidanus and K. Oštir, “Vessel detection and classification from spaceborne optical images: a literature survey,” Remote Sens. Environ., 207 1 –26 (2018). https://doi.org/10.1016/j.rse.2017.12.033 Google Scholar

21. 

G. Huang, Y. Wang, Y. Zhang and Y. Tian, “Ship detection using texture statistics from optical satellite images,” in Int. Conf. Digital Image Computing Techniques and Applications (DICTA), 507 –512 (2011). https://doi.org/10.1109/DICTA.2011.91 Google Scholar

22. 

C. Corbane et al., “A complete processing chain for ship detection using optical satellite imagery,” Int. J. Remote Sens., 31 (22), 5837 –5854 (2010). https://doi.org/10.1080/01431161.2010.512310 IJSEDK 0143-1161 Google Scholar

23. 

G. Saur et al., “Detection and classification of man-made offshore objects in TerraSAR-X and Rapideye imagery: selected results of the Demarine-DEKO project,” 1 –10 (2011). https://doi.org/10.1109/Oceans-Spain.2011.6003596 Google Scholar

24. 

J. Tang et al., “Compressed-domain ship detection on spaceborne optical image using deep neural network and extreme learning machine,” IEEE Trans. Geosci. Remote Sens., 53 (3), 1174 –1185 (2015). https://doi.org/10.1109/TGRS.2014.2335751 IGRSD2 0196-2892 Google Scholar

25. 

Z. Song, H. Sui and Y. Wang, “Automatic ship detection for optical satellite images based on visual attention model and LBP,” in IEEE Workshop on Electronics, Computer and Applications, 722 –725 (2014). https://doi.org/10.1109/IWECA.2014.6845723 Google Scholar

26. 

C. Zhu et al., “A novel hierarchical method of ship detection from spaceborne optical image based on shape and texture features,” IEEE Trans. Geosci. Remote Sens., 48 (9), 3446 –3456 (2010). https://doi.org/10.1109/TGRS.2010.2046330 IGRSD2 0196-2892 Google Scholar

27. 

G. Liu et al., “A new method on inshore ship detection in high-resolution satellite images using shape and context information,” IEEE Geosci. Remote Sens. Lett., 11 (3), 617 –621 (2014). https://doi.org/10.1109/LGRS.2013.2272492 Google Scholar

28. 

S. Qi et al., “Unsupervised ship detection based on saliency and S-HOG descriptor from optical satellite images,” IEEE Geosci. Remote Sens. Lett., 12 (7), 1451 –1455 (2015). https://doi.org/10.1109/LGRS.2015.2408355 Google Scholar

29. 

H. He et al., “Inshore ship detection in remote sensing images via weighted pose voting,” IEEE Trans. Geosci. Remote Sens., 55 (6), 3091 –3107 (2017). https://doi.org/10.1109/TGRS.2017.2658950 IGRSD2 0196-2892 Google Scholar

30. 

H. Greidanus, “Assessing the operationality of ship detection from space,” in EURISY Symp. New Space Services for Maritime Users: The Impact of Satellite Technology on Maritime Legislation, 35 (2005). Google Scholar

33. 

T. Eriksen et al., “Tracking ship traffic with space-based AIS: experience gained in first months of operations,” in IEEE Int. Waterside Security Conf. (WSS), 1 –8 (2010). https://doi.org/10.1109/WSSC.2010.5730241 Google Scholar

34. 

P. Ramona et al., “Ship detection in SAR medium resolution imagery for maritime surveillance: algorithm validation using AIS data,” in IEEE Int. Geoscience and Remote Sensing Symp. (IGARSS), 3690 –3693 (2014). https://doi.org/10.1109/IGARSS.2014.6947284 Google Scholar

35. 

M. Vespe et al., “Maritime multi-sensor data association based on geographic and navigational knowledge,” in Radar Conf. RADAR’08, 1 –6 (2008). https://doi.org/10.1109/RADAR.2008.4720782 Google Scholar

36. 

F. Wu et al., “Vessel detection and analysis combining SAR images and AIS information,” in IEEE Int. Geoscience and Remote Sensing Symp. (IGARSS), 6633 –6636 (2016). https://doi.org/10.1109/IGARSS.2016.7730732 Google Scholar

37. 

R. Richter and D. Schläpfer, ATCOR-4 User Guide, Version 7.0.3, Wessling, Germany (2016). http://www.rese.ch/pdf/atcor4_manual.pdf Google Scholar

38. 

A. A. Green et al., “A transformation for ordering multispectral data in terms of image quality with implications for noise removal,” IEEE Trans. Geosci. Remote Sens., 26 (1), 65 –74 (1988). https://doi.org/10.1109/36.3001 IGRSD2 0196-2892 Google Scholar

39. 

M. Gianinetto et al., “Integration of COSMO-SkyMed and GeoEye-1 data with object-based image analysis,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 8 (5), 2282 –2293 (2015). https://doi.org/10.1109/JSTARS.2015.2425211 Google Scholar

40. 

M. Aiello and M. Gianinetto, “A combined use of multispectral and SAR images for ship detection and characterization through object based image analysis,” Proc. SPIE, 10428 104280A (2017). https://doi.org/10.1117/12.2277941 PSISDG 0277-786X Google Scholar

41. 

C. H. Gierull, “On the statistical examination of prevailing ship detection methodologies for space-based synthetic aperture radar imagery. Improved capabilities through rigorous mathematical treatment and a novel sea clutter model,” (2017). Google Scholar

42. 

T. M. Padmaja, P. R. Krishna and R. S. Bapi, “Majority filter-based minority prediction (MFMP): an approach for unbalanced datasets,” in TENCON 2008-2008 IEEE Region 10 Conf., 1 –6 (2008). https://doi.org/10.1109/TENCON.2008.4766705 Google Scholar

43. 

A. Marchesi et al., “Detection of moving vehicles with WorldView-2 satellite data,” in Proc. 33rd Asian Conf. Remote Sensing, 26 –30 (2012). Google Scholar

44. 

A. Kääb and S. Leprince, “Motion detection using near-simultaneous satellite acquisitions,” Remote Sens. Environ., 154 164 –179 (2014). https://doi.org/10.1016/j.rse.2014.08.015 Google Scholar

45. 

T. Krauss et al., “Traffic flow estimation from single satellite images,” in Proc. of the SMPR Conf. ISPRS Archives, 241 –246 (2013). Google Scholar

46. 

T. Krauss, “Exploiting satellite focal plane geometry for automatic extraction of traffic flow from single optical satellite imagery,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-1 (1), 179 –187 (2014). https://doi.org/10.5194/isprsarchives-XL-1-179-2014 1682-1750 Google Scholar

47. 

L. Barazzetti, B. Cuca and M. Previtali, “Evaluation of registration accuracy between Sentinel-2 and Landsat 8,” Proc. SPIE, 9688 968809 (2016). https://doi.org/10.1117/12.2241765 PSISDG 0277-786X Google Scholar

48. 

F. Banda, L. Ferro-Famil and S. Tebaldini, “Polarimetric time-frequency analysis of vessels in Spotlight SAR images,” in IEEE Int. Geoscience and Remote Sensing Symp. (IGARSS), 1033 –1036 (2014). https://doi.org/10.1109/IGARSS.2014.6946604 Google Scholar

49. 

G. Willhauck et al., “Object-oriented ship detection from VHR satellite images,” 1 –12 (2005). Google Scholar

50. 

C. Santamaria et al., “Mass processing of Sentinel-1 images for maritime surveillance,” Remote Sens., 9 (7), 678 (2017). https://doi.org/10.3390/rs9070678 Google Scholar

51. 

P. W. Vachon, R. A. English and J. Wolfe, “Ship signatures in synthetic aperture radar imagery,” in IEEE Int. Geoscience and Remote Sensing Symp. IGARSS 2007, 1393 –1396 (2007). https://doi.org/10.1109/IGARSS.2007.4423066 Google Scholar

52. 

A. Harati-Mokhtari et al., “Automatic identification system (AIS): data reliability and human error implications,” J. Navig., 60 (3), 373 –389 (2007). https://doi.org/10.1017/S0373463307004298 Google Scholar

Biography

Martina Aiello received her PhD in environmental and infrastructure engineering from the Politecnico di Milano in 2018 and her MSc degree in environmental and land planning engineering in 2014, specializing in environmental monitoring and diagnostics, with a thesis about retrieving bottom coverage and bathymetry properties in marine shallow waters from high-resolution satellite images through a bio-optical model. She is a research assistant in the Laboratory of Remote Sensing at Politecnico di Milano, Italy. Her research activity focuses on object-based classification techniques for water environments through high- and medium-resolution satellite images.

Biographies of the other authors are not available.

© 2019 Society of Photo-Optical Instrumentation Engineers (SPIE) 1931-3195/2019/$25.00
Martina Aiello, Renata Vezzoli, and Marco Gianinetto "Object-based image analysis approach for vessel detection on optical and radar images," Journal of Applied Remote Sensing 13(1), 014502 (11 January 2019). https://doi.org/10.1117/1.JRS.13.014502
Received: 9 April 2018; Accepted: 11 December 2018; Published: 11 January 2019