Synthetic aperture radar (SAR) operates under all weather conditions, making it well suited to ocean surveillance. Scene classification is an essential pre-task for other computer vision tasks in ocean monitoring, so developing scene classification techniques for SAR sea images is of great importance. Owing to the excellent feature representation abilities of neural networks, deep learning-based methods far outperform traditional methods based on hand-crafted features on scene classification tasks. Many lightweight classification networks have been proposed to improve inference speed, but compared with ordinary CNNs, lightweight networks achieve slightly lower accuracy on scene classification. In this article, we therefore propose an improved lightweight convolutional neural network for scene classification of SAR sea images. First, to meet real-time requirements, we choose MobileNetv1 as the base classification network. Then, to compensate for the loss of accuracy, we strengthen each depthwise convolution layer of the network with 1D asymmetric convolution kernels. Finally, after training, we merge the linear computations of each layer so that the network reverts to its original structure. Experimental results show that the modified model improves accuracy over the original on scene classification of sea SAR images without extra computation.
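The post-training merge described above relies on the linearity of convolution: a 1×3 and a 3×1 kernel, zero-padded to 3×3 and centered, can be summed with the 3×3 depthwise kernel into a single kernel that produces identical outputs. A minimal NumPy sketch of this idea (the function names and kernel shapes are illustrative, not taken from the paper, and batch-norm folding is omitted):

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' cross-correlation of a 2D input with a 2D kernel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def fuse_kernels(k3x3, k1x3, k3x1):
    """Merge a 3x3 kernel with 1x3 / 3x1 branches into one 3x3 kernel.

    The asymmetric kernels are zero-padded to 3x3 (centered) and added;
    by linearity of convolution the fused kernel reproduces the sum of
    the three branch outputs, so inference cost is unchanged.
    """
    fused = k3x3.copy()
    fused[1, :] += k1x3[0, :]   # 1x3 kernel occupies the middle row
    fused[:, 1] += k3x1[:, 0]   # 3x1 kernel occupies the middle column
    return fused
```

At inference time only the fused 3×3 kernel is kept, which is why the re-parameterized network has the same structure and cost as the original.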
Ship detection in infrared images has important military and civil applications. Because infrared images are difficult to acquire in large quantities, deep neural networks cannot be trained directly on them; if a model pre-trained on visible-light images is used directly for detection, missed detections occur due to the different imaging conditions. To address this problem, this paper proposes a detection method that combines a deep convolutional neural network with salient regions. First, we propose a method for extracting salient regions based on anchors and a saliency map; multiple new images are then formed from these salient regions. The newly formed images and the original image are fed into the deep convolutional neural network for parallel processing, and the detection results are finally merged by the non-maximum suppression (NMS) method to produce the final detections. Comparative results show that the proposed method effectively reduces the missed-detection rate and thus improves detection accuracy.
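The final merging step uses standard greedy NMS to integrate detections from the original image and the salient-region crops. A minimal NumPy version for reference (the box format and threshold are illustrative; the abstract does not specify them):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns indices of kept boxes, highest score first.
    """
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # process boxes by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of the current top box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[rest])
        yy1 = np.maximum(y1[i], y1[rest])
        xx2 = np.minimum(x2[i], x2[rest])
        yy2 = np.minimum(y2[i], y2[rest])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou <= iou_threshold]  # drop boxes overlapping too much
    return keep
```

Before NMS, boxes detected inside a salient-region crop would need to be mapped back into the original image's coordinate frame so that duplicates across the parallel branches actually overlap.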
Compared with short-term tracking, long-term tracking is a more challenging task. A long-term tracker must follow the target through long sequences and cope with its frequent disappearance and re-appearance, which makes long-term tracking much closer to a realistic tracking system. However, few long-term tracking algorithms have been proposed, and few have shown promising performance. In this paper, we focus on a part-based long-term visual tracking framework with multiple correlation filters. First, multiple correlation filters locate the target collaboratively and address partial occlusion within a local search region. Based on the confidence score between consecutive frames, our tracker determines whether the current tracking result is reliable. In addition, an online SVM detector is trained by sampling positive and negative samples around reliably tracked targets. A local-to-global search-region strategy is adopted to adapt between short-term and long-term tracking. When heavy occlusion or out-of-view causes tracking failure, the re-detection module is activated. Extensive experiments on tracking datasets show that the proposed method performs favorably against state-of-the-art methods in terms of accuracy and robustness.
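The abstract does not say how the confidence score is computed; a common choice in correlation-filter tracking is the peak-to-sidelobe ratio (PSR) of the response map, where a low PSR flags occlusion or failure and can trigger re-detection. A sketch under that assumption (the exclusion-window size and threshold are illustrative):

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5, eps=1e-8):
    """Confidence of a correlation-filter response map.

    PSR = (peak - mean(sidelobe)) / std(sidelobe), where the sidelobe
    is the response map with a small window around the peak excluded.
    A sharp, isolated peak yields a high PSR; a diffuse or multi-modal
    response (e.g. under occlusion) yields a low one.
    """
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    # Mask out a small window centered on the peak.
    mask = np.ones(response.shape, dtype=bool)
    r, c = peak_idx
    half = exclude // 2
    mask[max(0, r - half):r + half + 1, max(0, c - half):c + half + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + eps)
```

A tracker would compare this score against a threshold each frame: above it, the result is treated as reliable (and used to update the SVM detector's samples); below it, the local-to-global search and re-detection module take over.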