With the growing popularity of unmanned aerial vehicle (UAV) technology, remote sensing target tracking in UAV aerial videos has drawn much attention. However, UAV aerial videos pose challenges such as pronounced target scale changes and frequent target shape variations, which typical tracking algorithms struggle to handle. Therefore, we propose a remote sensing target tracking method for UAV videos based on a multiscale antideformation network (MSADN). This method uses the fully convolutional Siamese network (SiameseFC) as its basic architecture. First, in the target feature extraction stage, we replace the network's single convolutional layer with a receptive field block module, whose multibranch structure improves the algorithm's adaptability to target scale changes. Then, in the tracking stage, to address the lack of template updates, we integrate a template dynamic update module into the SiameseFC architecture. This module uses long short-term memory to generate control signals that dynamically update target shape information, improving the algorithm's ability to cope with frequent target shape variations. Finally, experimental results show that, compared with state-of-the-art trackers, MSADN achieves better performance (75.9% precision, 56.4% success) while maintaining higher efficiency (60 fps).
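The abstract names the two additions to SiameseFC but not their implementation. The sketch below is a minimal PyTorch rendering of them, assuming a receptive-field-block-style multibranch convolution in place of a single convolutional layer and an LSTM cell whose hidden state gates the blend between the stored template features and the latest target features. Class names, channel splits, and the sigmoid gating are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class RFBBlock(nn.Module):
    """Multibranch block with different dilation rates (RFB-style),
    standing in for the single convolution it replaces (assumed structure)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 3)  # parallel branches with growing receptive fields
        ])
        self.fuse = nn.Conv2d(branch_ch * 3, out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class TemplateUpdater(nn.Module):
    """LSTM-gated blend of the stored template vector with features from the
    latest tracked target crop (hypothetical realisation of the dynamic
    template update module; inputs are globally pooled SiameseFC embeddings)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.lstm = nn.LSTMCell(feat_dim, feat_dim)
        self.state = None  # (h, c), carried across frames

    def forward(self, template_vec, new_vec):
        if self.state is None:
            self.state = (torch.zeros_like(new_vec), torch.zeros_like(new_vec))
        h, c = self.lstm(new_vec, self.state)
        self.state = (h, c)
        gate = torch.sigmoid(h)                 # control signal in [0, 1]
        return gate * new_vec + (1 - gate) * template_vec
```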
KEYWORDS: Visualization, Sensors, Information visualization, Cameras, Video surveillance, Information fusion, Imaging systems, Image classification, Data fusion
This paper proposes anomaly detection from visual information using distributed deep learning. First, visual anomalies are defined for a specific application domain in which they are critical to safe operation. Second, a deep convolutional neural network is chosen as the detector for visual anomalies. Third, detection results from different visual sources are fused to achieve higher accuracy and a lower false alarm rate. Experimental results demonstrate that the proposed visual anomaly detection framework achieves high performance and provides satisfactory security assurance.
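The abstract does not specify the fusion rule. A minimal sketch of one common decision-level scheme, weighted averaging of per-source CNN anomaly scores followed by thresholding, is shown below; the function name, weights, and threshold are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def fuse_anomaly_scores(scores, weights=None, threshold=0.5):
    """Fuse per-source anomaly probabilities into one decision.

    scores   : anomaly probability from each visual source's CNN detector
    weights  : relative trust in each source (equal by default)
    threshold: fused score at or above this value is flagged as anomalous
    """
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones_like(scores)
    weights = np.asarray(weights, dtype=float)
    fused = float(np.dot(weights, scores) / weights.sum())
    return fused, fused >= threshold

# Example: three cameras observing the same scene, the middle one less trusted.
fused_score, is_anomaly = fuse_anomaly_scores([0.82, 0.40, 0.91],
                                              weights=[1.0, 0.5, 1.0])
print(f"fused score = {fused_score:.2f}, anomaly = {is_anomaly}")
```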
Because it can operate in all weather conditions and at all times, synthetic aperture radar (SAR) has become a hot research topic in remote sensing. Despite the well-known advantages of SAR, its unique imaging methodology makes feature extraction difficult, and this challenge has attracted research interest in traditional automatic target recognition (ATR) methods. With the development of deep learning, convolutional neural networks (CNNs) offer another way to detect and recognize targets when a large number of samples is available, but this premise often does not hold when monitoring a specific type of ship. In this paper, we propose a method to enhance the performance of Faster R-CNN with limited samples to detect and recognize ships in SAR images.
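The abstract does not describe the enhancement itself. As a hedged illustration of one standard way to adapt Faster R-CNN when only limited samples are available, the sketch below fine-tunes a COCO-pretrained torchvision model with a frozen backbone so that only the detection heads are trained; NUM_CLASSES and the optimizer settings are assumptions, not values from the paper.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + ship (assumed label set)

# Start from a COCO-pretrained detector and swap in a new box predictor
# sized for the SAR ship classes.
model = fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Freeze the backbone so the limited SAR samples only train the heads.
for p in model.backbone.parameters():
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.005, momentum=0.9,
                            weight_decay=5e-4)
```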