In the context of the rapid development and widespread application of remote sensing technology, fine-grained aircraft classification is a research area with significant practical value. Most current classification algorithms, developed primarily for natural images, underperform when applied to remote sensing images. Additionally, fine-grained classification tasks inherently face challenges due to small inter-class differences and subtle discriminative features. To address these issues, this study designs an aircraft fine-grained network (AFG-Net) for fine-grained aircraft classification in remote sensing images. AFG-Net builds upon the ConvNeXt network, whose classification performance exceeds that of traditional CNNs and is comparable to the Swin Transformer. Because remote sensing images are affected by natural conditions and complex imaging backgrounds, data augmentation is applied before training to help the model cope with interference and improve robustness. The improvements made in this study are as follows: 1) Developing the ConvNext_s network, enhanced with the SimAM attention mechanism for better extraction of subtle, discriminative aircraft features. 2) Proposing a new composite loss function based on Mutual-Channel Loss, allowing the network to consider both global and local information more comprehensively, thereby improving aircraft classification performance and model robustness. 3) Demonstrating AFG-Net's applicability to fine-grained aircraft classification tasks in remote sensing images. Tested on the MTARSI and OPT-Aircraft_v1.0 datasets, AFG-Net achieves accuracies of 94.76% and 84.59%, respectively, outperforming existing advanced models in extensive experiments.
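As a rough illustration of the two components named above, the PyTorch sketch below shows a standard parameter-free SimAM attention block (following the published SimAM formulation) and a composite loss that adds a Mutual-Channel term to cross-entropy; the weight mu and the mc_loss_fn callable are illustrative placeholders, not the paper's exact settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimAM(nn.Module):
    # Parameter-free SimAM attention for a feature map of shape (B, C, H, W).
    def __init__(self, e_lambda=1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x):
        n = x.shape[2] * x.shape[3] - 1
        # Squared deviation of each activation from its per-channel mean.
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # Inverse energy: a lower energy marks a more distinctive neuron.
        y = d / (4 * (d.sum(dim=[2, 3], keepdim=True) / n + self.e_lambda)) + 0.5
        return x * torch.sigmoid(y)

def composite_loss(logits, feature_maps, labels, mc_loss_fn, mu=0.5):
    # Cross-entropy on the logits plus a Mutual-Channel term computed on the
    # class-aligned feature maps; mc_loss_fn and mu are placeholders standing
    # in for the paper's exact formulation.
    return F.cross_entropy(logits, labels) + mu * mc_loss_fn(feature_maps, labels)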
With the rapid development of remote sensing technology, remote sensing images play an important role in agriculture, geology, and natural disaster detection. Aircraft in complex remote sensing scenes are extremely small, and different aircraft models have similar shapes, so improving the accuracy of aircraft target recognition is a challenging task. We propose an improved aircraft small target recognition method based on YOLOv5, which improves the recognition accuracy of aircraft targets while preserving the speed of the model. The specific contributions are as follows: to address the lack of remote sensing aircraft training sets, we combine existing public remote sensing images with aircraft model images; most of the aircraft occupy only a dozen to twenty pixels in a 1k*1k image, and scaling is applied to the generated dataset; to better combine features at different scales and obtain higher-level feature fusion, we introduce the BiFPN module, which provides more residual connections and more complete feature fusion; we use the SE attention mechanism to learn the weights of different features, extracting the information most important for detection and improving model performance; and, in view of the small size of the aircraft, a Wasserstein-distance-based measure, NWD (Normalized Wasserstein Distance), is selected as the loss function.
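The SE block and the NWD measure mentioned above can be sketched as follows. The SE block is the standard squeeze-and-excitation formulation, and nwd_loss follows the published Normalized Wasserstein Distance definition; the constant c is a dataset-dependent hyperparameter, and the default used here is an assumption rather than a value from the paper.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    # Standard squeeze-and-excitation channel attention.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=[2, 3])).view(b, c, 1, 1)  # squeeze, then excite
        return x * w

def nwd_loss(pred, target, c=12.8, eps=1e-7):
    # 1 - NWD for boxes given as (cx, cy, w, h). Each box is modelled as a 2D
    # Gaussian; the squared 2-Wasserstein distance between the Gaussians is the
    # squared centre offset plus a quarter of the squared width/height gaps.
    center = (pred[..., :2] - target[..., :2]).pow(2).sum(dim=-1)
    shape = ((pred[..., 2:] - target[..., 2:]) / 2).pow(2).sum(dim=-1)
    w2 = torch.sqrt(center + shape + eps)
    return 1.0 - torch.exp(-w2 / c)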
In the context of the rapid development and widespread application of remote sensing technology, small object detection has become a prominent research focus. Despite the extensive use of the YOLOv5 network in object detection, its performance on small objects, especially in remote sensing images, remains unsatisfactory. Detecting and recognizing small objects such as aircraft poses even greater challenges because of the small size of the targets, the low contrast between targets and backgrounds, and the lack of comprehensive publicly available datasets. To address these issues, this study constructed a dataset of remote sensing images containing small aircraft targets, which helps the network capture fine-grained features and improve detection performance, thus compensating for the shortcomings of existing publicly available datasets. Based on the YOLOv5 network model, this study proposes the following optimizations: (1) To tackle the small target sizes, the model structure was simplified to make the feature extraction network more suitable for small objects and to reduce the number of model parameters. (2) To remedy the deficiencies of the original model's fusion method, a bidirectional Feature Pyramid Network (BiFPN) was introduced to enhance multi-level feature fusion. (3) To reduce the computational complexity of the model, reasonable anchor boxes were designed so that the model can accurately focus on crucial information during detection. Experimental results demonstrate that the proposed algorithm improves the detection accuracy and speed of small aircraft targets in remote sensing images; on our custom dataset, the method achieves excellent results in terms of precision, computational efficiency, and parameter count.
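The abstract does not describe the fusion rule used at each BiFPN node, so the sketch below shows the fast normalized weighted fusion from the original BiFPN (EfficientDet) design as an assumed illustration; the inputs are expected to have already been resized to a common spatial scale, for example fused = WeightedFusion(3)([p3, p4_up, p5_up]).

import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    # Fast normalized fusion at one BiFPN node:
    #   out = sum_i(w_i * f_i) / (sum_i(w_i) + eps),  w_i = ReLU(learned scalar)
    def __init__(self, num_inputs, eps=1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, features):
        w = torch.relu(self.weights)
        w = w / (w.sum() + self.eps)
        return sum(wi * f for wi, f in zip(w, features))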
Single-frame infrared small target detection plays a crucial role in various applications, such as monitoring critical areas at airports, ensuring flight safety, and surveillance of unmanned aerial vehicles (UAVs). However, detecting small targets in infrared images is a challenging task due to factors such as imaging distance, image quality, target environment, and diverse target types. To improve the detection of small targets within complex backgrounds, we redesign the backbone network and transform the detection task into an image binary segmentation problem. First, we introduce the Ublock module to capture contextual information surrounding small targets. In addition, we employ deconvolution to restore the resolution of downsampled images. Furthermore, we introduce multiple top-down and bottom-up pathways for feature fusion. Finally, we incorporate an attention module after each Ublock module to highlight small targets more effectively. Experimental results demonstrate the effectiveness of the proposed method in enhancing infrared small target detection.
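The internals of the Ublock are not spelled out in the abstract, so the decoder stage below is only an illustrative sketch of the described pattern: a transposed convolution (deconvolution) restores the resolution lost to downsampling, the result is fused with a same-scale encoder feature along a top-down pathway, and an assumed channel-attention gate re-weights the output to emphasise small targets. A 1x1 convolution with a sigmoid after the final stage would then produce the binary segmentation map.

import torch
import torch.nn as nn

class DecoderStage(nn.Module):
    # Illustrative decoder step: deconvolution upsampling, skip fusion, and a
    # channel-attention gate (this attention design is an assumption, not the
    # paper's exact module).
    def __init__(self, in_ch, out_ch, reduction=8):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x, skip):
        x = self.up(x)            # deconvolution: 2x spatial upsampling
        x = x + skip              # fuse with the same-scale encoder feature
        return x * self.attn(x)   # attention gate highlights small targets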