Paper
18 March 2022
Reinforcement learning technology for air combat confrontation of unmanned aerial vehicle
Huan Zhou, Xiaoyan Zhang, Zhuoran Zhang
Proceedings Volume 12168, International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2021); 121681Y (2022) https://doi.org/10.1117/12.2631651
Event: International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2021), 2021, Harbin, China
Abstract
UAV air combat confrontation demands a high level of intelligence and autonomy, and artificial intelligence technology is needed to effectively improve autonomous air combat capability. This paper surveys reinforcement learning methods for UAV air combat confrontation. First, the problem and its application background are introduced. Then, domestic and international research on reinforcement learning is reviewed, covering basic reinforcement learning, reinforcement learning based on Markov chains, and deep reinforcement learning algorithms. On this basis, the paper examines reinforcement learning for UAV air combat confrontation from two aspects: existing research results and the advantages of applying each class of algorithm. Finally, a simulation is implemented, and the results demonstrate the effectiveness of reinforcement learning in air combat confrontation.
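The abstract itself contains no code. As an illustrative aid only, the sketch below shows tabular Q-learning, the simplest of the basic reinforcement learning methods the paper surveys, applied to a hypothetical one-dimensional pursuit toy problem. The ring environment, reward shaping, and hyperparameters are assumptions made for this sketch and are not the paper's air combat simulation.

import random
from collections import defaultdict

# Hypothetical toy pursuit task (not the paper's experiment): a pursuer UAV on a
# one-dimensional ring of cells tries to close on a target that drifts one cell
# forward each step. State = relative offset, actions = move -1, 0, or +1.
RING = 20
ACTIONS = (-1, 0, 1)

def step(offset, action):
    """Advance one time step: target drifts +1, pursuer moves by `action`."""
    new_offset = (offset + 1 - action) % RING
    # Small positive reward when within one cell of the target, small penalty otherwise.
    reward = 1.0 if new_offset in (0, 1, RING - 1) else -0.01
    done = new_offset == 0
    return new_offset, reward, done

def train(episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = defaultdict(float)  # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        offset = random.randrange(RING)
        for _ in range(100):  # cap episode length
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(offset, a)])
            nxt, reward, done = step(offset, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard tabular Q-learning update.
            q[(offset, action)] += alpha * (reward + gamma * best_next - q[(offset, action)])
            offset = nxt
            if done:
                break
    return q

if __name__ == "__main__":
    q = train()
    # Greedy policy: for each relative offset, the preferred pursuit action.
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(RING)}
    print(policy)

The deep reinforcement learning methods discussed in the paper replace the lookup table above with a neural network value or policy function, which is what makes the approach scale to the continuous, high-dimensional states of an air combat engagement.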
© 2022 Society of Photo-Optical Instrumentation Engineers (SPIE).
Huan Zhou, Xiaoyan Zhang, and Zhuoran Zhang "Reinforcement learning technology for air combat confrontation of unmanned aerial vehicle", Proc. SPIE 12168, International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2021), 121681Y (18 March 2022); https://doi.org/10.1117/12.2631651
KEYWORDS
Unmanned aerial vehicles, Evolutionary algorithms, Neural networks, Artificial intelligence, Detection and tracking algorithms, Analytical research, Computer simulations