A MADDPG-based multi-agent antagonistic algorithm for sea battlefield confrontation
Article written by: Chen, Wei; Nie, Jing
Abstract: There is a concerted effort to build intelligent maritime systems, and numerous artificial intelligence technologies have been explored. Deep reinforcement learning has attracted growing research interest, with games as its mainstream application domain. Reinforcement learning has mastered chess, a complete-information game, and Texas hold'em poker, an incomplete-information game, and it has reached or even surpassed top human players in e-sports games with huge state spaces and complex action spaces. However, reinforcement learning still faces great challenges in fields such as autonomous driving. The main reason is that training requires an environment in which agents can interact; constructing realistic simulation scenes is very difficult, and there is no guarantee that an agent will never encounter a state it has not seen. It is therefore necessary to explore simulation scenarios first. Accordingly, this paper studies reinforcement learning in a simulation scenario; migrating such methods to real-world applications, especially sea missions, remains a major challenge. For the heterogeneous multi-agent game confrontation scenario, this paper proposes a sea-battlefield confrontation decision algorithm based on multi-agent deep deterministic policy gradient (MADDPG). The algorithm combines long short-term memory (LSTM) with the actor-critic framework, which both achieves convergence in the huge state and action space and mitigates the problem of sparse real rewards. In addition, imitation learning is integrated into the decision algorithm, which improves both the convergence speed and the effectiveness of the algorithm.
The results show that the algorithm can handle a variety of tactical sea-battlefield scenarios, making flexible decisions in response to changes in the enemy, with an average winning rate close to 90%.
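The abstract's core architectural idea, decentralized actors trained with centralized critics as in MADDPG, can be illustrated with a minimal sketch. This is not the authors' implementation: all dimensions, class names, and the linear layers are hypothetical placeholders chosen only to show the data flow (each actor sees its own observation; each critic scores the joint observations and actions of all agents). The LSTM and imitation-learning components described in the paper are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 2  # hypothetical sizes, not from the paper

class Actor:
    """Decentralized actor: maps one agent's LOCAL observation to an action."""
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(OBS_DIM, ACT_DIM))

    def act(self, obs):
        # tanh keeps the continuous action bounded in [-1, 1]
        return np.tanh(obs @ self.W)

class Critic:
    """Centralized critic: scores the JOINT observation-action vector.
    Conditioning on all agents' actions is the key MADDPG idea that
    stabilizes learning in a multi-agent setting."""
    def __init__(self):
        joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.w = rng.normal(scale=0.1, size=joint_dim)

    def q(self, all_obs, all_acts):
        x = np.concatenate(list(all_obs) + list(all_acts))
        return float(x @ self.w)

# One agent pair (actor, critic) per agent in the team.
actors = [Actor() for _ in range(N_AGENTS)]
critics = [Critic() for _ in range(N_AGENTS)]

# Forward pass: local observations -> actions -> centralized Q-values.
obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = [actor.act(o) for actor, o in zip(actors, obs)]
q_values = [critic.q(obs, acts) for critic in critics]
```

At execution time, only the actors are used (each needs just its own observation); the centralized critics exist solely for training, which is what makes the approach deployable with decentralized decision-making.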
Language: English