Complex behaviours can make it difficult for human observers to maintain a coherent understanding of a high-dimensional system’s state, owing to the large number of degrees of freedom that must be monitored and reasoned about. This can lead to cognitive overload in operators monitoring such systems. One example is observing drone swarms to determine their behaviour and infer their possible goals. Generative artificial intelligence techniques, such as variational autoencoders (VAEs), can assist operators in understanding these complex behaviours by reducing the dimensionality of the observations.
This paper presents a modified boid simulation that produces data representative of a swarm of coordinated drones; a sensor model is employed to simulate observation noise. A VAE architecture is proposed that can encode observations of homogeneous swarms and produce visualisations detailing the potential states of the swarm, the current state of the swarm, and the goals to which these states relate. One of the challenges addressed in this paper is the permutation-variance problem that arises when working with large sets of points representing interchangeable, unlabelled objects. The proposed VAE architecture addresses this through a PointNet-inspired layer that implements a symmetric function approximation, together with a Chamfer distance loss function. An ablation study of the proposed permutation-invariance modifications and a sensitivity analysis of the algorithm’s behaviour with respect to sensor noise are presented. The use of the decoder to create goal boundaries on the visualisation, the use of the visualisation for swarm trajectories, and the explainability of the visualisation are discussed.
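The two permutation-invariance mechanisms named above can be illustrated with a minimal numpy sketch. This is not the paper’s implementation; the function names, shapes, and single-layer encoder are illustrative assumptions. It shows why a shared per-point transform followed by a symmetric (max) pooling, and a Chamfer distance between point sets, are both insensitive to the ordering of the swarm’s members:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point sets a (N, d) and b (M, d).

    Each point is matched to its nearest neighbour in the other set, so the
    value does not depend on how the points are ordered or labelled.
    """
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Mean nearest-neighbour distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def symmetric_encode(points, weight):
    """PointNet-style encoding (illustrative single layer).

    A shared per-point transform (here one ReLU layer) is followed by a
    symmetric max pool over the point dimension, so the resulting feature
    vector is invariant to permutations of the input points.
    """
    features = np.maximum(points @ weight, 0.0)  # shared per-point layer
    return features.max(axis=0)                  # symmetric pooling

rng = np.random.default_rng(0)
swarm = rng.normal(size=(5, 2))      # 5 unlabelled agents in 2-D
shuffled = swarm[::-1]               # same swarm, reordered
w = rng.normal(size=(2, 4))

print(chamfer_distance(swarm, shuffled))                 # 0.0: same set
print(np.allclose(symmetric_encode(swarm, w),
                  symmetric_encode(shuffled, w)))        # True
```

In a full VAE, the symmetric encoding would feed the latent posterior, and the Chamfer distance would replace an element-wise reconstruction loss, since the decoder’s output points need not align index-by-index with the input.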