KEYWORDS: Detection and tracking algorithms, Defense and security, Data modeling, Systems modeling, System identification, Intelligence systems, New and emerging technologies, Network architectures, Defense systems
The increasing adoption of autonomous systems changes the defense landscape: rapid decision speeds thwart traditional defenses, but also open new weaknesses. Robust defenses against these new threats require early determination of the adversary's plan of attack, and automated path-planning systems often behave predictably, producing paths with recognizable characteristic features. In this work, we investigate several ways to exploit artifacts of path-planning algorithms as tells, giving a defending commander early warning and an advantage in mounting a defense. With the application of an integrated air-defense system in mind, we examine a use case with several high-value targets obstructed by known defending threats, using incoming time-series track data to predict the most probable targets and future trajectories of an enemy platform. One approach is to learn the mapping from threat track to target directly; however, such an approach is likely brittle, requiring large volumes of data and substantial retraining for each target laydown. By contrast, we exploit predictable features of the path-planning algorithm itself: we first classify the path-planning algorithm being used, then incorporate that knowledge into our target-prediction algorithm. We demonstrate that classes of path-planning algorithms can be differentiated with high accuracy from track data alone. Using the underlying model, we can then predict likely track updates and likely targets. We discuss strengths and limitations of this approach with respect to adding a robust tool to the air-defense use case.
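The abstract does not specify the features used to distinguish planner classes. As a minimal illustrative sketch (not the authors' method), trajectory-level statistics such as the mean turn rate along a track could separate, say, straight-segment graph-based planners from jagged sampling-based ones; the function names and the threshold below are assumptions for illustration:

```python
import math

def turn_rate_features(track):
    """Mean and variance of absolute heading change along a 2-D track [(x, y), ...]."""
    headings = [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(track, track[1:])]
    turns = [abs((b - a + math.pi) % (2 * math.pi) - math.pi)  # wrapped heading change
             for a, b in zip(headings, headings[1:])]
    mean = sum(turns) / len(turns)
    var = sum((t - mean) ** 2 for t in turns) / len(turns)
    return mean, var

def classify_planner(track, threshold=0.1):
    """Toy rule: near-zero mean turn rate suggests a straight-segment,
    graph-like planner (e.g. visibility graph); a high mean turn rate
    suggests a sampling-based planner such as RRT. The threshold is an
    illustrative assumption, not a value from the paper."""
    mean, _ = turn_rate_features(track)
    return "graph-like" if mean < threshold else "sampling-like"

# Stand-in tracks for the two hypothetical classes.
straight = [(float(i), 0.0) for i in range(10)]
jagged = [(0, 0), (1, 1), (2, 0), (3, 1.5), (4, 0.2), (5, 1.0)]
print(classify_planner(straight))  # graph-like
print(classify_planner(jagged))    # sampling-like
```

In practice such features would feed a trained classifier rather than a hand-set threshold, but the sketch shows how track data alone can carry a planner's signature.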
Autonomous platforms are becoming ubiquitous in society, from UAVs and Roombas to self-driving cars. With their increasing prevalence comes an increased threat of attacks against these platforms, ranging from direct hacking to take remote control of the platform itself [1] to manipulation and deception, such as spoofing or fooling sensor inputs [2, 3]. Ensuring autonomous systems are robust and resilient (R2) against these attacks is an important challenge to overcome if they are to be trusted and widely adopted. This paper addresses the need to quantitatively define robustness and resilience against manipulation and deception attacks, which are inherently harder to detect. We define a set of robust estimation metrics that are mathematically rigorous, apply to multiple algorithm use cases, and are easy to interpret. Since many of these functions are processed over time, the primary focus is on process-based metrics, which can be adapted at system runtime by responding and reconfiguring. This paper will: 1) provide background on previous work in this area, including adversarial machine learning, robotics control, and engineering design; 2) present the metrics and explain how they address our unique problem; 3) apply these metrics to three autonomy applications: target tracking, autonomous control, and automatic target recognition; and 4) discuss additional caveats and potential areas for future work.
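The abstract does not give the metric definitions themselves. As a minimal sketch of what a process-based robustness metric for the target-tracking case might look like (an illustrative assumption, not the paper's actual metric), one could compare an estimator's error under attack to its nominal error over a time window:

```python
def robustness_ratio(nominal_err, attacked_err):
    """Ratio of mean tracking error under an attack (e.g. sensor spoofing)
    to mean error under nominal conditions, over a time window of
    per-timestep errors. Values near 1.0 indicate little degradation.
    This definition is an illustrative assumption for exposition."""
    nominal_mean = sum(nominal_err) / len(nominal_err)
    attacked_mean = sum(attacked_err) / len(attacked_err)
    return attacked_mean / nominal_mean

# Per-timestep position errors (meters) with and without a spoofing attack.
ratio = robustness_ratio([1.0, 1.2, 0.9, 1.1], [1.3, 1.5, 1.2, 1.4])
print(ratio)
```

Because such a ratio is computed over a sliding window, it can be monitored at runtime, in line with the paper's emphasis on responding and reconfiguring as conditions change.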