Deep neural networks for automatic target recognition (ATR) have proven highly successful across a large variety of Synthetic Aperture Radar (SAR) benchmark datasets. However, the black-box nature of neural network approaches raises concerns about how models come to their decisions, especially in high-stakes scenarios. Accordingly, a variety of techniques are being pursued that seek to offer understanding of machine learning algorithms. In this paper, we first provide an overview of explainability and interpretability techniques, introducing their concepts and the insights they produce. Next, we summarize several methods for computing specific approaches to explainability and interpretability, as well as for analyzing their outputs. Finally, we demonstrate the application of several attribution map methods, and we apply both attribution analysis metrics and localization interpretability analysis to six neural network models trained on the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset to illustrate the insights these methods offer for analyzing SAR ATR performance.
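One attribution map method in the family surveyed here is occlusion sensitivity, which scores each image region by how much the classifier output drops when that region is masked. A minimal NumPy sketch, with a toy scoring function standing in for a trained SAR classifier (the names, patch size, and score function are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2):
    """Occlusion sensitivity: attribute to each patch the drop in the
    classifier score observed when that patch is masked out."""
    base = score_fn(image)
    attr = np.zeros(image.shape, dtype=float)
    h, w = image.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            attr[i:i + patch, j:j + patch] = base - score_fn(occluded)
    return attr

# Toy stand-in for a classifier: the "target class score" is the mean
# brightness of a central region (purely illustrative).
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
score = lambda x: x[2:6, 2:6].mean()
heat = occlusion_map(img, score, patch=2)  # nonzero only on the "target"
```

The resulting heat map localizes which pixels the score depends on, which is the basic output that the attribution and localization analyses above operate on.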
The lack of large, relevant, and labeled datasets for synthetic aperture radar (SAR) automatic target recognition (ATR) poses a challenge for deep neural network approaches. For SAR ATR, transfer learning offers promise: models are pre-trained on synthetic SAR, alternatively collected SAR, or non-SAR source data and then fine-tuned on a smaller target SAR dataset. The premise is that the neural network can learn fundamental features from the more abundant source domain, yielding accurate and robust models when fine-tuned on the smaller target domain. One open question with this transfer learning strategy is how to choose source datasets that will improve accuracy on a target SAR dataset after fine-tuning. Here, we apply a set of model and dataset transferability analysis techniques to investigate the efficacy of transfer learning for SAR ATR. In particular, we examine Optimal Transport Dataset Distance (OTDD), Log of Maximum Evidence (LogME), Log Expected Empirical Prediction (LEEP), Gaussian Bhattacharyya Coefficient (GBC), and H-Score. These methods consider properties such as task relatedness, statistical properties of learned embeddings, and distribution distances between the source and target domains. We apply these transferability metrics to ResNet18 models trained on a set of non-SAR as well as SAR datasets. Overall, we present an investigation into quantitatively analyzing transferability for SAR ATR.
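Of the metrics listed above, LEEP is among the simplest to compute: it builds an empirical joint distribution between the source model's predicted labels and the true target labels, then scores the mean log-likelihood of the resulting "expected empirical predictor" on the target data. A hedged NumPy sketch following the published formulation (variable names are our own):

```python
import numpy as np

def leep(source_probs, target_labels, n_target_classes):
    """Log Expected Empirical Prediction.
    source_probs: (N, C_source) softmax outputs of the pre-trained
    source model evaluated on the target dataset.
    target_labels: (N,) integer target-class labels."""
    n = len(target_labels)
    # Empirical joint P(z, y) over target label z and source label y.
    joint = np.zeros((n_target_classes, source_probs.shape[1]))
    for p, z in zip(source_probs, target_labels):
        joint[z] += p
    joint /= n
    # Conditional P(z | y); small epsilon guards unused source classes.
    p_z_given_y = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)
    # Score each target label under the expected empirical predictor.
    eep = source_probs @ p_z_given_y.T  # (N, n_target_classes)
    return float(np.mean(np.log(eep[np.arange(n), target_labels])))
```

LEEP is always at most zero; values nearer zero suggest the source model's predictions align well with the target labels and that transfer is likely to help.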
Optical Diffractive Neural Networks (ODNNs) have emerged as a new class of AI systems that hold promise for fast, low-energy classification of scenes. While these systems resemble electronic neural networks, they also have important differences because they must satisfy constraints imposed by the physical laws of light propagation and light-matter interactions. This raises a number of interesting fundamental questions regarding the ultimate performance that can be achieved, the optimal structure of materials, and even how effectively they can be trained. In this presentation, we will describe our efforts to address these questions. In particular, we will discuss how co-design of the diffractive material, the system architecture, and the training algorithms is essential to achieve the best performance and to reveal underlying properties. For example, a universal scaling of performance emerges that differs from that of traditional electronic NNs. We will also discuss how the properties of these systems differ for coherent and incoherent light. Finally, the role of depth will also be addressed.
Deep neural networks have recently demonstrated state-of-the-art accuracy on public Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) benchmark datasets. While attaining competitive accuracy on benchmark datasets is necessary, it is important to characterize other facets of new SAR ATR algorithms. We extend this recent work by demonstrating not only improved state-of-the-art accuracy, but also that contemporary deep neural networks can achieve several algorithmic traits beyond competitive accuracy that are necessitated by operational deployment scenarios. First, we employ several saliency map algorithms to provide explainability and insight into black-box classifier decisions. Second, we collect and implement numerous data augmentation routines and training improvements, both from the computer vision literature and specific to SAR ATR data, in order to further improve model domain adaptation performance from synthetic to measured data, achieving 99.26% accuracy on the SAMPLE validation set with a simple network architecture. Finally, we survey model reproducibility and performance variability under domain adaptation from synthetic to measured data, demonstrating potential consequences of training on only synthetic data.
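As one example of a SAR-specific augmentation of the kind referenced above, speckle can be simulated as multiplicative gamma-distributed noise with unit mean (the standard multi-look speckle model). This particular routine is an illustrative assumption, not necessarily one of the paper's augmentations:

```python
import numpy as np

def speckle_augment(image, looks=4, seed=None):
    """Multiplicative speckle augmentation: scale an intensity image by
    gamma-distributed noise with unit mean; fewer 'looks' means noisier."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * noise
```

Applied during training, this perturbs each image chip while preserving its mean intensity, exposing the network to the speckle statistics of measured SAR that synthetic training data may lack.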
Spiking neural networks (SNNs) extend traditional artificial neural networks (ANNs) by incorporating increased biological fidelity. This includes features such as event-driven operation, sparsity, spatial/temporal functionality, parallelism, and co-located processing and memory. These features can translate into efficient computing hardware designs, and consequently SNNs offer potential advantages for SAR ATR.
Here we provide a wide exploration of several SNN approaches, covering both algorithms and computing hardware. Using the MSTAR and SAMPLE benchmark datasets, we develop SAR ATR networks, comparing SNN computational complexity tradeoffs and analyzing how respective neuromorphic architectural choices impact SNN-based ATR performance.
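The event-driven, sparse operation described above is commonly built from leaky integrate-and-fire (LIF) neurons, which integrate input with a leak and emit a binary spike when a threshold is crossed. A minimal sketch with illustrative parameters (not tied to the paper's networks):

```python
def lif_spikes(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward rest, integrates input, and fires/resets at threshold."""
    v = v_reset
    spikes = []
    for current in input_current:
        v += dt * (-(v - v_reset) / tau + current)  # leaky integration
        if v >= v_thresh:      # threshold crossing: emit a spike
            spikes.append(1)
            v = v_reset        # hard reset after the spike
        else:
            spikes.append(0)
    return spikes
```

A constant input produces sparse, periodic spiking rather than a dense activation, which is the event-driven behavior that neuromorphic hardware exploits for energy efficiency.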
Neural network approaches have periodically been explored in the pursuit of high-performing SAR ATR solutions. With deep neural networks (DNNs) now offering many state-of-the-art solutions to computer vision tasks, neural networks are once again being revisited for ATR processing. Here, we characterize and explore a suite of neural network architectural topologies. In doing so, we assess how different architectural approaches impact performance and consider the associated computational costs. This includes characterizing network depth, width, scale, and connectivity patterns, as well as convolution layer optimizations. We have explored a suite of architectural topologies applied to both the canonical MSTAR dataset and the more operationally realistic Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset. The latter pairs high-fidelity computational models of targets with actual measured SAR data; effectively, this dataset offers the ability to train a DNN on simulated data and test the network's performance on measured data. Not only does our in-depth architecture topology analysis offer insight into how different architectural approaches impact performance, but we have also trained DNNs attaining state-of-the-art performance on both datasets. Furthermore, beyond accuracy, we also assess how efficiently an accelerator architecture executes these neural networks. Specifically, using an analytical assessment tool, we forecast energy and latency for an edge-TPU-like architecture. Taken together, this tradespace exploration offers insight into the interplay of accuracy, energy, and latency for executing these networks.
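A first-order version of the analytical latency forecasting mentioned above is a roofline-style bound, where each layer is limited by either compute throughput or memory bandwidth, whichever ideal time is larger. A sketch with purely illustrative numbers (not the paper's tool, and not actual edge TPU specifications):

```python
def roofline_latency(flops, bytes_moved, peak_flops, bandwidth_bytes):
    """Roofline bound: a layer is either compute-bound or memory-bound,
    so its latency is the larger of the two ideal times (seconds)."""
    return max(flops / peak_flops, bytes_moved / bandwidth_bytes)

# Hypothetical conv layer: 1 GFLOP of work and 400 MB of traffic on a
# 1 TFLOP/s, 100 GB/s accelerator -> memory-bound in this sketch.
layer_latency = roofline_latency(1e9, 4e8, 1e12, 1e11)
```

Summing such per-layer bounds (and pairing them with per-operation energy estimates) gives the kind of accuracy/energy/latency tradespace the abstract describes.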