Machine learning approaches, such as deep neural networks, have shown recent success on target detection and identification problems in hyperspectral imagery. However, when deployed "in the wild," there are no guarantees about the behavior of these black-box algorithms when they encounter new materials or environmental conditions that were not part of the training data. In addition, unlike linear identification methods, neural networks tend to assign high confidence to a single class even when multiple classes could plausibly match a given input spectrum. To provide estimates of confidence in neural network predictions (i.e., target identifications) and to produce indicators of uncertainty, we apply state-of-the-art uncertainty quantification techniques to neural networks trained on hyperspectral data. Specifically, we assess recently proposed methods from the machine learning community, including Monte Carlo dropout, ensembles of neural networks, and variational Bayesian neural networks. We report not only the accuracy of the resulting model-averaged networks on in-distribution data, but also the usefulness of their uncertainty metrics on noisy or out-of-distribution data. We also compare ensemble neural network target identification results to a linear method on airborne long-wave infrared (LWIR) hyperspectral data with real targets. Finally, we offer guidelines for applying these methods to hyperspectral target detection/identification problems.
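To make the first of the assessed techniques concrete, the following is a minimal sketch of Monte Carlo dropout: dropout is left active at test time, the network is run T times on the same input, and the spread across the stochastic forward passes serves as an uncertainty signal. The tiny network, its weights, and all parameter values here are illustrative placeholders, not the architecture or data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder one-hidden-layer classifier (8 spectral bands -> 3 classes).
# Random weights stand in for a trained network.
W1 = rng.normal(size=(8, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 3)); b2 = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept active (the core of MC dropout)."""
    h = np.maximum(x @ W1 + b1, 0.0)       # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop   # random dropout mask at test time
    h = h * mask / (1.0 - p_drop)          # inverted-dropout rescaling
    return softmax(h @ W2 + b2)

def mc_dropout_predict(x, T=200):
    """Average T stochastic passes; per-class std. dev. across passes is an
    uncertainty indicator (large spread suggests low confidence)."""
    probs = np.stack([stochastic_forward(x) for _ in range(T)])
    return probs.mean(axis=0), probs.std(axis=0)

x = rng.normal(size=8)                     # stand-in for an input spectrum
mean_probs, uncertainty = mc_dropout_predict(x)
```

Ensembling follows the same averaging pattern, except the T forward passes come from independently trained networks rather than from dropout masks.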