Paper | 4 April 1997
Techniques for higher confidence target ID
Abstract
Target identification decisions must be `positive'--accurate with high confidence. To achieve this, a classifier must build a model (train) with some data and then generalize its decisions to new data sets. The ability to accurately estimate the true classifier error (or predict classification performance) is contingent on the amount of data available: the more data, the better the error estimate and the more confident the resulting decisions. Ideally, one needs infinite data for modeling and assessing classifier performance; however, this is rarely the case. This paper investigates techniques for improving audio target identification accuracy and confidence with a Multilayer Perceptron (MLP). The first technique, bagging, is a combination of bootstrapping and (classifier) aggregation; the second, decision fusion, combines the decisions of multiple classifiers. The `bagged' identification performance for a subset of the Rome Laboratory Greenflag database is compared both to the MLP performance without bagging and to that of an MLP whose decisions are combined with those of our other classifiers. Both techniques improved identification accuracy, although bagging did so only slightly. More importantly, the confidence of the identification decisions was significantly improved by the pooling of evidence inherent in both the bagging and decision fusion processes.
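The bagging procedure the abstract describes can be sketched in a few lines: train each ensemble member on a bootstrap resample of the data (sampling with replacement), then aggregate the members' decisions by majority vote. The sketch below is illustrative only, assuming a toy nearest-mean classifier in place of the paper's MLP and synthetic two-class data; none of the names or numbers come from the paper.

```python
import random
import statistics

def train_nearest_mean(samples):
    """Fit per-class feature means from (feature, label) pairs."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict_nearest_mean(model, x):
    """Assign x to the class whose mean is closest."""
    return min(model, key=lambda y: abs(x - model[y]))

def bag(data, n_models=25, seed=0):
    """Train n_models classifiers, each on a bootstrap resample of data."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # Bootstrap: draw len(data) samples *with replacement*.
        resample = [rng.choice(data) for _ in data]
        models.append(train_nearest_mean(resample))
    return models

def bagged_predict(models, x):
    """Aggregate the ensemble's decisions by majority vote."""
    votes = [predict_nearest_mean(m, x) for m in models]
    return statistics.mode(votes)

# Toy two-class data: class 0 clusters near 0.0, class 1 near 1.0.
data = [(0.1, 0), (0.2, 0), (0.05, 0), (0.9, 1), (1.1, 1), (0.95, 1)]
ensemble = bag(data)
print(bagged_predict(ensemble, 0.15))  # majority vote -> 0
print(bagged_predict(ensemble, 1.0))   # majority vote -> 1
```

The vote tally over the ensemble also gives a crude confidence measure, which is the "pooling of evidence" effect the abstract credits for the improved decision confidence; decision fusion applies the same aggregation idea across classifiers of different types rather than bootstrap replicates of one type.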
© (1997) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Laurie H. Fenstermacher "Techniques for higher confidence target ID", Proc. SPIE 3077, Applications and Science of Artificial Neural Networks III, (4 April 1997); https://doi.org/10.1117/12.271498
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Data modeling, Error analysis, Target recognition, Databases, Performance modeling