Paper
Object Classification Using Multispectral Sensor Data Fusion (14 September 1989)
Michael P. Cain, Susan A. Stewart, Jeffrey B. Morse
Abstract
In this paper, the potential benefits of applying sensor fusion to object classification are discussed. A specific example is presented that involves the fusion of multiple-band IR and visible-light data collected from co-located sensors. Pattern vectors describing the objects were based on features extracted from the simulated target signatures observed within the sensor wavebands individually and also by 'fusing' the multispectral data. The pattern vectors were then subjected to feature analysis using a variety of statistical pattern recognition techniques to determine the relative contribution of each feature to classification performance. Features selected through this process were then used in subsequent classification algorithms, which established class boundaries, classified the objects, determined confidence levels, and calculated error probabilities. A neural network paradigm was also applied to the same data set to determine the relative merit of the features and to classify the objects. In particular, a competitive learning algorithm was used. Analysis methods and performance comparisons are presented.
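The abstract names a competitive learning paradigm applied to fused multispectral pattern vectors but does not publish any implementation details. The Python sketch below is therefore a purely illustrative assumption: the per-band statistics used as features, the number of competitive units, the learning rate, the epoch count, and the synthetic two-IR-band-plus-visible data are all invented for the example and are not taken from the paper.

```python
# Illustrative sketch only: feature choices, network size, and learning rate
# are assumptions, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)

def fused_pattern_vector(ir_band1, ir_band2, visible):
    """Concatenate simple per-band statistics into one fused feature vector.
    The statistics (mean, std, peak) are stand-ins for the signature features
    described in the abstract."""
    feats = []
    for band in (ir_band1, ir_band2, visible):
        feats.extend([band.mean(), band.std(), band.max()])
    return np.array(feats)

def competitive_learning(patterns, n_units=3, lr=0.1, epochs=50):
    """Winner-take-all competitive learning: only the weight vector closest
    to each input pattern is moved toward that pattern."""
    patterns = np.asarray(patterns, dtype=float)
    # Normalize features so no single band dominates the distance metric.
    patterns = (patterns - patterns.mean(axis=0)) / (patterns.std(axis=0) + 1e-9)
    weights = rng.normal(size=(n_units, patterns.shape[1]))
    for _ in range(epochs):
        for x in rng.permutation(patterns):
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            weights[winner] += lr * (x - weights[winner])
    return weights, patterns

def classify(weights, pattern):
    """Assign a (normalized) pattern to the cluster of its nearest weight vector."""
    return int(np.argmin(np.linalg.norm(weights - pattern, axis=1)))

# Toy usage: three synthetic "objects", each observed in two IR bands and visible.
objects = [fused_pattern_vector(rng.normal(10 + k, 1, 100),
                                rng.normal(5 + 2 * k, 1, 100),
                                rng.normal(20 - k, 1, 100)) for k in range(3)]
weights, normalized = competitive_learning(objects)
print([classify(weights, p) for p in normalized])
```

The design choice of concatenating per-band features into a single vector corresponds to the feature-level fusion the abstract describes; the winner-take-all rule is one common form of competitive learning, shown here only to make the paradigm concrete.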
© 1989 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Michael P. Cain, Susan A. Stewart, and Jeffrey B. Morse "Object Classification Using Multispectral Sensor Data Fusion", Proc. SPIE 1100, Sensor Fusion II, (14 September 1989); https://doi.org/10.1117/12.960481
CITATIONS
Cited by 4 scholarly publications.
KEYWORDS
Sensors
Infrared sensors
Neural networks
Image classification
Classification systems
Feature extraction
Detection and tracking algorithms