Fusion of Supervised Classifiers using Theory of Evidence

Authors
  1. Rhéaume, F.
  2. Jousselme, A-L.
  3. Bossé, É.
Corporate Authors
Defence R&D Canada - Valcartier, Valcartier QUE (CAN)
Abstract
In the field of pattern recognition, more specifically in the area of supervised, feature-vector-based classification, various classification methods exist, but none of them is flawless for every data source. Each classifier behaves differently, with its own strengths and weaknesses; some are more efficient than others in particular situations. The performance of these individual classifiers can be improved by combining them into one multiple classifier. In order to make more realistic decisions, a multiple classifier may use measures generated by each classifier along with a priori knowledge, such as probability distributions, reliability rates and confusion matrices. The individual classifiers studied in this project are the Bayes, the k-nearest-neighbors and the neural network classifiers. They are combined using Dempster-Shafer theory. The problem then reduces to finding weights that best represent the evidence provided by each individual classifier. We suggest basic probability assignments (BPAs) based on measures that precede the decision step. Following a study of some classical multiple classifiers in the literature, we compare them with our approach, which yields two distinct multiple classifiers. Tests are performed on three different databases: infrared images of ships, handwritten digits and satellite images of terrain. One of the suggested multiple classifiers gives better results than all of the other classical multiple classifiers tested in this work.

A French-language abstract (résumé) is also provided with the report.
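For illustration, the following is a minimal Python sketch of Dempster's rule of combination, the fusion step on which the report's approach rests. It is not the report's implementation: the classifier names, the frame of discernment {ship, other} and the mass values are hypothetical, and the report derives its BPAs from measures preceding the decision step rather than the hand-set masses used here. The sketch only shows how two classifiers' BPAs combine and how conflicting mass is normalized away.

# Minimal sketch of Dempster's rule of combination for two BPAs,
# each given as a dict mapping frozenset (focal element) -> mass.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: BPAs cannot be combined")
    # Redistribute the non-conflicting mass (Dempster's normalization).
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Hypothetical BPAs from two classifiers over the classes {ship, other};
# mass on the full frame expresses each classifier's ignorance.
m_knn   = {frozenset({"ship"}): 0.6, frozenset({"ship", "other"}): 0.4}
m_bayes = {frozenset({"ship"}): 0.5, frozenset({"other"}): 0.3,
           frozenset({"ship", "other"}): 0.2}

fused = dempster_combine(m_knn, m_bayes)
for focal, mass in fused.items():
    print(set(focal), round(mass, 3))

Running this sketch prints masses of about 0.756 for {ship}, 0.146 for {other} and 0.098 for the full frame, showing how agreement between the two classifiers concentrates belief on the ship hypothesis.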

Report Number
DRDC-VALCARTIER-TR-2003-318 — Technical Report
Date of publication
01 Sep 2006
Number of Pages
110
DSTKIM No
CA028006
CANDIS No
525907
Format(s):
CD-ROM
