Pass-Fail Performance Testing for Detection Systems


Author
Kessel, R.T.
Corporate Author
Defence Research Establishment Atlantic, Dartmouth NS (CAN)
Detection is an uncertain operation subject to many random factors. The performance of a detection system is therefore specified probabilistically, by way of its probabilities of detection and false alarm, and the evaluation of a system's performance falls, unavoidably, within the scope of probability and statistics. In military applications, the central role of probability and statistics has too often been upstaged by the novelty of new detection technology, which, to demonstrate in all of its features, typically leaves little time or inclination for a detailed treatment of performance probabilities. But a detailed statistical analysis of performance is crucial for drawing objective conclusions from a performance test, such as whether the system passes or fails minimum operational requirements. The statistics of performance testing are reviewed here as a means to manage the measurement uncertainties in pass-fail system testing. Two different decision methods are presented, hypothesis testing and Bayesian inference, each with its particular approach to managing uncertainties, yet both working toward the same end. Pass-fail judgements drawn from "perfect" test results (no missed targets and no false alarms) are given special consideration because they are often encountered in practice owing to small sample sizes. The minimum number of dummy targets required for a performance test is derived and serves as a rough guide when planning and evaluating performance demonstrations.
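The kind of calculation the abstract describes can be sketched briefly. Assuming a binomial model with independent trials, a "perfect" test (all n dummy targets detected) supports the claim Pd > p_min at significance level alpha only if p_min**n <= alpha, which fixes the minimum number of targets; under a uniform prior, the Bayesian counterpart gives a posterior confidence of 1 - p_min**(n+1). The function names below are illustrative, not the memorandum's.

```python
import math

def min_targets_for_perfect_test(p_min: float, alpha: float) -> int:
    """Smallest n of dummy targets such that a perfect score (n of n
    detections) rejects H0: Pd <= p_min at significance level alpha.
    Under H0 the chance of a perfect run is at most p_min**n, so we
    require p_min**n <= alpha, i.e. n >= ln(alpha) / ln(p_min)."""
    return math.ceil(math.log(alpha) / math.log(p_min))

def bayes_confidence_perfect(n: int, p_min: float) -> float:
    """Posterior probability that Pd > p_min after n perfect detections,
    with a uniform Beta(1, 1) prior: the posterior is Beta(n + 1, 1),
    whose CDF at p_min is p_min**(n + 1)."""
    return 1.0 - p_min ** (n + 1)

# Example: demonstrating Pd > 0.9 at the 95% level requires
n = min_targets_for_perfect_test(0.9, 0.05)   # 29 targets
c = bayes_confidence_perfect(n, 0.9)          # posterior confidence ~0.958
```

Both routes agree in spirit: roughly 29 flawlessly detected targets are needed before a 90% detection requirement can be passed with 95% confidence, which is why small-sample "perfect" results deserve the special scrutiny the abstract mentions.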

A French-language abstract is also provided.

Keywords
Computer aided detection; Target identification; Classification algorithms; Field trials; Performance specification; ATR (Automatic Target Recognition); ATR evaluation; Signal detection theory
Report Number
DREA-TM-2002-205 — Technical Memorandum
Date of publication
01 Feb 2002