Measures of effectiveness and performance in tactical combat modeling


Authors
  1. Dobias, P.
  2. Sprague, K.
  3. Woodill, G.
  4. Cleophas, P.
  5. Noordkamp, W.
Corporate Authors
Defence R&D Canada - Centre for Operational Research and Analysis, Ottawa ON (CAN)
Abstract
Computer simulations are often employed by operational research analysts to evaluate the relative effectiveness of various combinations of military equipment and tactics (i.e., options) for specific tasks within a conflict scenario. If the simulation environment is realistic enough, one can rank the options based on how effective they are when used to complete the assigned objective. The ranking process requires that measures of effectiveness (MOEs) be designed to capture the essence of how well the goal was achieved for any particular option. In this paper, it is shown that the relative ranking of options can be disturbed by omitting or adding options, depending on the method used for valuing the MOEs. This has implications for those relying on ranked options as part of a larger decision-making process – the omission of one option due to, say, post-analysis logistical, political, budgetary or supply concerns can upset the balance of the remaining rankings and lead to an inappropriate decision if left unchecked. We discuss some circumstances under which rank-order switching can occur. Two methods of valuing MOEs aggregated through weighted sums to produce option rankings are compared and contrasted: 1) a simple Relative to Best scheme, and 2) Valuing with Objective Scales. The latter is shown to be a better choice when rank-order switching is at issue. Furthermore, it is argued that, in general, only a few MOEs are necessary and that too many can lead to undesirable consequences.
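The rank-order switching the abstract describes can be illustrated with a small sketch. All scores, weights, and option names below are hypothetical, chosen only to show the mechanism; they are not taken from the report. Under Relative to Best, each MOE is normalized by the best score among the options currently under consideration, so removing an option that was best on some MOE changes the normalizer and can flip the ordering of the survivors; an objective scale fixes the normalizer independently of the option set.

```python
def weighted_score(options, weights, normalizer):
    """Weighted-sum aggregation of MOE scores, with a per-MOE
    normalizing divisor supplied as a dict: MOE name -> constant."""
    return {
        name: sum(weights[m] * scores[m] / normalizer[m] for m in weights)
        for name, scores in options.items()
    }

def relative_to_best(options, weights):
    """Relative to Best: normalize each MOE by the best score
    among the options currently under consideration."""
    best = {m: max(s[m] for s in options.values()) for m in weights}
    return weighted_score(options, weights, best)

# Hypothetical raw MOE scores (higher is better) and equal weights.
weights = {"MOE1": 0.5, "MOE2": 0.5}
options = {
    "A": {"MOE1": 10, "MOE2": 2},
    "B": {"MOE1": 5,  "MOE2": 6},
    "C": {"MOE1": 1,  "MOE2": 10},
}

with_c = relative_to_best(options, weights)
subset = {k: v for k, v in options.items() if k != "C"}
without_c = relative_to_best(subset, weights)
# With C present, A outranks B (0.60 vs 0.55); drop C and B
# outranks A (0.75 vs ~0.67), because the MOE2 normalizer
# changes from C's 10 to B's 6 -- rank-order switching.

# Valuing against a fixed objective scale removes this coupling:
# the normalizer no longer depends on which options are included,
# so deleting C cannot disturb the A-vs-B ordering.
scale = {"MOE1": 10, "MOE2": 10}  # assumed fixed scale maxima
obj_all = weighted_score(options, weights, scale)
obj_sub = weighted_score(subset, weights, scale)
```

This is only a sketch of the two valuation schemes named in the abstract; the report itself should be consulted for the precise formulations.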

A French-language abstract is also available.

Report Number
DRDC-CORA-TM-2008-032 — Technical Memorandum
Date of publication
01 Oct 2008
Format
Electronic Document (PDF)
