A Quantitative Assessment of the Quality of Strategic Intelligence Forecasts


Authors
  1. Mandel, D.R.
  2. Barnes, A.
  3. Richards, K.
Corporate Author
Defence R&D Canada - Toronto, Toronto ON (CAN)
Abstract
This report describes a field study of the quality of probabilistic forecasts made in Canadian strategic intelligence reports. The researchers isolated a set of 1,422 probabilistic forecasts from intelligence memoranda and interdepartmental committee reports for which outcome information about the forecasted events was available. These data were used to study forecast quality measures, including calibration and discrimination indices, commonly employed in other areas of expert judgment monitoring research (e.g., meteorology, medical diagnosis). Predictions were further categorized in terms of other variables, such as the organizational source, forecast difficulty, and forecast importance. Overall, the findings reveal a high degree of forecasting quality. This was evident in terms of calibration, which measures the concordance between the probability levels assigned to forecasted outcomes and the relative frequency of observed outcomes within each assigned category. It was also evident in terms of adjusted normalized discrimination, which measures the proportion of outcome variance explained by analysts' forecasts. The main source of bias detected in analytic forecasts was underconfidence: analysts often rendered forecasts with greater degrees of uncertainty than was warranted. Implications for developing outcome-oriented accountability systems, adaptive learning systems, and forecast optimization procedures to support effective decision-making are discussed.
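The calibration and discrimination measures described in the abstract can be illustrated with a minimal sketch. This is not the study's actual code; the toy data, function names, and probability bins are assumptions for illustration. Calibration is computed as the sample-weighted mean squared gap between each assigned probability level and the observed outcome frequency at that level, and normalized discrimination as the between-category outcome variance divided by total outcome variance:

```python
import numpy as np

def calibration_index(probs, outcomes, bins):
    """Weighted mean squared gap between each assigned probability level
    and the relative frequency of outcomes at that level (lower = better)."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    n, ci = len(probs), 0.0
    for p in bins:
        mask = probs == p
        if mask.any():
            ci += mask.sum() * (p - outcomes[mask].mean()) ** 2
    return ci / n

def normalized_discrimination(probs, outcomes, bins):
    """Between-category outcome variance divided by total outcome variance:
    the proportion of outcome variance explained by the forecasts."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    n, base = len(probs), np.mean(outcomes)
    di = 0.0
    for p in bins:
        mask = probs == p
        if mask.any():
            di += mask.sum() * (outcomes[mask].mean() - base) ** 2
    return (di / n) / (base * (1 - base))

# Toy data: forecasts restricted to a few assumed probability levels,
# with 1 = event occurred, 0 = event did not occur.
bins = [0.1, 0.5, 0.9]
probs = [0.1, 0.1, 0.9, 0.9, 0.5, 0.5, 0.9, 0.1]
outcomes = [0, 0, 1, 1, 1, 0, 1, 0]
print(calibration_index(probs, outcomes, bins))       # near 0 = well calibrated
print(normalized_discrimination(probs, outcomes, bins))  # near 1 = discriminating
```

The "adjusted" normalized discrimination used in the report additionally corrects for chance-level discrimination; this sketch shows only the unadjusted ratio.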

A French-language abstract is also available.

Keywords
forecasting, probability judgment, strategic intelligence, calibration, discrimination
Report Number
DRDC-TORONTO-TR-2013-036 — Technical Report
Date of publication
01 Mar 2014
Number of Pages
Electronic Document (PDF)
