“Calibeating”: Beating forecasters at their own game

Dean P. Foster, Sergiu Hart*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

To identify expertise, forecasters should not be tested by their calibration score, which can always be made arbitrarily small, but rather by their Brier score. The Brier score is the sum of the calibration score and the refinement score; the latter measures how good the sorting into bins with the same forecast is, and thus attests to “expertise.” This raises the question of whether one can gain calibration without losing expertise, which we refer to as “calibeating.” We provide an easy way to calibeat any forecast, by a deterministic online procedure. We moreover show that calibeating can be achieved by a stochastic procedure that is itself calibrated, and then extend the results to simultaneously calibeating multiple procedures, and to deterministic procedures that are continuously calibrated.
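The decomposition mentioned in the abstract, Brier score = calibration score + refinement score, can be made concrete. The sketch below is not from the paper; it uses one standard way of defining the two components for binary outcomes, with bins given by exact forecast values and with illustrative variable names of our own, and checks that the two components sum to the Brier score.

# A minimal sketch (not the paper's code) of the decomposition
# Brier = calibration + refinement, for binary outcomes in {0, 1}.
# Binning is by exact forecast value; names are illustrative only.
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Return (brier, calibration, refinement) for binary outcomes."""
    T = len(forecasts)
    # Overall Brier score: mean squared error of the forecasts.
    brier = sum((f - a) ** 2 for f, a in zip(forecasts, outcomes)) / T

    # Sort the periods into bins that share the same forecast value.
    bins = defaultdict(list)
    for f, a in zip(forecasts, outcomes):
        bins[f].append(a)

    calibration = 0.0  # distance of each forecast from its bin's empirical frequency
    refinement = 0.0   # within-bin variance of outcomes; small when bins are "pure"
    for f, bin_outcomes in bins.items():
        n = len(bin_outcomes)
        freq = sum(bin_outcomes) / n
        calibration += n * (f - freq) ** 2 / T
        refinement += n * freq * (1 - freq) / T
    return brier, calibration, refinement

forecasts = [0.8, 0.8, 0.3, 0.3, 0.3, 0.5]
outcomes  = [1,   1,   0,   1,   0,   1]
b, k, r = brier_decomposition(forecasts, outcomes)
print(b, k, r, abs(b - (k + r)) < 1e-12)  # the decomposition holds exactly

Binning by exact forecast value keeps the identity exact; with coarser bins, forecasts within a bin differ and the equality holds only approximately.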

Original language: English
Pages (from-to): 1441-1474
Number of pages: 34
Journal: Theoretical Economics
Volume: 18
Issue number: 4
DOIs
State: Published - Nov 2023

Bibliographical note

Publisher Copyright:
Copyright © 2023 The Authors.

Keywords

  • Brier score
  • C1
  • C7
  • calibeating
  • Calibrated forecasts
  • calibration score
  • D8
  • experts
  • refinement score
