Consistency and Localizability

Alon Zakai*, Ya'acov Ritov

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

We show that all consistent learning methods, that is, those that asymptotically achieve the lowest possible expected loss for any distribution on (X, Y), are necessarily localizable, by which we mean that they do not significantly change their response at a particular point when shown only the part of the training set that is close to that point. This holds in particular for methods that appear to be defined in a non-local manner, such as support vector machines in classification and least-squares estimators in regression. Beyond showing that consistency implies a specific form of localizability, we also show that consistency is logically equivalent to the combination of two properties: (1) a form of localizability, and (2) that the method's global mean (over the entire X distribution) correctly estimates the true mean. Consistency can therefore be seen as comprising two aspects, one local and one global.
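The notion of localizability in the abstract can be illustrated with a toy experiment (this sketch is not from the paper; the data, the radius r, and the choice of k-nearest-neighbours regression are assumptions made here for illustration). A local method's prediction at a query point should be essentially unchanged when the method is retrained on only the training points near that query:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy regression data (an assumption, not from the paper):
# Y = sin(X) + noise, with X uniform on [0, 2*pi].
X = rng.uniform(0, 2 * np.pi, 200)
Y = np.sin(X) + 0.1 * rng.normal(size=200)

def knn_predict(x, X_train, Y_train, k=5):
    """Average of the k nearest training responses at query point x."""
    idx = np.argsort(np.abs(X_train - x))[:k]
    return Y_train[idx].mean()

x0 = 1.5   # query point
r = 0.5    # locality radius (chosen for this sketch)

# Prediction from the full training set...
full = knn_predict(x0, X, Y)

# ...versus prediction using only the training points within radius r of x0.
near = np.abs(X - x0) <= r
local = knn_predict(x0, X[near], Y[near])

# For an obviously local method such as k-NN, the two predictions coincide
# whenever all k nearest neighbours already lie inside the radius; the
# paper's result is that any consistent method behaves this way in the limit.
print(abs(full - local))
```

The interesting content of the paper is that this behaviour is not special to explicitly local rules like k-NN: seemingly global procedures (SVMs, least squares) must also satisfy a form of this property if they are consistent.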

Original language: English
Pages (from-to): 827-856
Number of pages: 30
Journal: Journal of Machine Learning Research
Volume: 10
State: Published - Jan 2009

Keywords

  • Classification
  • Consistency
  • Local learning
  • Regression

