Learnability beyond uniform convergence

Shai Shalev-Shwartz*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


The problem of characterizing learnability is the most basic question of statistical learning theory. A fundamental result is that learnability is equivalent to uniform convergence of the empirical risk to the population risk, and that if a problem is learnable, it is learnable via empirical risk minimization. The equivalence of uniform convergence and learnability was formally established only in the supervised classification and regression setting. We show that in (even slightly) more complex prediction problems learnability does not imply uniform convergence. We discuss several alternative attempts to characterize learnability. This extended abstract summarizes results published in [5, 3].
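For readers unfamiliar with the terminology, the uniform convergence property referenced in the abstract can be stated as follows (this uses standard statistical-learning notation for a hypothesis class $\mathcal{H}$, a sample $S$, and a distribution $\mathcal{D}$; the notation is conventional and not taken from the abstract itself):

```latex
% Uniform convergence: the empirical risk L_S(h) converges to the
% population risk L_D(h) uniformly over the hypothesis class H.
% Formally: for every \epsilon, \delta > 0 there is a sample size
% m(\epsilon, \delta) such that, for every distribution D, with
% probability at least 1 - \delta over an i.i.d. sample S of size
% at least m(\epsilon, \delta),
\sup_{h \in \mathcal{H}} \bigl| L_S(h) - L_{\mathcal{D}}(h) \bigr| \le \epsilon,
% in which case the empirical risk minimization (ERM) rule
\hat{h} \in \operatorname*{argmin}_{h \in \mathcal{H}} L_S(h)
% is guaranteed to be a successful learner.
```

When this property holds, ERM succeeds; the results summarized here show that in settings beyond supervised classification and regression a problem can be learnable even though this property fails.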

Original language: American English
Title of host publication: Algorithmic Learning Theory - 23rd International Conference, ALT 2012, Proceedings
Number of pages: 4
State: Published - 2012
Event: 23rd International Conference on Algorithmic Learning Theory, ALT 2012 - Lyon, France
Duration: 29 Oct 2012 – 31 Oct 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7568 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 23rd International Conference on Algorithmic Learning Theory, ALT 2012

