Abstract
We consider the question of why modern machine learning methods such as support vector machines outperform earlier nonparametric techniques such as k-NN. Our approach investigates the locality of learning methods, i.e., the tendency to focus mainly on the nearby part of the training set when constructing a prediction at a particular location. We show that, on the one hand, we can expect all consistent learning methods to be local in some sense; hence, if we consider consistency a desirable property, then a degree of locality is unavoidable. On the other hand, we also claim that earlier methods such as k-NN are local in a stricter sense, which implies performance limitations. Thus, we argue that a degree of locality is necessary but should not be overdone. Support vector machines and related techniques strike a good balance in this respect, which we suggest may partially explain their good performance in practice.
| Original language | English |
| --- | --- |
| Pages | 205-216 |
| Number of pages | 12 |
| State | Published - 2008 |
| Event | 21st Annual Conference on Learning Theory, COLT 2008 - Helsinki, Finland. Duration: 9 Jul 2008 → 12 Jul 2008 |
Conference
| Conference | 21st Annual Conference on Learning Theory, COLT 2008 |
| --- | --- |
| Country/Territory | Finland |
| City | Helsinki |
| Period | 9/07/08 → 12/07/08 |