Abstract
We study online learning of linear and kernel-based predictors when individual examples are corrupted by random noise, and both the examples and the noise type can be chosen adversarially and change over time. We begin with the setting where some auxiliary information on the noise distribution is provided, and we wish to learn predictors with respect to the squared loss. Depending on the auxiliary information, we show how one can learn linear and kernel-based predictors using just one or two noisy copies of each example. We then turn to a general setting where virtually nothing is known about the noise distribution, and one wishes to learn with respect to general losses using linear and kernel-based predictors. We show that this can be achieved using a random, essentially constant, number of noisy copies of each example. Allowing multiple copies cannot be avoided: indeed, we show that learning becomes impossible when only one noisy copy of each instance can be accessed. To obtain our results, we introduce several novel techniques, some of which might be of independent interest.
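The abstract only hints at how two noisy copies suffice for the squared loss. Below is a minimal Python sketch of the underlying idea, assuming i.i.d. zero-mean noise that is independent across the two copies and plain online gradient descent; the function name `noisy_ogd_squared_loss` and all parameters are illustrative choices of ours, not the paper's exact algorithm.

```python
import numpy as np

def noisy_ogd_squared_loss(stream, dim, lr=0.01):
    """Online gradient descent on the squared loss when only noisy
    copies of each instance are seen.

    `stream` yields (x1, x2, y): two independently corrupted copies of
    the same clean instance x (zero-mean noise), plus the target y.
    Using one copy in the residual and the other as the gradient
    direction gives an unbiased estimate of the clean gradient:
        E[2 (<w, x1> - y) x2] = 2 (<w, x> - y) x.
    Illustrative sketch; not the authors' exact method.
    """
    w = np.zeros(dim)
    for x1, x2, y in stream:
        grad = 2.0 * (w @ x1 - y) * x2  # unbiased because the noise in x1 and x2 is independent
        w -= lr * grad
    return w

# Toy usage: linear targets, isotropic Gaussian noise on the instances.
rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0, 0.5])

def noisy_stream(T=20000, sigma=0.3):
    for _ in range(T):
        x = rng.normal(size=3)
        yield x + sigma * rng.normal(size=3), x + sigma * rng.normal(size=3), w_star @ x

w_hat = noisy_ogd_squared_loss(noisy_stream(), dim=3)
print(np.round(w_hat, 2))  # approaches w_star despite never seeing a clean instance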
| Original language | English |
| --- | --- |
| Article number | 6015553 |
| Pages (from-to) | 7907-7931 |
| Number of pages | 25 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 57 |
| Issue number | 12 |
| DOIs | |
| State | Published - Dec 2011 |
Bibliographical note
Funding Information: Manuscript received September 02, 2010; revised December 29, 2010; accepted July 08, 2011. Date of publication September 08, 2011; date of current version December 07, 2011. The material in this paper was presented at the COLT 2010 conference. This work was supported in part by the Israeli Science Foundation under Grant 590-10 and in part by the PASCAL2 Network of Excellence under EC Grant 216886.