Abstract
We consider algorithms for "smoothed online convex optimization" problems, a variant of the class of online convex optimization problems that is strongly related to metrical task systems. Prior literature on these problems has focused on two performance metrics: regret and the competitive ratio. There exist known algorithms with sublinear regret and known algorithms with constant competitive ratios; however, no known algorithm achieves both simultaneously. We show that this is due to a fundamental incompatibility between these two metrics: no algorithm (deterministic or randomized) can achieve sublinear regret and a constant competitive ratio, even when the objective functions are linear. However, we also exhibit an algorithm that, for the important special case of one-dimensional decision spaces, provides sublinear regret while maintaining a competitive ratio that grows arbitrarily slowly.
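To make the trade-off in the abstract concrete, here is an illustrative sketch (not taken from the paper) of the smoothed online convex optimization cost model on a one-dimensional decision space: at each round the learner pays a hitting cost f_t(x_t) plus a switching cost |x_t − x_{t−1}|. The cost sequence and the two strategies below are hypothetical examples chosen to show why a static play (good for regret) and an adaptive play (good for competitiveness) incur different switching costs.

```python
def soco_cost(hitting_costs, decisions, x0=0.0):
    """Total SOCO cost: sum over rounds of f_t(x_t) + |x_t - x_{t-1}|."""
    total, prev = 0.0, x0
    for f, x in zip(hitting_costs, decisions):
        total += f(x) + abs(x - prev)  # hitting cost plus movement cost
        prev = x
    return total

# Hypothetical linear hitting costs f_t(x) = c_t * x on the interval [0, 1].
coeffs = [1.0, -1.0, 1.0, -1.0]
fs = [lambda x, c=c: c * x for c in coeffs]

# A static play never moves, so it pays no switching cost;
# an adaptive play chases each round's minimizer and pays to move.
static = [0.0, 0.0, 0.0, 0.0]
adaptive = [0.0, 1.0, 0.0, 1.0]
print(soco_cost(fs, static))    # hitting 0, switching 0 -> 0.0
print(soco_cost(fs, adaptive))  # hitting -2, switching 3 -> 1.0
```

The point of the sketch is only that the switching term couples consecutive decisions, which is what separates SOCO (and metrical task systems) from classical online convex optimization, where regret is measured against a fixed comparator with no movement cost.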
Original language | English
---|---
Pages (from-to) | 741-763
Number of pages | 23
Journal | Proceedings of Machine Learning Research
Volume | 30
State | Published - 2013
Externally published | Yes
Event | 26th Conference on Learning Theory, COLT 2013 - Princeton, NJ, United States
Duration | 12 Jun 2013 → 14 Jun 2013