Abstract
Consider N cooperative but non-communicating players, where each plays one out of M arms for T turns. Players have different utilities for each arm, represented as an N × M matrix. These utilities are unknown to the players. In each turn, players select an arm and receive a noisy observation of their utility for it. However, if any other player selected the same arm in that turn, all colliding players receive zero utility due to the conflict. No communication between the players is possible. We propose two distributed algorithms which learn fair matchings between players and arms while minimizing the regret. We show that our first algorithm learns a max-min fairness matching with near-O(log T) regret (up to a log log T factor). However, if one has a known target Quality of Service (QoS), which may vary between players, then we show that our second algorithm learns a matching where all players obtain an expected reward of at least their QoS with constant regret, given that such a matching exists. In particular, if the max-min value is known, a max-min fairness matching can be learned with O(1) regret.
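The collision model described above can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the utility matrix `U`, the noise level, and the helper `play_round` are all hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 5  # hypothetical problem size: 3 players, 5 arms
U = rng.uniform(0.2, 1.0, size=(N, M))  # unknown N x M utility matrix

def play_round(choices, U, rng, noise=0.1):
    """One turn: player i pulls arm choices[i] and receives a noisy
    observation of U[i, choices[i]], unless another player picked the
    same arm, in which case all colliding players receive zero."""
    choices = np.asarray(choices)
    counts = np.bincount(choices, minlength=U.shape[1])
    rewards = np.zeros(len(choices))
    for i, a in enumerate(choices):
        if counts[a] == 1:  # no collision on arm a
            rewards[i] = U[i, a] + rng.normal(0.0, noise)
    return rewards

# Players 0 and 1 collide on arm 0 and get zero; player 2 is alone on arm 1.
r = play_round([0, 0, 1], U, rng)
```

A learning algorithm in this setting must steer the players toward a collision-free matching (a permutation of arms when N ≤ M) using only these zero/noisy feedback signals.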
Original language | English |
---|---|
Article number | 9404291 |
Pages (from-to) | 584-598 |
Number of pages | 15 |
Journal | IEEE Journal on Selected Areas in Information Theory |
Volume | 2 |
Issue number | 2 |
DOIs | |
State | Published - Jun 2021 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2020 IEEE.
Keywords
- Multi-player bandits
- distributed learning
- fairness
- online learning
- resource allocation