We show that a non-Bayesian learning procedure leads to very permissive implementation results concerning the efficient allocation of resources in a dynamic environment where impatient, privately informed agents arrive over time, and where the designer gradually learns about the distribution of agents' values. This contrasts with the rather restrictive results that have been obtained for Bayesian learning in the same environment, and highlights the role of the learning procedure in dynamic mechanism design problems.
Funding Information:
We wish to thank Philippe Jehiel for helpful remarks. We are grateful to the German Science Foundation, and to the ERC for financial support. Gershkov's research was partly supported by the Google Inter-University Center for Electronic Markets and Auctions.
Keywords:
- Dynamic mechanism design
- Optimal stopping