Abstract
Online platforms and regulators face a continuing problem in designing effective evaluation metrics. While tools for collecting and processing data continue to improve, they have not addressed the problem of unknown unknowns: fundamental informational limitations on the part of the evaluator. To guide the choice of metrics in the face of this informational problem, we turn to the evaluated agents themselves, who may have more information about how to measure their own outcomes. We model this interaction as an agency game and ask: when does an agent have an incentive to reveal the observability of a metric to their evaluator? We show that an agent prefers to reveal metrics that differentiate the most difficult tasks from the rest, and to conceal metrics that differentiate the easiest. We further show that the agent can prefer revealing a metric garbled with noise over both fully concealing and fully revealing it. This indicates an economic value to privacy that yields a Pareto improvement for both the agent and the evaluator. We demonstrate these findings on data from online rideshare platforms.
Original language | English |
---|---|
Title of host publication | WWW 2025 - Proceedings of the ACM Web Conference |
Publisher | Association for Computing Machinery, Inc |
Pages | 1468-1487 |
Number of pages | 20 |
ISBN (Electronic) | 9798400712746 |
DOIs | |
State | Published - 28 Apr 2025 |
Event | 34th ACM Web Conference, WWW 2025 - Sydney, Australia Duration: 28 Apr 2025 → 2 May 2025 |
Publication series
Name | WWW 2025 - Proceedings of the ACM Web Conference |
---|---|
Conference
Conference | 34th ACM Web Conference, WWW 2025 |
---|---|
Country/Territory | Australia |
City | Sydney |
Period | 28/04/25 → 2/05/25 |
Bibliographical note
Publisher Copyright: © 2025 Copyright held by the owner/author(s).
Keywords
- evaluation metrics
- information elicitation
- principal-agent games