Relying on the Metrics of Evaluated Agents

Serena Wang, Michael Jordan, Katrina Ligett, R. Preston McAfee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Online platforms and regulators face a continuing problem of designing effective evaluation metrics. While tools for collecting and processing data continue to progress, these advances have not addressed the problem of unknown unknowns, or fundamental informational limitations on the part of the evaluator. To guide the choice of metrics in the face of this informational problem, we turn to the evaluated agents themselves, who may have more information about how to measure their own outcomes. We model this interaction as an agency game, where we ask: when does an agent have an incentive to reveal the observability of a metric to their evaluator? We show that an agent will prefer to reveal metrics that differentiate the most difficult tasks from the rest, and to conceal metrics that differentiate the easiest. We further show that the agent can prefer to reveal a metric garbled with noise over both fully concealing and fully revealing it. This indicates an economic value to privacy that yields a Pareto improvement for both the agent and the evaluator. We demonstrate these findings on data from online rideshare platforms.

Original language: English
Title of host publication: WWW 2025 - Proceedings of the ACM Web Conference
Publisher: Association for Computing Machinery, Inc
Pages: 1468-1487
Number of pages: 20
ISBN (Electronic): 9798400712746
DOIs
State: Published - 28 Apr 2025
Event: 34th ACM Web Conference, WWW 2025 - Sydney, Australia
Duration: 28 Apr 2025 – 2 May 2025

Publication series

Name: WWW 2025 - Proceedings of the ACM Web Conference

Conference

Conference: 34th ACM Web Conference, WWW 2025
Country/Territory: Australia
City: Sydney
Period: 28/04/25 – 2/05/25

Bibliographical note

Publisher Copyright:
© 2025 Copyright held by the owner/author(s).

Keywords

  • evaluation metrics
  • information elicitation
  • principal-agent games
