
Algorithmic Discrimination Causes Less Moral Outrage Than Human Discrimination

  • Yochanan E. Bigman*
  • Desman Wilson
  • Mads N. Arnestad
  • Adam Waytz
  • Kurt Gray

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

123 Scopus citations

Abstract

Companies and governments are using algorithms to improve decision-making for hiring, medical treatments, and parole. The use of algorithms holds promise for overcoming human biases in decision-making, but algorithms frequently make decisions that discriminate. Media coverage suggests that people are morally outraged by algorithmic discrimination, but here we examine whether people are less outraged by algorithmic discrimination than by human discrimination. Eight studies test this algorithmic outrage deficit hypothesis in the context of gender discrimination in hiring practices across diverse participant groups (online samples, a quasi-representative sample, and a sample of tech workers). We find that people are less morally outraged by algorithmic (vs. human) discrimination and are less likely to hold the organization responsible. The algorithmic outrage deficit is driven by the reduced attribution of prejudicial motivation to algorithms. Just as algorithms dampen outrage, they also dampen praise: companies enjoy less of a reputational boost when their algorithms (vs. employees) reduce gender inequality. Our studies also reveal a downstream consequence of the algorithmic outrage deficit: people are less likely to find the company legally liable when the discrimination was caused by an algorithm (vs. a human). We discuss the theoretical and practical implications of these results, including the potential weakening of collective action to address systemic discrimination.

Original language: English
Pages (from-to): 4-27
Number of pages: 24
Journal: Journal of Experimental Psychology: General
Volume: 152
Issue number: 1
DOIs
State: Published - 27 Jun 2022
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2022 American Psychological Association

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 5 - Gender Equality
  2. SDG 10 - Reduced Inequalities

Keywords

  • discrimination
  • human–robot interaction
  • moral outrage
  • motivation attribution
