People are averse to machines making moral decisions

Yochanan E. Bigman*, Kurt Gray

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

282 Scopus citations

Abstract

Do people want autonomous machines making moral decisions? Nine studies suggest that the answer is ‘no’—in part because machines lack a complete mind. Studies 1–6 find that people are averse to machines making morally relevant driving, legal, medical, and military decisions, and that this aversion is mediated by the perception that machines can neither fully think nor feel. Studies 5–6 find that this aversion exists even when moral decisions have positive outcomes. Studies 7–9 briefly investigate three potential routes to increasing the acceptability of machine moral decision-making: limiting the machine to an advisory role (Study 7), increasing machines’ perceived experience (Study 8), and increasing machines’ perceived expertise (Study 9). Although some of these routes show promise, the aversion to machine moral decision-making is difficult to eliminate. This aversion may prove challenging for the integration of autonomous technology in moral domains including medicine, the law, the military, and self-driving vehicles.

Original language: English
Pages (from-to): 21-34
Number of pages: 14
Journal: Cognition
Volume: 181
DOIs
State: Published - Dec 2018
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2018 Elsevier B.V.

Keywords

  • Autonomous machines
  • Mind perception
  • Moral agency
  • Morality
  • Robots
  • Skynet
