Artificial Intelligence, Discrimination, Fairness, and Other Moral Concerns

Re’em Segev*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Should the input data of artificial intelligence (AI) systems include factors such as race or sex when these factors may be indicative of morally significant facts? More importantly, is it wrong to rely on the output of AI tools whose input includes factors such as race or sex? And is it wrong to rely on the output of AI systems when it is correlated with factors such as race or sex (whether or not its input includes such factors)? The answers to these questions are controversial. In this paper, I argue for the following claims. First, since factors such as race or sex are not morally significant in themselves, including such factors in the input data, or relying on output that includes such factors or is correlated with them, is neither objectionable (for example, unfair) nor commendable in itself. Second, sometimes (but not always) there are derivative reasons against such actions due to the relationship between factors such as race or sex and facts that are morally significant (ultimately) in themselves. Finally, even if there are such derivative reasons, they are not necessarily decisive since there are sometimes also countervailing reasons. Accordingly, the moral status of the above actions is contingent.

Original language: English
Article number: 44
Journal: Minds and Machines
Volume: 34
Issue number: 4
DOIs
State: Published - Dec 2024

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.

Keywords

  • AI
  • Discrimination
  • Fairness
  • Input
  • Output

