Meaning multiplicity and valid disagreement in textual measurement: A plea for a revised notion of reliability

Christian Baden, Lillian Boxman-Shabtai, Keren Tenenboim-Weinblatt, Maximilian Overbeck, Tali Aharoni

Research output: Contribution to journal › Article › peer-review

Abstract

In quantitative content analysis, conventional wisdom holds that reliability, operationalized as agreement, is a necessary precondition for validity. Underlying this view is the assumption that there is a definite, unique way to correctly classify any instance of a measured variable. In this intervention, we argue that textual ambiguities can cause disagreement in classification that is not measurement error but reflects true properties of the classified text. We introduce the notion of valid disagreement, a form of replicable disagreement that must be distinguished from the replication failures that threaten reliability. We distinguish three key forms of meaning multiplicity that result in valid disagreement: ambiguity due to under-specification, polysemy due to excessive information, and interchangeability of classification choices. These forms are widespread in textual analysis, yet defy treatment within the confines of the existing content-analytic toolbox. Discussing implications, we present strategies for addressing valid disagreement in content analysis.

Original language: American English
Pages (from-to): 305-326
Number of pages: 22
Journal: Studies in Communication and Media
Volume: 12
Issue number: 4
DOIs
State: Published - 2023

Bibliographical note

Publisher Copyright:
© 2023 Slovene Society Informatika. All rights reserved.

Keywords

  • Content analysis
  • ambiguity
  • meaning multiplicity
  • measurement validity
  • polysemy
  • reliability
