TY - JOUR
T1 - Instructed to Bias: Instruction-Tuned Language Models Exhibit Emergent Cognitive Bias
AU - Itzhak, Itay
AU - Stanovsky, Gabriel
AU - Rosenfeld, Nir
AU - Belinkov, Yonatan
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024/6/4
AB - Recent studies show that instruction tuning (IT) and reinforcement learning from human feedback (RLHF) improve the abilities of large language models (LMs) dramatically. While these tuning methods can help align models with human objectives and generate high-quality text, not much is known about their potential adverse effects. In this work, we investigate the effect of IT and RLHF on decision-making and reasoning in LMs, focusing on three cognitive biases—the decoy effect, the certainty effect, and the belief bias—all of which are known to influence human decision-making and reasoning. Our findings highlight the presence of these biases in various models from the GPT-3, Mistral, and T5 families. Notably, we find a stronger presence of biases in models that have undergone instruction tuning, such as Flan-T5, Mistral-Instruct, GPT3.5, and GPT4. Our work constitutes a step toward comprehending cognitive biases in instruction-tuned LMs, which is crucial for the development of more reliable and unbiased language models.
UR - http://www.scopus.com/inward/record.url?scp=85196711716&partnerID=8YFLogxK
DO - 10.1162/tacl_a_00673
M3 - Article
AN - SCOPUS:85196711716
SN - 2307-387X
VL - 12
SP - 771
EP - 785
JO - Transactions of the Association for Computational Linguistics
JF - Transactions of the Association for Computational Linguistics
ER -