Morphosyntactic probing of multilingual BERT models

Judit Ács*, Endre Hamerlik, Roy Schwartz, Noah A. Smith, András Kornai

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We introduce an extensive dataset for multilingual probing of morphological information in language models (247 tasks across 42 languages from 10 families), each consisting of a sentence with a target word and a morphological tag as the desired label, derived from the Universal Dependencies treebanks. We find that pre-trained Transformer models (mBERT and XLM-RoBERTa) learn features that attain strong performance across these tasks. We then apply two methods to locate, for each probing task, where the disambiguating information resides in the input. The first is a new perturbation method that masks various parts of context; the second is the classical method of Shapley values. The most intriguing finding that emerges is a strong tendency for the preceding context to hold more information relevant to the prediction than the following context.
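To make the setup concrete, here is a minimal sketch of the kind of probing experiment the abstract describes: a frozen mBERT encoder supplies a contextual embedding of the target word, a small classifier is trained to predict a morphological tag from it, and a crude word-level [MASK] substitution stands in for the context-perturbation idea. The function names (target_vector, mask_side), the toy Number task, and the subword mean-pooling choice are illustrative assumptions, not the authors' exact implementation.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # mBERT; an XLM-RoBERTa checkpoint can be swapped in
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def target_vector(sentence: str, target: str) -> torch.Tensor:
    """Frozen contextual embedding of the target word (mean over its subword pieces)."""
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0].tolist()
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    start = sentence.index(target)
    end = start + len(target)
    keep = [i for i, (s, e) in enumerate(offsets) if s < end and e > start and e > s]
    return hidden[keep].mean(dim=0)

def mask_side(sentence: str, target: str, side: str) -> str:
    """Crude perturbation: replace every word on one side of the target with [MASK]."""
    start = sentence.index(target)
    before, after = sentence[:start], sentence[start + len(target):]
    masked = lambda text: " ".join("[MASK]" for _ in text.split())
    if side == "left":
        return f"{masked(before)} {target}{after}".strip()
    return f"{before}{target} {masked(after)}".strip()

# Toy probing task: predict the Number feature of a target noun.
train = [
    ("The dog sleeps on the porch.", "dog", "Sing"),
    ("The dogs sleep on the porch.", "dogs", "Plur"),
    ("A child was reading a book.", "child", "Sing"),
    ("The children were reading books.", "children", "Plur"),
]
X = torch.stack([target_vector(s, w) for s, w, _ in train]).numpy()
y = [label for _, _, label in train]
probe = LogisticRegression(max_iter=1000).fit(X, y)

# Probe the same sentence with the left or right context masked out, to see
# which side of the context the prediction depends on.
test_sentence, test_target = "Several cats were chasing the ball.", "cats"
for variant in (test_sentence,
                mask_side(test_sentence, test_target, "left"),
                mask_side(test_sentence, test_target, "right")):
    vec = target_vector(variant, test_target).numpy().reshape(1, -1)
    print(variant, "->", probe.predict(vec)[0])
```

In the paper's actual experiments the tasks are derived from Universal Dependencies treebanks across 42 languages, and attribution is additionally computed with Shapley values; the masking helper above only gestures at the left-versus-right comparison behind the headline finding.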

Original language: English
Journal: Natural Language Engineering
Volume: 1
Issue number: 1
DOIs
State: Published - 25 May 2023

Bibliographical note

Publisher Copyright:
© The Author(s), 2023. Published by Cambridge University Press.

Keywords

  • Language Models
  • Language Resources
  • Machine Learning
  • Morphology
  • Multilinguality
