Recent advances in self-supervised modeling of text and images open new opportunities for computational models of child language acquisition, which is believed to rely heavily on cross-modal signals. However, prior studies have been limited by their reliance on vision models trained on large image datasets annotated with a pre-defined set of depicted object categories. This is (a) not faithful to the information children receive and (b) prohibits the evaluation of such models on category learning tasks, due to the pre-imposed category structure. We address this gap and present a cognitively inspired, multimodal acquisition model, trained from image-caption pairs on naturalistic data using cross-modal self-supervision. We show that the model learns word categories and object recognition abilities, and exhibits trends reminiscent of those reported in the developmental literature. We make our code and trained models publicly available for future reference and use.
|Original language|American English|
|Title of host publication|NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics|
|Subtitle of host publication|Human Language Technologies, Proceedings of the Conference|
|Publisher|Association for Computational Linguistics (ACL)|
|Number of pages|17|
|State|Published - 2022|
|Event|2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022 - Seattle, United States|
|Duration|10 Jul 2022 → 15 Jul 2022|
|Name|NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference|
|Conference|2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022|
|Period|10/07/22 → 15/07/22|
Bibliographical note (Funding Information):
We would like to thank the anonymous reviewers for their helpful comments and feedback. This work was supported in part by the Israel Science Foundation (grant no. 2424/21), by a research gift from the Allen Institute for AI and by the HUJI-UoM joint PhD program.
© 2022 Association for Computational Linguistics.