Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?

Transactions of the Association for Computational Linguistics (2023) 11: 336–350.
This article has been cited by the following articles in journals participating in Crossref Cited-by Linking.
  • Kuan-Jung Huang, Suhas Arehalli, Mari Kugemoto, Christian Muxica, Grusha Prasad, Brian Dillon, Tal Linzen. Journal of Memory and Language (2024) 137: 104510.
  • Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, Evelina Fedorenko. Trends in Cognitive Sciences (2024).
  • Donald Dunagan, Miloš Stanojević, Maximin Coavoux, Shulin Zhang, Shohini Bhattasali, Jixing Li, Jonathan Brennan, John Hale. Neurobiology of Language (2023) 4 (3): 455.
  • Ethan G. Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, Roger P. Levy. Transactions of the Association for Computational Linguistics (2023) 11: 1451.
  • Tzu-Yun Tung, Jonathan R. Brennan. Neuropsychologia (2023) 190: 108680.
  • Iza Škrjanec, Frederik Yannick Broy, Vera Demberg. Procedia Computer Science (2023) 225: 3488.
  • Andrea Bruera, Yuan Tao, Andrew Anderson, Derya Çokal, Janosch Haber, Massimo Poesio. Cognitive Science (2023) 47 (12).