Deep neural network-based Language Models for estimating word-Predictability

Alfredo Raul Umfurer ¹, Juan E. Kamienkowski ¹ ², Bruno Bianchi ¹

1 Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, FCEyN, UBA - CONICET
2 Departamento de Física, FCEyN, UBA

Buenos Aires, Argentina

When we read printed text, we continuously predict upcoming words to integrate information and guide future eye movements. Thus, the Predictability of a given word (i.e., the probability of guessing it from its previous context) has become one of the most important variables for explaining human behavior and information processing during reading. In parallel, the Natural Language Processing (NLP) field has evolved, producing a wide variety of language models and applications.
In a previous study [Bianchi et al. 2020] we showed that, using different word embedding techniques (such as Latent Semantic Analysis and FastText) and N-gram-based language models, we were able to partially capture aspects of human-Predictability in long Spanish texts and to better understand the behavior of eye movements during reading.
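To make the N-gram notion of Predictability concrete, the following is a minimal sketch (not the study's actual pipeline) that estimates the probability of a word given its preceding word with an add-alpha smoothed bigram model; the corpus and function names are hypothetical, for illustration only.

```python
from collections import Counter

# Hypothetical toy corpus; the real study used long Spanish texts.
corpus = "the boy read the book and the girl read the letter".split()

# Bigram and context counts for P(word | prev) = count(prev, word) / count(prev).
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])
vocab = set(corpus)

def predictability(prev, word, alpha=1.0):
    """Add-alpha smoothed bigram probability of `word` given `prev`,
    a simple computational stand-in for cloze-style Predictability."""
    return (bigrams[(prev, word)] + alpha) / (contexts[prev] + alpha * len(vocab))

# Words actually seen after "the" score higher than unseen continuations.
print(predictability("the", "book"))  # seen bigram
print(predictability("the", "and"))   # unseen bigram, smoothed floor
```

Neural language models replace these counts with learned distributions over the whole preceding context, which is what allows them to capture dependencies beyond the fixed N-gram window.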
In the present study, we aim to estimate a new computer-based Predictability using more complex and recent models. In particular, we used a deep neural network-based Language Model (AWD-LSTM), which can capture longer-range dependencies in the text. We found that this model can also partially capture other aspects of the effect of human-Predictability on eye movements and that, when added to the previously tested models, it enhances their performance.