Learning to predict readability using eye-movement data from natives and learners

    Abstract

    Readability assessment can improve the quality of assistive technologies aimed at language learners. Eye-tracking data has been used both for inducing and for evaluating general-purpose NLP/AI models, and below we show that gaze data from language learners can also improve multi-task readability assessment models. This is unsurprising, since the gaze data records the reading difficulties of the learners. Unfortunately, eye-tracking data from language learners is often much harder to obtain than eye-tracking data from native speakers. We therefore compare the performance of deep learning readability models that use native speaker eye-movement data to that of models using data from language learners. Somewhat surprisingly, we observe no significant drop in performance when replacing learners with natives, which makes approaches that rely on native speaker gaze information more scalable. In other words, our finding is that language learner difficulties can be efficiently estimated from native speakers, which suggests that, more generally, readily available gaze data can be used to improve educational NLP/AI models targeted at language learners.
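    This record does not spell out the multi-task architecture, but a common way to use gaze data as an auxiliary signal is hard parameter sharing: a shared sentence encoder feeds a readability head (the main task) and a gaze-prediction head (the auxiliary task), trained jointly. The PyTorch sketch below illustrates that general setup only; the encoder choice, the three gaze features, the pooling, and the 0.5 auxiliary loss weight are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class MultiTaskReadabilityModel(nn.Module):
    """Hard parameter sharing (illustrative sketch, not the paper's model):
    a shared BiLSTM encoder feeds a sentence-level readability classifier
    (main task) and a token-level gaze regressor (auxiliary task)."""

    def __init__(self, vocab_size=10_000, embed_dim=64, hidden_dim=128,
                 num_classes=5, num_gaze_features=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Main task: one readability label per sentence.
        self.readability_head = nn.Linear(2 * hidden_dim, num_classes)
        # Auxiliary task: hypothetical gaze measures per token
        # (e.g. fixation duration, fixation count, regression probability).
        self.gaze_head = nn.Linear(2 * hidden_dim, num_gaze_features)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))  # (B, T, 2H)
        sentence_repr = states.mean(dim=1)               # simple mean pooling
        return self.readability_head(sentence_repr), self.gaze_head(states)


# Toy training step on random data, just to show the joint loss.
model = MultiTaskReadabilityModel()
tokens = torch.randint(1, 10_000, (8, 20))   # batch of 8 sentences, 20 tokens
readability = torch.randint(0, 5, (8,))      # gold readability levels
gaze = torch.rand(8, 20, 3)                  # gold gaze features per token

read_logits, gaze_pred = model(tokens)
loss = (nn.functional.cross_entropy(read_logits, readability)
        + 0.5 * nn.functional.mse_loss(gaze_pred, gaze))  # weighted aux loss
loss.backward()
```

    In a setup like this, gaze supervision is needed only at training time; at test time the readability head runs on text alone. That is why the choice between learner and native gaze corpora is purely a data-collection question, and why the paper's finding that native-speaker gaze works about as well matters for scalability.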

    Original language: English
    Title of host publication: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, Proceedings
    Number of pages: 7
    Publisher: AAAI Press
    Publication date: 2018
    Pages: 5118-5124
    ISBN (Electronic): 9781577358008
    Publication status: Published - 2018
    Event: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 - New Orleans, United States
    Duration: 2 Feb 2018 - 7 Feb 2018

    Conference

    Conference: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
    Country/Territory: United States
    City: New Orleans
    Period: 02/02/2018 - 07/02/2018
    Sponsor: Association for the Advancement of Artificial Intelligence
