
Understanding Events by Eye and Ear: Agent and Verb Drive Non-anticipatory Eye Movements in Dynamic Scenes


de Almeida, Roberto G., Di Nardo, Julia C., Antal, Caitlyn and von Grünau, Michael W. (2019) Understanding Events by Eye and Ear: Agent and Verb Drive Non-anticipatory Eye Movements in Dynamic Scenes. Frontiers in Psychology, 10. ISSN 1664-1078

Text (Publisher's Version) (application/pdf)
fpsyg-10-02162.pdf - Published Version
Available under License Creative Commons Attribution.
4MB

Official URL: https://doi.org/10.3389/fpsyg.2019.02162

Abstract

As Macnamara (1978) once asked, how can we talk about what we see? We report on a study manipulating realistic dynamic scenes and sentences, aiming to understand the interaction between linguistic and visual representations in real-world situations. Specifically, we monitored participants' eye movements as they watched video clips of everyday scenes while listening to sentences describing those scenes. We manipulated two main variables: the semantic class of the verb in the sentence and the action/motion of the agent in the unfolding event. The sentences employed two verb classes, causatives (e.g., break) and perception/psychological verbs (e.g., notice), which impose different constraints on the nouns that serve as their grammatical complements. The scenes depicted events in which agents either moved toward a target object (always the referent of the verb-complement noun), moved away from it, or remained neutral, performing a given activity (such as cooking). Scenes and sentences were synchronized such that the verb onset corresponded to the first video frame of the agent's motion toward or away from the object. Results show effects of agent motion but weak verb-semantic restrictions: causatives draw more attention to the potential referents of their grammatical complements than perception verbs do, but only when the agent moves toward the target object. Crucially, we found no anticipatory verb-driven eye movements toward the target object, contrary to studies using non-naturalistic and static scenes. We propose a model in which linguistic and visual computations in real-world situations occur largely independently of each other during the early moments of perceptual input, but rapidly interact at a central, conceptual system using a common, propositional code. Implications for language use in real-world contexts are discussed.

Divisions: Concordia University > Faculty of Arts and Science > Psychology
Item Type: Article
Refereed: Yes
Authors: de Almeida, Roberto G. and Di Nardo, Julia C. and Antal, Caitlyn and von Grünau, Michael W.
Journal or Publication: Frontiers in Psychology
Date: 10 October 2019
Funders:
  • Fonds Québécois de la Recherche sur la Société et la Culture
  • Social Sciences and Humanities Research Council of Canada (SSHRC)
  • Natural Sciences and Engineering Research Council (NSERC)
  • Concordia Open Access Author Fund
Digital Object Identifier (DOI): 10.3389/fpsyg.2019.02162
Keywords: situated language processing, visual world paradigm, eye movements, verb meaning, event comprehension, sentence comprehension, language-vision interaction, modularity
ID Code: 988026
Deposited By: Joshua Chalifour
Deposited On: 26 Feb 2021 20:19
Last Modified: 26 Feb 2021 20:42