We use various methods to explore how children and adults understand and produce a range of linguistic structures. All tasks are presented on a computer and can be run either online (for example, via Zoom) or in the lab.

Visual-World Eye-tracking


In this task, participants see two images side by side on the computer screen. For instance, one image might show a boy eating an ice-cream, while the other might show a boy eating an apple. The images are accompanied by spoken audio stimuli (e.g. “Here are two boys. Which boy is eating an apple?”). Participants’ eye movements (i.e. where on the screen their eyes are looking) are tracked as they view the two images and listen to the sentence. Participants then have to click on the image that best fits the sentence they have just heard. By tracking participants’ eye movements, we can determine if and when they look at the correct image (e.g. the boy eating an apple) as they process the spoken sentence. This method allows us to examine how participants comprehend language in real time (i.e. as they hear it) and how they use different types of linguistic information.
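As a rough illustration of how a trial of this kind can be scripted, the sketch below uses PsychoPy to display two images side by side, play the audio question, and record which image the participant clicks together with the response time. The file names (boy_icecream.png, boy_apple.png, question.wav) are placeholders, and the eye-movement recording itself is omitted because it depends on the specific eye-tracker and its software; this is a minimal sketch, not the exact script used in our studies.

```python
# Minimal sketch of one visual-world trial in PsychoPy (file names are placeholders).
from psychopy import visual, sound, event, core

win = visual.Window(size=(1280, 720), color="white", units="norm")

# Two images side by side, e.g. a boy eating an ice-cream and a boy eating an apple.
left_img = visual.ImageStim(win, image="boy_icecream.png", pos=(-0.5, 0), size=(0.8, 0.8))
right_img = visual.ImageStim(win, image="boy_apple.png", pos=(0.5, 0), size=(0.8, 0.8))

# Audio stimulus, e.g. "Here are two boys. Which boy is eating an apple?"
question = sound.Sound("question.wav")

# Show the images and start the audio.
left_img.draw()
right_img.draw()
win.flip()
question.play()

# NOTE: eye-movement sampling is tracker-specific and is omitted here;
# in practice it would run in parallel with the response loop below.

# Wait for a mouse click on one of the two images and log the choice and latency.
mouse = event.Mouse(win=win)
clock = core.Clock()
choice = None
while choice is None:
    if mouse.isPressedIn(left_img):
        choice = "left"
    elif mouse.isPressedIn(right_img):
        choice = "right"

print(f"Clicked {choice} image after {clock.getTime():.3f} s")
win.close()
core.quit()
```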

Elicited Production

This task is designed to prompt, or elicit, specific linguistic responses from participants. We measure which structures participants prefer to produce given a preceding context. For instance, participants see a single image on the screen and hear a description of the characters in it (“Here are a grandfather and a boy.”). They then have to answer a question out loud (e.g. “What is happening to the boy?”). Participants can answer this question in different ways, but the most frequent answers would be either “The grandfather is covering him.” or “He is being covered by the grandfather.” This method allows us to observe how speakers generate language in response to specific stimuli or prompts, and it can tell us which structures are easier to produce, and ultimately to comprehend, for different types of speakers in a given context.
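An elicited-production trial can be sketched in a similarly compact way. The example below, again a simplification rather than our actual experiment script, shows the picture with PsychoPy, plays the context and the question, and then records the participant’s spoken answer to a WAV file using the sounddevice and soundfile packages. The file names, audio durations, and length of the recording window are illustrative placeholders.

```python
# Minimal sketch of one elicited-production trial (file names are placeholders).
import sounddevice as sd
import soundfile as sf
from psychopy import visual, sound, core

win = visual.Window(size=(1280, 720), color="white", units="norm")

# Single picture, e.g. a grandfather covering a boy with a blanket.
picture = visual.ImageStim(win, image="grandfather_boy.png", size=(1.2, 1.2))

# Context and prompt, e.g. "Here are a grandfather and a boy." /
# "What is happening to the boy?"
context = sound.Sound("context.wav")
prompt = sound.Sound("prompt.wav")

picture.draw()
win.flip()
context.play()
core.wait(3.0)          # rough duration of the context audio
prompt.play()
core.wait(2.0)          # rough duration of the prompt audio

# Record the spoken answer for a fixed window and save it for later transcription
# and coding (e.g. active "The grandfather is covering him." vs passive
# "He is being covered by the grandfather.").
fs = 44100              # sampling rate in Hz
answer = sd.rec(int(8 * fs), samplerate=fs, channels=1)  # up to 8 s of speech
sd.wait()
sf.write("trial01_answer.wav", answer, fs)

win.close()
core.quit()
```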