Title: The brain on prosody: Linking language, speech, and music
Abstract: For decades, it has been a matter of debate whether and how music, speech and language are linked in the brain. Yet, despite all efforts and ever refined tools and methods, conclusions have remained mixed. In the present talk, I will draw on ideas of a common precursor of both domains—best captured by the term »prosody«. I will discuss whether prosody in its (i) phonological, (ii) syntactic, and (iii) pragmatic functions can build bridges between the two domains and may constitute the missing link that makes anatomical music-language overlap unnecessary and cross-domain interactions possible.
Title: Musicality & L2 Prosody: Different measures, different findings(?)
Abstract: One crucial aspect of acquiring a foreign language (L2) for effective communication with native speakers (L1) is mastering prosody, which encompasses stress, rhythm, and intonation. While learners often focus on practicing individual L2 phonemes, they typically receive minimal instruction on the form and function of prosodic features in their L2, such as using pitch accents to reflect discourse focus. Research indicates that both producing and perceiving L2 prosody are often hindered by the transfer of prosodic patterns from L1 to L2, and even proficient L2 learners frequently make prosodic errors. The musicality of a speaker is known to be a factor influencing language acquisition, and it may be particularly relevant to L2 prosody acquisition, as both music and prosody share a melodic and rhythmic nature. However, much remains to be studied when it comes to the relationship between musicality and L2 prosody acquisition. In this presentation, I will discuss various studies that operationalized musicality in different ways (e.g., through aptitude or training) and examine the effects of musicality on the acquisition of prosodic features in the L2.
Dr Michele Gubian
Data Scientist, Institut für Phonetik und Sprachverarbeitung (IPS)
Title: Modelling multi-dimensional (speech-related) time-varying contours
Abstract: This workshop introduces functional PCA and landmark registration, two techniques that enable effective statistical analysis of time-varying contours derived from acoustic or articulatory measurements of speech production, such as f0, intensity, formants, and EMA contours. These techniques, combined with linear (mixed-effects) regression, provide solutions to two problems that affect the statistical analysis of this type of time-varying contour. One is that it is not straightforward to model multiple contours jointly, e.g. formants F1 and F2 together, or EMA-captured tongue tip and dorsum trajectories. The other is that modelling methods such as GAMs do not offer ways to incorporate information about the location of segmental or syllabic boundaries, which vary across contours. However, such information is crucial to (i) anchor contour shapes to the underlying segmental material and (ii) capture co-variation of contour shapes and segmental durations.
Pre-requisites: Basic knowledge of R and the tidyverse, as well as of linear regression, is required; familiarity with GAMs is optional.
Where: LingLab (Online)
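The workshop itself uses R; purely as a language-agnostic illustration of the core idea behind functional PCA — treating each sampled contour as one high-dimensional observation and extracting per-contour scores that can then enter a regression model — here is a minimal sketch in Python with simulated f0-like contours (all names and data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)   # normalised time axis
n = 40                      # number of contours (e.g. f0 tracks)

# Simulate contours: a shared mean shape plus two modes of variation
mean_shape = 100 + 20 * np.sin(np.pi * t)
mode1 = np.sin(2 * np.pi * t)   # peak-timing-like mode
mode2 = t - 0.5                 # overall rising-vs-falling mode
scores_true = rng.normal(size=(n, 2))
contours = (mean_shape
            + scores_true[:, :1] * 5 * mode1
            + scores_true[:, 1:] * 8 * mode2
            + rng.normal(scale=0.5, size=(n, len(t))))

# "Functional" PCA on the sampled contours: centre, then SVD
centred = contours - contours.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
pc_scores = U * s   # one row of PC scores per contour

var_explained = s**2 / np.sum(s**2)
print(var_explained[:2])   # the two simulated modes dominate
```

Each row of `pc_scores` summarises one contour with a few numbers, which is what makes it possible to feed whole contour shapes into (mixed-effects) regression models.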
Dr Bodo Winter
Title: Data visualisation in ggplot2 for linguists
Abstract: The R package ggplot2 has become the lingua franca of data visualization in recent years, and data visualization is increasingly recognized as a crucial piece of data analysis, including in linguistics. This ggplot2 workshop provides hands-on guided exercises in which we will explore how to make beautiful and effective data visualizations of linguistic data. Importantly, attendees will learn about design principles that are evidence-based, building on existing research on the psychology of graph comprehension. This way, attendees will come away not only with an in-depth understanding of ggplot2, but also with a clear approach to data visualization problems that is grounded in what we know actually works.
Dr Stefano Coretta
Senior Teaching Coordinator (Statistics) at the University of Edinburgh
Title: Intro to Bayesian Inference for Speech and Language Scientists
Abstract: This workshop will introduce Bayesian inference for the quantitative analysis of phonetic data within a unified framework of statistical modelling based on linear models. Until recently, Bayesian modelling was technically involved and computationally expensive. These challenges have now been overcome, making Bayesian inference conceptually, technically, and computationally feasible for researchers across disciplines. Furthermore, Bayesian inference more directly answers research questions typically asked in the speech sciences, compared to traditional Null Hypothesis Significance Testing, by quantifying the magnitude and uncertainty of estimates of interest. A brief conceptual introduction will be followed by a walk-through of a Bayesian statistical analysis using R and the package brms (Bürkner 2017). We will explain how to set up a Bayesian regression model (including setting appropriate priors), how to interpret the results inferentially, how to conduct model checks, and how to visualize and report the results. In hands-on exercises, the participants will immediately apply their knowledge to real data sets in R.
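The workshop itself uses R and brms; purely as an illustration of the underlying idea — a posterior distribution that directly quantifies the magnitude and uncertainty of an estimate, rather than a yes/no significance test — here is a minimal conjugate Normal-Normal sketch in Python (the data and numbers are simulated and hypothetical):

```python
import numpy as np

# Simulated data: e.g. duration differences (ms) between two conditions
rng = np.random.default_rng(7)
data = rng.normal(loc=12.0, scale=20.0, size=30)

# Conjugate Normal-Normal update, assuming a known data SD (sigma)
sigma = 20.0
prior_mean, prior_sd = 0.0, 50.0   # weakly informative prior on the effect

n = len(data)
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (prior_mean / prior_sd**2 + data.sum() / sigma**2)
post_sd = np.sqrt(post_var)

# A 95% credible interval directly quantifies magnitude and uncertainty
lo, hi = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
print(f"effect estimate: {post_mean:.1f} ms, 95% CrI [{lo:.1f}, {hi:.1f}]")
```

In practice brms fits far richer models (with priors, random effects, and MCMC sampling rather than a closed-form update), but the output has the same character: a full posterior over the quantity of interest.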
Dr Christopher Carignan
Lecturer in Speech Science @ University College London
Title: Digital Signal Processing in R
Abstract: In phonetics and speech science research, the R programming environment is commonly used for data wrangling and performing a vast array of statistical analyses. However, given the focus on using the R language for statistical modeling, it is not often used as an environment for primary data analysis. A typical workflow might consist of analyzing data in another language such as MATLAB, Python, or Praat and importing the processed data into R for statistical treatment. In this two-day workshop, you will learn how R can be used as an environment for primary analysis of a variety of data related to speech production, including speech acoustics, articulatory kinematics, and vocal tract imaging. This workshop is designed for participants who have some degree of experience in R and will therefore assume a basic level of knowledge of the R language.
Dr Patrycja Strycharczuk
Senior Lecturer in Linguistics & Quantitative Methods @ The University of Manchester
Title: Forced alignment for speech research
Abstract: This workshop will be an introduction to forced alignment, an automated procedure for phonetic segmentation of speech, which uses orthographic transcription as input. Forced alignment substantially speeds up the processing of sound files for analysis, and in some cases, it can be used to fully automate aspects of phonetic analysis. In the workshop, I will introduce several web-based forced alignment services, and provide hands-on training on how to use them. This approach does not require advanced computer skills. The workshop is intended for researchers who have no experience with forced alignment and who would like to use it for acoustic analysis. It will be introductory and therefore accessible for undergraduate students.