Prospective Graduate Students / Postdocs
This faculty member is currently not looking for graduate students or Postdoctoral Fellows. Please do not contact the faculty member with any such requests.
Dr. Janet F. Werker is University Killam Professor and Canada Research Chair in the Department of Psychology at the University of British Columbia. Werker is internationally recognized for her research investigating the perceptual foundations of language acquisition in both monolingual and bilingual learning infants. Her more than 150 papers and chapters have appeared in prestigious journals including Science, Nature, Nature Communications, Trends in Cognitive Sciences, Proceedings of the National Academy of Sciences, Journal of Neuroscience, Psychological Science, and Cognition, as well as in the premier journals in developmental psychology, language, and perception. Her research is funded by NSERC, SSHRC, and CIFAR in Canada, and by the NIH in the U.S. Previous funding sources include the Human Frontiers Science Program, the James S. McDonnell Foundation, and NTT Laboratories.
Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.
An infant’s ability to acquire the speech of their native language is a foundational developmental milestone. Human speech is multisensory: speaking produces highly correlated changes in signals across modalities (auditory, visual, and sensorimotor) that listeners can detect. The roots of multisensory speech perceptual capacities – in particular, the role of sensorimotor influences on auditory speech perception – can provide immense insight into the development of the human speech and language systems. In this dissertation, I advance understanding of the role of sensorimotor information in speech acquisition in three domains: methodological, empirical, and theoretical.

On the methodological front, I develop a combined electroencephalogram (EEG) articulator-inhibition paradigm to examine the neural mechanisms underlying sensorimotor-auditory speech interactions in infants. This novel EEG methodology enabled me to examine how sensorimotor input influences phonetic processing. Empirically, in four experiments in Chapter 2, I provide new behavioural evidence that preventing the movement of the articulator required to produce particular phones disrupts infants’ auditory discrimination, and show that this applies not only to an acoustically difficult non-native contrast but also to a native phonetic contrast that is more acoustically distinct. In Chapter 3, I present the first neuroimaging evidence that oral-motor inhibition disrupts a phonetic discriminative response to a contrast whose place of articulation is concordant with the inhibited articulator movement. The timing of the response shows the disruption to be simultaneous with phonetic processing. These empirical contributions, with infants at two developmental timepoints, support the hypothesis that sensitivity to the sensorimotor features corresponding to the articulatory dimensions that result in a particular acoustic speech output exists prior to relevant production experience.

In Chapter 4, I propose a novel framework for understanding these effects. While preverbal infants have yet to accrue experience-based knowledge of the sensorimotor consequences of speech, sensorimotor mapping of the articulatory space that corresponds to the acoustic dimensions of speech may be available to young infants through early-emerging motor movements, some of them endogenous. Combined with the anatomical and functional human speech and language network present from the third trimester, the mapping of this articulatory space may be an important foundation for the acquisition of speech and language.
The first months of life are a sensitive period for the development of visual processing, and face processing in particular. The main goal of this thesis is to examine the influence of infant sex on the development of visual processing. The overarching hypothesis was that 5-month-olds would differ in performance on tasks related to the higher levels of the ventral processing stream, with females showing more advanced ventral visual processing. To begin tracing the developmental trajectory of these differences, another group was tested at 7 to 8 months, after major changes in face processing abilities occur. Throughout, an exploratory look was taken at two factors that may influence face processing development – the size of the social environment and locomotion level. In Chapter 2, 5-month-olds were tested on detection of an eye expression change, from smiling to neutral, following infant-controlled habituation. As predicted, females outperformed males in evidencing a novelty preference. In Chapter 3, 7- to 8-month-olds were tested on the same task. For females, a developmental change from novelty to familiarity preference was found; for males, no indication of eye expression discrimination was found at either age. In Chapter 4, both age groups were tested on discriminating a featural change in internal features (eyes, nose, mouth). A female advantage was found in 5-month-olds but had disappeared by 7 to 8 months. Chapter 5 replicated the Chapter 2 findings of female superiority in eye expression discrimination at 5 months. Contrary to prediction, females did not show greater mirror image confusion. Laterality effects for both eye expression and mirror image discrimination were found in females, and a negative relation between mirror image and eye expression discrimination was found in males. Finally, effects of the social environment on male face processing and of locomotion level on female face processing were found. The results support the hypothesis of a sex difference in the development of ventral stream processing. They inform the fields of visual/face processing development and of sex differences, showing a sex difference in infants’ development of internal facial feature processing and identifying additional factors involved, and have implications for studies of autism.
The capacity to acquire language is believed to be deeply embedded in our biology. As such, it has been proposed that humans have evolved to respond specially to language from the first days and months of life. The present thesis explores this hypothesis, examining the early neural and social processing of speech in young infants. In Experiments 1-4, Near-Infrared Spectroscopy is used to measure neural activation in classic “language areas” of the cortex to the native language, to a rhythmically distinct unfamiliar language, and to a non-speech whistled surrogate language in newborn infants (Experiments 1 & 2) as well as in infants at 4 months of age (Experiments 3 & 4). Results revealed that at birth, the brain responds specially to speech: bilateral anterior areas are activated by both familiar and unfamiliar spoken language, but not by the whistled surrogate form. Different patterns were observed in 4-month-old infants, demonstrating how language experience influences the brain response to speech and non-speech signals. Experiments 5-7 then turn to infants’ perception of language as a marker of social group, asking whether infants at 6 and 11 months of age associate speakers of familiar and unfamiliar languages with individuals of different ethnicities. Infants at 11 months—but not at 6 months—are found to look more to Asian versus Caucasian faces when paired with Cantonese versus English language (Experiments 5, 7). However, infants at the same age did not show any difference in looking to Asian versus Caucasian faces when paired with English versus Spanish (Experiment 6). Together, these results suggest that the 11-month-old infants tested have learned a specific association between Asian individuals and the Cantonese language. The experiments presented in this thesis thus demonstrate that from early in development, infants are tuned to language. Such sensitivity is argued to be of critical importance, as it may serve to direct young learners to potential communicative partners.
The perception of speech involves the integration of both heard and seen signals. Increasing evidence indicates that even young infants are sensitive to the correspondence between these sensory signals, and adding visual information to the auditory speech signal can change infants’ perception. Nonetheless, important questions remain regarding the nature of and limits to early audiovisual speech perception. In the first set of experiments in this thesis, I use a novel eyetracking method to investigate whether English-learning six-, nine-, and eleven-month-olds detect content correspondence in auditory and visual information when perceiving non-native speech. Six- and nine-month-olds, prior to and in the midst of perceptual attunement, switch their face-scanning patterns in response to incongruent speech, evidence that infants at these ages detect audiovisual incongruence even in non-native speech. I then probe whether this familiarization, to congruent or incongruent speech, changes infants’ subsequent auditory-only phonetic discrimination of the non-native sounds. I find that familiarization to incongruent speech changes—but does not entirely disrupt—six-month-olds’ auditory discrimination. Nine- and eleven-month-olds, in the midst and at the end of perceptual attunement, do not discriminate the non-native sounds regardless of familiarization condition. In the second set of experiments, I test how temporal information and phonetic content information may both contribute to an infant’s use of auditory and visual information in the perception of speech. I familiarize six-month-olds to audiovisual Hindi speech sounds in which the auditory and visual signals are incongruent in content and, in two conditions, are also temporally asynchronous. I hypothesize that, when presented with temporally synchronous, incongruent stimuli, infants rely on either the auditory or the visual information in the signal and use that information to categorize the speech event. Further, I predict that adding a temporal offset to this incongruent speech changes infants’ use of the auditory and visual information. Although the main results of this latter study are inconclusive, post-hoc analyses suggest that when visual information is presented first or synchronously with auditory information, as is the case in the natural environment, infants exhibit a moderate matching preference for auditory information at test.
The multisensory nature of speech, and in particular the modulatory influence of one’s own articulators during speech processing, is well established in adults. However, the origins of the sensorimotor influence on auditory speech perception are largely unknown and require the examination of a population in which a link between speech perception and speech production is not yet well defined; studying preverbal infants’ speech perception allows such early links to be characterized. Across three experimental chapters, I provide evidence, using both neuroimaging and behavioral measures, that articulatory information selectively affects the perception of speech sounds in preverbal infants. In Chapter 2, I use a looking-time procedure to show that in 6-month-old infants, articulatory information can impede the perception of a consonant contrast when the related articulator is selectively impaired. In Chapter 3, I use the high-amplitude suck (HAS) procedure to show that neonates are able to discriminate and exhibit memory for the vowels /u/ and /i/; however, the information from the infants’ articulators (a rounded lip shape) seems to only marginally affect behavior during the learning of these vowel sounds. In Chapter 4, I co-register HAS with a neuroimaging technique – Near Infrared Spectroscopy (NIRS) – and identify underlying neural networks in newborn infants that are sensitive to the sensorimotor-auditory match, in that the vowel that matches the lip shape (/u/) is processed differently from the vowel that is not related to the lip shape (/i/). Together, the experiments reported in this dissertation suggest that even before infants gain control over their articulators and speak their first words, their sensorimotor systems interact with their perceptual systems as they process auditory speech information.
To rise to the challenge of acquiring their native language, infants must deploy tools to support their learning. This thesis compared infants growing up in two very different language environments, monolingual and bilingual, to better understand these tools and how their development and use changes with the context of language acquisition.

The first set of studies (Chapter 2) showed that infants adapt very early-developing tools to the context of their prenatal experience. Newborns born to bilingual mothers directed their attention to both of their native languages, while monolinguals preferred listening to their single native language. However, prenatal bilingual experience did not result in language confusion, as language discrimination was robustly maintained in both monolinguals and bilinguals. Thus, learning mechanisms allow experience-based listening preferences, while enduring perceptual sensitivities support language discrimination even in challenging language environments.

Chapter 3 investigated a fundamental word learning tool: the ability to associate word and object. Monolinguals and bilinguals showed an identical developmental trajectory, suggesting that, unlike some aspects of word learning, this associative ability is equivalent across different types of early language environments. Chapters 4 and 5 explored the development of a heuristic for learning novel words. Disambiguation is the strategy of associating a novel word with a novel object, rather than a familiar one. In Chapter 4, disambiguation was robustly demonstrated by 18-month-old monolinguals, but not by age-matched bilinguals and trilinguals. The results supported the “lexicon structure hypothesis”: that disambiguation develops with mounting evidence for a one-to-one mapping between words and their referents, as is typical for monolinguals. For bilinguals, translation equivalents (cross-language synonyms) represent a departure from one-to-one mapping. Chapter 5 directly tested the lexicon structure hypothesis by comparing subgroups of bilinguals who knew few translation equivalents to bilinguals who knew many. Only the former group showed disambiguation, supporting the lexicon structure hypothesis.

The series of studies presented in this thesis provides a window into language acquisition across all infants. Whether growing up monolingual or bilingual, infants attune their development and use of the tools of language acquisition to the particular challenges mounted by their language environment.
Myriad factors influence perceptual processing, but “embodied” approaches assert that sensorimotor information about bodily movements plays an especially critical role. This view has precedent in speech research, where it has often been assumed that the movements of one’s articulators (i.e., the tongue, lips, jaw, etc.) are closely related to perceiving speech. Indeed, previous work has shown that speech perception is influenced by concurrent stimulation of speech motor cortex or by silently making articulatory motions (e.g., mouthing “pa”) while hearing speech sounds. Critics of embodied approaches claim instead that so-called articulatory influences are attributable to other processes (e.g., auditory imagery or feedback from phonological categories), which are also activated when making speech articulations. This dissertation explores the embodied basis of speech perception and further investigates its ontogenetic development. Chapter 2 reports a study in which adults made silent and synchronous speech-like articulations while listening to and identifying speech sounds. Results show that sensorimotor aspects of these movements (i.e., articulatory-motor information) are a robust source of perceptual modulation, independent of auditory imagery or phonological activation. Chapter 3 reports that even low-level, non-speech articulatory-motor information (i.e., holding one’s breath at a particular position in the vocal tract) can exert a subtle influence on adults’ perception of related speech sounds. Chapter 4 investigates the developmental origins of these influences, showing that low-level articulatory information can influence 4.5-month-old infants’ audiovisual speech perception. Specifically, achieving lip shapes related to the /i/ and /u/ vowels (while chewing or sucking, respectively) is shown to disrupt infants’ ability to match auditory speech information about these vowels to visual displays of talking faces. Together, these chapters show that aspects of speech processing are embodied and follow a pattern of differentiation in development. Before infants produce clear speech, links between low-level articulatory representations and speech perception are already in place. In adults, these links become more specific to sensorimotor information in dynamically coordinated articulations, but vestigial links to low-level articulatory-motor information remain from infancy.
Phonetic perception becomes native-like by 10 months of age. A potential mechanism of change, distributional learning, affects the perception of 6- to 8-month-old infants (Maye et al., 2002). However, it was anticipated that perception might be more difficult to change by 10 months of age, after native categories have developed. In fact, some evidence suggests that by this age, the presence of social interaction may be an important element in infants’ phonetic change (Kuhl et al., 2003). The current work advances the hypothesis that infants’ level of attention, which tends to be higher with social interaction, may be a salient factor facilitating phonetic change. Three experiments were designed to test infants’ phonetic plasticity at 10 months, after phonetic categories have formed. A non-social distributional learning paradigm was chosen, and infants’ attention was monitored to probe whether a facilitating role would be revealed. In Experiment 1, 10-month-old English-learning infants heard tokens from along a continuum that is no longer discriminated at this age; the tokens formed a distribution suggestive of a category boundary (a useful distinction). The results failed to reveal evidence of discrimination, suggesting that the distributional information did not have any effect. A second experiment used slightly different sound tokens, ones that are farther from the typical English pronunciation and are heard less frequently in the language environment. Infants still failed to discriminate the sounds following the learning period. However, a median split revealed that the high-attending infants evinced learning. Experiment 3 increased the length of the learning phase to allow all infants to become sufficiently attentive, and revealed phonetic change. Thus, after phonetic categories have formed, attention appears to be important in learning.
Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.
Is information from vision and audition mutually facilitative for categorization in infants? Ten-month-old infants can detect categories on the basis of correlations among five attributes of visual stimuli; four- and seven-month-olds are sensitive only to the specific attributes, rather than the correlations. If younger infants can detect specific attributes of visual stimuli, is there a way to facilitate the perception of these attributes as a meaningful correlation and, hence, as a category? The current studies investigate whether integrating information from two domains—speech within the auditory system together with shapes in the visual domain—could facilitate categorization. I hypothesized that 4-month-old infants could categorize audio-visual information by pairing correlation-based stimuli in the auditory domain (monosyllables) with correlation-based stimuli in the visual domain (line-drawn animals). In Experiment 1, infants were exposed to a series of line-drawn animals whose features were correlated to form two animal categories. At test, infants experienced three test trials: a novel member of a previously shown category, a non-member of the categories (sharing similar features), and a completely novel animal. Experiment 2 used the same animals and paradigm, but each animal was presented with a speech stimulus (a repeating monosyllable) whose auditory features were correlated in order to form two categories. In Experiment 3, categorization of the auditory stimuli was investigated in the absence of the correlated visual information. Experiment 4 addressed some potential confounds of the findings from Experiment 2. Results from this series of studies show that 4-month-olds fail to categorize in both the visual-only and auditory-only conditions. However, when each visual exemplar is paired with a corresponding, correlated speech exemplar, infants can categorize: they look longer at a new, within-category exemplar than at a new, category-violating exemplar. These findings provide evidence that infants extract correlated information from two domains, enabling cross-modal categorization at a very young age. Infants’ sensitivity to correlated attributes across two domains and the implications for categorization are discussed.
Language is a conventional system: the use of words is shared within a language community. Even further, each language community has conventions regarding what “forms” may serve as words. A form (the phonological sounds or hand movements that make up a word) used in one community may not be proper in another. It is therefore important that when young language learners acquire a language, they adhere to both the general conventionality of language and the word-form conventions of their particular language(s).

Previous research has demonstrated a developmental narrowing in the word-forms that infants are willing to accept as conventional labels. Younger word-learning infants view a wider range of symbols as potential labels than do older infants. The present study takes this research further and specifies the nature of this developmental narrowing. Two potential word-learning constraints are explored: a Linguistic word-learning constraint, in which infants limit the symbols they view as potential labels according to whether the label-form consists of components that occur in at least one of the world’s languages, versus a more restrictive Native Language Assimilation constraint, in which infants limit symbols according to whether the components within the label-forms assimilate into native language speech categories. In addition, this research probed whether the development of such constraints is related to infants’ vocabulary acquisition.

In the present study, I explored infants’ ability to learn unassimilable yet linguistic click words as object labels. In Experiment 1, I first established the effectiveness of the novel two-object Referential Switch paradigm, demonstrating that 14-month-old infants succeed in learning unassimilable click words as object labels in this task. In Experiment 2, I then tested 20-month-old infants to investigate the development of a Linguistic versus a Native Language Assimilation constraint. I found that while 20-month-old infants with smaller vocabularies were able to learn the unassimilable click words as labels, infants with larger vocabularies were not. These results suggest that the narrowing that occurs between 14 and 20 months of age in infants’ awareness of word-form conventions is best explained by the development of a Native Language Assimilation word-learning constraint.