Carla Hudson Kam
Graduate Student Supervision
Doctoral Student Supervision (Jan 2008 - Mar 2019)
This dissertation seeks to understand the underlying mechanism(s) of statistical learning (SL), defined as the capacity to extract structure from a perceptual stream by relying on the statistical properties of that stream (e.g., Aslin, 2017). I approach this question in two ways: by examining (1) the output representations of statistical learning (i.e., the quality of representations that emerge from an SL experience), and (2) the effect of input representations on SL (i.e., whether and how an individual’s prior knowledge filters and shapes SL). I hypothesized (i) that learners’ prior knowledge would affect the accessibility of units to SL, and thereby modify the process of learning; (ii) that SL consists of more than veridical tracking of transitional probabilities between sounds; and (iii) that the interaction of prior knowledge and the underlying mechanisms of SL would relate to differences in learning outcomes across development.

To test these hypotheses, I created a novel testing paradigm for the word segmentation SL task, in which participants’ knowledge of trisyllabic nonce words embedded in a continuous familiarization stream was probed by manipulating the nature of the syllables in particular ordinal positions. Adult subjects were then tested on streams of speech that imposed varying degrees of perceptual load, either via the nature of the phonetic elements or via an external, unrelated task. Children were similarly exposed to and tested on a stream of familiar sounds; I predicted that their performance would parallel that of adults under conditions of greater perceptual load.

The results of these experiments confirm that underlying perceptual representations affect learners’ capacity for SL, and that the output of auditory SL tasks reflects more than the underlying statistics embedded in a continuous stream.
Performance does not rest on underlying phonetic representations alone; rather, differences in executive function skills additionally impact the SL process.
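The dissertation itself is experimental, but the transitional probabilities it refers to can be made concrete. Purely as an illustration (this is not the author's method, and the nonce words below are invented for the example), the forward transitional probability between adjacent syllables A and B is count(AB) / count(A); in a stream built from repeating "words," within-word transitions score higher than transitions across word boundaries:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability P(next | current) for each
    adjacent syllable pair: count(pair) / count(first syllable)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]]
            for pair, n in pair_counts.items()}

# Toy continuous stream built from three made-up trisyllabic nonce
# words (go-la-tu, pa-bi-ku, da-ro-pi) in varying order.
stream = ["go", "la", "tu", "pa", "bi", "ku", "da", "ro", "pi",
          "go", "la", "tu", "da", "ro", "pi", "pa", "bi", "ku"]
tps = transitional_probabilities(stream)

# Within-word transition "go"->"la" is fully predictable (TP = 1.0),
# while the word-boundary transition "tu"->... varies (TP = 0.5),
# which is the statistical cue to a word boundary.
```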
Sound systems are a basic building block of any human language. An integral part of the acquisition of sound systems is the learning of allophony. In sound systems, some segments are used as allophones, or contextually conditioned variants of a single phoneme, and learners need to figure out whether given segments are different phonemes or allophones. There is growing interest in the question of how allophony is learned from speech input (e.g., Seidl and Cristia, 2012). This dissertation investigates the mechanisms behind the learning of allophony. Whether given segments are different phonemes or allophones of a single phoneme is partly determined by the contextual distribution of the segments. When segments occur in overlapping contexts and their occurrences are not predictable from the contexts, they are likely to be different phonemes. When segments occur in mutually exclusive contexts and their occurrences are predictable from the contexts (i.e., they are in complementary distribution), the segments are likely to be allophones.

This dissertation starts with the hypothesis that allophony can be learned from the complementary distribution of segments in input. With data collected in a series of laboratory experiments with adult English speakers, I make the following claims. First, adults can learn allophony from the complementary distribution of segments in input. The results of Experiment 1 showed that participants learned to treat two segments as something like allophones when they were exposed to input in which the segments were in complementary distribution. Second, the learning of allophony is constrained by the phonetic naturalness of the patterns of complementary distribution. The results of Experiment 2 showed that the learning of allophony happened only when participants were exposed to input in which the relevant segments occurred in phonetically natural complementary contexts.
Third, the learning of allophony involves the learning of the context-dependent perception of relevant segments. The results of Experiment 3 showed that, through exposure to input, participants’ perception of the relevant segments became more dependent on context such that they perceived the segments as being more similar to each other when they heard the segments in phonetically natural complementary contexts.