Learning to pronounce first words in three languages: an investigation of caregiver and infant behavior using a computational model of an infant.
Words are made up of speech sounds. Almost all accounts of child speech development assume that children learn the pronunciation of first language (L1) speech sounds by imitation, most claiming that the child performs some kind of auditory matching to the elements of ambient speech. However, there is evidence to support an alternative account and we investigate the non-imitative child behavior and well-attested caregiver behavior that this account posits using Elija, a computational model of an infant. Through unsupervised active learning, Elija began by discovering motor patterns, which produced sounds. In separate interaction experiments, native speakers of English, French and German then played the role of his caregiver. In their first interactions with Elija, they were allowed to respond to his sounds if they felt this was natural. We analyzed the interactions through phonemic transcriptions of the caregivers' utterances and found that they interpreted his output within the framework of their native languages. Their form of response was almost always a reformulation of Elija's utterance into well-formed sounds of L1. Elija retained those motor patterns to which a caregiver responded and formed associations between his motor pattern and the response it provoked. Thus in a second phase of interaction, he was able to parse input utterances in terms of the caregiver responses he had heard previously, and respond using his associated motor patterns. This capacity enabled the caregivers to teach Elija to pronounce some simple words in their native languages, by his serial imitation of the words' component speech sounds. Overall, our results demonstrate that the natural responses and behaviors of human subjects to infant-like vocalizations can take a computational model from a biologically plausible initial state through to word pronunciation. This provides support for an alternative to current auditory matching hypotheses for how children learn to pronounce.