
dc.contributor.author: Howard, IS
dc.contributor.author: Messum, P

dc.description.abstract: Words are made up of speech sounds. Almost all accounts of child speech development assume that children learn the pronunciation of first language (L1) speech sounds by imitation, most claiming that the child performs some kind of auditory matching to the elements of ambient speech. However, there is evidence to support an alternative account, and we investigate the non-imitative child behavior and well-attested caregiver behavior that this account posits using Elija, a computational model of an infant. Through unsupervised active learning, Elija began by discovering motor patterns, which produced sounds. In separate interaction experiments, native speakers of English, French and German then played the role of his caregiver. In their first interactions with Elija, they were allowed to respond to his sounds if they felt this was natural. We analyzed the interactions through phonemic transcriptions of the caregivers' utterances and found that they interpreted his output within the framework of their native languages. Their form of response was almost always a reformulation of Elija's utterance into well-formed sounds of L1. Elija retained those motor patterns to which a caregiver responded and formed associations between his motor pattern and the response it provoked. Thus, in a second phase of interaction, he was able to parse input utterances in terms of the caregiver responses he had heard previously, and respond using his associated motor patterns. This capacity enabled the caregivers to teach Elija to pronounce some simple words in their native languages, by his serial imitation of the words' component speech sounds. Overall, our results demonstrate that the natural responses and behaviors of human subjects to infant-like vocalizations can take a computational model from a biologically plausible initial state through to word pronunciation. This provides support for an alternative to current auditory matching hypotheses for how children learn to pronounce.
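The learning loop the abstract describes — retain only motor patterns that provoked a caregiver response, associate each retained pattern with the caregiver's reformulation, then parse later utterances in terms of known responses and reply by serial imitation — can be sketched roughly as below. This is a minimal illustrative sketch, not the published model's code: the class and method names, the use of phoneme strings for caregiver responses, and the greedy longest-match parse are all assumptions made for clarity.

```python
# Hypothetical sketch of the association-and-imitation mechanism
# described in the abstract. All identifiers are illustrative.

class InfantModel:
    def __init__(self):
        # Maps a caregiver response (here, a phoneme string) to the
        # motor pattern that provoked it.
        self.associations = {}

    def learn(self, motor_pattern, caregiver_response):
        # Retain a motor pattern only if the caregiver responded,
        # and associate it with the reformulated response.
        if caregiver_response is not None:
            self.associations[caregiver_response] = motor_pattern

    def parse_and_respond(self, utterance):
        # Parse an input utterance in terms of previously heard
        # caregiver responses (greedy longest match, an assumption),
        # then output the associated motor patterns in sequence --
        # i.e. serial imitation of the word's component sounds.
        motor_sequence = []
        i = 0
        while i < len(utterance):
            for resp in sorted(self.associations, key=len, reverse=True):
                if utterance.startswith(resp, i):
                    motor_sequence.append(self.associations[resp])
                    i += len(resp)
                    break
            else:
                i += 1  # skip an unrecognized segment
        return motor_sequence

# Usage: two "speech sounds" are reinforced, then a simple word
# built from them is taught via serial imitation.
agent = InfantModel()
agent.learn(motor_pattern="m1", caregiver_response="ba")
agent.learn(motor_pattern="m2", caregiver_response="da")
print(agent.parse_and_respond("bada"))  # ['m1', 'm2']
```

The key design point mirrored from the abstract is that the mapping runs from caregiver response back to the infant's own motor pattern, so the model never needs to match its output auditorily against adult speech.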

dc.format.extent: e110334 - ?
dc.subject: Computer Simulation
dc.subject: Infant Behavior
dc.subject: Speech Perception
dc.title: Learning to pronounce first words in three languages: an investigation of caregiver and infant behavior using a computational model of an infant
dc.type: Journal Article
plymouth.publication-status: Published online
plymouth.journal: PLoS One
plymouth.organisational-group: /Plymouth/00 Groups by role
plymouth.organisational-group: /Plymouth/00 Groups by role/Academics
plymouth.organisational-group: /Plymouth/Faculty of Health and Human Sciences
plymouth.organisational-group: /Plymouth/Faculty of Health and Human Sciences/School of Psychology
plymouth.organisational-group: /Plymouth/Faculty of Science and Engineering
plymouth.organisational-group: /Plymouth/Faculty of Science and Engineering/School of Computing, Electronics and Mathematics
plymouth.organisational-group: /Plymouth/REF 2021 Researchers by UoA
plymouth.organisational-group: /Plymouth/REF 2021 Researchers by UoA/UoA11 Computer Science and Informatics
dc.publisher.place: United States
dc.rights.embargoperiod: Not known
rioxxterms.type: Journal Article/Review


All items in PEARL are protected by copyright law.
Author manuscripts deposited to comply with open access mandates are made available in accordance with publisher policies. Please cite only the published version using the details provided on the item record or document. In the absence of an open licence (e.g. Creative Commons), permissions for further reuse of content should be sought from the publisher or author.