Abstract

Pronunciation is an important part of speech acquisition, but little attention has been given to the mechanism or mechanisms by which it develops. Speech sound qualities, for example, have just been assumed to develop by simple imitation. In most accounts this is then assumed to be by acoustic matching, with the infant comparing his output to that of his caregiver. There are theoretical and empirical problems with both of these assumptions, and we present a computational model, Elija, that does not learn to pronounce speech sounds this way. Elija starts by exploring the sound making capabilities of his vocal apparatus. Then he uses the natural responses he gets from a caregiver to learn equivalence relations between his vocal actions and his caregiver's speech. We show that Elija progresses from a babbling stage to learning the names of objects. This demonstrates the viability of a non-imitative mechanism in learning to pronounce.
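
The abstract's non-imitative learning loop can be illustrated with a minimal sketch: the learner explores motor actions, a caregiver occasionally responds to speech-like output, and the learner stores associations between its own actions and the caregiver's responses, with no acoustic matching of infant output to adult speech. The sketch below is hypothetical and not from the paper; all names (VocalTract, caregiver_response, learn_equivalences) and the toy reward policy are assumptions made for illustration only.

import random

class VocalTract:
    """Toy articulatory model: a motor action is a tuple of articulator settings."""
    def random_action(self):
        # Exploratory vocal action (babbling stage).
        return tuple(round(random.uniform(0.0, 1.0), 2) for _ in range(3))

def caregiver_response(action):
    """Stand-in for the caregiver: returns a word-like token for some actions,
    None otherwise (the action was not treated as speech-like)."""
    # Placeholder policy: respond to roughly a third of actions with a made-up token.
    return f"token_{hash(action) % 10}" if hash(action) % 3 == 0 else None

def learn_equivalences(n_trials=1000):
    """Associate the learner's own motor actions with the caregiver's responses.
    No comparison of the learner's acoustics to the caregiver's is performed."""
    tract = VocalTract()
    equivalences = {}  # caregiver token -> motor actions that elicited it
    for _ in range(n_trials):
        action = tract.random_action()
        response = caregiver_response(action)
        if response is not None:  # reinforcement: the action drew a response
            equivalences.setdefault(response, []).append(action)
    return equivalences

if __name__ == "__main__":
    table = learn_equivalences()
    # A caregiver token heard later can be mapped back to a known motor action,
    # e.g., to produce the name of an object without imitating its acoustics.
    for token, actions in list(table.items())[:3]:
        print(token, "->", actions[0])

In this toy setting, the learned table plays the role of the equivalence relations described in the abstract: hearing a caregiver form retrieves a motor action the learner can perform, which is one way a non-imitative route to pronunciation can be realized.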

Publication Date

2011-01-01

Publication Title

Motor Control

Volume

15

Issue

1

ISSN

1087-1640

Organisational Unit

School of Engineering, Computing and Mathematics

Keywords

Humans, Imitative Behavior, Infant, Language Development, Neural Networks (Computer), Phonation, Phonetics, Reinforcement (Psychology), Speech, Speech Perception, Speech Recognition Software, Verbal Behavior, Verbal Learning, Vocabulary

First Page

85

Last Page

117
