Abstract

We previously proposed a non-imitative account of learning to pronounce, implemented computationally using discovery and mirrored interaction with a caregiver. Our model used an infant vocal tract synthesizer, and its articulators were driven by a simple motor system. During an initial phase, motor patterns develop that represent potentially useful speech sounds. To increase the realism of this model, we now include some of the constraints imposed by speech breathing. We also implement a more sophisticated motor system. Firstly, it can independently control articulator movement over different timescales, which is necessary to effectively control respiration as well as prosody. Secondly, we implement a two-tier hierarchical representation of motor patterns so that more complex patterns can be built up from simpler sub-units. We show that our model can learn different onset times and durations for articulator movements and can synchronize its respiratory cycle with utterance production. Finally, we show that the model can pronounce utterances composed of sequences of speech sounds.

Publication Date

2008-01-01

Publication Title

Proceedings of ISSP 2008 - 8th International Seminar on Speech Production

Organisational Unit

School of Engineering, Computing and Mathematics

First Page

165

Last Page

168
