Abstract

Pitch is one of the most important features of natural sounds, underlying the perception of melody in music and prosody in speech. However, the temporal dynamics of pitch processing are still poorly understood. Previous studies suggest that the auditory system uses a wide range of time scales to integrate pitch-related information and that the effective integration time is both task- and stimulus-dependent. None of the existing models of pitch processing can account for such task- and stimulus-dependent variations in processing time scales. This study presents an idealized neurocomputational model, which provides a unified account of the multiple time scales observed in pitch perception. The model is evaluated using a range of perceptual studies, which have not previously been accounted for by a single model, and new results from a neurophysiological experiment. In contrast to other approaches, the current model contains a hierarchy of integration stages and uses feedback to adapt the effective time scales of processing at each stage in response to changes in the input stimulus. The model has features in common with a hierarchical generative process and suggests a key role for efferent connections from central to sub-cortical areas in controlling the temporal dynamics of pitch processing.
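
As a rough illustration of the architecture described above (a hierarchy of integration stages whose effective time scales are adapted by feedback), the Python sketch below implements a hypothetical cascade of leaky integrators in which a mismatch detected at a higher stage temporarily shortens the time constant of the stage below. The stage count, time constants, and adaptation rule are illustrative assumptions and are not taken from the published model.

import numpy as np

def adaptive_cascade(x, dt=1e-3, taus_baseline=(0.01, 0.1, 1.0),
                     tau_min_frac=0.1, gain=5.0, recovery=2.0):
    """Hypothetical cascade of leaky integrators with feedback-adapted
    time constants (an illustrative sketch, not the published model)."""
    n_stages = len(taus_baseline)
    taus = np.array(taus_baseline, dtype=float)   # effective time constants
    states = np.zeros(n_stages)                   # integrator outputs
    trace = np.zeros((len(x), n_stages))

    for t, sample in enumerate(x):
        inp = sample
        for k in range(n_stages):
            # Leaky integration with the current (adapted) time constant.
            states[k] += dt / taus[k] * (inp - states[k])
            inp = states[k]

        # Feedback: a large mismatch between adjacent stages (a proxy for a
        # stimulus change) shortens the lower stage's time constant;
        # otherwise it relaxes back toward its long baseline.
        for k in range(n_stages - 1):
            mismatch = abs(states[k] - states[k + 1])
            target = taus_baseline[k] * max(tau_min_frac, 1.0 - gain * mismatch)
            taus[k] += dt * recovery * (target - taus[k])

        trace[t] = states
    return trace

Running this on a step input, for example adaptive_cascade(np.r_[0.2 * np.ones(500), 0.8 * np.ones(500)]), shows the lower stages re-integrating more quickly just after the change than their baseline time constants would allow, which is the qualitative behaviour the abstract attributes to feedback from central to sub-cortical stages.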

DOI

10.1371/journal.pcbi.1000301

Publication Date

2009-03-06

Publication Title

PLoS Comput Biol

Volume

5

Issue

3

Publisher

Public Library of Science (PLoS)

ISSN

1553-7358

Embargo Period

2024-11-22

Comments

PMCID: PMC2639722

Keywords

Auditory Perception; Feedback; Humans; Models, Theoretical; Music; Speech

First Page

e1000301
