ConversationPiece II: Displaced and Rehacked
Abstract: Conversations are amazing! Although we usually find the experience enjoyable and even relaxing, when one considers the difficulties of simultaneously generating signals that convey an intended message while at the same time trying to understand the messages of another, then the pleasures of conversation may seem rather surprising. We manage to communicate with each other without knowing quite what will happen next. We quickly manufacture precisely timed sounds and gestures on the fly, which we exchange with each other without clashing—even managing to slip in some imitations as we go along! Yet usually meaning is all we really notice. In the ConversationPiece project, we aim to transform conversations into musical sounds using neuro-inspired technology to expose the amazing world of sounds people create when talking with others. Sounds from a microphone are separated into different frequency bands by a computer-simulated “ear” (more precisely “basilar membrane”) and analyzed for tone onsets using a lateral-inhibition network, similar to some cortical neural networks. The detected events are used to generate musical notes played on a synthesizer either instantaneously or delayed. The first option allows for exchanging timed sound events between two speakers with a speech-like structure, but without conveying (much) meaning. Delayed feedback further allows self-exploration of one’s own speech. We discuss the current setup (ConversationPiece version II), insights from first experiments, and options for future applications.
AVANT. Trends in Interdisciplinary Studies