Show simple item record

dc.contributor.author: Emmett, MH
dc.contributor.author: Wennekers, T
dc.contributor.author: Denham, S
dc.date.accessioned: 2017-11-20T14:19:43Z
dc.date.available: 2017-11-20T14:19:43Z
dc.date.issued: 2017-11-21
dc.identifier.issn: 2082-6710
dc.identifier.uri: http://hdl.handle.net/10026.1/10233
dc.description.abstract: Conversations are amazing! Although we usually find the experience enjoyable and even relaxing, when one considers the difficulties of simultaneously generating signals that convey an intended message while at the same time trying to understand the messages of another, then the pleasures of conversation may seem rather surprising. We manage to communicate with each other without knowing quite what will happen next. We quickly manufacture precisely timed sounds and gestures on the fly, which we exchange with each other without clashing, even managing to slip in some imitations as we go along! Yet usually meaning is all we really notice. In the ConversationPiece project, we aim to transform conversations into musical sounds using neuro-inspired technology to expose the amazing world of sounds people create when talking with others. Sounds from a microphone are separated into different frequency bands by a computer-simulated "ear" (more precisely "basilar membrane") and analyzed for tone onsets using a lateral-inhibition network, similar to some cortical neural networks. The detected events are used to generate musical notes played on a synthesizer either instantaneously or delayed. The first option allows for exchanging timed sound events between two speakers with a speech-like structure, but without conveying (much) meaning. Delayed feedback further allows self-exploration of one's own speech. We discuss the current setup (ConversationPiece version II), insights from first experiments, and options for future applications.
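
The abstract describes a concrete signal chain: a filterbank standing in for the basilar membrane splits the microphone signal into frequency bands, a lateral-inhibition stage sharpens per-band onset detection, and detected onsets trigger synthesizer notes, either immediately or after a delay. The Python sketch below illustrates one way such a chain could look; it is a minimal reconstruction under stated assumptions. The band spacing, smoothing constant, inhibition weight, threshold, and band-to-pitch mapping are illustrative guesses, not the published ConversationPiece II parameters, and note events are printed rather than sent to a synthesizer.

# Minimal sketch (assumptions flagged above): filterbank "ear" ->
# per-band onset detection with lateral inhibition -> note events.
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000                                    # sample rate in Hz (assumption)
N_BANDS = 8                                   # number of cochlear-like channels
EDGES = np.geomspace(100.0, 4000.0, N_BANDS + 1)  # log-spaced band edges, Hz

def band_envelopes(x):
    """Split x into bands and return smoothed per-band energy envelopes."""
    envs = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        b, a = butter(2, [lo, hi], btype="band", fs=FS)
        y = lfilter(b, a, x)                  # band-limited signal
        env = np.maximum(y, 0.0)              # half-wave rectification
        alpha = np.exp(-1.0 / (0.01 * FS))    # ~10 ms one-pole smoothing
        envs.append(lfilter([1.0 - alpha], [1.0, -alpha], env))
    return np.stack(envs)                     # shape: (N_BANDS, len(x))

def detect_onsets(envs, inhibition=0.5, threshold=1e-4):
    """Onset map: positive energy rise, minus inhibition from neighbours."""
    rise = np.maximum(np.diff(envs, axis=1), 0.0)
    neighbours = np.zeros_like(rise)
    neighbours[1:] += rise[:-1]               # inhibition from band below
    neighbours[:-1] += rise[1:]               # inhibition from band above
    return (rise - inhibition * neighbours) > threshold

def to_note_events(onsets, base_pitch=48):
    """Map band onsets to (time_s, midi_pitch); one fixed pitch per band."""
    events = []
    for band, row in enumerate(onsets):
        edges = np.flatnonzero(row[1:] & ~row[:-1])   # rising edges only
        events += [(t / FS, base_pitch + 3 * band) for t in edges]
    return sorted(events)

if __name__ == "__main__":
    t = np.arange(FS) / FS                    # 1 s synthetic test signal
    x = np.sin(2 * np.pi * 440.0 * t) * (t > 0.5)  # tone starting at 0.5 s
    for when, pitch in to_note_events(detect_onsets(band_envelopes(x)))[:5]:
        print(f"onset at {when:.3f} s -> MIDI pitch {pitch}")

Replacing the print loop with MIDI output (e.g. via a library such as mido) would give the instantaneous playback mode the abstract describes; buffering the events before playback would give the delayed, self-exploration mode.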

dc.format.extent: 205-211
dc.language.iso: en
dc.publisher: The Centre for Philosophical Research
dc.subject: conversation
dc.subject: dialogue
dc.subject: performance
dc.subject: sonification
dc.subject: sound analysis
dc.title: ConversationPiece II: Displaced and Rehacked
dc.type: journal-article
dc.type: Article
plymouth.issue: Special
plymouth.volume: VIII
plymouth.publication-status: Published online
plymouth.journal: AVANT. Trends in Interdisciplinary Studies
dc.identifier.doi: 10.26913/80s02017.0111.0019
plymouth.organisational-group: /Plymouth
plymouth.organisational-group: /Plymouth/Admin Group - REF
plymouth.organisational-group: /Plymouth/Admin Group - REF/REF Admin Group - FoSE
plymouth.organisational-group: /Plymouth/Faculty of Arts, Humanities and Business
plymouth.organisational-group: /Plymouth/Faculty of Health
plymouth.organisational-group: /Plymouth/Faculty of Health/School of Psychology
plymouth.organisational-group: /Plymouth/Faculty of Science and Engineering
plymouth.organisational-group: /Plymouth/REF 2021 Researchers by UoA
plymouth.organisational-group: /Plymouth/REF 2021 Researchers by UoA/UoA04 Psychology, Psychiatry and Neuroscience
plymouth.organisational-group: /Plymouth/REF 2021 Researchers by UoA/UoA11 Computer Science and Informatics
plymouth.organisational-group: /Plymouth/REF 2021 Researchers by UoA/UoA32 Art and Design: History, Practice and Theory
plymouth.organisational-group: /Plymouth/Research Groups
plymouth.organisational-group: /Plymouth/Research Groups/Centre for Brain, Cognition and Behaviour (CBCB)
plymouth.organisational-group: /Plymouth/Research Groups/Centre for Brain, Cognition and Behaviour (CBCB)/Brain
plymouth.organisational-group: /Plymouth/Users by role
plymouth.organisational-group: /Plymouth/Users by role/Academics
dcterms.dateAccepted: 2017-09-26
dc.identifier.eissn: 2082-6710
dc.rights.embargoperiod: Not known
rioxxterms.versionofrecord: 10.26913/80s02017.0111.0019
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved
rioxxterms.licenseref.startdate: 2017-11-21
rioxxterms.type: Journal Article/Review



