Abstract

This paper presents a knowledge-based, data-driven method that uses action-sound coupling data collected from a group of people to generate multiple complex mappings between a musician's performance movements and sound synthesis. The method relies on a database of multimodal motion data, collected from multiple subjects and paired with sound synthesis parameters. A series of sound stimuli is synthesised using the sound engine that will be used in performance. Each participant is asked to listen to each sound stimulus and move as if producing the sound with a musical instrument they are given. Multimodal motion data is recorded during each performance and paired with the synthesis parameters used to generate that stimulus. The resulting dataset is then used to build a topological representation of the subjects' performance movements. This representation is in turn used to interactively generate training data for machine learning algorithms and to define mappings for real-time performance. To illustrate each step of the procedure, we describe an implementation involving clarinet, motion capture, wearable sensor armbands, and waveguide synthesis.
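Since the mapping pipeline is only summarised here, the following minimal Python sketch illustrates the core idea of pairing motion-feature frames with the synthesis parameters of the stimulus being mimed, and querying those pairs at performance time. All names, feature dimensions, and the k-nearest-neighbour regressor are illustrative assumptions, not the authors' implementation (the paper defines its mappings via a topological representation and machine learning).

```python
# Illustrative sketch (assumed structure, not the paper's code): motion
# features recorded while participants mime each sound stimulus are stored
# alongside the synthesis parameters that generated that stimulus; incoming
# performance motion is then mapped to parameters by k-NN regression.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset, one row per recorded frame:
# motion: e.g. mocap + sensor-armband features (8-D here, for illustration)
# params: waveguide synthesis parameters of the mimed stimulus (3-D here)
motion = rng.normal(size=(500, 8))
params = rng.uniform(size=(500, 3))

def map_motion_to_params(query, motion, params, k=5):
    """Estimate synthesis parameters for a new motion frame as the mean
    of the parameters paired with its k nearest recorded neighbours."""
    dists = np.linalg.norm(motion - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return params[nearest].mean(axis=0)

# Real-time use: each incoming frame of performer motion yields a
# parameter vector with which to drive the sound engine.
live_frame = rng.normal(size=8)
print(map_motion_to_params(live_frame, motion, params))
```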

Publication Date

2017-05-15

Publication Title

Proceedings of NIME 2017: New Interfaces for Musical Expression

Organisational Unit

School of Art, Design and Architecture
