Abstract
Brain-computer interfaces (BCIs) aim to establish a communication channel with computers that bypasses motor pathways, relying solely on brain activity. This project investigates how a BCI can be directed toward music: such a system would allow people with motor impairments to engage in creative activities and would offer musicians a novel dimension of performance. BCI systems face multiple challenges in real-world settings, chiefly the information transfer rate and the deployability of the hardware. Because BCI research is mainly medically focused, brain-activity measurement devices are designed for laboratory conditions and are poorly suited to portable musical scenarios. This research focuses on electroencephalography (EEG), one of the most widely used measurement methods, and evaluates, among the different EEG paradigms and decoding processes, the most feasible pipeline for a prototype that can control a musical instrument. Speech Imagery (SI) is the selected EEG paradigm because, unlike other established methods, it does not depend on externally evoked potentials; the SI decoder instead extracts spatial information from the signal differences between attempts to imagine pronouncing vowels or phonemes aloud. The project proposes a set of 8 dry electrodes to increase system portability. The research first compares a traditional decoding pipeline based on Common Spatial Patterns and a Support Vector Machine with a relatively new decoder based on Riemannian-metric classification of covariance matrices. Having selected the latter as the better candidate for online implementation, the research then evaluates a set of imagery tasks to analyze decoding performance and select the best-performing configurations.
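The Riemannian decoding approach described above can be sketched as a minimum-distance-to-mean classifier over trial covariance matrices. The following is an illustrative sketch, not the thesis implementation: it uses the affine-invariant Riemannian distance, approximates class centroids with the log-Euclidean mean, and substitutes synthetic data for EEG trials; all function names and parameters are hypothetical.

```python
import numpy as np

def spd_logm(M):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    M = (M + M.T) / 2  # enforce symmetry against numerical drift
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def riemann_dist(A, B):
    """Affine-invariant Riemannian distance between SPD matrices."""
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = (V * (w ** -0.5)) @ V.T
    return np.linalg.norm(spd_logm(A_inv_sqrt @ B @ A_inv_sqrt))

def class_centroid(covs):
    """Log-Euclidean mean: exponential of the mean of matrix logs
    (an approximation of the true Riemannian geometric mean)."""
    mean_log = np.mean([spd_logm(C) for C in covs], axis=0)
    w, V = np.linalg.eigh(mean_log)
    return (V * np.exp(w)) @ V.T

def classify(trial_cov, centroids):
    """Assign the trial to the class with the nearest centroid."""
    return min(centroids, key=lambda k: riemann_dist(centroids[k], trial_cov))

# Synthetic example: 8 channels, 1-second trials, two imagery classes.
rng = np.random.default_rng(0)
def random_trials(scale, n=20, ch=8, samples=250):
    X = rng.standard_normal((n, ch, samples)) * scale
    return [x @ x.T / samples for x in X]  # sample covariance per trial

covs_e = random_trials(scale=1.0)
covs_o = random_trials(scale=2.0)
centroids = {"/e/": class_centroid(covs_e), "/o/": class_centroid(covs_o)}
print(classify(covs_e[0], centroids))
```

In practice a library such as pyriemann provides these primitives with a proper iterative geometric mean; the sketch only shows why the approach suits an online system: classifying a new window costs one covariance estimate and a handful of distance computations.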
The final prototype uses as speech imagery tasks the rhythmic imagination of vowel /e/, the rhythmic imagination of vowel /o/, and single imagery of the word /mid/. The system converts 1 second of EEG into a covariance matrix whose distance to each class centroid, obtained in offline analysis, is then compared in a decision tree that selects a MIDI outcome sent to an external analogue synthesizer. The system achieved accuracies of up to 87% when detecting resting state against any imagery task; the overall accuracy of the online system for 4 classes was 57%, significantly higher than the 42% chance level. The system attained a transfer rate of 3.4 bit/min, which allowed the participant to have a novel musical experience.
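The online decision step can be illustrated in miniature: given the distances from a 1-second covariance matrix to each class centroid, a small decision tree first separates rest from imagery and then maps the winning class to a MIDI note. The note numbers, threshold, and distances below are all fabricated for illustration; the thesis does not specify them here.

```python
# Hypothetical MIDI note numbers for each imagery class.
NOTE_MAP = {"/e/": 60, "/o/": 64, "/mid/": 67}

def decide(dists, rest_margin=0.5):
    """Toy decision tree: test rest vs. any imagery first, then pick
    the nearest imagery class. `rest_margin` is an assumed threshold."""
    imagery = {k: d for k, d in dists.items() if k != "rest"}
    if dists["rest"] + rest_margin < min(imagery.values()):
        return None  # resting state: emit no MIDI message
    return NOTE_MAP[min(imagery, key=imagery.get)]

# Fabricated centroid distances for two consecutive 1-second windows.
print(decide({"rest": 2.1, "/e/": 0.8, "/o/": 1.5, "/mid/": 1.9}))  # → 60
print(decide({"rest": 0.2, "/e/": 1.8, "/o/": 1.5, "/mid/": 1.9}))  # → None
```

In a live system the returned note number would be wrapped in a MIDI note-on message (e.g. via a library such as mido) and sent to the synthesizer.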
Keywords
Brain Computer Interface, Brain Computer Music Interface, Speech Imagery
Document Type
Thesis
Publication Date
2022
Recommended Citation
Tates Puetate, A. (2022) Investigation into a Brain-Computer Interface System for Music with Speech Imagery. Thesis. University of Plymouth. Retrieved from https://pearl.plymouth.ac.uk/sc-theses/50