
dc.contributor.supervisor  Kirke, Alexis
dc.contributor.author  Pearce-Davies, Samuel Louis
dc.contributor.other  Faculty of Arts, Humanities and Business  en_US
dc.date.accessioned  2019-12-10T15:03:41Z
dc.date.available  2019-12-10T15:03:41Z
dc.date.issued  2019
dc.identifier  10599194  en_US
dc.identifier.uri  http://hdl.handle.net/10026.1/15240
dc.description.abstract  en_US

This thesis presents efforts to lay the foundations for an artificial-intelligence musical composition system conceived on principles similar to those of DeepDream, a revolutionary computer vision process. The proposed system would engage in stylistic feature transfer between existing musical pieces and, eventually, compose original music either autonomously or in collaboration with human musicians and composers. In this thesis, the analysis and feature-recognition systems necessary for this long-term goal are constructed using neural networks.

DeepDream originally came about as a way of visualising the weights inside neural network layers (matrices of variables containing the data that determines what information the network has learned), to support the understanding and troubleshooting of networks trained to classify images. This approach spawned an unexpectedly artistic process whereby feature recognition could be used to alter images in a dreamlike fashion, akin to seeing shapes in clouds.
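
To make the mechanism concrete, the core of DeepDream can be sketched as gradient ascent on an input image, nudging pixels so that a chosen layer's activations grow stronger. The sketch below is an illustration only, assuming PyTorch and torchvision are available; the VGG16 network, layer index and step size are hypothetical choices for illustration, not details taken from the thesis.

```python
# A minimal DeepDream-style sketch: gradient ascent on the input image
# to amplify whatever features a chosen layer already responds to.
# VGG16, layer_index and lr are illustrative assumptions.
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the image is optimised, not the weights

def deep_dream(img, layer_index=20, steps=20, lr=0.05):
    """img: a (1, 3, H, W) tensor preprocessed for VGG16."""
    img = img.clone().requires_grad_(True)
    for _ in range(steps):
        x = img
        for i, layer in enumerate(model):
            x = layer(x)
            if i == layer_index:
                break
        loss = x.norm()  # stronger activations at this layer = higher loss
        loss.backward()
        with torch.no_grad():
            # normalised gradient step keeps the update size stable
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    return img.detach()
```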

The proposed musical version of this process involves analysing sound files and generating spectrograms: pictures of the sound that can be manipulated in much the same way as regular images. As described in this thesis, a sizeable bank of sound samples, consisting of individual musical notes from a selection of instruments, has been gathered in pursuit of this application of the DeepDream architecture.
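
As a rough illustration of the spectrogram step, the sketch below reads a WAV file and renders its spectrogram as an image. The abstract does not name the tooling used for this, so SciPy and Matplotlib stand in here, and the file name and STFT window size are hypothetical.

```python
# A minimal sketch of turning a sound sample into a spectrogram image.
# "violin_C4.wav" and nperseg are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("violin_C4.wav")  # hypothetical sample file
if samples.ndim > 1:
    samples = samples.mean(axis=1)             # mix stereo down to mono

freqs, times, Sxx = spectrogram(samples, fs=rate, nperseg=1024)
Sxx_db = 10 * np.log10(Sxx + 1e-10)            # power to decibels

plt.pcolormesh(times, freqs, Sxx_db, shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.savefig("violin_C4_spectrogram.png", dpi=150)
```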

These samples are curated, edited and analysed to produce spectrograms that make up a dataset for neural network training. Using the Python programming language and its machine-learning library scikit-learn, a rudimentary deep learning system is constructed and trained to classify the sample spectrograms. Once this is complete, additional tests are performed to assess the validity and effectiveness of the approach.
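
The classification step maps naturally onto scikit-learn's MLPClassifier, the multi-layer perceptron named in the title. The sketch below assumes flattened spectrograms as feature vectors and instrument names as labels; the array files, hidden-layer sizes and train/test split are illustrative assumptions rather than the thesis's actual configuration.

```python
# A minimal sketch of training an MLP to classify spectrograms with
# scikit-learn. File names and hyperparameters are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.load("spectrograms.npy")  # hypothetical: (n_samples, n_pixels)
y = np.load("labels.npy")        # hypothetical: instrument name per sample

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=500)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.3f}")
```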

dc.language.iso  en
dc.publisher  University of Plymouth
dc.subject.classification  ResM  en_US
dc.title  Sonic Analysis for Machine Learning: Multi-Layer Perceptron Training using Spectrograms  en_US
dc.type  Thesis
plymouth.version  publishable  en_US
dc.identifier.doi  http://dx.doi.org/10.24382/628
dc.rights.embargoperiod  No embargo  en_US
dc.type.qualification  Masters  en_US
rioxxterms.version  NA

