Show simple item record

dc.contributor.author: Venkatesh, S
dc.contributor.author: Moffat, D
dc.contributor.author: Miranda, Eduardo
dc.date.accessioned: 2022-09-26T18:31:25Z
dc.date.issued: 2022-09-12
dc.identifier.issn: 1549-4950
dc.identifier.uri: http://hdl.handle.net/10026.1/19638
dc.description.abstract:

In recent years, machine learning has been widely adopted to automate the audio mixing process. Automatic mixing systems have been applied to various audio effects, such as gain adjustment, equalization, and reverberation. These systems can be controlled through visual interfaces, by providing audio examples, by using knobs, or with semantic descriptors. Using semantic descriptors, or textual information, to control these systems is an effective way for artists to communicate their creative goals. In this paper, we explore the novel idea of using word embeddings to represent semantic descriptors. Word embeddings are generally obtained by training neural networks on large corpora of written text. These embeddings serve as the input layer of a neural network that translates words into EQ settings. With this technique, the model can also generate EQ settings for semantic descriptors it has never seen before. We evaluate the quality of the predictions by comparing the network's EQ settings with those made by humans. The results show that the embedding layer enables the neural network to understand semantic descriptors. Models with embedding layers perform better than those without, although they still fall short of human labels.
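As a rough illustration of the idea the abstract describes (and not the authors' actual model), the sketch below maps a semantic descriptor's word embedding through a small feed-forward network to per-band EQ gains. The embedding table, dimensions, and weights here are hypothetical placeholders; a real system would use pretrained embeddings (e.g., word2vec or GloVe) and weights trained on human EQ settings.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings for a few descriptors.
# Real embeddings are typically 100-300 dimensions and come from
# models trained on large text corpora.
EMBEDDINGS = {
    "warm":   np.array([ 0.8,  0.1, -0.3,  0.2]),
    "bright": np.array([-0.6,  0.7,  0.4, -0.1]),
    "boomy":  np.array([ 0.9, -0.5, -0.2,  0.4]),
}

rng = np.random.default_rng(0)

# Tiny feed-forward network: embedding -> hidden layer -> gains (dB)
# for five EQ bands. Weights are random here for illustration only.
W1 = rng.normal(size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 5))
b2 = np.zeros(5)

def predict_eq(descriptor: str) -> np.ndarray:
    """Map a semantic descriptor to one gain value per EQ band."""
    x = EMBEDDINGS[descriptor]   # embedding lookup (input layer)
    h = np.tanh(x @ W1 + b1)     # hidden layer
    return h @ W2 + b2           # predicted band gains

print(predict_eq("warm"))  # five predicted band gains
```

Because embeddings place semantically similar words near each other, a descriptor absent from the training labels can still be embedded and fed through the network, which is how such a model can generalize to unseen descriptors.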

dc.format.extent: 753-763
dc.language.iso: en
dc.publisher: Audio Engineering Society
dc.subject: Audio Mixing
dc.subject: Automatic Mixing
dc.subject: Equalization
dc.subject: Semantic Word Vectors
dc.title: Word Embeddings for Automatic Equalization in Audio Mixing
dc.type: journal-article
dc.type: Journal Article
plymouth.issue: 9
plymouth.volume: 70
plymouth.publisher-url: http://www.aes.org/e-lib/browse.cfm?elib=21887
plymouth.publication-status: Published online
plymouth.journal: Journal of the Audio Engineering Society
dc.identifier.doi: 10.17743/jaes.2022.0047
plymouth.organisational-group: /Plymouth
plymouth.organisational-group: /Plymouth/Faculty of Arts, Humanities and Business
plymouth.organisational-group: /Plymouth/Users by role
plymouth.organisational-group: /Plymouth/Users by role/Academics
dcterms.dateAccepted: 2022-07-25
dc.rights.embargodate: 2022-10-01
dc.rights.embargoperiod: Not known
rioxxterms.funder: Engineering and Physical Sciences Research Council
rioxxterms.identifier.project: Radio Me: Real-time Radio Remixing for people with mild to moderate dementia who live alone, incorporating Agitation Reduction, and Reminders
rioxxterms.version: Version of Record
rioxxterms.versionofrecord: 10.17743/jaes.2022.0047
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved
rioxxterms.licenseref.startdate: 2022-09-12
rioxxterms.type: Journal Article/Review
plymouth.funder: Radio Me: Real-time Radio Remixing for people with mild to moderate dementia who live alone, incorporating Agitation Reduction, and Reminders::Engineering and Physical Sciences Research Council

