
dc.contributor.author: Venkatesh, S
dc.contributor.author: Moffat, D
dc.contributor.author: Kirke, A
dc.contributor.author: Shakeri, G
dc.contributor.author: Brewster, S
dc.contributor.author: Fachner, J
dc.contributor.author: Odell-Miller, H
dc.contributor.author: Street, A
dc.contributor.author: Farina, N
dc.contributor.author: Banerjee, S
dc.contributor.author: Miranda, ER
dc.date.accessioned: 2021-03-09T18:10:52Z
dc.date.issued: 2021-05-07
dc.identifier.uri: http://hdl.handle.net/10026.1/16930
dc.description: No embargo required.
dc.description.abstract: Segmenting audio into homogeneous sections such as music and speech helps us understand the content of audio. It is useful as a pre-processing step to index, store, and modify audio recordings, radio broadcasts, and TV programmes. Deep learning models for segmentation are generally trained on copyrighted material, which cannot be shared. Annotating these datasets is time-consuming and expensive, which significantly slows research progress. In this study, we present a novel procedure that artificially synthesises data resembling radio signals. We replicate the workflow of a radio DJ in mixing audio and investigate parameters such as fade curves and audio ducking. We trained a Convolutional Recurrent Neural Network (CRNN) on this synthesised data and outperformed state-of-the-art algorithms for music-speech detection. This paper demonstrates that the data synthesis procedure is a highly effective technique for generating large training sets for deep neural networks.
dc.language.iso: en
dc.publisher: IEEE
dc.subject: Audio Classification
dc.subject: Audio Segmentation
dc.subject: Deep Learning
dc.subject: Music-speech Detection
dc.subject: Training Set Synthesis
dc.title: Artificially Synthesising Data for Audio Classification and Segmentation to Improve Speech and Music Detection in Radio Broadcast
dc.type: Conference Contribution
plymouth.date-start: 2021-06-06
plymouth.date-finish: 2021-06-11
plymouth.publisher-url: https://ieeexplore.ieee.org/document/9413597
plymouth.conference-name: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
dc.identifier.doi: 10.1109/ICASSP39728.2021.9413597
plymouth.organisational-group: /Plymouth
plymouth.organisational-group: /Plymouth/Faculty of Arts, Humanities and Business
plymouth.organisational-group: /Plymouth/Faculty of Arts, Humanities and Business/School of Society and Culture
plymouth.organisational-group: /Plymouth/REF 2021 Researchers by UoA
plymouth.organisational-group: /Plymouth/REF 2021 Researchers by UoA/UoA33 Music, Drama, Dance, Performing Arts, Film and Screen Studies
plymouth.organisational-group: /Plymouth/Users by role
plymouth.organisational-group: /Plymouth/Users by role/Academics
plymouth.organisational-group: /Plymouth/Users by role/Post-Graduate Research Students
dcterms.dateAccepted: 2021-02-01
dc.rights.embargodate: 2021-07-17
dc.identifier.eissn: 2379-190X
dc.rights.embargoperiod: Not known
rioxxterms.versionofrecord: 10.1109/ICASSP39728.2021.9413597
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved
rioxxterms.licenseref.startdate: 2021-05-07
rioxxterms.type: Conference Paper/Proceeding/Abstract
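
The abstract above describes the synthesis procedure only at a high level: DJ-style mixing of music and speech, fade curves, and audio ducking. Below is a minimal, hypothetical numpy sketch of that idea; the sample rate, curve shapes, ducking gain, and the fade/dj_mix names are illustrative assumptions, not the authors' actual pipeline.

    import numpy as np

    SR = 22050  # assumed sample rate for this sketch

    def fade(n, curve="exponential"):
        """Rising 0 -> 1 envelope over n samples; curve shapes are illustrative."""
        t = np.linspace(0.0, 1.0, n)
        if curve == "linear":
            return t
        return (np.exp(t) - 1.0) / (np.e - 1.0)

    def dj_mix(music, speech, start, fade_s=1.0, duck_gain=0.2):
        """Overlay speech on a music bed: duck the music under the speech and
        fade it back up afterwards, loosely imitating a radio DJ talking over
        a track. Assumes the fade regions fall inside the music array."""
        n = int(fade_s * SR)
        end = start + len(speech)
        gain = np.ones(len(music))
        gain[start - n:start] = 1.0 - (1.0 - duck_gain) * fade(n)    # fade down
        gain[start:end] = duck_gain                                  # hold ducked
        gain[end:end + n] = duck_gain + (1.0 - duck_gain) * fade(n)  # fade back up
        mix = music * gain
        mix[start:end] += speech
        return mix, gain  # the gain curve doubles as a frame-level label source

    # Example: 5 s of "speech" over a 30 s "music" bed (noise stand-ins for real audio).
    music = 0.1 * np.random.randn(30 * SR)
    speech = 0.1 * np.random.randn(5 * SR)
    mix, gain = dj_mix(music, speech, start=10 * SR)

Because every mix is generated from known components, frame-level music/speech labels fall out of the synthesis for free, which is what lets the procedure sidestep the manual annotation cost the abstract highlights.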

