Artificially Synthesising Data for Audio Classification and Segmentation to Improve Speech and Music Detection in Radio Broadcast
dc.contributor.author | Venkatesh, S | |
dc.contributor.author | Moffat, David | |
dc.contributor.author | Kirke, Alexis | |
dc.contributor.author | Shakeri, G | |
dc.contributor.author | Brewster, S | |
dc.contributor.author | Fachner, J | |
dc.contributor.author | Odell-Miller, H | |
dc.contributor.author | Street, A | |
dc.contributor.author | Farina, Nicolas | |
dc.contributor.author | Banerjee, Sube | |
dc.contributor.author | Miranda, Eduardo | |
dc.date.accessioned | 2021-03-09T18:10:52Z | |
dc.date.issued | 2021-05-07 | |
dc.identifier.issn | 1520-6149 | |
dc.identifier.issn | 2379-190X | |
dc.identifier.uri | http://hdl.handle.net/10026.1/16930 | |
dc.description.abstract |
Segmenting audio into homogeneous sections such as music and speech helps us understand the content of audio. It is useful as a pre-processing step to index, store, and modify audio recordings, radio broadcasts, and TV programmes. Deep learning models for segmentation are generally trained on copyrighted material, which cannot be shared. Annotating these datasets is time-consuming and expensive, which significantly slows down research progress. In this study, we present a novel procedure that artificially synthesises data that resembles radio signals. We replicate the workflow of a radio DJ in mixing audio and investigate parameters like fade curves and audio ducking. We trained a Convolutional Recurrent Neural Network (CRNN) on this synthesised data and outperformed state-of-the-art algorithms for music-speech detection. This paper demonstrates the data synthesis procedure as a highly effective technique to generate large training sets for deep neural networks. | |
dc.format.extent | 636-640 | |
dc.language.iso | en | |
dc.publisher | IEEE | |
dc.subject | Audio Classification | |
dc.subject | Audio Segmentation | |
dc.subject | Deep Learning | |
dc.subject | Music-speech Detection | |
dc.subject | Training Set Synthesis | |
dc.title | Artificially Synthesising Data for Audio Classification and Segmentation to Improve Speech and Music Detection in Radio Broadcast | |
dc.type | conference | |
dc.type | Conference Proceeding | |
plymouth.author-url | https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000704288400128&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=11bb513d99f797142bcfeffcc58ea008 | |
plymouth.date-start | 2021-06-06 | |
plymouth.date-finish | 2021-06-11 | |
plymouth.volume | 2021-June | |
plymouth.publisher-url | https://ieeexplore.ieee.org/document/9413597 | |
plymouth.conference-name | IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) | |
plymouth.publication-status | Published | |
plymouth.journal | ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) | |
dc.identifier.doi | 10.1109/ICASSP39728.2021.9413597 | |
plymouth.organisational-group | /Plymouth | |
plymouth.organisational-group | /Plymouth/Faculty of Arts, Humanities and Business | |
plymouth.organisational-group | /Plymouth/Faculty of Arts, Humanities and Business/School of Society and Culture | |
plymouth.organisational-group | /Plymouth/Faculty of Health | |
plymouth.organisational-group | /Plymouth/Faculty of Health/Peninsula Medical School | |
plymouth.organisational-group | /Plymouth/Faculty of Health/Peninsula Medical School/PMS - Manual | |
plymouth.organisational-group | /Plymouth/REF 2021 Researchers by UoA | |
plymouth.organisational-group | /Plymouth/REF 2021 Researchers by UoA/UoA03 Allied Health Professions, Dentistry, Nursing and Pharmacy | |
plymouth.organisational-group | /Plymouth/REF 2021 Researchers by UoA/UoA33 Music, Drama, Dance, Performing Arts, Film and Screen Studies | |
plymouth.organisational-group | /Plymouth/Users by role | |
plymouth.organisational-group | /Plymouth/Users by role/Academics | |
plymouth.organisational-group | /Plymouth/Users by role/Researchers in ResearchFish submission | |
dcterms.dateAccepted | 2021-02-01 | |
dc.rights.embargodate | 2021-07-17 | |
dc.identifier.eissn | 2379-190X | |
dc.rights.embargoperiod | Not known | |
rioxxterms.funder | Engineering and Physical Sciences Research Council | |
rioxxterms.identifier.project | Radio Me: Real-time Radio Remixing for people with mild to moderate dementia who live alone, incorporating Agitation Reduction, and Reminders | |
rioxxterms.versionofrecord | 10.1109/ICASSP39728.2021.9413597 | |
rioxxterms.licenseref.uri | http://www.rioxx.net/licenses/all-rights-reserved | |
rioxxterms.licenseref.startdate | 2021-05-07 | |
rioxxterms.type | Conference Paper/Proceeding/Abstract | |
plymouth.funder | Radio Me: Real-time Radio Remixing for people with mild to moderate dementia who live alone, incorporating Agitation Reduction, and Reminders::Engineering and Physical Sciences Research Council | |
atmire.cua.enabled |