CochleaNet: A robust language-independent audio-visual model for real-time speech enhancement
Abstract
© 2020 Elsevier B.V. Noisy situations cause huge problems for the hearing-impaired, as hearing aids often make speech more audible but do not always restore intelligibility. In noisy settings, humans routinely exploit the audio-visual (AV) nature of speech to selectively suppress background noise and focus on the target speaker. In this paper, we present a novel language-, noise- and speaker-independent AV deep neural network (DNN) architecture, termed CochleaNet, for causal or real-time speech enhancement (SE). The model jointly exploits noisy acoustic cues and noise-robust visual cues to focus on the desired speaker and improve speech intelligibility. The proposed SE framework is evaluated using a first-of-its-kind AV binaural speech corpus, ASPIRE, recorded in real noisy environments, including cafeteria and restaurant settings. We demonstrate superior performance of our approach, in terms of both objective measures and subjective listening tests, over state-of-the-art SE approaches, including recent DNN-based SE models. In addition, our work challenges the popular belief that the scarcity of a multi-lingual, large-vocabulary AV corpus and a wide variety of noises is a major bottleneck to building robust language-, speaker- and noise-independent SE systems. We show that a model trained on a synthetic mixture of the benchmark GRID corpus (with 33 speakers and a small English vocabulary) and CHiME 3 noises (comprising bus, pedestrian, cafeteria, and street noises) can generalise well, not only to large-vocabulary corpora with a wide variety of speakers and noises, but also to completely unrelated languages such as Mandarin.
Citation
Gogate, M., Dashtipour, K., Adeel, A. and Hussain, A. (2020) CochleaNet: A robust language-independent audio-visual model for real-time speech enhancement, Information Fusion, 63, pp. 273-285.
Publisher
Elsevier
Journal
Information Fusion
Type
Journal article
Language
en
Description
This is an accepted manuscript of an article published by Elsevier in Information Fusion on 21/04/2020, available online at https://doi.org/10.1016/j.inffus.2020.04.001. The accepted version of the publication may differ from the final published version.
ISSN
1566-2535
EISSN
1872-6305
DOI
10.1016/j.inffus.2020.04.001
License
Except where otherwise noted, this item is licensed under https://creativecommons.org/licenses/by-nc-nd/4.0/