Lip-reading driven deep learning approach for speech enhancement
dc.contributor.author | Adeel, Ahsan | |
dc.contributor.author | Gogate, Mandar | |
dc.contributor.author | Hussain, Amir | |
dc.contributor.author | Whitmer, William M | |
dc.date.accessioned | 2019-10-22T11:16:30Z | |
dc.date.available | 2019-10-22T11:16:30Z | |
dc.date.issued | 2019-09-05 | |
dc.identifier.citation | Adeel, A., Gogate, M., Hussain, A. and Whitmer, W. M. (2019) Lip-reading driven deep learning approach for speech enhancement, IEEE Transactions on Emerging Topics in Computational Intelligence (forthcoming) | en |
dc.identifier.issn | 2471-285X | en |
dc.identifier.doi | 10.1109/tetci.2019.2917039 | en |
dc.identifier.uri | http://hdl.handle.net/2436/622874 | |
dc.description.abstract | This paper proposes a novel lip-reading driven deep learning framework for speech enhancement. The proposed approach leverages the complementary strengths of deep learning and analytical acoustic modelling (a filtering-based approach), in contrast to recently published, comparatively simpler benchmark approaches that rely only on deep learning. The proposed audio-visual (AV) speech enhancement framework operates at two levels. At the first level, a novel deep learning-based lip-reading regression model is employed. At the second level, the lip-reading approximated clean-audio features are exploited, using an enhanced, visually-derived Wiener filter (EVWF), to estimate the clean audio power spectrum. Specifically, a stacked long short-term memory (LSTM) based lip-reading regression model is designed to estimate clean audio features using only temporal visual features, considering different numbers of prior visual frames. For clean speech spectrum estimation, a new filterbank-domain EVWF is formulated, which exploits the estimated speech features. The proposed EVWF is compared with conventional Spectral Subtraction and Log-Minimum Mean-Square Error methods using both ideal AV mapping and LSTM-driven AV mapping. The potential of the proposed speech enhancement framework is evaluated under dynamic, real-world, commercially-motivated scenarios (e.g. cafe, public transport, pedestrian area) at different SNR levels (ranging from low to high SNRs) using the benchmark Grid and ChiME3 corpora. For objective testing, perceptual evaluation of speech quality is used to evaluate the quality of the restored speech. For subjective testing, the standard mean-opinion-score method is used with inferential statistics. Comparative simulation results demonstrate significant lip-reading and speech enhancement improvements in terms of both speech quality and speech intelligibility. | en
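Note: the abstract describes a two-stage pipeline in which an LSTM-based lip-reading regression model estimates clean audio features and an enhanced visually-derived Wiener filter (EVWF) uses them to recover the clean speech spectrum. The short Python/NumPy sketch below only illustrates the generic Wiener-gain step, assuming a clean power-spectrum estimate is already available (e.g. from such an AV model); the function name, the residual noise-power approximation and all parameters are illustrative assumptions, not the paper's filterbank-domain EVWF formulation.

    # Illustrative, generic Wiener-gain sketch (not the paper's filterbank-domain EVWF).
    import numpy as np

    def wiener_enhance(noisy_stft, clean_power_est, eps=1e-10):
        """noisy_stft: complex STFT of noisy speech, shape (freq, frames).
        clean_power_est: estimated clean power spectrum, same shape
        (assumed here to come from a lip-reading/AV regression model).
        Returns the enhanced complex STFT."""
        noisy_power = np.abs(noisy_stft) ** 2
        # Noise power approximated as the non-negative residual of noisy minus clean.
        noise_power = np.maximum(noisy_power - clean_power_est, eps)
        # Classical Wiener gain: clean / (clean + noise), applied per time-frequency bin.
        gain = clean_power_est / (clean_power_est + noise_power)
        return gain * noisy_stft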
dc.description.sponsorship | UK Engineering and Physical Sciences Research Council (EPSRC) Grant No. EP/M026981/1. | en |
dc.format | application/pdf | en
dc.language.iso | en | en |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en |
dc.relation.url | https://ieeexplore.ieee.org/document/8825842 | en |
dc.subject | lip reading | en |
dc.subject | stacked long-short-term memory | en |
dc.subject | enhanced visually-derived Wiener filtering | en |
dc.subject | context-aware audio-visual speech enhancement | en |
dc.subject | audio-visual ChiME3 corpus | en |
dc.title | Lip-reading driven deep learning approach for speech enhancement | en |
dc.type | Journal article | en |
dc.identifier.journal | IEEE Transactions on Emerging Topics in Computational Intelligence | en |
dc.date.updated | 2019-09-29T16:19:34Z | |
dc.date.accepted | 2019-04-28 | |
rioxxterms.funder | University of Wolverhampton | en |
rioxxterms.identifier.project | EP/M026981/1 | en |
rioxxterms.version | AM | en |
rioxxterms.licenseref.uri | https://creativecommons.org/licenses/by/4.0/ | en |
rioxxterms.licenseref.startdate | 2019-10-22 | en |
dc.source.beginpage | 1 | |
dc.source.endpage | 10 | |
dc.description.version | Accepted version |
refterms.dateFCD | 2019-10-22T11:16:15Z | |
refterms.versionFCD | AM | |
refterms.dateFOA | 2019-10-22T11:16:30Z |