Show simple item record

dc.contributor.author: Gajbhiye, Amit
dc.contributor.author: Fomicheva, Marina
dc.contributor.author: Alva-Manchego, Fernando
dc.contributor.author: Blain, Frederic
dc.contributor.author: Obamuyide, Abiola
dc.contributor.author: Aletras, Nikolaos
dc.contributor.author: Specia, Lucia
dc.date.accessioned: 2021-06-08T09:45:56Z
dc.date.available: 2021-06-08T09:45:56Z
dc.date.issued: 2021-08-01
dc.identifier.citation: Gajbhiye, A., Fomicheva, M., Alva-Manchego, F., Blain, F., Obamuyide, A., Aletras, N. & Specia, L. (2021) Knowledge distillation for quality estimation. In: Zong, C., Xia, F., Li, W. and Navigli, R. (eds.) Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 01-06 Aug 2021, Bangkok, Thailand (virtual conference). Association for Computational Linguistics (ACL), pp. 5091-5099.
dc.identifier.doi: 10.18653/v1/2021.findings-acl.452
dc.identifier.uri: http://hdl.handle.net/2436/624102
dc.description: © 2021 The Authors. Published by ACL. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher’s website: https://aclanthology.org/2021.findings-acl.452
dc.description.abstract: Quality Estimation (QE) is the task of automatically predicting Machine Translation quality in the absence of reference translations, making it applicable in real-time settings, such as translating online social media conversations. Recent success in QE stems from the use of multilingual pre-trained representations, where very large models lead to impressive results. However, the inference time, disk and memory requirements of such models do not allow for wide usage in the real world. Models trained on distilled pre-trained representations remain prohibitively large for many usage scenarios. We instead propose to directly transfer knowledge from a strong QE teacher model to a much smaller model with a different, shallower architecture. We show that this approach, in combination with data augmentation, leads to light-weight QE models that perform competitively with distilled pre-trained representations with 8x fewer parameters.
dc.format: application/pdf
dc.language.iso: en
dc.publisher: Association for Computational Linguistics
dc.relation.url: https://2021.aclweb.org/
dc.subject: quality estimation
dc.subject: machine translation
dc.subject: knowledge distillation
dc.title: Knowledge distillation for quality estimation
dc.type: Conference contribution
dc.date.updated: 2021-06-07T13:44:55Z
dc.conference.name: 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
pubs.finish-date: 2021-08-04
pubs.start-date: 2021-08-02
dc.date.accepted: 2021-05-06
rioxxterms.funder: University of Wolverhampton
rioxxterms.identifier.project: UOW08062021FB
rioxxterms.version: VoR
rioxxterms.licenseref.uri: https://creativecommons.org/licenses/by/4.0/
rioxxterms.licenseref.startdate: 2021-08-01
dc.source.beginpage: 5091
dc.source.endpage: 5099
refterms.dateFCD: 2021-06-08T09:43:44Z
refterms.versionFCD: VoR
refterms.dateFOA: 2021-08-01T00:00:00Z
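
As a companion to the abstract above, the following is a minimal sketch of the kind of distillation it describes: a small, shallow student model trained to regress onto sentence-level quality scores produced by a large, frozen QE teacher. The architecture, names and hyper-parameters below are illustrative assumptions for a PyTorch setup, not the authors' actual configuration; see the full paper via the DOI above for the method as published.

# Illustrative sketch (assumptions, not the paper's implementation): the
# student is a small BiLSTM regressor, and "scores" stands in for quality
# scores produced by a large frozen QE teacher on (possibly augmented) data.
import torch
import torch.nn as nn

class StudentQE(nn.Module):
    # Hypothetical light-weight student: embedding layer + BiLSTM + linear head.
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, token_ids):
        states, _ = self.rnn(self.emb(token_ids))
        # Mean-pool over time steps, then predict one quality score per sentence.
        return self.head(states.mean(dim=1)).squeeze(-1)

def distill_step(student, optimizer, token_ids, teacher_scores):
    # One distillation step: minimise MSE between the student's predictions and
    # the teacher's sentence-level scores (no gold labels required).
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(student(token_ids), teacher_scores)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for tokenised source-translation
# pairs and the teacher's scores.
student = StudentQE()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
tokens = torch.randint(0, 10000, (8, 40))   # batch of 8 sequences, 40 tokens each
scores = torch.rand(8)                      # teacher's quality scores
print(distill_step(student, opt, tokens, scores))

As the abstract notes, the teacher's predictions can also be collected over augmented data, which is what allows a student this small to remain competitive with models built on distilled pre-trained representations.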


Files in this item

Name: 2021.findings-acl.452.pdf
Size: 350.7 KB
Format: PDF



Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by/4.0/