Abstract
Predicting Machine Translation (MT) quality can help in many practical tasks such as MT post-editing. The performance of Quality Estimation (QE) methods has drastically improved recently with the introduction of neural approaches to the problem. However, thus far neural approaches have only been designed for word- and sentence-level prediction. We present a neural framework that is able to accommodate neural QE approaches at these fine-grained levels and generalize them to the level of documents. We test the framework with two sentence-level neural QE approaches: a state-of-the-art approach that requires extensive pre-training, and a new light-weight approach that we propose, which employs basic encoders. Our approach is significantly faster and yields performance improvements for a range of document-level quality estimation tasks. To our knowledge, this is the first neural architecture for document-level QE. In addition, for the first time we apply QE models to the output of both statistical and neural MT systems for a series of European languages and highlight the new challenges resulting from the use of neural MT.
Citation
Ive, J., Blain, F. and Specia, L. (2018) deepQuest: a framework for neural-based quality estimation. In Proceedings of the 27th International Conference on Computational Linguistics, Bender, E. M., Derczynski, L., Isabelle, P. (eds.), Stroudsburg, PA: Association for Computational Linguistics.
Description
© 2018 The Authors. Published by the Association for Computational Linguistics. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher's website: https://www.aclweb.org/anthology/C18-1266/
Code available at: https://github.com/sheffieldnlp/deepQuest
Sponsors
The development of deepQuest received funding from the European Association for Machine Translation and the Amazon Academic Research Awards program. The first author worked on this paper during a research stay at the University of Sheffield.
License
Except where otherwise noted, this item's license is described at http://creativecommons.org/licenses/by/4.0/