Issue Date
2020-07
Abstract
We propose approaches to Quality Estimation (QE) for Machine Translation that explore both text and visual modalities for Multimodal QE. We compare various multimodality integration and fusion strategies. For both sentence-level and document-level predictions, we show that state-of-the-art neural and feature-based QE frameworks obtain better results when using the additional modality.
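The abstract mentions comparing multimodality integration and fusion strategies without spelling them out here; the snippet below is only a minimal, hypothetical sketch of one generic strategy (early fusion by feature concatenation) for sentence-level QE. The feature shapes, the random placeholder data, and the ridge regressor are all assumptions for illustration, not the frameworks or features used by the authors.

# Illustrative sketch only: a generic early-fusion baseline for sentence-level QE.
# All feature names, dimensions, and the regressor choice are assumptions,
# not the method reported in the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder features: one row per translated sentence.
text_feats = rng.normal(size=(100, 32))    # e.g. sentence-level text features
visual_feats = rng.normal(size=(100, 16))  # e.g. pooled image-embedding features
quality_scores = rng.uniform(size=100)     # e.g. human quality labels in [0, 1]

# Early fusion: concatenate the two modalities into a single feature vector.
fused = np.concatenate([text_feats, visual_feats], axis=1)

# Any regressor can then predict sentence-level quality from the fused features.
model = Ridge(alpha=1.0).fit(fused, quality_scores)
print(model.predict(fused[:5]))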
Citation
Okabe, S., Blain, F. and Specia, L. (2020) Multimodal quality estimation for machine translation. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), Jurafsky, D., Chai, J., Schluter, N. and Tetreault, J. (eds.). Stroudsburg, PA: Association for Computational Linguistics, pp. 1233-1240.
Additional Links
https://www.aclweb.org/anthology/2020.acl-main.114/
Type
Conference contribution
Language
en
Description
© 2020 The Authors. Published by the Association for Computational Linguistics. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher’s website: http://dx.doi.org/10.18653/v1/2020.acl-main.114
ISBN
9781952148255
Sponsors
This work was supported by funding from both the Bergamot project (EU H2020 Grant No. 825303) and the MultiMT project (EU H2020 ERC Starting Grant No. 678017).
DOI
10.18653/v1/2020.acl-main.114
Except where otherwise noted, this item is licensed under the Creative Commons Attribution 4.0 International licence (http://creativecommons.org/licenses/by/4.0/).