Enhancing unsupervised sentence similarity methods with deep contextualised word representations
Abstract
Calculating Semantic Textual Similarity (STS) plays a significant role in many applications, such as question answering, document summarisation, information retrieval and information extraction. All modern state-of-the-art STS methods rely on word embeddings in one way or another. The recently introduced contextualised word embeddings have proved more effective than standard word embeddings in many natural language processing tasks. This paper evaluates the impact of several contextualised word embeddings on unsupervised STS methods and compares them with existing supervised and unsupervised STS methods on datasets covering different languages and different domains.
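For context, a common unsupervised STS baseline (not necessarily the exact method evaluated in this paper) scores a sentence pair by the cosine similarity of mean-pooled word vectors. A minimal sketch with toy 3-dimensional vectors follows; the embeddings here are hypothetical placeholders, whereas in practice they would come from a contextualised model such as ELMo or BERT:

```python
import math

def mean_pool(vectors):
    """Average a sentence's word vectors into a single sentence vector."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for two short sentences (placeholder values,
# not output from a real embedding model).
sent1 = [[1.0, 0.0, 1.0], [0.5, 0.5, 0.0]]
sent2 = [[0.9, 0.1, 0.8], [0.4, 0.6, 0.1]]

sim = cosine(mean_pool(sent1), mean_pool(sent2))
print(round(sim, 3))
```

With contextualised embeddings, the per-token vectors vary with the surrounding sentence, but the pooling-and-cosine scoring step stays the same.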
Citation
Ranasinghe, T., Orasan, C. and Mitkov, R. (2019) Enhancing unsupervised sentence similarity methods with deep contextualised word representations, RANLP 2019, 2nd-4th September 2019, Varna, Bulgaria.
Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by-nc-nd/4.0/