Self attended stack pointer networks for learning long term dependencies
Issue Date
2021-03-31
Abstract
We propose a novel deep neural architecture for dependency parsing, built upon a Transformer Encoder (Vaswani et al., 2017) and a Stack Pointer Network (Ma et al., 2018). We first encode each sentence with a Transformer Network; a Stack Pointer Network then generates the dependency graph by selecting the head of each word in the sentence through a head-selection process. We evaluate our model on Turkish and English treebanks. The results show that our transformer-based model learns long-term dependencies more efficiently than sequential models such as recurrent neural networks. Our self-attended stack pointer network improves the UAS score by around 6% over the LSTM-based stack pointer network (Ma et al., 2018) for Turkish sentences longer than 20 words.
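To make the head-selection formulation concrete, the sketch below (not the authors' implementation) shows a Transformer encoder feeding a pointer-style scorer that, for every word, scores each position in the sentence as a candidate head. It assumes PyTorch; the class name, layer sizes, toy vocabulary, and greedy decoding are illustrative choices only, and the stack-based decoding of Ma et al. (2018) is simplified here to direct head selection.

import torch
import torch.nn as nn

# Illustrative sketch only: hyperparameters and the simplified decoder are
# assumptions, not values taken from the paper.
class SelfAttendedHeadSelector(nn.Module):
    def __init__(self, vocab_size, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Separate projections for the "dependent" and "candidate head" roles.
        self.dep_proj = nn.Linear(d_model, d_model)
        self.head_proj = nn.Linear(d_model, d_model)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len); position 0 is assumed to be an artificial ROOT token.
        h = self.encoder(self.embed(token_ids))   # contextual representations (batch, seq, d_model)
        dep = self.dep_proj(h)                    # each word viewed as a dependent
        head = self.head_proj(h)                  # each word viewed as a candidate head
        # Pointer-style scores: row i holds the score of every position as the head of word i.
        return dep @ head.transpose(1, 2)         # (batch, seq, seq)

# Toy usage: pick the highest-scoring head for every word (greedy head selection).
model = SelfAttendedHeadSelector(vocab_size=1000)
tokens = torch.randint(0, 1000, (1, 7))           # one toy "sentence" of 7 token ids incl. ROOT
predicted_heads = model(tokens).argmax(dim=-1)    # index of the chosen head for each word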
Citation
Tuc, S. and Can, B. (in press) Self attended stack pointer networks for learning long term dependencies, ICON 2020: 17th International Conference on Natural Language Processing, 18th-21st December, 2020. Online.
Additional Links
http://www.iitp.ac.in/~ai-nlp-ml/icon2020/proceedings.html
Type
Conference contribution
Language
en
Description
This is an accepted manuscript of an article published by ACL in the proceedings of the 17th International Conference on Natural Language Processing (in press). The accepted version of the publication may differ from the final published version.
Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by/4.0/