Self attended stack pointer networks for learning long term dependencies
Tuç, Salih ; Can, Burcu
Issue Date
2020-12-31
Abstract
We propose a novel deep neural architecture for dependency parsing, built upon a Transformer encoder (Vaswani et al., 2017) and a Stack Pointer Network (Ma et al., 2018). We first encode each sentence with the Transformer encoder, and the dependency graph is then generated by the Stack Pointer Network, which selects the head of each word in the sentence through a head-selection process. We evaluate our model on Turkish and English treebanks. The results show that our Transformer-based model learns long-term dependencies more efficiently than sequential models such as recurrent neural networks. Our self-attended stack pointer network improves the UAS score by around 6% over the LSTM-based stack pointer network (Ma et al., 2018) on Turkish sentences longer than 20 words.
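To make the architecture described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it encodes a sentence with a self-attention (Transformer) encoder and then scores every candidate head for each word, i.e. a single head-selection pass rather than the full stack-pointer decoding of Ma et al. (2018). All module names, dimensions, and the query/key scoring layer are illustrative assumptions, written against PyTorch.

```python
# Minimal sketch (assumed PyTorch; not the authors' code): Transformer encoding
# followed by pointer-style head selection over the encoded tokens.
import torch
import torch.nn as nn

class SelfAttendedHeadSelector(nn.Module):
    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # Illustrative scorer: each word (query) attends over candidate heads (keys).
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices
        h = self.encoder(self.embed(token_ids))       # (batch, seq, d_model)
        q = self.query(h)                             # dependent representations
        k = self.key(h)                               # candidate-head representations
        scores = torch.matmul(q, k.transpose(1, 2))   # (batch, seq, seq) head scores
        return scores.log_softmax(dim=-1)             # head distribution per word

# Usage: pick the highest-scoring head index for each word.
model = SelfAttendedHeadSelector(vocab_size=1000)
ids = torch.randint(0, 1000, (2, 12))                # two toy sentences of 12 tokens
heads = model(ids).argmax(dim=-1)                    # (2, 12) predicted head positions
```

In the paper's full model the heads are produced by a Stack Pointer Network decoder over the Transformer-encoded sentence; the sketch above only illustrates how self-attention replaces a recurrent encoder for head selection.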
Citation
Tuç, S. and Can, B. (2020) Self attended stack pointer networks for learning long term dependencies, ICON 2020: 17th International Conference on Natural Language Processing, pp. 90-100. 18th-21st December 2020, Patna, India. Online.
Type
Conference contribution
Language
en
Description
© 2020 The Authors. Published by ACL. This is an open access article available under a Creative Commons licence.
The published version can be accessed at the following link on the publisher’s website: https://aclanthology.org/2020.icon-main.12