On Biasing Transformer Attention Towards Monotonicity
Date
2021
Citations
Rios, A., Amrhein, C., Aepli, N., & Sennrich, R. (2021). On Biasing Transformer Attention Towards Monotonicity. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 4474–4488. https://www.aclweb.org/anthology/2021.naacl-main.354
Abstract
Many sequence-to-sequence tasks in natural language processing are roughly monotonic in the alignment between source and target sequence, and previous work has facilitated or enforced learning of monotonic attention behavior via specialized attention functions or pretraining. In this work, we introduce a monotonicity loss function that is compatible with standard attention mechanisms and test it on several sequence-to-sequence tasks: grapheme-to-phoneme conversion, morphological inflection, transliteration, and dialect normalization.
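As a rough illustration of the idea described in the abstract (a minimal sketch under assumptions of my own, not necessarily the exact loss used in the paper), a monotonicity penalty on standard soft attention can discourage the expected attended source position from moving backwards as decoding proceeds through the target sequence:

```python
# Hedged sketch of a monotonicity loss over soft attention weights.
# Assumptions (not from the paper): attention weights have shape
# (batch, tgt_len, src_len) and each target step's weights sum to 1.

import torch


def monotonicity_loss(attn: torch.Tensor) -> torch.Tensor:
    """Penalise backward movement of the expected source position."""
    # Index of each source position: 0, 1, ..., src_len - 1.
    positions = torch.arange(attn.size(-1), dtype=attn.dtype, device=attn.device)
    # Expected source position attended to at each target step: (batch, tgt_len).
    expected_pos = (attn * positions).sum(dim=-1)
    # Positive values mean the expected position moved backwards.
    backward_steps = expected_pos[:, :-1] - expected_pos[:, 1:]
    return torch.relu(backward_steps).mean()


# Usage sketch: add the penalty to the task loss with a weighting factor,
# e.g. loss = cross_entropy + lambda_mono * monotonicity_loss(attn_weights)
```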
Additional indexing
Creators (Authors)
Rios, A.; Amrhein, C.; Aepli, N.; Sennrich, R.
Event Title
2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Event Location
Event Start Date
Event End Date
Page range/Item number
4474–4488
Page end
4488
Item Type
In collections
Dewey Decimal Classification
Language
English
Date available
OA Status
Free Access at
https://www.aclweb.org/anthology/2021.naacl-main.354