Publication: When does aggregating multiple skills with multi-task learning work? A case study in financial NLP
Citations
Leippold, M., Ni, J., Jin, Z., Wang, Q., & Sachan, M. (2023). When does aggregating multiple skills with multi-task learning work? A case study in financial NLP. Proceedings of the Annual Meeting of the Association for Computational Linguistics, 1, 7465–7488. https://aclanthology.org/2023.acl-long.412/
Abstract
Multi-task learning (MTL) aims to achieve a better model by leveraging data and knowledge from multiple tasks. However, MTL does not always work: negative transfer can occur between tasks, especially when aggregating loosely related skills, leaving open the question of when MTL works. Previous studies show that MTL performance can be improved by algorithmic tricks; however, which tasks and skills should be included is less well explored. In this work, we conduct a case study in financial NLP, where multiple datasets exist for s