Publication: Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering
Citations
Schimanski, T., Ni, J., Kraus, M., Ash, E., & Leippold, M. (2024). Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering. Proceedings of the Annual Meeting of the Association for Computational Linguistics, 1, 1913–1931. https://doi.org/10.18653/v1/2024.acl-long.105
Abstract
Advances towards more faithful and traceable answers of Large Language Models (LLMs) are crucial for various research and practical endeavors. One avenue in reaching this goal is basing the answers on reliable sources. However, this Evidence-Based QA has proven to work insufficiently with LLMs in terms of citing the correct sources (source quality) and truthfully representing the information within sources (answer attributability). In this work, we systematically investigate how to robustly fine-tune LLMs for better source quality and […]