Publication:

Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering

Date
2024
Conference or Workshop Item
Published version
cris.lastimport.scopus: 2025-06-29T03:44:53Z
cris.virtual.orcid: https://orcid.org/0000-0001-5983-2360
cris.virtualsource.orcid: 0331cda6-e903-4e22-9b44-f89f54f581dc
dc.contributor.institution: University of Zurich
dc.date.accessioned: 2025-02-03T13:26:44Z
dc.date.available: 2025-02-03T13:26:44Z
dc.date.issued: 2024-08-31
dc.description.abstract

Advances towards more faithful and traceable answers of Large Language Models (LLMs) are crucial for various research and practical endeavors. One avenue for reaching this goal is basing the answers on reliable sources. However, this Evidence-Based QA has proven to work insufficiently with LLMs in terms of citing the correct sources (source quality) and truthfully representing the information within sources (answer attributability). In this work, we systematically investigate how to robustly fine-tune LLMs for better source quality and answer attributability. Specifically, we introduce a data generation pipeline with automated data quality filters, which can synthesize diversified, high-quality training and testing data at scale. We further introduce four test sets to benchmark the robustness of fine-tuned specialist models. Extensive evaluation shows that fine-tuning on synthetic data improves performance on both in- and out-of-distribution data. Furthermore, we show that data quality, which can be drastically improved by the proposed quality filters, matters more than quantity in improving Evidence-Based QA.

dc.identifier.doi: 10.18653/v1/2024.acl-long.105
dc.identifier.scopus: 2-s2.0-85204490687
dc.identifier.uri: https://www.zora.uzh.ch/handle/20.500.14742/227662
dc.language.iso: eng
dc.subject.ddc: 330 Economics
dc.title

Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering

dc.type: conference_item
dcterms.accessRights: info:eu-repo/semantics/openAccess
dcterms.bibliographicCitation.journaltitle: Proceedings of the Annual Meeting of the Association for Computational Linguistics
dcterms.bibliographicCitation.number: 1
dcterms.bibliographicCitation.originalpublishername: Association for Computational Linguistics
dcterms.bibliographicCitation.pageend: 1931
dcterms.bibliographicCitation.pagestart: 1913
dspace.entity.type: Publication
oairecerif.event.country: Thailand
oairecerif.event.endDate: 2024-08-16
oairecerif.event.place: Bangkok
oairecerif.event.startDate: 2024-08-11
uzh.contributor.affiliation: University of Zurich
uzh.contributor.affiliation: University of Zurich, ETH Zürich
uzh.contributor.affiliation: Universität Regensburg
uzh.contributor.affiliation: ETH Zürich
uzh.contributor.affiliation: University of Zurich, Swiss Finance Institute
uzh.contributor.author: Schimanski, Tobias
uzh.contributor.author: Ni, Jingwei
uzh.contributor.author: Kraus, Mathias
uzh.contributor.author: Ash, Elliott
uzh.contributor.author: Leippold, Markus
uzh.contributor.correspondence: Yes
uzh.contributor.correspondence: No
uzh.contributor.correspondence: No
uzh.contributor.correspondence: No
uzh.contributor.correspondence: No
uzh.document.availability: published_version
uzh.eprint.datestamp: 2025-02-03 13:26:44
uzh.eprint.lastmod: 2025-02-04 21:01:18
uzh.eprint.statusChange: 2025-02-03 13:26:44
uzh.event.presentationType: paper
uzh.event.title: The 62nd Annual Meeting of the Association for Computational Linguistics
uzh.event.type: conference
uzh.harvester.eth: Yes
uzh.harvester.nb: No
uzh.identifier.doi: 10.5167/uzh-270644
uzh.jdb.eprintsId: 48195
uzh.oastatus.unpaywall: green
uzh.oastatus.zora: Green
uzh.publication.citation: Schimanski, Tobias; Ni, Jingwei; Kraus, Mathias; Ash, Elliott; Leippold, Markus (2024). Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering. In: The 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, 11 August 2024 - 16 August 2024. Association for Computational Linguistics, 1913-1931.
uzh.publication.freeAccessAt: doi
uzh.publication.originalwork: original
uzh.publication.publishedStatus: final
uzh.publication.scope: disciplinebased
uzh.publication.seriesTitle: Proceedings of the Annual Meeting of the Association for Computational Linguistics
uzh.scopus.impact: 3
uzh.scopus.subjects: Computer Science Applications
uzh.scopus.subjects: Linguistics and Language
uzh.scopus.subjects: Language and Linguistics
uzh.workflow.chairSubject: oecIBF1
uzh.workflow.doaj: uzh.workflow.doaj.false
uzh.workflow.eprintid: 270644
uzh.workflow.fulltextStatus: public
uzh.workflow.revisions: 20
uzh.workflow.rightsCheck: offen
uzh.workflow.source: Crossref:10.18653/v1/2024.acl-long.105
uzh.workflow.status: archive
Files

Original bundle

Name: 2024.acl_long.105.pdf
Size: 667.76 KB
Format: Adobe Portable Document Format