From ChatGPT to FactGPT: A Participatory Design Study to Mitigate the Effects of Large Language Model Hallucinations on Users


Leiser, Florian; Eckhardt, Sven; Knaeble, Merlin; Maedche, Alexander; Schwabe, Gerhard; Sunyaev, Ali (2023). From ChatGPT to FactGPT: A Participatory Design Study to Mitigate the Effects of Large Language Model Hallucinations on Users. In: MuC '23: Mensch und Computer 2023, Rapperswil, Switzerland, 3-6 September 2023. ACM Digital Library, 81-90.

Abstract

Large language models (LLMs) like ChatGPT have recently gained interest across all walks of life due to the human-like quality of their textual responses. Despite their success in research, healthcare, and education, LLMs frequently include incorrect information, called hallucinations, in their responses. These hallucinations could lead users to trust fake news or change their general beliefs. Therefore, we investigate mitigation strategies desired by users to enable the identification of LLM hallucinations. To achieve this goal, we conduct a participatory design study in which everyday users design interface features that are then assessed for their feasibility by machine learning (ML) experts. We find that many of the desired features are well received by ML experts but are also considered difficult to implement. Finally, we provide a list of desired features that should serve as a basis for mitigating the effect of LLM hallucinations on users.

Statistics

Citations

2 citations in Web of Science®
1 citation in Scopus®

Downloads

1 download since deposited on 07 Feb 2024
1 download in the last 12 months

Additional indexing

Item Type: Conference or Workshop Item (Paper), not refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Scopus Subject Areas: Physical Sciences > Human-Computer Interaction; Physical Sciences > Computer Networks and Communications; Physical Sciences > Computer Vision and Pattern Recognition; Physical Sciences > Software
Scope: Discipline-based scholarship (basic research)
Language: English
Event End Date: 6 September 2023
Deposited On: 07 Feb 2024 16:13
Last Modified: 06 Mar 2024 14:41
Publisher: ACM Digital Library
Series Name: Mensch und Computer Conference Proceedings
ISBN: 979-8-4007-0771-1
OA Status: Closed
Publisher DOI: https://doi.org/10.1145/3603555.3603565
Other Identification Number: merlin-id:24361