
“Garbage In, Garbage Out”: Mitigating Human Biases in Data Entry by Means of Artificial Intelligence


Eckhardt, Sven; Knaeble, Merlin; Bucher, Andreas; Staehelin, Dario; Dolata, Mateusz; Agotai, Doris; Schwabe, Gerhard (2023). “Garbage In, Garbage Out”: Mitigating Human Biases in Data Entry by Means of Artificial Intelligence. In: INTERACT 2023 19th IFIP TC13 International Conference, York, UK, 28 August 2023 - 1 September 2023. Springer, 27-48.

Abstract

Current HCI research often focuses on mitigating algorithmic biases. While such algorithmic fairness during model training is worthwhile, we see fit to mitigate human cognitive biases earlier, namely during data entry. We developed a conversational agent with voice-based data entry and visualization to support financial consultations, which are human-human settings with information asymmetries. In a pre-study, we reveal data-entry biases in advisors by a quantitative analysis of 5 advisors consulting 15 clients in total. Our main study evaluates the conversational agent with 12 advisors and 24 clients. A thematic analysis of interviews shows that advisors introduce biases by “feeling” and “forgetting” data. Additionally, the conversational agent makes financial consultations more transparent and automates data entry. These findings may be transferred to various dyads, such as doctor visits. Finally, we stress that AI not only poses a risk of becoming a mirror of human biases but also has the potential to intervene in the early stages of data entry.



Additional indexing

Item Type: Conference or Workshop Item (Paper), refereed, original work
Communities & Collections: 03 Faculty of Economics > Department of Informatics
Dewey Decimal Classification: 000 Computer science, knowledge & systems
Scopus Subject Areas: Physical Sciences > Theoretical Computer Science; Physical Sciences > General Computer Science
Scope: Discipline-based scholarship (basic research)
Language: English
Event End Date: 1 September 2023
Deposited On: 10 Oct 2023 10:03
Last Modified: 06 Mar 2024 14:40
Publisher: Springer
Series Name: Lecture Notes in Computer Science
Number: 14144
ISSN: 0302-9743
ISBN: 978-3-031-42286-7
OA Status: Closed
Publisher DOI: https://doi.org/10.1007/978-3-031-42286-7_2
Other Identification Number: merlin-id:24108