Abstract
While artificial intelligence (AI) is increasingly applied in decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived, and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and the level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This is reflected in participants' reliance on AI: AI recommendations and decisions are accepted more often than those of the human expert. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.