Publication: Emotional prompting amplifies disinformation generation in AI large language models
Citations
Vinay, R., Spitale, G., Biller-Andorno, N., & Germani, F. (2025). Emotional prompting amplifies disinformation generation in AI large language models. Frontiers in Artificial Intelligence, 8, 1543603. https://doi.org/10.3389/frai.2025.1543603
Abstract
INTRODUCTION: The emergence of artificial intelligence (AI) large language models (LLMs), which can produce text that closely resembles human-written content, presents both opportunities and risks. While these developments offer significant opportunities for improving communication, such as in health-related crisis communication, they also pose substantial risks by facilitating the creation of convincing fake news and disinformation. The widespread dissemination of AI-generated disinformation adds complexity to the existing challenges