Abstract
Large language models such as ChatGPT can generate seemingly human-like text. However, it is still not well understood how their output differs from human-written text, or to what degree prompting shapes its linguistic profile. In this paper, we instruct ChatGPT to complete, explain, and create texts in English and German across journalistic, scientific, and clinical domains. Within each task, we assign a corpus-specific persona to the system setting as part of the prompt. We extract a large number of linguistic features and perform statistical and qualitative comparisons across human–model text pairs. Our results show that prompting has a larger impact on English output than on German output. Even basic features such as mean word length distinctly set human-written and generated texts apart. Readability metrics indicate that ChatGPT overcomplicates English texts, particularly in the clinical domain, while generated German texts suffer from excessive morpho-syntactic standardization coupled with lexical simplification.
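The abstract mentions surface features such as mean word length and readability metrics. The sketch below is only an illustration of how two such features might be computed, not the paper's actual feature-extraction pipeline; the regex tokenization and the naive vowel-group syllable heuristic are simplifying assumptions, and the Flesch Reading Ease formula shown applies to English only.

```python
import re

def mean_word_length(text: str) -> float:
    """Average character length of alphabetic tokens."""
    words = re.findall(r"[A-Za-zÄÖÜäöüß]+", text)
    return sum(len(w) for w in words) / len(words) if words else 0.0

def _syllables(word: str) -> int:
    """Very rough English syllable count based on vowel groups (assumption)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease for English text, using the naive syllable heuristic."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        return 0.0
    asl = len(words) / len(sentences)                      # average sentence length
    asw = sum(_syllables(w) for w in words) / len(words)   # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

sample = "ChatGPT can generate seemingly human-like text. Its output still differs from ours."
print(mean_word_length(sample), flesch_reading_ease(sample))
```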