Abstract
Languages exhibit variation at all linguistic levels, from phonology to the lexicon to syntax. Importantly, that variation tends to be (at least partially) conditioned on some aspect of the social or linguistic context. When variation is unconditioned, language learners regularize it – eliminating some or all competing variants, or conditioning variant use on context. Previous studies using artificial language learning experiments have documented regularizing behavior in the learning of lexical, morphological, and syntactic variation. These studies implicitly assume that regularization reflects uniform mechanisms and processes across linguistic levels. However, studies of natural language learning and of pidgin/creole formation suggest that morphological and syntactic variation may be treated differently. In particular, there is evidence that morphological variation may be more susceptible to regularization. Here we provide the first systematic comparison of the strength of regularization across these two linguistic levels. In line with previous studies, we find that the presence of a favored variant can induce different degrees of regularization. However, when input languages are carefully matched – with equal initial variability and no variant-specific biases – regularization can be comparable across morphology and word order. This holds regardless of whether the task is explicitly communicative. Overall, our findings suggest a single overarching regularizing mechanism at work, with apparent differences between levels likely due to differences in inherent complexity or to variant-specific biases. Differences between production and encoding in our tasks further suggest that this mechanism is driven by production.