Model choice for generalized linear mixed models is difficult because it involves selecting both fixed and random effects. Classical criteria such as Akaike's information criterion (AIC) are often unsuitable for the latter task, and criteria that work well in linear mixed models are difficult to extend to the generalized case, especially for overdispersed data. A predictive leave-one-out cross-validation approach based on proper scoring rules is suggested that can be applied to the choice of both fixed and random effects, even in models with overdispersion. An attractive feature of this approach is that the model needs to be fitted only once, which makes computation fast and convenient. Since the leave-one-out predictive distribution cannot be calculated analytically, it is shown how an iteratively weighted least squares algorithm combined with analytic approximations can be used for this task. A simulation study and two applications of the methodology to binary and count data are provided, as well as comparisons with two other methods.
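To fix ideas, the following toy sketch illustrates leave-one-out evaluation with the logarithmic score, one of the proper scoring rules in question. It uses an i.i.d. Bernoulli model with a smoothed empirical predictive probability as a stand-in for the model-based leave-one-out predictive distribution; the function name, the smoothing constant, and the data are illustrative choices, not part of the methodology described above (in particular, no mixed-model fitting or IWLS approximation is shown).

```python
import math

def loo_log_score(y):
    """Mean leave-one-out logarithmic score for i.i.d. Bernoulli data.

    For each observation, the predictive success probability is estimated
    from the remaining observations (with add-half smoothing to avoid
    log(0)). Smaller mean scores indicate better predictive performance.
    """
    n = len(y)
    scores = []
    for i in range(n):
        rest = y[:i] + y[i + 1:]
        p = (sum(rest) + 0.5) / (len(rest) + 1.0)  # smoothed predictive probability
        p_obs = p if y[i] == 1 else 1.0 - p        # probability of the held-out outcome
        scores.append(-math.log(p_obs))            # logarithmic score
    return sum(scores) / n

# Example: six binary outcomes
y = [1, 0, 1, 1, 0, 1]
print(round(loo_log_score(y), 4))  # → 0.8214
```

In a model-based version, the smoothed empirical proportion would be replaced by the leave-one-out predictive distribution of the fitted mixed model, which is where the analytic approximations mentioned above come in.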