"Publication and citation rankings have become major indicators of the scientific worth of universities and countries, and determine to a large extent the careers of individual scholars. We argue that such rankings do not effectively measure research quality, which should be the essence of evaluation. For that reason, an alternative ranking is developed as a quality indicator, based on membership on academic editorial boards of professional journals. It turns out that especially the ranking of individual scholars is far from objective. The results differ markedly depending on whether research quantity or research quality is considered. Even quantity rankings are not objective; two citation rankings, based on different samples, produce entirely different results. It follows that any career decisions based on rankings are dominated by chance and do not reflect research quality. Instead of propagating a ranking based on board membership as the gold standard, we suggest that committees make use of this quality indicator to find members who, in turn, evaluate the research quality of individual scholars."