
Is more structure really better? A comparison of frame-of-reference training and descriptively anchored rating scales to improve interviewers' rating quality


Melchers, K G; Lienhardt, N; Von Aarburg, M; Kleinmann, Martin (2011). Is more structure really better? A comparison of frame-of-reference training and descriptively anchored rating scales to improve interviewers' rating quality. Personnel Psychology, 64(1):53-87.

Abstract

This study provides the first comparison of 2 methods proposed to increase the structure of selection interviews: frame-of-reference (FOR) rater training for interviewers and providing interviewers with descriptively anchored rating scales. In contrast to descriptively anchored rating scales, evidence for the efficacy of FOR training for interviewers is still missing even though its effects have been established in other domains. To evaluate the effectiveness of the 2 methods, we used a 2 × 2 design in which both methods were manipulated independently. Participants observed and rated different interviewees’ performance in a set of videotaped interviews. We found that both methods led to substantial, and comparable, improvements in both rating accuracy and interrater reliability in comparison to a control condition in which neither method was used. Furthermore, even though both methods have the same aim (i.e., enhancing the evaluation process by providing a common evaluative standard for raters), combining both methods led to further improvements in rating accuracy beyond the effects of the individual methods. Practical implications for selection interviews are discussed.


Statistics

Citations

13 citations in Web of Science®
14 citations in Scopus®

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 06 Faculty of Arts > Institute of Psychology
Dewey Decimal Classification: 150 Psychology
Language: English
Date: 2011
Deposited On: 07 Nov 2011 12:17
Last Modified: 05 Apr 2016 15:05
Publisher: Wiley-Blackwell
ISSN: 0031-5826
Publisher DOI: https://doi.org/10.1111/j.1744-6570.2010.01202.x

Download

Full text not available from this repository.

