Assessment center construct-related validity: a look from different angles


Wirz-Rodella, Andreja. Assessment center construct-related validity: a look from different angles. 2012, Universität Zürich, Faculty of Arts.

Abstract

Assessment Centers (ACs) are a diagnostic tool that serves as a basis for decisions in personnel selection and employee development. In view of the far-reaching consequences that AC ratings can have, it is important that these ratings are accurate. We therefore need to understand what AC ratings measure and how the measurement of dimensions, that is, construct-related validity, can be improved. The aims of this thesis are to contribute to the understanding of the construct-related validity of ACs and to provide practical guidance in this regard. Three studies were conducted, each offering a different perspective on rating accuracy and AC construct-related validity. The first study investigated whether increasing assessor team size can compensate for missing assessor expertise (i.e., assessor training and assessor background), and vice versa, to improve rating accuracy. On the basis of dimension ratings from a laboratory setting (N = 383), we simulated assessor teams of different sizes. Of the factors considered, assessor training was most effective in improving rating accuracy, and it could only partly be compensated for by increasing assessor team size. In contrast, increasing the size of the assessor team could compensate for missing expertise related to assessor background. In the second study, the effects of exercise similarity on AC construct-related and criterion-related validity were examined simultaneously. Data from a simulated graduate AC (N = 92) revealed that exercise similarity was beneficial for construct-related validity but did not affect criterion-related validity. These results indicate that improvements in one aspect of validity are not always paralleled by improvements in the other. The third study examined whether relating AC overall dimension ratings to external evaluations of the same dimensions can provide evidence for the construct-related validity of ACs. Confirmatory factor analyses of data from three independent samples (Ns = 428, 121, and 92) yielded source factors but no dimension factors in the latent factor structure of AC overall dimension ratings and external dimension ratings. This means that different sources provide different perspectives on candidates’ performance, and that AC overall dimension ratings and external dimension ratings cannot be attributed to the purported dimensions. Taken as a whole, this thesis looked at AC construct-related validity from different angles. The reported findings contribute to the understanding of rating accuracy and the construct-related validity of ACs.
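
As a reading aid, the following is a minimal sketch in Python, using invented data rather than the thesis' actual procedure or sample, of the general idea behind the first study's approach: assessor teams of different sizes are simulated by averaging randomly drawn individual dimension ratings, and rating accuracy is indexed by how closely the team averages track a reference score. All names and numbers in the sketch are hypothetical.

    # Minimal sketch (not the thesis' actual procedure): simulate assessor
    # teams of different sizes by averaging randomly drawn individual ratings
    # and check how team size relates to rating accuracy.
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical data: true dimension scores for 50 candidates and noisy
    # ratings from 383 individual assessors (both invented for illustration).
    n_candidates, n_assessors = 50, 383
    true_scores = rng.normal(size=n_candidates)
    ratings = true_scores[None, :] + rng.normal(scale=1.0, size=(n_assessors, n_candidates))

    def team_accuracy(team_size: int, n_teams: int = 1000) -> float:
        """Mean correlation between simulated team averages and true scores."""
        accuracies = []
        for _ in range(n_teams):
            team = rng.choice(n_assessors, size=team_size, replace=False)
            team_mean = ratings[team].mean(axis=0)
            accuracies.append(np.corrcoef(team_mean, true_scores)[0, 1])
        return float(np.mean(accuracies))

    for size in (1, 2, 4, 8):
        print(f"team size {size}: mean accuracy r = {team_accuracy(size):.2f}")

In this toy setting, averaging over more assessors reduces rating noise with diminishing returns, which is the kind of team-size effect the first study weighs against assessor expertise.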


Statistics

Downloads

417 downloads since deposited on 17 Dec 2012
70 downloads in the past 12 months

Additional indexing

Item Type: Dissertation
Referees: Kleinmann Martin, Lievens Filip
Communities & Collections: 06 Faculty of Arts > Institute of Psychology
Dewey Decimal Classification: 150 Psychology
Language: English
Date: 2012
Deposited On: 17 Dec 2012 18:51
Last Modified: 05 Apr 2016 16:13

Download

Download PDF 'Assessment center construct-related validity: a look from different angles'.
Filetype: PDF
Size: 491 kB