Comparing estimators for latent interaction models under structural and distributional misspecifications


Brandt, Holger; Umbach, Nora; Kelava, Augustin; Bollen, Kenneth A. (2019). Comparing estimators for latent interaction models under structural and distributional misspecifications. Psychological Methods: Epub ahead of print.

Abstract

Estimation methods for structural equation models with interactions of latent variables have been compared in several studies, yet none of these studies examined models that were structurally misspecified. Here, the model-implied instrumental variable 2-stage least squares estimator (MIIV-2SLS; Bollen, 1995; Bollen & Paxton, 1998), the 2-stage method of moments estimator (2SMM; Wall & Amemiya, 2003), the nonlinear structural equation mixture model approach (NSEMM; Kelava, Nagengast, & Brandt, 2014), and the unconstrained product indicator approach (UPI; Marsh, Wen, & Hau, 2004) were compared in a Monte Carlo simulation. The design varied whether the structural misspecification in the measurement model involved the scaling indicator, the size of the misspecification, normal versus nonnormal data, the indicators' reliability, and sample size. For structural misspecifications that did not involve the scaling indicator, MIIV-2SLS parameter estimates were less biased than those of 2SMM, NSEMM, and UPI. When reliability was high, the RMSE of all approaches was very similar; for low reliability, the RMSE of MIIV-2SLS became larger than that of the other approaches. When the structural misspecification involved the scaling indicator, all estimators were seriously biased, with the largest bias for MIIV-2SLS. In most scenarios, this bias was more severe for the linear effects than for the interaction effect. The RMSE in conditions with misspecified scaling indicators was smallest for 2SMM, especially in low-reliability scenarios, but the overall magnitude of bias was such that we cannot recommend any of the estimators in this situation. Our article showed the damage done when researchers omit cross-loadings of the scaling indicator and the importance of giving these indicators more attention, particularly when indicator reliability is low. It also showed that no one estimator is superior to the others across all conditions.
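For readers unfamiliar with the product indicator strategy referenced in the abstract, the following is a minimal, hypothetical Python sketch of how unconstrained product indicators (UPI; Marsh, Wen, & Hau, 2004) are typically formed: the indicators of the two latent predictors are mean-centered and multiplied in matched pairs to serve as indicators of the latent interaction term. The variable names, sample size, and simulated data below are illustrative assumptions only and do not reproduce the article's simulation design or code.

```python
import numpy as np

rng = np.random.default_rng(2019)

# Hypothetical data: n observations of three indicators each for two
# latent predictors (x1-x3 for the first, z1-z3 for the second).
n = 500
x = rng.normal(size=(n, 3))   # stand-ins for x1, x2, x3
z = rng.normal(size=(n, 3))   # stand-ins for z1, z2, z3

# UPI-style product indicators: mean-center each indicator, then
# multiply matched pairs (x1*z1, x2*z2, x3*z3); these products serve
# as indicators of the latent interaction factor.
xc = x - x.mean(axis=0)
zc = z - z.mean(axis=0)
product_indicators = xc * zc   # columns: x1z1, x2z2, x3z3

# In a UPI analysis, these product indicators would enter the
# measurement model of the latent interaction term in the SEM.
print(product_indicators[:5].round(3))
```

The matched-pair scheme shown here is one common way to build the product indicators; the estimators compared in the article (MIIV-2SLS, 2SMM, NSEMM) handle the latent interaction differently and do not all require product indicators.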

Additional indexing

Item Type: Journal Article, refereed, original work
Communities & Collections: 06 Faculty of Arts > Institute of Psychology
Dewey Decimal Classification: 150 Psychology
Language: English
Date: 31 October 2019
Deposited On: 27 Nov 2019 15:43
Last Modified: 29 Feb 2020 08:25
Publisher: American Psychological Association
ISSN: 1082-989X
OA Status: Closed
Publisher DOI: https://doi.org/10.1037/met0000231
PubMed ID: 31670539

Download

Full text not available from this repository.