DOI: 10.3724/SP.J.1041.2019.00383

Acta Psychologica Sinica (心理学报), 2019, 51(3), 383-392

A general simulation comparison of the predictive validity between bifactor and high-order factor models


Abstract:
Mathematically, a high-order factor model is nested within a bifactor model, and the two models are equivalent under a set of proportionality constraints on the loadings. In applied studies, they are treated as two alternative models. Using a true model that satisfied the proportionality constraints to create simulation data (so that both the bifactor model and the high-order factor model fitted the true model), Xu, Yu and Li (2017) compared structural coefficients based on bifactor models and high-order factor models in terms of goodness-of-fit indexes and the relative bias of the structural coefficient. However, a bifactor model usually does not satisfy the proportionality constraints, and it is very difficult to find a multidimensional construct that is well fitted by a bifactor model with the proportionality constraints. Hence their simulation results cannot be extended to general situations.
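For concreteness, the proportionality constraints can be stated through the Schmid-Leiman relation between the two models (a notational sketch; the symbols below are ours, not the article's):

\lambda^{G}_{ij} = \lambda_{ij}\,\gamma_{j}, \qquad \lambda^{S}_{ij} = \lambda_{ij}\sqrt{1-\gamma_{j}^{2}}, \qquad \frac{\lambda^{G}_{ij}}{\lambda^{S}_{ij}} = \frac{\gamma_{j}}{\sqrt{1-\gamma_{j}^{2}}} \quad \text{(constant within domain } j\text{)}

where \lambda_{ij} is the first-order loading of item i on domain factor F_j, \gamma_j is the second-order loading of F_j on the general factor G, and \lambda^{G}_{ij}, \lambda^{S}_{ij} are the implied bifactor loadings on G and on the specific factor S_j. A high-order factor model is therefore equivalent to a bifactor model exactly when the ratio of general to specific loadings is constant within each domain.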
Using a true model with the proportionality constraints (so that both the bifactor model and the high-order factor model fitted the true model) and a true model without the proportionality constraints (so that the bifactor model fitted the true model, whereas the high-order factor model was misspecified), this Monte Carlo study investigated structural coefficients based on bifactor models and high-order factor models with either a latent or a manifest variable as the criterion. The experimental factors in the simulation design were: (a) the loadings on the general factor, (b) the loadings on the domain-specific factors, (c) the magnitude of the structural coefficient, and (d) the sample size. When the true model did not satisfy the proportionality constraints, only factors (a), (c), and (d) were varied, because the loadings on the domain-specific factors were fixed at different levels (0.4, 0.5, 0.6, 0.7) that ensured the model did not satisfy the proportionality constraints.
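As an illustration of such a design, a minimal Python sketch of the data-generating step follows; the number of domains and items (3 × 4), the manifest criterion, and the particular loading values are our own placeholder assumptions, not the authors' actual Mplus setup.

import numpy as np

# One replication of the data-generating step (assumed design: 12 items,
# 3 domain-specific factors with 4 items each; general-factor loadings 0.6;
# specific loadings 0.4/0.5/0.6/0.7 within each domain, so the
# proportionality constraints do NOT hold; manifest criterion).
rng = np.random.default_rng(2019)
n = 500                                  # sample size (one of the levels studied)
gamma = 0.4                              # true structural coefficient on G
a = np.full(12, 0.6)                     # loadings on the general factor G
b = np.tile([0.4, 0.5, 0.6, 0.7], 3)     # loadings on the specific factors S1-S3
domain = np.repeat([0, 1, 2], 4)         # which specific factor each item belongs to

G = rng.standard_normal(n)               # general factor scores
S = rng.standard_normal((n, 3))          # specific factors, orthogonal to G
e_sd = np.sqrt(1.0 - a**2 - b**2)        # residual SDs so each item has unit variance
X = np.outer(G, a) + S[:, domain] * b + rng.standard_normal((n, 12)) * e_sd
Y = gamma * G + rng.standard_normal(n) * np.sqrt(1.0 - gamma**2)

# X and Y would then be fitted with both a bifactor model and a high-order
# factor model (e.g., in Mplus), and the estimated structural coefficient
# compared with gamma across replications.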
The main findings were as follows. (1) When the proportionality constraints held, the high-order factor model was preferred, because it had smaller relative bias of the structural coefficient and lower Type I error rates (but also lower statistical power, which was not a problem for a large sample). (2) When the proportionality constraints did not hold, however, the bifactor model was better, because it had smaller relative bias of the structural coefficient and higher statistical power (but also higher Type I error rates, which was not a problem for a large sample). (3) Bifactor models fitted the simulation data better than high-order factor models in terms of the fit indexes CFI, TLI, RMSEA, and SRMR, whether or not the proportionality constraints held. However, the bifactor models fitted worse according to the information criteria (i.e., AIC, ABIC) when the proportionality constraints held. (4) The results were similar whether the criterion was a manifest variable or a latent variable, but for a manifest criterion variable the relative bias of the structural coefficient was smaller.
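The relative bias referred to in these comparisons is conventionally computed as follows (our restatement, with \hat{\gamma}_r the estimate in replication r of R replications and \gamma the true structural coefficient):

RB = \frac{\frac{1}{R}\sum_{r=1}^{R}\hat{\gamma}_{r} - \gamma}{\gamma} \times 100\%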
In conclusion, a high-order factor model could be the first choice for predicting a criterion when the proportionality constraints hold, or when it fits the data well, for the sake of parsimony. Otherwise, a bifactor model is better for studying structural coefficients. The sample size should be large enough (e.g., 500 or more) no matter which model is employed.

Key words: structural coefficient, bifactor model, high-order factor model, proportionality constraints




Beaujean, A. A., Parkin, J., & Parker, S. (2014). Comparing Cattell-Horn-Carroll factor models: Differences between bifactor and higher order factor models in predicting language achievement. Psychological Assessment, 26(3), 789-805.

Bentler, P. M. (1995). EQS 6 structural equations program manual. Encino, CA: Multivariate Software.

Bradley, J. V. (1978). Robustness? British Journal of Mathematical and Statistical Psychology, 31, 144-152.

Burnham, K. P., & Anderson, D. R. (1998). Model selection and inference: A practical information-theoretic approach. New York, NY: Springer.

Chen, F. F., Hayes, A., Carver, C. S., Laurenceau, J.-P., & Zhang, Z. (2012). Modeling general and specific variance in multifaceted constructs: A comparison of the bifactor model to other approaches. Journal of Personality, 80(1), 219-251.

Chen, F. F., Jing, Y., Hayes, A., & Lee, J. M. (2013). Two concepts or two approaches? A bifactor analysis of psychological and subjective well-being. Journal of Happiness Studies, 14(3), 1033-1068.

Chen, F. F., West, S. G., & Sousa, K. H. (2006). A comparison of bifactor and second-order models of quality of life. Multivariate Behavioral Research, 41(2), 189-225.

Cucina, J., & Byle, K. (2017). The bifactor model fits better than the higher-order model in more than 90% of comparisons for mental abilities test batteries. Journal of Intelligence, 5(3), 27.

DeMars, C. E. (2006). Application of the bi-factor multidimensional item response theory model to testlet-based tests. Journal of Educational Measurement, 43(2), 145-168.

DiStefano, C., Greer, F. W., & Kamphaus, R. W. (2013). Multifactor modeling of emotional and behavioral risk of preschool-age children. Psychological Assessment, 25(2), 467-476.

Gignac, G. E. (2008). Higher-order models versus direct hierarchical models: A superordinate or breadth factor? Psychology Science Quarterly, 50(1), 21-43.

Gu, H., & Wen, Z. (2017). Reporting and interpreting multidimensional test scores: A bi-factor perspective. Psychological Development and Education, 33(4), 504-512.[顾红磊, 温忠麟. (2017). 多维测验分数的报告与解释:基于双因子模型的视角. 心理发展与教育, 33(4), 504-512.]

Gu, H., Wen, Z., & Fan, X. (2017a). Structural validity of the Machiavellian personality scale: A bifactor exploratory structural equation modeling approach. Personality and Individual Differences, 105, 116-123.

Gu, H., Wen, Z., & Fan, X. (2017b). Examining and controlling for wording effect in a self-report measure: A Monte Carlo simulation study. Structural Equation Modeling: A Multidisciplinary Journal, 24(4), 545-555.

Gustafsson, J. E., & Balke, G. (1993). General and specific abilities as predictors of school achievement. Multivariate Behavioral Research, 28(4), 407-434.

Hau, K. T., Wen, Z., & Cheng, Z. (2004). Structural equation model and its applications. Beijing, China: Educational Science Publishing House.[侯杰泰, 温忠麟, 成子娟. (2004). 结构方程模型及其应用. 北京:教育科学出版社.]

Hoogland, J. J., & Boomsma, A. (1998). Robustness studies in covariance structure modeling: An overview and a meta-analysis. Sociological Methods & Research, 26(3), 329-368.

Howard, J. L., Gagné, M., Morin, A. J. S., & Forest, J. (2018). Using bifactor exploratory structural equation modeling to test for a continuum structure of motivation. Journal of Management, 44(7), 2638-2664.

Hyland, P., Boduszek, D., Dhingra, K., Shevlin, M., & Egan, A. (2014). A bifactor approach to modelling the Rosenberg Self Esteem Scale. Personality and Individual Differences, 66, 188-192.

MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39(1), 99-128.

Marsh, H. W., Hau, K. T., & Wen, Z. L. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320-341.

Muthén, L. K., & Muthén, B. O. (2012). Mplus user's guide (7th ed.). Los Angeles, CA: Muthén & Muthén.

Reise, S. P., Scheines, R., Widaman, K. F., & Haviland, M. G. (2013). Multidimensionality and structural coefficient bias in structural equation modeling: A bifactor perspective. Educational and Psychological Measurement, 73(1), 5-26.

Salerno, L., Ingoglia, S., & Lo Coco, G. (2017). Competing factor structures of the Rosenberg Self-Esteem Scale (RSES) and its measurement invariance across clinical and non-clinical samples. Personality and Individual Differences, 113, 13-19.

Schmid, J., & Leiman, J. M. (1957). The development of hierarchical factor solutions. Psychometrika, 22(1), 53-61.

Wang, M. T., Fredricks, J. A., Ye, F., Hofkens, T. L., & Linn, J. S. (2016). The math and science engagement scales: Scale development, validation, and psychometric properties. Learning and Instruction, 43, 16-26.

Wen, Z., Hau, K. T., & Marsh, H. W. (2004). Structural equation model testing: Cutoff criteria for goodness of fit indices and chi-square test. Acta Psychologica Sinica, 36(2), 186-194.[温忠麟, 侯杰泰, 马什赫伯特. (2004). 结构方程模型检验:拟合指数与卡方准则. 心理学报, 36(2), 186-194.]

Wu, Y., Wen, Z., Marsh, H. W., & Hau, K.-T. (2013). A comparison of strategies for forming product indicators for unequal numbers of items in structural equation models of latent interactions. Structural Equation Modeling: A Multidisciplinary Journal, 20(4), 551-567.

Xu, S. X., Yu, Z. H., & Li, Y. M. (2017). Simulated data comparison of the predictive validity between bi-factor and high-order models. Acta Psychologica Sinica, 49(8), 1125-1136.[徐霜雪, 俞宗火, 李月梅. (2017). 预测视角下双因子模型与高阶模型的模拟比较. 心理学报, 49(8), 1125-1136.]

Ye, B., & Wen, Z. (2012). Estimating homogeneity coefficient and its confidence interval. Acta Psychologica Sinica, 44(12), 1687-1694.[叶宝娟, 温忠麟. (2012). 测验同质性系数及其区间估计. 心理学报, 44(12), 1687-1694.]

Yung, Y.-F., Thissen, D., & McLeod, L. D. (1999). On the relationship between the higher-order factor model and the hierarchical factor model. Psychometrika, 64(2), 113-128.