Why one size does not fit all: Evaluating the validity of fixed cutoffs for model fit indices and developing new alternatives


Groskurth, Katharina


dissertation.pdf - Published version (PDF, 8MB)

URN: urn:nbn:de:bsz:180-madoc-647075
Document type: Doctoral dissertation
Year of publication: 2023
Place of publication: Mannheim
University: Universität Mannheim
Referee: Meiser, Thorsten
Date of oral examination: 22 March 2023
Language of publication: English
Institution: Fakultät für Sozialwissenschaften > Psychologische Methodenlehre u. Diagnostik (Meiser 2009-)
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Subject area: 150 Psychology
Keywords (English): fit indices, cutoffs, confirmatory factor analysis, structural equation modeling
Abstract: Model evaluation is a central topic in structural equation modeling. Researchers commonly evaluate whether a model fits their data with fixed cutoffs for fit indices (e.g., CFI ≥ .95 for good model fit). They apply the same fixed cutoffs across a wide range of empirical settings, even when those settings diverge from the simulated scenarios the cutoffs originated from. In this thesis, I outlined why this one-size-fits-all use of fixed cutoffs is invalid and proposed alternative approaches to model evaluation. In the first manuscript, I investigated the fit indices’ sensitivity to misspecification in confirmatory factor models and their susceptibility to various model, data, and estimation characteristics in a large-scale simulation. Several characteristics (especially the factor correlation and the type of estimator) strongly influenced fit indices, and they interacted in complex ways, implying that cutoffs for fit indices are only valid for the context from which they originate. Based on the large-scale simulation, I developed two approaches to generate cutoffs tailored to empirical settings resembling the simulated scenarios: researchers can look up scenario-specific cutoffs in large-scale tables, or they can plug the characteristics of interest into regression formulae to calculate scenario-specific cutoffs. In the second manuscript, I reviewed and discussed all approaches to tailored cutoffs proposed in the literature. Based on this review, I developed a new approach that combines a Monte Carlo simulation with receiver operating characteristic (ROC) analysis. The so-called simulation-cum-ROC approach generates cutoffs for various fit indices tailored to the setting of interest. Uniquely, it also guides researchers on which fit index best evaluates whether the model fits the data (or not) in that setting. In the third manuscript, I focused on a specific area in which binary decisions on model fit abound: measurement invariance testing. I developed effect size measures, so-called Measurement Invariance Violation Indices (MIVIs), for items and item sets that continuously quantify non-invariance (i.e., misfit) once it has been flagged by binary cutoffs. MIVIs quantify non-invariant parameter differences in units of the latent variable’s pooled standard deviation. This thesis demonstrated that cutoffs must be tailored to the setting of interest for valid model evaluation. I outlined and developed various approaches that differ in their flexibility to obtain scenario-specific cutoffs. The newly developed effect size measures allow researchers to quantify misfit (i.e., non-invariance) continuously, in addition to cutoffs following the binary fit-misfit logic. This research is a step towards more valid model evaluation techniques.
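The simulation-cum-ROC approach is only summarized above; as a rough illustration of its ROC step, the Python sketch below derives a scenario-specific CFI cutoff from simulated fit-index values. Everything in it is assumed for illustration and is not the author’s implementation: the placeholder normal distributions stand in for fit indices obtained from a Monte Carlo simulation of correctly specified and misspecified models in the setting of interest, Youden’s J is just one possible threshold-selection criterion, and scikit-learn’s ROC utilities replace whatever routines the manuscript actually uses.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(seed=1)

# Placeholder fit-index values; in the simulation-cum-ROC approach these would come
# from a Monte Carlo simulation mirroring the empirical setting of interest.
cfi_fit = rng.normal(loc=0.98, scale=0.01, size=500)     # correctly specified models
cfi_misfit = rng.normal(loc=0.93, scale=0.02, size=500)  # misspecified models

# Misspecification is the "positive" class; because lower CFI signals misfit,
# the negated index serves as the ROC score.
labels = np.concatenate([np.zeros_like(cfi_fit), np.ones_like(cfi_misfit)])
scores = -np.concatenate([cfi_fit, cfi_misfit])

fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)    # Youden's J: one possible selection criterion (assumption)
cutoff = -thresholds[best]     # back-transform to the CFI scale

print(f"Scenario-specific CFI cutoff: {cutoff:.3f}")
print(f"AUC (how well CFI separates fit from misfit here): {roc_auc_score(labels, scores):.3f}")
```

Repeating the same procedure for each fit index and comparing the resulting AUCs is one way to read the abstract’s claim that the approach also indicates which fit index best separates fit from misfit in a given setting.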




This entry is part of the university bibliography.

The document is provided by the publication server of the University Library Mannheim.



