Highlights
•Experiment-based calibration is a method for validating psychological measurement.
•Calibration benefits from low variance of sample estimators.
•Equiprobable standard values reduce estimator variance.
•Reducing random aberration does not always reduce estimator variance.
•Systematic aberration is particularly problematic when it takes an inverse-sigmoid form.
Abstract
Psychological theories are often formulated at the level of latent, not directly observable, variables. Empirical measurement of latent variables ought to be valid. Classical psychometric validity indices can be difficult to apply in experimental contexts. A complementary validity index, termed retrodictive validity, is the correlation between theory-derived predicted scores and actually measured scores in specifically designed calibration experiments. In the current note, I analyse how calibration experiments can be designed to maximise the information garnered and, specifically, how to minimise the sample variance of retrodictive validity estimators. First, I harness asymptotic limits to analytically derive the distribution features that affect estimator variance. Then, I numerically simulate various distributions with combinations of feature values. This yields recommendations for the distribution of predicted values, and for resource investment, in calibration experiments. Finally, I highlight cases in which a misspecified theory is particularly problematic.
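To make the abstract's central quantity concrete, the following minimal Python sketch (not the authors' OSF code; the Gaussian noise model, the standard values, and both example designs are illustrative assumptions) simulates the sampling variability of the retrodictive-validity estimator, i.e. the correlation between theory-predicted standard values and noisy measured scores, under an equiprobable versus a skewed distribution of standard values.

```python
# Minimal Monte Carlo sketch of retrodictive-validity estimator variance.
# All parameter values below are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def retrodictive_validity_sd(predicted, noise_sd=1.0, n_reps=5000):
    """SD, across simulated calibration experiments, of the Pearson correlation
    between predicted scores and measured scores (predicted + Gaussian noise)."""
    r_values = np.empty(n_reps)
    for i in range(n_reps):
        measured = predicted + rng.normal(0.0, noise_sd, size=predicted.size)
        r_values[i] = np.corrcoef(predicted, measured)[0, 1]
    return r_values.std()

standards = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # hypothetical standard values
n_trials = 50                                    # trials per calibration experiment

# Design A: equiprobable standard values (equal number of trials per standard).
equiprobable = np.repeat(standards, n_trials // standards.size)

# Design B: the same standards, sampled with strongly unequal probabilities.
skewed = rng.choice(standards, size=n_trials, p=[0.6, 0.2, 0.1, 0.05, 0.05])

print("SD of r, equiprobable design:", retrodictive_validity_sd(equiprobable))
print("SD of r, skewed design:      ", retrodictive_validity_sd(skewed))
```

Under these assumptions, the equiprobable design yields a noticeably smaller standard deviation of the correlation estimator, consistent with the highlight that equiprobable standard values reduce estimator variance.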
Keywords
Calibration
Retrodictive validity
Measurement uncertainty
Measurement accuracy
Data availability
All simulations and code are publicly available on OSF: https://osf.io/dfg9e/.
© 2023 The Author(s). Published by Elsevier Inc.