Assessing the “Rothstein Test”: Does It Really Show Teacher Value-Added Models Are Biased?
In a provocative and influential paper, Jesse Rothstein (2010) finds that standard value-added models (VAMs) imply implausible effects of future teachers on students' past achievement, a finding that clearly cannot be causal. This is the basis of a falsification test (the Rothstein falsification test) that appears to indicate bias in VAM estimates of current teachers' contributions to student learning. More precisely, the falsification test is designed to identify whether students are effectively randomly assigned to teachers, conditional on the covariates included in the model. Rothstein's finding is significant because there is considerable interest in using VAM teacher effect estimates for high-stakes teacher personnel policies, and the results of the Rothstein test cast considerable doubt on the notion that VAMs can be used fairly for this purpose. In this paper, however, we illustrate, both theoretically and through simulations, plausible conditions under which the Rothstein falsification test rejects VAMs even when students are randomly assigned conditional on the covariates in the model, and even when there is no bias in estimated teacher effects.
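To fix intuition for the falsification test, the sketch below (not the authors' code; all data and parameter values are hypothetical) simulates the benchmark case of purely random assignment: grade-4 achievement gains are generated independently of grade-5 teacher assignments, and grade-4 gains are then regressed on grade-5 teacher dummies. Under random assignment, the joint F-statistic for the "future teacher" dummies should be close to 1 on average, so the test should reject only at its nominal rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_teachers = 2000, 20  # hypothetical sample sizes

# Grade-4 gains depend on student ability and noise only
ability = rng.normal(size=n_students)
gain_g4 = ability + rng.normal(size=n_students)

# Grade-5 teachers assigned completely at random (the benchmark case)
teacher_g5 = rng.integers(0, n_teachers, size=n_students)

# Regress grade-4 gains on grade-5 teacher dummies (dummies span the intercept)
X = np.zeros((n_students, n_teachers))
X[np.arange(n_students), teacher_g5] = 1.0
beta, _, _, _ = np.linalg.lstsq(X, gain_g4, rcond=None)
resid = gain_g4 - X @ beta

# Joint F-test that all grade-5 teacher means of grade-4 gains are equal
ss_res = resid @ resid
ss_tot = ((gain_g4 - gain_g4.mean()) ** 2).sum()
df1, df2 = n_teachers - 1, n_students - n_teachers
F = ((ss_tot - ss_res) / df1) / (ss_res / df2)
print(f"F({df1},{df2}) = {F:.2f}")  # near 1 under random assignment
```

The paper's point is that plausible data-generating processes can push this F-statistic well above its null distribution even when assignment is random conditional on the model's covariates.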
Keywords: Value-added, Simulation, Teacher Effectiveness
Citation: Goldhaber, D., & Chaplin, D. (2012). Assessing the “Rothstein Test”: Does It Really Show Teacher Value-Added Models Are Biased? CALDER Working Paper No. 71.