This study, set in a Nevada school district, tests an implicit assumption of high-stakes teacher evaluation systems that use student learning to measure teacher effectiveness: that the learning of a teacher’s students in one year predicts the learning of that teacher’s future students. The study found that half or more of the variance in teacher scores derived from the model reflected random or otherwise unstable sources rather than reliable information that could predict future performance.
This REL/IES study, perhaps the first published study of the stability of teacher-level growth scores derived under the student growth percentile model (a model commonly used by states in teacher evaluation systems), provides valuable evidence for states mandating similar evaluation systems. It is communicated in a well-organized and succinct format, with tables and figures that illustrate the findings. The study is extremely timely as states review and revise educator evaluation guidelines in response to changes in federal and state law. The study interprets its findings against benchmarks for reliability coefficients: for high-stakes decisions about individuals, some researchers argue that a coefficient of .85 or higher is needed.