SGP data are used by many organizations to evaluate teacher and school leader effectiveness. In particular, student growth percentiles (SGPs) are a prominent indicator of how much a student has grown academically relative to academically similar students. SGPs are also among the multiple indicators included in the Teacher Keys Effectiveness System (TKES) and the Leader Keys Effectiveness System (LKES).
An SGP expresses a student's current achievement as a percentile rank among students with similar prior achievement: higher numbers mean the student grew more than most of their academic peers, and lower numbers mean they grew less.
The current version of the SGP is calculated from a student's most recent assessment in a given grade level and content area, such as math or English language arts, together with one or more prior assessments in that area. The student's current score is then ranked against the scores of other students with a similar history of prior achievement, yielding a percentile that describes the student's own progress over time. SGPs are often reported at the student and class level and are used by education reformers as a way to assess educator effectiveness.
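The conditional-percentile idea above can be illustrated with a minimal sketch on simulated data. This is not the operational SGP methodology, which fits quantile regressions to the prior-score history; here we simply bin students by prior achievement and rank current scores within each bin. All data and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scores for 1,000 students: current scores correlate
# with prior scores plus noise (all values hypothetical).
prior = rng.normal(500, 50, 1000)
current = 0.8 * prior + rng.normal(100, 30, 1000)

# Group students into deciles of prior achievement, then rank each
# student's current score within their decile. Operational SGPs use
# quantile regression instead of this simple binning.
cuts = np.quantile(prior, np.arange(0.1, 1.0, 0.1))
deciles = np.digitize(prior, cuts)

sgp = np.empty(len(current))
for d in np.unique(deciles):
    mask = deciles == d
    ranks = current[mask].argsort().argsort()      # 0-based ranks within bin
    sgp[mask] = 100 * (ranks + 0.5) / mask.sum()   # percentile on a 0-100 scale

# An SGP near 60 means the student outgrew about 60% of academic peers.
print(round(sgp.mean()))  # prints 50: within-group percentiles average to 50
```

Because every student is ranked only against peers with similar prior scores, a high SGP reflects unusually strong growth, not unusually high absolute achievement.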
Our research has found that true SGPs for individual students are correlated with their background characteristics. This suggests that a nontrivial portion of the variation in aggregated SGPs reflects these relationships rather than educator effectiveness, and that aggregating estimated SGPs may not be as beneficial as some suggest.
This is a serious concern because the case for aggregating SGPs rests largely on their ability to average out the measurement error inherent in individual student test scores. Our results show that the relationships between true SGPs and student background characteristics have a substantial impact on estimated SGPs aggregated to the teacher or school level. This bias is straightforward to avoid in a value-added model that regresses student test scores on teacher fixed effects, prior test scores, and student background variables. That type of model also carries interpretability and transparency benefits that are not available from SGPs, which do not include these controls.