Which Schools Fall to the Bottom?
Educators, policymakers and the public generally agree that there are problems with the current accountability measure, Adequate Yearly Progress (AYP).
Under AYP, far too many schools are being labeled as “failing,” according to USC Rossier Assistant Professor Morgan Polikoff, often because the measure focuses on schools’ absolute achievement levels, which largely reflect factors outside a school’s control, rather than on growth and improvement over time. In California last year, for instance, 73 percent of Title I schools failed AYP.
Additionally, under the current system, if even one subgroup fails to meet the proficiency target in math and English, the entire school fails, creating a bias against larger schools, more diverse schools and schools with significant special education populations.
“It penalizes schools with kids from groups that have historically low achievement, as the proficiency rate measure is highly correlated with the percentage of kids in poverty,” said Polikoff.
One recent attempt at a fairer and more accurate system for identifying failing schools that need help is the Senate’s Harkin-Enzi revision to the Elementary and Secondary Education Act.
Polikoff and Andrew McEachin (PhD ‘12) decided to see how that proposal might look in action. They used California school data to calculate which schools would fall into the bottom 15 percent under the plan: the five percent with the lowest achievement levels, the five percent with the largest achievement gaps, and the five percent with the lowest subgroup achievement.
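The mechanics of that identification rule can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors’ actual analysis: the school names, scores and 5-percent rounding rule below are all hypothetical assumptions.

```python
# Sketch of the Harkin-Enzi-style rule: the "bottom 15 percent" is the
# union of three separate 5-percent slices. All data here are invented.

def bottom_k(schools, key, k, reverse=False):
    """Return the names of the k worst schools on `key`.
    reverse=True selects the highest values (e.g. largest gaps)."""
    ranked = sorted(schools, key=lambda s: s[key], reverse=reverse)
    return {s["name"] for s in ranked[:k]}

def flag_bottom_15(schools):
    k = max(1, round(0.05 * len(schools)))  # 5 percent per category
    return (
        bottom_k(schools, "achievement", k)             # lowest achievement
        | bottom_k(schools, "gap", k, reverse=True)     # largest achievement gaps
        | bottom_k(schools, "subgroup_achievement", k)  # lowest subgroup achievement
    )

# Hypothetical dataset of 20 schools with made-up scores
schools = [
    {"name": f"School {i}",
     "achievement": 50 + i,              # worst at i = 0
     "gap": float(i),                    # largest at i = 19
     "subgroup_achievement": 60 + abs(i - 10)}  # worst at i = 10
    for i in range(20)
]
print(sorted(flag_bottom_15(schools)))
# → ['School 0', 'School 10', 'School 19']
```

Because the three slices are taken separately, the same school can land in more than one of them, so the flagged set may contain fewer than 15 percent of all schools.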
Their findings, published in Educational Researcher, reveal that certain schools would be sanctioned disproportionately. They offer suggestions for policymakers to improve the way they identify the lowest-performing schools.
First, Polikoff and McEachin recommend that a system measure both achievement level and growth. The authors found that when schools were measured by achievement status only, middle schools serving poor and minority students fell to the bottom, and when schools were measured by growth only, smaller schools with year-to-year fluctuations sank to the bottom. The authors urge policymakers to use a combination of the two for better accuracy and fairness.
They also suggest systems be designed to use individual student data rather than school average data in order to measure how much individual students are learning. And they urge policymakers to use three-year averages when looking at growth as one year of data can be unstable.
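These two recommendations can be combined into a simple index. The sketch below is a hypothetical illustration, not the authors’ specification: the 50/50 weighting and all numbers are assumptions made for the example.

```python
# Hypothetical status-plus-growth index: blend current achievement
# status with a three-year average of growth, since a single year of
# growth data can be unstable. Weights and data are illustrative only.

def school_index(status, yearly_growth, status_weight=0.5):
    """Combine achievement status with the average of the last
    three years of growth."""
    recent = yearly_growth[-3:]               # use at most three years
    avg_growth = sum(recent) / len(recent)    # smooths yearly noise
    return status_weight * status + (1 - status_weight) * avg_growth

# A school with modest status but noisy year-to-year growth
print(school_index(status=45.0, yearly_growth=[8.0, 2.0, 5.0]))  # → 25.0
```

Averaging over three years keeps one unusually good or bad year from swinging a small school’s ranking, which is the instability the authors warn about.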
The authors also recommend that schools be compared within school level and within school size: their analysis found that middle and high schools are disproportionately represented relative to elementary schools when all levels are ranked together, and that more small schools fall to the bottom than large ones when all sizes are ranked together.
Finally, Polikoff and McEachin found that schools with large numbers of students with disabilities overwhelmingly fall to the bottom when schools are ranked by lowest subgroup achievement, and they suggest that policymakers consider ranking subgroups by type – for instance, ranking schools by the performance of their Hispanic students separately from that of their special education students.
Polikoff argues that, while the Harkin-Enzi proposal is unlikely to be passed into law, his findings have broad implications for states designing and implementing accountability systems under the waivers offered by the U.S. Department of Education. He said he encourages policymakers to conduct similar analyses with existing data to see how their systems would actually work.
“When you have reasonable goals based on something that is under the school’s control, you are going to get better performance,” Polikoff said. “Most people support the idea of accountability, so when designing these systems, let’s learn from the mistakes made and the available data to really identify schools that need support to improve.”
This article was featured in the January 2013 issue of Rossier Reach.