Ranking colleges by academic value could face many obstacles, new study finds

March 7, 2017

Selection bias is real, and powerful

By Ross Brenneman

Photo credit: USC Photo/Gus Ruelas

In September 2015, the Obama administration abandoned plans to create a national college rankings system.

The system faced stiff opposition from university and college presidents, in no small part because of deep disagreement over whether the “value added” by a particular program could be judged at all.

Now a new study from the USC Rossier School of Education dives into the complexities of ranking colleges for educational value.

The study, led by Associate Professor Tatiana Melguizo and published in the Journal of Research on Educational Effectiveness, examined several different indicators of value by turning to data from Colombia, a country that has done significant work within a centralized accountability system over the past two decades.

The results showed that while there are many important categories by which colleges could be ranked—graduation rates, graduate success in the labor market, academic achievement—each system is prone to certain biases and issues.

“Even though universities are supposed to be providing key knowledge and skills, many popular indicators, like the U.S. News & World Report rankings, focus on institutional selectivity instead,” Melguizo said.

In addition to rankings like those of U.S. News, which weigh factors such as admissions test scores and acceptance rates, there are dozens of other popular rankings that examine everything from diversity to campus beauty to food offerings.

The Melguizo-led study was designed to measure how much students actually learn in specific programs.

“This study not only provides rigorous empirical evidence related to gains in student learning outcomes, but illustrates how, even when certain programs might be doing a good job placing students in the job market, they are not contributing much to knowledge beyond what students brought from their previous educational experiences,” Melguizo said.

Researchers used data collected from 2000 to 2012 on graduation rates, graduates’ success in employment, and results from compulsory college-entrance and college-exit exams. Unlike students in the United States, who often choose between the SAT and the ACT or take both, students in Colombia take a single prominent college exam, known as SABER. Each dataset represented tens of thousands of students.

In parsing the data, researchers identified a persistent selection bias: selective universities admit students who already have a very high probability of graduating, so it is not clear whether the university is actually providing meaningful support or whether those students would graduate no matter what.
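To see why this matters for rankings, consider a minimal simulation sketch in Python. The numbers are hypothetical and this is not the study’s data or code; it simply shows how, when admissions are selective, raw graduation rates can track selectivity far more closely than they track a college’s true contribution.

```python
# Hypothetical illustration of selection bias in graduation-rate rankings.
# Not the study's data or method; all numbers are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_colleges, n_applicants = 50, 5000

true_value_added = rng.normal(0.0, 0.05, n_colleges)     # small real contribution
selectivity_cutoff = np.linspace(-1.0, 1.0, n_colleges)  # admissions threshold

raw_grad_rate = np.empty(n_colleges)
for c in range(n_colleges):
    preparation = rng.normal(0.0, 1.0, n_applicants)
    admitted = preparation[preparation > selectivity_cutoff[c]]  # selection step
    # Graduation depends mostly on prior preparation, plus the college's contribution.
    p_grad = 1.0 / (1.0 + np.exp(-(admitted + true_value_added[c])))
    raw_grad_rate[c] = rng.binomial(1, p_grad).mean()

# A ranking on raw graduation rates mostly reproduces the admissions cutoff.
print("corr(grad rate, selectivity): %.2f" % np.corrcoef(raw_grad_rate, selectivity_cutoff)[0, 1])
print("corr(grad rate, value added): %.2f" % np.corrcoef(raw_grad_rate, true_value_added)[0, 1])
```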

When ranking colleges on each of the three factors individually (academic achievement, graduation rates and entry into the job market), the study found evidence of positive contributions.

Results are less clear when the factors are combined. For instance, when ranking programs based on exit-exam outcomes, researchers found that schools in the top 75th percentile showed evidence of positive contributions. But when those results were crossed with data on graduation rates, there was almost no correlation between gains in the two categories once selection bias was accounted for. Such findings suggest that the strong results of some programs may reflect whom they admit rather than the rigor of the education they provide.
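As a rough illustration of the kind of check involved, the sketch below uses hypothetical, simulated data (not the study’s) to compute a rank correlation between program-level gains on two indicators; a value near zero means a program’s standing on one indicator says little about its standing on the other.

```python
# Hypothetical illustration: rank correlation between two program-level indicators.
# Simulated data only; not the study's dataset or model.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_programs = 200

# Adjusted exit-exam gains and adjusted graduation gains per program,
# drawn so that the two indicators overlap only weakly.
exam_gain = rng.normal(0.0, 1.0, n_programs)
grad_gain = 0.1 * exam_gain + rng.normal(0.0, 1.0, n_programs)

rho, p_value = spearmanr(exam_gain, grad_gain)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
```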

Similar results appeared when looking at correlations between exit exams and entry into the job market, at least for some programs, such as agriculture and veterinary sciences. For other programs, like math and natural sciences, the correlation held even after controlling for selection bias. The authors suggested that these findings illustrate the need for nuance in rankings, with attention paid to specific programs rather than entire institutions.

“The results suggest that given the sensitivity of the models to different specifications, it is not clear that they should be used to make any high-stakes decisions in higher education,” the authors conclude.
