
College Rankings in Detail - Arbitrary Comparisons

10 Aug 2010 by Will Slack




After a long day returning to the office from a fabulous Eph vacation, little could have warmed my heart more than the kind compliment from ShuffStuff @ Tumblr; I hope these musings manage to satisfy. I also wrote a shortened version of what follows a few days ago, when the Forbes rankings came out.

Let’s start by taking a look at what US News says about their rankings:

  • Do use the rankings as one tool to select and compare schools.
  • Don’t rely solely on rankings to choose a college.
  • Do use the search and sort capabilities of this site to learn more about schools. Visit schools, if possible.
  • Don’t wait until the last minute. College matters. Take your time, and choose carefully.
  • Do think long and hard about the right place for you.

The problem, of course, is that the text above is two clicks away from the main rankings page, which offers no guidance about how to use the rankings. Why? Because the entire point of the rankings is to be a big deal, which makes US News a big deal. I can’t think of a time the magazine makes news except for these annual rankings, which might be why Forbes just started its own. Any sort of equivocation (a la “these rankings are part of a balanced breakfast of high school counselors, visits, research, etc.”) would only blunt their impact.

Because, let’s face it: they’re attractive. The world of higher ed is a heck of a lot easier to understand when you have a clear numerical order of every single US college. Far from adding any doubt, US News puts the top three in each category on a pedestal, as if to say: these are the best places to get a good education because… why? Let’s see (subpoint percentages are the fraction of that subcategory they represent; a rough sketch of how the pieces combine follows the list):

  • Undergraduate academic reputation (22.5 percent of total ranking)

  • 66% peer assessment (48% survey response rate)
  • 33% high school counselor assessment (21% survey response rate)

  • Graduation and Freshman retention (20 percent of total ranking)

  • 80% 6-year graduation rate
  • 20% freshman retention rate

  • Faculty resources (20 percent of total ranking)

  • 30% proportion of classes with fewer than 20 students 
  • 10% proportion with 50 or more students
  • 30% Faculty salary (adjusted based on cost-of-living differences)
  • 15% proportion of profs with highest degrees in their field
  • 5% student-faculty ratio
  • 5% proportion of faculty that are full-time

  • Student selectivity (15 percent of total ranking)

  • 50% entering class’s SAT Math/Verbal and ACT composite scores
  • 40% proportion of students from top 10% of their high school (GPA)
  • 10% acceptance rate

  • Financial resources (10 percent of total ranking)

  • Spending per student on “instruction, research, student services, and related educational expenditures.”

  • Graduation rate performance (7.5 percent of total ranking)

  • Performance of the class of 2003 relative to an “expected graduation rate” based on Pell Grants, SAT Scores, etc.

  • Alumni giving rate (5 percent of total ranking)

  • Percentage of alums who gave to Williams over the past two years

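Structurally, the overall score is just a weighted sum of those seven subcategory scores. Here is a minimal sketch of that arithmetic in Python, assuming each subcategory has somehow already been boiled down to a single number (which, as we’ll see, is where the real trouble hides):

    # The published top-level weights. The subcategory scores are placeholders
    # on a 0-100 scale, since producing them is exactly the contested part.
    weights = {
        "academic_reputation": 0.225,
        "graduation_and_retention": 0.20,
        "faculty_resources": 0.20,
        "student_selectivity": 0.15,
        "financial_resources": 0.10,
        "graduation_rate_performance": 0.075,
        "alumni_giving": 0.05,
    }

    subscores = {k: 90.0 for k in weights}  # hypothetical, already-normalized inputs

    overall = sum(weights[k] * subscores[k] for k in weights)
    print(round(overall, 1))  # 90.0, since every placeholder input is 90
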
The above isn’t quite as attractive as “Williams is the #1 college in the US,” is it? In fact, it makes damn little sense, if for no other reason than that all of these ranking tools are based on different variables. Even if test scores count 1.25 times as much as high school performance, how do you compare school A (average SAT of 1400, 50% of frosh in the top ten percent of their high school class) to school B (average SAT of 1350, 60% in the top ten percent)? You can’t!

In order to do so, you have to assert some sort of equivalency between them: that every 10 points of SAT is worth some percentage of high-performing high school students. How do you set that rate? How do you compare differences in faculty salary (crudely adjusted by regional indexes that have little to do with the cost of living in a college town) with the percentage of classes with under 50 students?
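To make the arbitrariness concrete, here is a small, entirely hypothetical illustration (the schools and the “exchange rates” are invented, not US News’s actual formula): depending on how many percentage points of top-ten-percent students you declare 10 SAT points to be worth, school A or school B comes out ahead.

    # Two made-up schools and two made-up exchange rates. The point is only
    # that the chosen rate, not the data, decides which school "wins".
    schools = {
        "A": {"sat": 1400, "top_ten_pct": 50},
        "B": {"sat": 1350, "top_ten_pct": 60},
    }

    def selectivity_points(school, pct_per_10_sat):
        # Convert SAT into "top-ten-percent equivalents", then add the two.
        sat_as_pct = school["sat"] / 10 * pct_per_10_sat
        return sat_as_pct + school["top_ten_pct"]

    for rate in (1.0, 3.0):  # 10 SAT points "=" 1 pct point, or 3 pct points
        scores = {name: selectivity_points(s, rate) for name, s in schools.items()}
        winner = max(scores, key=scores.get)
        print(f"rate={rate}: {scores} -> school {winner} ranks higher")

Run it and school B leads at the first rate, school A at the second. Same numbers, opposite conclusion.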

I see seven different and incomparable kinds of variables up there (reputation survey scores, percentages, money, student-faculty ratio, SAT score, ACT score, and “graduation rate performance”). These variables cannot be compared without making an “exchange ratio” (such as saying each 1% increase in graduation rate equals 50 points of SAT). No such formula is dependable unless you use statistics to make a z-score that depends on the distribution of each variable, and even that decidedly unattractive method is still arbitrary, just mathematically dressed up.
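For the curious, here is a minimal sketch of what that z-score standardization looks like. The schools, values, and weights below are invented for illustration; US News does not publish its exact formula.

    from statistics import mean, stdev

    # Invented data: each school's raw values for two incomparable variables.
    raw = {
        "Williams":   {"sat": 1400, "grad_rate": 96},
        "Amherst":    {"sat": 1390, "grad_rate": 95},
        "Swarthmore": {"sat": 1380, "grad_rate": 93},
    }
    weights = {"sat": 0.6, "grad_rate": 0.4}  # arbitrary weights: the whole point

    def z_scores(variable):
        # Standardize one variable across all schools: (value - mean) / std dev.
        values = [school[variable] for school in raw.values()]
        mu, sigma = mean(values), stdev(values)
        return {name: (school[variable] - mu) / sigma for name, school in raw.items()}

    z = {v: z_scores(v) for v in weights}
    composite = {name: sum(weights[v] * z[v][name] for v in weights) for name in raw}
    print(composite)  # the z-scoring is rigorous; the weights are still a choice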

So, to sum up: arbitrary comparisons of unrelated variables are arbitrarily weighted within subcategories, which are then arbitrarily weighted against each other to produce a composite score, such as Williams’s 100. And since that 100 is almost certainly the result of a curve, every other school’s score is also curved according to a formula that isn’t disclosed.
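The usual way such a curve works (an assumption on my part, since US News doesn’t spell it out) is to divide every school’s raw composite by the top school’s raw composite and multiply by 100, so the leader lands at exactly 100 by construction:

    # Hypothetical raw composites; only their ratios matter after the curve.
    raw = {"Williams": 87.3, "Amherst": 85.9, "Swarthmore": 84.1}

    top = max(raw.values())
    curved = {name: round(100 * value / top) for name, value in raw.items()}
    print(curved)  # {'Williams': 100, 'Amherst': 98, 'Swarthmore': 96}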

In short: these “attractive” rankings aren’t quite so nice-looking once you strip them down.

