A new entrant has emerged in the law school rankings debate, as Christopher Ryan and Brian Frye recently posted their paper, A Revealed-Preferences Ranking of Law Schools (forthcoming in the Alabama Law Review). The paper presents a ranking of law schools based exclusively on LSAT scores and undergraduate GPAs, on the theory that prospective law students' actual choices among law schools provide a better basis for ranking than the hodge-podge of factors included in the US News rankings.
The approach used by US News includes a variety of factors with varying weights, but among the most important are factors based on surveys of academics and of lawyers and judges. Specifically, the US News ranking methodology is based 25% on "peer assessment score" (academics) and 15% on "assessment score by lawyers and judges." Both categories are weighted more heavily than LSAT scores (12.5%) and undergraduate GPA (10%), which are the raw material for the Ryan and Frye ranking.
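For concreteness, here is a minimal sketch of how those published weights would combine into a single score. It covers only the four components mentioned above (the full US News methodology includes additional factors such as placement and resources, so these weights do not sum to one), and the function name and the 0-to-1 normalization are my own assumptions:

```python
def partial_usnews_score(peer, lawyer_judge, lsat, gpa):
    """Weighted combination of the four components discussed above.

    All inputs are assumed to be normalized to a 0-1 scale. The remaining
    ~37.5% of the US News weight (placement, resources, etc.) is omitted,
    so this is an illustration of the relative weights, not the full score.
    """
    return 0.25 * peer + 0.15 * lawyer_judge + 0.125 * lsat + 0.10 * gpa
```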
As might be expected, the Ryan and Frye rankings correlate strongly with US News rankings, but there are some significant outliers. What accounts for the differences? The Ryan and Frye paper suggests the divergence may be due to some schools being "better at gaming ranking systems than appealing to students, and vice versa." (p. 4) I don't doubt that is a potentially significant factor, but I would suggest that a significant part of this difference is due to timing. Students are a leading indicator, peer assessment scores a lagging indicator, and lawyer/judge rankings almost useless.
The Ryan-Frye approach has potential advantages and disadvantages compared to the survey-based rankings in US News. The primary weakness of the Ryan and Frye rankings is that law students are not as knowledgeable as legal academics about the quality of legal education. Prospective law students may be less able to critically evaluate statements of quality emanating from law schools, or may be unduly influenced by irrelevant factors. But legal academics, lawyers, and judges also may have blind spots. In particular, academics' perceptions of the relative quality of law schools, especially elite law schools, may be somewhat frozen at the positions those schools held when the rater attended law school. As a result, those hierarchical relationships may be insufficiently sensitive to new information.
In light of these considerations, I thought it might be interesting to examine the potential causes of divergence between the Ryan-Frye approach and US News by comparing the US News survey-based rankings between 1993 (the year of the first full ranking of law schools) and 2018 (the most recent ranking).
The peer ranking is the largest single component of US News and is measured somewhat comparably across the years, so I will focus on that component. The chart below plots the 1993 peer rankings (then called "academic" rankings) against those for 2018. Because higher-ranked schools have lower ranking numbers, the highest-ranked schools are in the lower left and the lowest-ranked schools in the upper right. Schools above the line have improved in their rankings between 1993 and 2018; schools below the line have lower rankings in 2018 than in 1993.
The correlation between the 1993 peer ranks and the 2018 peer ranks is .93, which is evidence of remarkable stability over time. As a result, the 1993 rank predicts the 2018 rank with a high degree of accuracy, especially for the higher-ranked schools (the lower left). However, there are some notable outliers, which I've labeled in the figure. It is interesting that three of the largest gainers changed names by affiliating with an existing university (Michigan State, New Hampshire, and Quinnipiac). The remaining schools with large jumps in peer rankings (Alabama, CUNY, Georgia State, and Pepperdine) have other explanations. My institution (Pepperdine) and Alabama have made major pushes to emphasize research productivity, which may explain the changes in their scores.
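For readers who want to reproduce this kind of chart, here is a minimal sketch in Python. The file name and column names are hypothetical stand-ins for however the rank data is stored, and the axis orientation is chosen so that, as in the figure described above, schools above the 45-degree line improved:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: one row per school with numeric peer ranks (1 = best).
df = pd.read_csv("peer_ranks.csv")  # columns: school, peer_1993, peer_2018

# The correlation reported above (~.93).
print(df["peer_1993"].corr(df["peer_2018"]))

# With 2018 on the x-axis and 1993 on the y-axis, a school above the
# 45-degree line had a numerically higher (worse) rank in 1993 than in
# 2018 -- that is, it improved.
fig, ax = plt.subplots()
ax.scatter(df["peer_2018"], df["peer_1993"], s=12)
lims = [df[["peer_1993", "peer_2018"]].min().min(),
        df[["peer_1993", "peer_2018"]].max().max()]
ax.plot(lims, lims, linewidth=1)
ax.set_xlabel("2018 peer rank")
ax.set_ylabel("1993 peer rank")

# Label the biggest gainers; the cutoff of 7 is arbitrary.
df["improvement"] = df["peer_1993"] - df["peer_2018"]
for _, row in df.nlargest(7, "improvement").iterrows():
    ax.annotate(row["school"], (row["peer_2018"], row["peer_1993"]),
                fontsize=7)
plt.show()
```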
The lawyers/judges ranking has quite a bit less stability; indeed, the 1993 lawyers/judges ranking shows a lower correlation with all other measures than any other measure does. The more surprising finding is that the 1993 peer ranking predicts the 2018 lawyers/judges ranking better than the 1993 lawyers/judges ranking does. Indeed, the 1993 peer ranking predicts the lawyers/judges rating in 2018 (correlation .91) better than it predicts the lawyers/judges rating even in 1993 (correlation .88). These data suggest that the lawyers/judges ranking is far noisier than the other measures, although it appears to have improved significantly since 1993.
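The cross-lagged comparison in this paragraph is easy to express as code. This sketch reuses the hypothetical column-naming convention from above, with lj_* standing in for the lawyers/judges ranks:

```python
import pandas as pd

# Assumed columns: peer_1993, peer_2018, lj_1993, lj_2018 (ranks, 1 = best).
df = pd.read_csv("ranks.csv")

# Full correlation matrix across the four measures.
print(df[["peer_1993", "peer_2018", "lj_1993", "lj_2018"]].corr().round(2))

# The specific comparisons in the text:
print(df["peer_1993"].corr(df["lj_2018"]))  # reported above as ~.91
print(df["lj_1993"].corr(df["lj_2018"]))    # lower, per the post
print(df["peer_1993"].corr(df["lj_1993"]))  # reported above as ~.88
```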
Overall, the peer ranking shows evidence of extreme stability (one might say, "rigidity") in the US News hierarchy. The peer rankings change relatively slowly, but they do appear to lead the lawyer/judge rankings. It is also possible that neither "leads" the other but the lawyer/judge rankings have simply become less noisy over time.
One interesting wrinkle in explaining the divergence between the peer rankings and the "revealed preference" rankings is that the revealed-preference approach actually appears to predict future peer rankings to some extent. In a multiple regression with the 2018 peer ranking as the dependent variable and the 1993 peer ranking, the 1993 lawyer/judge ranking, and the 1993 LSAT ranking as the independent variables, the 1993 LSAT ranking is a strongly statistically significant predictor of the 2018 peer ranking. This is notable considering that the regression holds the 1993 peer ranking constant. The extra predictive value is modest, but given that the 1993 and 2018 peer rankings correlate at .93, that is to be expected. In contrast, the 1993 lawyers/judges ranking does not predict the 2018 peer ranking when the other factors are held constant.
Similarly, the 1993 LSAT ranking predicts the 2018 lawyers/judges ranking, even when the 1993 peer ranking and the 1993 lawyers/judges ranking are held constant. Again, the increase in predictive power from using LSAT is modest, but it suggests the possibility that the LSATs of the past feed into the lawyers/judges rankings of the future, just as they may feed into the peer rankings of the future.
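Both of these regressions follow the same template, so here is one hedged sketch covering the two. The column names are the same hypothetical ones used above, and the models are ordinary least squares; the post does not say which estimator was actually used, so treat this as an illustration of the setup rather than a replication:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: peer_1993, peer_2018, lj_1993, lj_2018, lsat_1993.
df = pd.read_csv("ranks.csv")

# Regression 1 (previous paragraph): does the 1993 LSAT ranking predict the
# 2018 peer ranking, holding the 1993 peer and lawyers/judges ranks constant?
m1 = smf.ols("peer_2018 ~ peer_1993 + lj_1993 + lsat_1993", data=df).fit()
print(m1.summary())  # check the lsat_1993 coefficient and its p-value

# Regression 2 (this paragraph): same predictors, with the 2018
# lawyers/judges ranking as the dependent variable.
m2 = smf.ols("lj_2018 ~ peer_1993 + lj_1993 + lsat_1993", data=df).fit()
print(m2.summary())
```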
Thus, it may be that law students' choices are a "leading" indicator and the peer rankings are a "lagging" indicator of law school overall rankings. Still, the peer rankings appear to lead the lawyer/judge rankings, or at least not lag them. Although there are other potential interpretations of these relationships, identification of leading and lagging predictors could be significant. If this is true, then the revealed preference approach to law school rankings may predict the future US News overall rankings. Whether this suggests that increasing LSATs will cause increases in the peer and lawyers/judges rankings (with lags) is another question, but one whose answer could pay off for rankings-focused law schools.