Yesterday US News announced that it is evaluating the possibility of producing a ranking of scholarly impact for law schools. The ranking would be based on bibliometric measures such as median and mean citations and number of publications over the previous five-year period. The citations would be gathered from HeinOnline, an online database of legal periodicals. According to the announcement, the ranking would not, at least initially, enter into the overall law school ranking scheme, but would be published separately sometime in 2019.
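To make the proposed measures concrete, here is a minimal sketch of how per-school bibliometrics of this kind might be computed. All names and numbers are hypothetical; the actual HeinOnline inputs and formulas have not been published.

```python
from statistics import mean, median

# Hypothetical five-year citation counts per faculty member at one school.
faculty_citations = {"Prof. A": 120, "Prof. B": 45, "Prof. C": 310, "Prof. D": 15}
# Hypothetical publication counts over the same window.
faculty_publications = {"Prof. A": 8, "Prof. B": 3, "Prof. C": 12, "Prof. D": 2}

counts = list(faculty_citations.values())
print("median citations:", median(counts))  # robust to a single superstar
print("mean citations:", mean(counts))      # pulled up by highly cited faculty
print("total publications:", sum(faculty_publications.values()))
```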
The news was met with lively debate, as all things US News are. There are several lines of criticism. One group worries that releasing such a ranking will create incentives for law schools to focus too much on scholarship as opposed to teaching or cutting tuition. I don't have much to say to this group in this post, because that's a broader debate. Another group worries not about placing greater priority on scholarship, but about the idea of quantifying scholarly impact at all. Although there are real problems with quantifying scholarly impact, that too is a different debate. Scholarly impact or reputation is already quantified; it's just quantified through a somewhat haphazard survey distributed by US News. For those who object to the quantification of scholarly impact, I suggest skipping to the "Consider the Alternative" section toward the end of this post.
In this post, I want to focus on the line of criticism from a third group: those who value scholarship and accept, at least in principle, the idea of quantifying scholarly impact, but worry about the narrow focus on law review citations. One of the most common critiques that surfaced on the day of the announcement is that the proposed methodology will undervalue interdisciplinary work, which often appears outside of law reviews and is cited by publications other than law reviews. The concern is that if US News undervalues interdisciplinary work, then, given the sway US News holds, the ranking may reduce the incentives to produce such work.
As an interdisciplinary scholar myself, this critique concerns me. I do almost no "traditional" or "doctrinal" work in my scholarship, although I have done some. However, I think there are several reasons why the concerns might be overblown, even for someone like me. In addition, the proposed methodology actually has some advantages over one that includes interdisciplinary materials more broadly. Let me explain.
How Law Review Citations Would Affect Rankings
To examine how the proposed measures would affect schools doing interdisciplinary work, I start by comparing how the change would affect the rankings of schools with different intensities of interdisciplinary focus. A reasonable (although obviously imperfect) proxy for a law school's intensity of interdisciplinary work is the percentage of faculty with PhDs. The most recent reliable data I could find was from a 2016 article in the Journal of Legal Education, The Ph.D. Rises in American Law Schools, 1960-2011: What Does It Mean for Legal Education, by Justin McCrary, Joy Milligan, and James Phillips.
The plot below, taken from the article, shows the percentage of PhD faculty on the vertical axis and the 2011 US News rank on the horizontal axis. I want to show how using a law review citation measure like the Hein citation metric would affect various schools' rankings, and how that varies with the interdisciplinarity of the school. We don't have the Hein numbers, but I have used the Sisk-Leiter rankings, which are also based solely on citations from law reviews (gathered from Westlaw). The two measures are very closely correlated, with one important difference (addressed at the end of this post).
I have superimposed on the plot arrows showing the movement in each school's ranking if the Sisk scholarly impact rankings were used instead of US News. The Sisk rankings used were those from 2012, the closest date to the date of the PhD data. Note that the movement indicated takes the school from its US News ranking all the way to its Sisk ranking; the actual movement would be much smaller, because the new scholarly impact measure would not count for 100% of the overall ranking. The key is the direction of movement, not the magnitude.
Schools above the diagonal line are those with high interdisciplinarity relative to their US News rank; those below the line have lower interdisciplinarity. The plot shows that some schools move significantly one way or the other, although quite a few don't move at all. But the point is that the movement isn't decidedly more negative above the line than below it. This means the Sisk measure doesn't appear to penalize schools with many interdisciplinary scholars compared to schools with fewer. If anything, judging each school's interdisciplinarity against the "expected" interdisciplinarity for its US News rank suggests that the Sisk measures penalize the less interdisciplinary schools. This is the opposite of the worry about discouraging interdisciplinarity.
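For readers who want to replicate this kind of comparison, here is a rough sketch of the computation. The real analysis used the 2012 Sisk rankings and the McCrary, Milligan, and Phillips PhD data; the schools and numbers below are invented.

```python
import statistics

# (school, us_news_rank, sisk_rank, pct_phd_faculty) -- all values hypothetical
schools = [
    ("School 1", 10, 5, 40.0),
    ("School 2", 20, 30, 15.0),
    ("School 3", 35, 25, 30.0),
    ("School 4", 50, 60, 10.0),
]

# Fit a least-squares line: "expected" %PhD as a function of US News rank.
ranks = [s[1] for s in schools]
phds = [s[3] for s in schools]
n = len(schools)
slope = (n * sum(r * p for r, p in zip(ranks, phds)) - sum(ranks) * sum(phds)) / (
    n * sum(r * r for r in ranks) - sum(ranks) ** 2
)
intercept = (sum(phds) - slope * sum(ranks)) / n

above, below = [], []
for name, usn, sisk, phd in schools:
    movement = usn - sisk  # positive = school ranks higher under Sisk
    expected = intercept + slope * usn
    (above if phd > expected else below).append(movement)

print("mean movement, more interdisciplinary than expected:", statistics.mean(above))
print("mean movement, less interdisciplinary than expected:", statistics.mean(below))
```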
Consider the Alternative
The proposed US News measures are not perfect. Let's get that out of the way. A citation-based input focusing solely on law reviews could be improved with careful calibration. Indeed, the proposed approach may be more flawed than one using Google Scholar (although there are arguments to the contrary; see the next section).
But in the face of an imperfect measure, I think it's most important to compare the measure to the current alternative: the status quo. Currently, the only measure of scholarly impact is the peer reputation score, which is based on a survey circulated by US News.
In addition to the obvious problems of such a survey, namely that it is based only on the subjective and somewhat strategic evaluations of a narrow slice of professors (deans, hiring committee chairs, and, most recently, tenured professors), it has an extreme problem with stickiness. As I explained in a previous post, the correlation between the 1993 score and the 2018 score is .93, nearly perfect. It is almost impossible for a law school to significantly change its peer reputation score through scholarly production alone. The main way schools have broken away from their 1993 peer scores is by changing their names and/or affiliating with a better-known university. That can be seen in some of the outliers in the plot below.
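The stickiness claim is straightforward to check given two years of scores. A minimal sketch, using invented scores; the .93 figure in this post comes from the actual 1993 and 2018 US News data.

```python
import statistics

# Hypothetical peer reputation scores for six schools in two survey years.
scores_1993 = [4.8, 4.5, 3.9, 3.2, 2.7, 2.1]
scores_2018 = [4.9, 4.4, 4.0, 3.1, 2.8, 2.2]

r = statistics.correlation(scores_1993, scores_2018)  # Pearson's r, Python 3.10+
print(f"correlation: {r:.2f}")  # near 1.0 means reputations are essentially frozen
```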
As a result, a strong argument could be made that the incentives to produce scholarship are actually too low in the current regime. Allowing part of the ranking to depend on recent, objective scholarly measures would increase the incentives.
Law Review Citations Actually Have Some Advantages
Finally, although the disadvantages of looking only at law review citations are clear, there are actually a few advantages to using law review citations over citations from a broader source, such as Google Scholar. These are preliminary ideas, and some may have strong rebuttals. On the whole I think a broader measure would be preferable, but it's not unambiguously preferable without careful implementation.
First, this is a ranking of law schools. To the extent that a law professor is highly cited in areas unrelated to law, and the professor's work doesn't translate into substantial citations in legal scholarship, those citations don't directly affect the quality of the law school. Take an extreme example of a theoretical physicist on the faculty at a law school. Suppose that professor has thousands of citations from a CERN paper with 5,000 authors. Those citations shouldn't enter into a law school ranking in a way that would swamp all legal scholarship by all law professors (including the Sunsteins of the law school world). That's an extreme example, but it exists in law schools on a smaller scale.
Second, to the extent that interdisciplinary work has an impact in law, it will be cited in law reviews and therefore captured in the ranking. Some of the papers most often cited in law reviews were published in economics or finance journals (Jensen and Meckling, Coase). The key here is ensuring that Hein and US News count citations TO interdisciplinary work FROM law reviews, not just citations TO law reviews FROM law reviews, as it appears they might. That would be too narrow. Sisk currently captures these interdisciplinary citations FROM law reviews, and it is important for Hein to do the same. The same applies to books.
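The difference between the two counting rules is easy to state in code. A toy sketch, with hypothetical citation edges of the form (citing venue, cited work, cited venue):

```python
citations = [
    ("Yale L.J.", "Jensen & Meckling 1976", "J. Fin. Econ."),
    ("Harv. L. Rev.", "Coase 1960", "J.L. & Econ."),
    ("Stan. L. Rev.", "Some Article", "Stan. L. Rev."),
    ("Econometrica", "Some Article", "Stan. L. Rev."),
]
law_reviews = {"Yale L.J.", "Harv. L. Rev.", "Stan. L. Rev."}

# Too narrow: both the citing AND cited venues must be law reviews,
# so Jensen & Meckling and Coase are missed entirely.
narrow = [c for c in citations if c[0] in law_reviews and c[2] in law_reviews]
# The better rule: only the CITING side must be a law review.
broad = [c for c in citations if c[0] in law_reviews]

print(len(narrow), "citations counted under the narrow rule")
print(len(broad), "citations counted under the from-law-reviews rule")
```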
Third, citations in many other disciplines are simply not on the same scale as those in law reviews, and they have the potential to "blow away" all the law review citations. Although law reviews are notorious for excessive footnotes, legal scholars actually garner far fewer citations than scholars in many other disciplines. At many universities, scholars in other disciplines have citation counts nearly an order of magnitude larger than those of the top legal scholars. Simply using all citations would potentially overvalue scholarship in fields with different citation practices.
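One standard remedy, if a broader citation source were used, is field normalization: divide each scholar's count by a baseline for their field before aggregating. A sketch with invented numbers chosen to reflect the order-of-magnitude gap described above:

```python
# Hypothetical mean five-year citation counts by field.
field_means = {"law": 200, "physics": 2000}

# (scholar, field, raw citation count) -- all hypothetical
scholars = [
    ("legal scholar", "law", 400),
    ("physicist", "physics", 4000),
]

for name, field, cites in scholars:
    normalized = cites / field_means[field]
    print(f"{name}: raw={cites}, field-normalized={normalized:.1f}")
# Both come out at 2.0x their field's mean, so neither swamps the other.
```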
Finally, there is a bit more quality control on citing references in the HeinOnline database compared to Google Scholar. Although some might laugh at the idea of quality control on the law review world, dig into some of the Google Scholar citations and you will see what I mean. Google Scholar includes citations from many materials that could not reasonably be called scholarship. This is more prevalent for certain types of materials than others.
Summary
So, if I were creating a scholarly impact measure, would I do it the way US News proposes? No. I would use something that takes into account a broader set of disciplinary materials and controls for where citations come from, such as an Eigenfactor-type approach. But something like that is not likely to ever enter into the US News formula. Thus, we must ask whether the proposed measure of scholarly impact is better than the status quo. Does it improve the information currently contained in the US News rankings? There I believe the answer is yes, because much of what is currently contained in the rankings is immovable peer reputation based on the distant past. I do hope, however, that US News will listen to these critiques and refine the measure to better quantify scholarly impact without distorting the market.
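For the curious, the idea behind an Eigenfactor-type approach is that a citation from a heavily cited venue counts for more than one from an obscure venue. Here is a toy power-iteration sketch on a made-up journal-to-journal citation matrix; the actual Eigenfactor method adds refinements (excluding self-citations, damping, and more) that are omitted here.

```python
journals = ["Law Rev A", "Law Rev B", "Econ J"]
# matrix[i][j] = citations from journal i to journal j (hypothetical counts)
matrix = [
    [0, 30, 10],
    [50, 0, 20],
    [5, 5, 0],
]

n = len(journals)
# Row-normalize: each citing journal distributes one unit of influence
# across the journals it cites.
out_totals = [sum(row) for row in matrix]
P = [[matrix[i][j] / out_totals[i] for j in range(n)] for i in range(n)]

# Power iteration: a journal is influential if influential journals cite it.
w = [1.0 / n] * n
for _ in range(100):
    w = [sum(P[i][j] * w[i] for i in range(n)) for j in range(n)]

for name, weight in zip(journals, w):
    print(f"{name}: influence weight {weight:.3f}")
```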