Brian Leiter has a new post up ranking law schools' scholarly impact based on the median performance of their faculties on the Sisk measures of scholarly influence. The Sisk ranking itself uses a formula that gives the mean faculty member's citations twice the weight of the median faculty member's citations. Re-ranking based on the median faculty member produces some interesting changes, even at the top. For example, Cornell moves up from #13 (based on the Sisk weighting) to #8 (based on median citations). The mean-heavy Sisk approach favors schools with "star" professors who have very high citation counts, whereas the median approach favors schools where citation counts are high across most of the faculty, even if none of them are "stars".
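To make the contrast concrete, here is a minimal sketch of the two approaches, assuming the weighting is simply two parts mean to one part median (the actual Sisk formula may scale or normalize differently); the two faculties and their citation counts are invented purely for illustration.

```python
# Toy illustration (not actual Sisk data): how a mean-heavy weighting rewards a
# "star"-driven faculty while a plain median ranking rewards depth across the faculty.
from statistics import mean, median

faculties = {
    # hypothetical per-professor citation counts
    "Star School":  [1500, 100, 80, 60, 50, 40, 30, 20, 10, 10],
    "Depth School": [160, 150, 150, 140, 140, 130, 130, 120, 120, 110],
}

for name, cites in faculties.items():
    m, med = mean(cites), median(cites)
    weighted = 2 * m + med  # assumed Sisk-style weighting: mean counted twice, median once
    print(f"{name}: mean={m:.0f}, median={med:.0f}, weighted={weighted:.0f}")
```

With these made-up numbers, the star-driven faculty comes out ahead under the mean-heavy weighting (425 vs. 405) but far behind on the median (45 vs. 135), which is exactly the sort of reversal that re-ranking by the median produces.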
I wonder whether the mean-median debate should be shifted to the question of denominators instead. Although looking at the mean citations (total number of citations divided by total number of faculty) is intuitive, does it really make sense? Is a faculty's scholarly impact increased when a professor who doesn't publish retires? The mean and median approaches both suggest that such a retirement does increase the scholarly impact of the faculty. Of course, arguments could be made to support this, such as that having a greater percentage of active scholars fosters a scholarly culture within the law school, that the retirement frees up budget that could be used for scholarship, etc. These arguments are rather indirect, however, and not obviously persuasive.
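To see why, consider a minimal sketch with invented numbers: dropping a zero-citation faculty member leaves the faculty's total citations unchanged but raises both the mean and the median.

```python
# Invented numbers: a ten-person faculty, one of whom has zero citations.
from statistics import mean, median

before = [0, 40, 50, 60, 70, 80, 90, 100, 120, 150]  # includes the non-publishing professor
after = [c for c in before if c != 0]                 # same faculty after that professor retires

print(sum(before), round(mean(before), 1), median(before))  # 760 76.0 75.0
print(sum(after), round(mean(after), 1), median(after))     # 760 84.4 80
```

Nothing about the faculty's scholarly output has changed, yet both summary statistics go up.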
An argument could be made that the total number of citations is the relevant measure (i.e., with no denominator at all), even though that measure would favor larger schools. A large school with 100 faculty members cited 100 times each has "more going on" in a sense than a law school with 10 faculty members cited 100 times each, even though both the median and mean approaches would treat the two schools the same. On the other hand, dropping the denominator in this fashion would treat the first of those schools the same as a school with 10,000 faculty members cited one time each, which I think we can reject as not a terribly attractive scholarly environment.
Another possibility is that we should use the number of JD students as the denominator. To the extent the point of a law school is to expose students to scholarly ideas, perhaps we should measure scholarly influence by the amount of influence (citations) per student. That approach assumes, however, that students are the audience for scholarly work, which isn't necessarily the case.
Thus, the question of denominators in scholarly influence studies has some perplexing methodological details that haven't really received adequate attention. Dare I say that the measures are under-theorized and that there is a gap in the literature?
Of course, all of this is qualified by the objection, raised by some, that we shouldn't try to quantify scholarly influence in this way (or perhaps in any way). Although I don't subscribe to that view, many influential scholars do, and it's worthy of debate. Another criticism, more persuasive in my view, is that raw counts of citations (whatever the denominator) introduce unnecessary noise into the rankings. Not all citations are equally informative, and ideally citation rankings would use graph-theoretic concepts such as eigenvector centrality or PageRank to calibrate the rankings more precisely. I hope to have more thoughts about these concepts in future posts.
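In the meantime, for a flavor of what a graph-based measure looks like, here is a minimal sketch using networkx's PageRank on a made-up citation graph; the article names and citation edges are invented, and a real ranking would need the full citation network plus some way of aggregating article-level scores up to faculties.

```python
# Toy citation graph: an edge A -> B means article A cites article B.
# PageRank rewards being cited by articles that are themselves well cited,
# rather than treating every citation as worth the same.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("casenote_1", "landmark_article"),
    ("casenote_2", "landmark_article"),
    ("survey_article", "landmark_article"),
    ("landmark_article", "foundational_piece"),
    ("survey_article", "foundational_piece"),
    ("casenote_3", "obscure_piece"),
])

scores = nx.pagerank(G, alpha=0.85)
for article, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{article}: {score:.3f}")
```

The design point is that an article's score depends on how influential its citers are, not merely on how many of them there are, which is precisely the distinction that raw counts cannot capture.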