Monday, February 16, 2009

PGR Minutiae II

Thanks to everyone who read and/or commented on the earlier PGR Minutiae post. I wanted to correct a couple of defects and incorporate some issues that came up in comments.

1. I presented the points in the order that seemed to flow the best, not in order of importance. In descending order of importance, they would go (2), (1), (4), (3).

2. I didn’t really “sum up” or anything, but it seems to me that the four criticisms have a significant net effect, even if none of them seems that serious on its own. The evaluative scale the rankings are based on is probably not well suited to taking averages (not that the resulting numbers are nonsense, but they don't mean exactly what we think they mean); there is no way to tell whether a given difference in mean score is significant; and the ordinal scale that is the final product greatly exaggerates those differences.
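To make the middle worry concrete, here is a toy calculation (the scores are invented; nothing here comes from the actual survey). If the difference between two departments' mean scores is small relative to its standard error, then ranking one above the other reports noise as though it were a fact about reputation:

```python
import statistics

# Hypothetical evaluator scores on the PGR's 0-5 scale; invented
# for illustration, not actual survey data.
dept_x = [4, 4, 3, 5, 4, 3, 4, 5, 3, 4]
dept_y = [4, 3, 4, 4, 3, 4, 5, 3, 4, 3]

mean_x, mean_y = statistics.mean(dept_x), statistics.mean(dept_y)
# standard error of the difference in means (Welch-style)
se = (statistics.variance(dept_x) / len(dept_x)
      + statistics.variance(dept_y) / len(dept_y)) ** 0.5
print(f"difference in means: {mean_x - mean_y:.2f}")   # 0.20
print(f"standard error of the difference: {se:.2f}")   # 0.32
```

Here the gap between the two departments (0.20) is smaller than its own standard error (0.32), yet an ordinal ranking would simply place one above the other.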

3. I am inclined to endorse Zach Ernst’s point about Leiter’s sampling techniques. The issue is whether the sample is representative. In comments, someone asks why you’d want the sample to be representative. The answer is that the survey is trying to capture how the philosophical community sees the departments (it’s a survey about reputations), and if the sample doesn’t represent the community, the results won’t represent the community’s views. The best way to ensure representativeness is to collect the sample randomly. That is probably not feasible, which is why Leiter uses the snowball technique instead. But it does not seem obvious to me that the resulting group of respondents is an accurate cross-section of the discipline: most of the respondents come from and teach in highly ranked departments, but not all “research-active” philosophers teach in ranked departments (some teach in unranked departments; some teach at SLACs), and not all of them graduated from top departments (some graduated from medium- or low-ranked departments; some graduated from unranked departments). If the advisory board is just going to invite people to participate, an effort should be made to invite philosophers from a wide variety of teaching and graduate-school backgrounds. The sample as currently collected appears to embody a judgment about what kind of philosopher will or will not have worthwhile opinions.
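For what it's worth, here is a schematic contrast between the two sampling methods. The roster and the nomination function are hypothetical stand-ins, not anything the PGR actually uses:

```python
import random

def random_sample(pool, k):
    # every philosopher in the pool has the same chance of selection,
    # so the sample is representative of the pool in expectation
    return random.sample(pool, k)

def snowball_sample(seeds, nominate, k):
    # grow the sample by invitation from a seed group; the result
    # inherits whatever biases the seeds and their networks have
    sample, frontier = list(seeds), list(seeds)
    while frontier and len(sample) < k:
        person = frontier.pop()
        for peer in nominate(person):
            if peer not in sample:
                sample.append(peer)
                frontier.append(peer)
    return sample[:k]
```

If the seed group is drawn mostly from highly ranked departments, the snowball sample will be too, which is exactly the worry above.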

--Mr Zero


Anonymous said...

I agree completely. Only once the PGR moves to a playoff system can we fairly and reasonably determine department rankings.

Anonymous said...

I think the pool of POSSIBLE respondents should be expanded to include every philosophy professor in any philosophy department in the Anglophone world. Names should be selected at random from that pool until enough people are found to take the survey.

Putting aside practical difficulties (would it REALLY be that difficult to do it this way?), why NOT do it this way, or a relevantly similar way?

A related question: how, if at all, do y'all think it would affect the results?

Soon-to-be Jaded Dissertator said...

Anon. 5:41,

I disagree with you. I think what we need is an inscrutable algorithm, programmed by outside sources, that is run on 50 Cray Supercomputers to determine who is the top philosophy department. Only once we remove the human element will we be able to have a true champion.

Anonymous said...

You are all wrong. We simply need an old-school, 152-game season where the department that wins the most games at the end of the season gets the pennant.

Then there could be a 7-game series between the best continental and the best analytic department. Home field advantage would be determined by ... (ok, someone else needs to finish that for me.)

Anonymous said...

This is not a rhetorical question: are the rankings set up so that they can be wrong? In other words, are they trying to track some ranking they think is independently true, or are the rankings purely descriptive? We say things like "X shouldn't be that high" or "Y isn't that bad in subfield Z." If the rankings are supposed merely to describe the reputations of the faculty, can we say these things? Imagine someone complaining about the results of a survey entitled "How People Surveyed Respond to Surveys." I have no background in stats or anything, so forgive me if my question is stupid.

Anonymous said...

It seems obvious to me that the PGR is biased against history and continental philosophy. Having glanced through the specialties of the respondents, it looks to me as if the majority are mainstream analytic philosophers of various stripes. Now, it seems to me that one way to fix this bias is to remove the rankings from the judgments of a small group of people. Instead, the rankings could be based on quantifiables like publication history, citation numbers, involvement with the APA (and maybe other groups), etc. Obviously this would have its own biases, but a school that scored highly in all areas would be a very active department whose faculty are publishing work that is being taken seriously (i.e., cited). A department that didn't score highly would be the reverse. Obviously not all faculty publish a lot, but if they published one reasonably influential article (i.e., one that is cited a lot), this would have a similar effect to publishing a lot of articles that weren't.

Maybe a system like this would be too hard to get together, but it seems to me that the end result would be much less questionable than the PGR as it now stands. Am I missing something?
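A toy version of this proposal might look like the sketch below; the departments and numbers are invented, and real inputs would come from a citation index:

```python
# Score departments by total citations per faculty member, so one
# heavily cited article counts as much as many lightly cited ones.
depts = {
    "Dept A": {"pubs_per_faculty": 10, "cites_per_pub": 3.0},
    "Dept B": {"pubs_per_faculty": 1,  "cites_per_pub": 30.0},
    "Dept C": {"pubs_per_faculty": 4,  "cites_per_pub": 2.0},
}

def score(d):
    return d["pubs_per_faculty"] * d["cites_per_pub"]

for name, data in sorted(depts.items(), key=lambda kv: score(kv[1]),
                         reverse=True):
    print(f"{name}: {score(data):.0f} citations per faculty member")
```

On this measure, a department with one heavily cited article per faculty member (Dept B) ties one with many modestly cited articles (Dept A), as the comment suggests it should.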

Anonymous said...

All this inevitably and frustratingly falls on deaf ears, because it's the best and the brightest and the well-pedigreed who do these surveys. The thing that sucks is not that the rankings aren't somewhat close to fair - I think the specialty rankings, at least, are. It's that the overall rankings loom so large in people's minds, and thus wield so much influence, when the specialty rankings are the only thing we should trust. And the overall rankings are deceiving: in basketball, #23 is a lot better than #43. That's just not true in philosophy. But it's hard to convince deans and grad students of that.

Anonymous said...

Notice how well rankings lend themselves to sports and pop-cultural analogies. When we apply rankings to philosophy programs, we inherit all of the odious baggage of pop culture: gossip, obsession with status, cults of personality, etc. Face it: Brian Leiter is the Perez Hilton of academic philosophy, and whether he styles himself this way or not, this is how a lot of people read him. He talks about TT hires as though they were first-round draft picks, and major faculty moves as though they were trades.
Is this on purpose, or is it a consequence of reducing an academic profession to Casey Kasem Think (tm)?

Anonymous said...

Anon 2:20,

There is no such thing as analytic philosophy - just "good" philosophy. Brian Leiter says so. And if you want to know what counts as good "continental" philosophy, just look at what he does. My guess is that the people who participate in the PGR accept, or are at least extremely sympathetic to, Leiter's metaphilosophy. Otherwise the specialty rankings for, e.g., "continental philosophy" would undoubtedly come out very different from how they end up year after year after year...

Mark said...

The sample bias of the PGR is plainly visible. What you're getting in the rankings is the collective judgment of *those people*. If you are interested in the reputation of departments among other people, then find some other source. If, for some weird reason, you want to know the average reputation of departments among all APA members or something like that (and for the life of me I can't see why that would be particularly valuable), then maybe the NRC report could help you.

I honestly don't see what all the fuss is about. The PGR is what it is, as sports guys say, and it doesn't pretend to be anything else. And if you want something else, find it or create it.

Mr. Zero said...

Anon 2:20,

The Chronicle of Higher Ed publishes a report like the one you describe. There are controversies surrounding this report, too, but its rankings are a function of publications and citations. Which function? I don't know. This methodology results in some curious differences from the PGR (UNC is #1; Purdue is in the top 10), but some similarities, too (Rutgers is in the top 10). I guess it only ranks the top 10 or something.

Anonymous said...


That is one mind-boggling table. Take a look at the column "Percentage of faculty with a journal publication." Apparently 41% of the Rutgers philosophy department has managed to achieve that elusive goal. Impressive! (Okay, it only claims to count 2004-6 articles. Still.)

Anonymous said...

As an oldster, I still cannot figure out how Brian - as a graduate student, no less - constructed himself into such an expert on 'good philosophy departments,' much less 'good philosophy.'
Of course, his [early] efforts did happen to coincide with the agendae and views of certain 'analytic' philosophers and graduate departments during the heyday of the 'analytic versus pluralist' wars. So perhaps we should view the Leiter Reports as a kind of self-fulfilling, collectively self-aggrandizing prophecy.

What is most sad about all this is that most jobs are not in 'leiteriffic' programs and never will be. So this is all a terrible scam on the many graduate philosophy students who are told they are only worth a career if they (a) do their graduate work in a 'leiteriffic' program, and (b) find a career in such a program.
How cruel and self-serving of the Leiter crowd of graduate faculty.


Anonymous said...

Dear oldster docs,

There is no such word as 'agendae'. 'Agenda' doesn't need a Latin plural, because it is already plural. Here's a piece of free advice: it is much less pretentious and embarrassing to use an ordinary English plural when a fancy Latin one is available than it is to use a fake Latin one.

Anonymous said...

Can I just add one more thing about the validity of the PGR survey? The question asked of the evaluators is itself terrible.

Here it is: "Please give your opinion of the attractiveness of the faculty for a prospective student, taking into account (and weighted as you deem appropriate) the quality of philosophical work and talent on the faculty, the range of areas the faculty covers, and the availability of the faculty over the next few years."

Anyone who has done any kind of survey work knows that you cannot crowd more than one variable into a question and get meaningful results. These are called "double-barreled" questions, and this one is more like quadruple-barreled. Come on, people. There are survey experts; why don't we get one to design a survey that might actually yield useful data?

Any decent self-respecting psychologist or sociologist would laugh their asses off at this survey.
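For what it's worth, the standard fix is to unbundle the question into single-variable items and have evaluators rate each one separately. A sketch, where the item wordings and the equal weighting are assumptions, not anything the PGR proposes:

```python
# Unbundle the quadruple-barreled question into single-variable items.
items = [
    "Quality of the faculty's philosophical work",
    "Philosophical talent on the faculty",
    "Range of areas the faculty covers",
    "Likely availability of the faculty over the next few years",
]

def composite(ratings):
    """Combine one 0-5 rating per item into a single score."""
    assert len(ratings) == len(items)
    return sum(ratings) / len(ratings)

print(composite([4, 5, 2, 3]))  # strong faculty, narrow coverage -> 3.5
```

Separate items would also let the report publish the components individually instead of burying them in one number.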

colin said...

More PGR minutiae: 2009 edition.

I think a similar point was made in the last post, but so what?

The difference between the lowest-ranked schools (at 48) and the 26th-ranked school is .8, which is the same as the difference between NYU (1st) and U of M (5th). Similarly, the .3-point difference between schools 1, 2, and 3 is the same as the difference between the schools at 48, 41, and 34. Starting from the top, successive drops of .3 take you from rank 1 to 2, to 3, to 6, to 20. It would seem that the PGR describes a parabola in which most schools are clustered in the middle, with a few outliers at the top.

I'm not entirely sure what all this means, but I find it interesting.
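To put colin's point mechanically: with a made-up list of mean scores shaped the way he describes (not the actual 2009 numbers), a fixed .3 drop in score spans one rank near the top and six in the crowded middle:

```python
# Invented mean scores, sorted high to low: sparse at the top,
# tightly clustered in the middle.
means = [4.8, 4.5, 4.2, 4.0, 3.9, 3.5, 3.4, 3.35, 3.3, 3.25,
         3.2, 3.15, 3.1, 3.05, 3.0, 2.95, 2.9, 2.85, 2.8, 2.0]

def ranks_spanned(start_rank, gap=0.3):
    """Count how many places a drop of `gap` covers from start_rank."""
    target = means[start_rank - 1] - gap
    end_rank = sum(1 for m in means if m >= target)
    return end_rank - start_rank

print(ranks_spanned(1))  # 1: a .3 drop from the top passes one school
print(ranks_spanned(8))  # 6: the same drop mid-table passes six
```

The same score difference means very different things at different points in the distribution, which is the sense in which the ordinal list is misleading.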