Saturday, September 27, 2014

Answering (?) 'The PGR Challenge'

Spiros has a post up issuing a challenge to critics of the PGR (this post from the editor of the PGR seems to be in the same spirit). I dug around and, below, I link to some objections that may or may not answer the challenge.

Spiros's challenge is to find objections that do not fall into four broad categories:
1. Objections based on a mistaken characterization of what the PGR is (its methodology, how it is produced, what it aspires to track, etc.). (E.g., "The PGR is just a small group of Brian Leiter's friends desperately trying to uphold analytic orthodoxy in the profession" -- actual quotation, by the way.) 
2. Objections, also based on a mistaken characterization of what the PGR is (and its objectives), that claim that the PGR fails to satisfy its own objectives. (E.g., "The PGR, being just a small group of Leiter's friends, can't possibly be an objective measure of actual faculty quality" -- actual quotation.)
3. Objections to the effect that the PGR is harmful because it is too easily misunderstood/misused by faculty, students, and administrators.
4. Objections to the very idea of surveys / rankings / reports of the kind that the PGR is.*

After an hour or two of looking, I dug up the links below. Note that I intend these links to serve only as a response to Spiros's so-called "PGR Challenge."**

Richard Heck's original criticism, courtesy of the Wayback Machine (via Heck's current website).

Zachary Ernst's 2009 critique, "Our Naked Emperor."

The Smoker's own Mr. Zero's "PGR Minutiae" and "Bride of PGR Minutiae."

Some entries at Choice and Inference on the PGR's "sampling problem" and the "educational imbalance within the PGR evaluator pool." There are also many other posts linked within these on the Choice and Inference blog.

Jennifer Saul has a post at Feminist Philosophers linking to her paper "Ranking Exercises in Philosophy and Implicit Bias," which appeared in the Journal of Social Philosophy 43:3 (2012).

Alan Richardson has a brief discussion of the PGR in his "Occasions for an Empirical History of Philosophy of Science: American Philosophers of Science at Work in the 1950s and 1960s," HOPOS 2:1 (2012), 1-20. (I highlighted the discussion with screencaps on Twitter [the last three or four tweets]; the editor of the PGR calls this strain of criticism a serious objection in the second link at the top of the post.)

I welcome any further links or examinations of the objections in the comments below.

-- Jaded, Ph.D.

*Spiros calls objections 1 and 2 obvious failures; says 3 is an indictment not of the PGR but of the reading comprehension skills of the various parties (any such consequentialist argument, he states in the comments, fails because it doesn't consider that the positives, e.g., more information for grads, might outweigh the negatives, e.g., (my favorite) conservatism); and holds that 4 fails since we all "walk around with some such reputational ranking of various programs."

**I leave it up to readers to determine whether they fall into the above four categories, whether they are successful objections, etc. (I should note that I'm partial to the conservatism worry, as I mention at Spiros's original post.)


Jon Cogburn said...

Leiter exercises huge control over the makeup of the advisory committees and whether a school even gets ranked. This puts his thumb rather heavily on the scale.

On the former, consider the under-representation of contemporary French philosophy on Leiter's continental committee. On the latter, departments he views as "shit departments" of course end up not registering at all.

Please also read Heck's recent blog post.

Look, we don't need a "reputational survey" using anything like Leiter's methodology. I hate to say this, because Chalmers and many of the other reviewers are behaving so admirably now. But we should follow the sciences in how we rate departments, and if we did, the result would be far less dysfunctional, fairer, and more helpful to graduate students.

For each area of philosophy, we can use citation metrics to determine the best programs. (You have to do this by area because articles in cognitive science, and in business and environmental ethics, tend to cite whole bunches of literature, while core analytic articles are not nearly so completist.) The sciences already do this.
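[A minimal sketch of how such area-normalized citation scoring could work. Every program, area, and number below is invented, and this is just one possible way of operationalizing the idea, not any existing system:]

```python
# Hypothetical sketch: score programs within each area using
# area-normalized citation counts, so that high-citation fields
# (e.g., cognitive science) don't swamp lower-citation ones.
from collections import defaultdict
from statistics import mean

# (program, area, citations) -- entirely made-up numbers
faculty = [
    ("Univ A", "cognitive science", 900),
    ("Univ A", "metaphysics", 120),
    ("Univ B", "cognitive science", 700),
    ("Univ B", "metaphysics", 200),
]

# Mean citation count per area, used as the normalizer
by_area = defaultdict(list)
for _, area, cites in faculty:
    by_area[area].append(cites)
area_means = {area: mean(counts) for area, counts in by_area.items()}

# A program's score in an area = mean of its faculty's citations
# divided by the area-wide mean, so 1.0 means "average for the field"
scores = defaultdict(list)
for program, area, cites in faculty:
    scores[(program, area)].append(cites / area_means[area])

for (program, area), vals in sorted(scores.items()):
    print(f"{program} / {area}: {mean(vals):.2f}")
```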

For the overall ranking, surely something in the neighborhood of Carolyn Dicey Jennings' project is much more useful for prospective graduate students.

Anonymous said...

Spiros's call was tendentious trolling. Of course, he simply meant that he has found none of the various objections persuasive. YMMV.

Anonymous said...

I wonder how many graduate students make decisions that have little to do with reputation or placement statistics. I would think a significant portion of PhD program applicants (possibly most?) are looking more at location ("near where I live") and atmosphere ("I like the people I met and the campus").

Anonymous said...

I think Spiros's challenge was in bad faith. When he was asked to respond to some of the attempts to meet the challenge, he conveniently failed to address some of the better arguments. He has also yet to return since the posting of Ernst's paper.

Anonymous said...

When I applied to grad school, I applied based on whom I wanted to study with. I got a great education, but now I realize that I should have considered rankings.

Anonymous said...

Yeah, when I applied I really had no idea which departments were good in which areas -- the slightly embarrassing truth is that I had not read enough recently published work even to be familiar with names, except for a few. I am fairly happy with how things worked out, but I definitely wish I had known about the PGR.

Anonymous said...

The Gourmet is supposed to be first and foremost a tool to help grad students decide what programs to attend, and I just don't think it does that well. I think a survey could be extremely useful in that regard, but the Gourmet for the most part doesn't survey the right data. As a prospective grad student, your main questions ought to be:

1. Can I work on the stuff I'm interested in there?
2. If my interests evolve, how likely is it that I could work on something different there?
3. How supportive will they be?
4. Can I get a job when I finish?

The Gourmet kind of sort of answers questions 1 and 2, though maybe not as well as it could. After all, it's entirely possible that for various reasons you could have an easier time doing work on, say, early modern at a school that was only respectable in it than at one that was highly ranked. It claims to touch on 4, but as Jennings pointed out, it did a lousy job of doing so, since the correlation between Leiter ranking and placement, while real, was way weaker than everyone assumed. It doesn't even try to answer 3.

I think that a good survey should include not just placement rank but completion rates as well. Certainly any survey of undergrad programs would. A low completion rate means either that the department is doing a lousy job vetting potential students or doesn't do much to support its students once there (or, likely, both). Grad students certainly need to know that going in, and grad programs ought to be punished for failures on either of those scores.
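[One way to make the Jennings point concrete is to compute a rank correlation between reputational rank and placement rate. A minimal sketch with invented numbers follows; Jennings's actual data and methods may well differ:]

```python
# Hypothetical check: how strongly does a reputational rank track
# placement rate? All numbers below are invented for illustration.

def spearman(xs, ys):
    """Spearman rank correlation (assumes no tied values, for simplicity)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Made-up data: reputational rank (1 = best) and placement rate
rep_rank       = [1, 2, 3, 4, 5, 6, 7, 8]
placement_rate = [0.75, 0.50, 0.80, 0.65, 0.45, 0.70, 0.55, 0.60]

# Rank 1 is best, so rho near -1 would mean rank tracks placement
# almost perfectly; here it comes out around -0.3: real, but weak.
print(f"Spearman rho: {spearman(rep_rank, placement_rate):.2f}")
```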

Anonymous said...

[Cross-posted on DailyNous. Maybe there are good responses to these objections? I hope there are good responses to at least some...]

Golly, Mr. Ernst is fond of extreme and combative language. I certainly don’t think the PGR is obviously unsound and detrimental to the profession. But maybe I’m so stupid that I can’t see the obvious? In any case, I find the following things puzzling, given Ernst’s rhetoric:

1. Ernst begins by negatively comparing the PGR to the US News rankings when…the largest component of the US News rankings is a reputational survey! (Well, it’s tied for largest.)
2. Ernst claims that the PGR doesn’t measure anything meaningful, and whatever it measures it doesn’t measure well. But he also claims that the PGR measures the strengths and weaknesses of his own department accurately. Of course, that could just be a coincidence, but…
3. The “objective data” that Ernst thinks decisions should be based on is not completely objective, or not very useful: merely knowing that University X has five people working on metaphysics, and that those five people have published five papers each, really doesn’t give one much information about whether University X is a good place to study metaphysics. Were those publications any good? An undergrad would be hard-pressed to determine that. She might use a reputational survey of journals, but that will only work if reputational surveys work. She might look at the record of metaphysics students at University X, but that will be very misleading if the five metaphysicians have been hired in the last five years. What our undergrad wants to know is whether University X is a good place for her to go, now, to study metaphysics. Knowing University X’s specialty ranking in metaphysics is very useful in that regard. Note that even if some metaphysician at University X has gamed the specialty rankings by getting her metaphysician friends to vote up University X, that means that someone at University X has a lot of pull in the metaphysics community…the exact kind of person our student might want on her dissertation committee. Note that I’m not endorsing strategic voting; I’m just saying that anyone who could radically manipulate their university’s specialty rank by pulling strings with other members of that specialty can probably get their students jobs by pulling those same strings.
4. Of course, all that objective data could be sensibly combined with the PGR to get a better overall picture.
5. If the PGR isn’t tracking what it is supposed to track (if it is very unreliable), it should be relatively easy to point to examples where the PGR makes egregious errors. I don’t expect uncontroversial examples. But if the PGR is that bad, there should be a good number of plausible examples of where it has gone wrong. I’m not saying there aren’t any, I’m just saying that anyone who thinks the PGR is useless should produce some.
6. It was never made clear to me what the problem was with the idea that Leiter uses snowball sampling because he wants philosophers who are “in the know” to be the ones filling out the surveys. Of course, if Leiter is totally wrong about who is “in the know,” then this method will be a disaster. But if the PGR is a disaster, then, as I said above, it should be relatively easy to point to places where it makes egregious errors in the overall or specialty rankings.
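[For readers unfamiliar with the term, here is a minimal sketch of snowball sampling on an invented nomination graph. It illustrates the general technique only, not the PGR's actual procedure, and shows why the pool's makeup depends so heavily on the initial seeds, which is the crux of the "sampling problem" objection:]

```python
# Minimal sketch of snowball sampling: start from a seed set of
# evaluators and grow the pool by following nominations. The graph
# below is entirely made up.
nominations = {
    "seed1": ["a", "b"],
    "seed2": ["b", "c"],
    "a": ["d"],
    "b": ["a", "e"],
    "c": [],
    "d": [],
    "e": ["c"],
}

def snowball(seeds, waves):
    """Collect everyone reachable from the seeds within `waves` rounds."""
    pool = set(seeds)
    frontier = set(seeds)
    for _ in range(waves):
        # Each wave adds only people nominated by the current frontier,
        # so who ends up in the pool is fixed by the seeds' networks.
        frontier = {
            nominee
            for person in frontier
            for nominee in nominations.get(person, [])
        } - pool
        pool |= frontier
    return pool

print(sorted(snowball({"seed1", "seed2"}, waves=2)))
```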
6. It was never made clear to me what the problem was with the idea that Leiter uses snowball sampling because he wants philosophers that are “in the know” to be the ones filling out the surveys. Of course, if Leiter is totally wrong about who is “in the know” then this method will be a disaster. But if the PGR is a disaster, then, as I said above, it should be relatively easy to point to places where it makes egregious errors in the overall or specialty rankings.