First, by her own admission, the data is incomplete (indeed, woefully incomplete in some cases I know about).

Of course, she was up-front about the incompleteness, accompanied that admission with a request for additional data, and she has updated her analysis in light of the data that came in. It really seems fine to me to run a preliminary analysis on incomplete data, and then publicize it in the hopes of generating more data (and a discussion of your findings). Of course it would be pretty bad to publicize a preliminary analysis without mentioning that it was preliminary, but Jennings didn't do that.
Second, no one would expect a department's reputation in 2011 to have any correlation with its placement prior to 2011, but almost all the placements recorded by Prof. Jennings are from students who would have started graduate school between 2000 and 2005. I would think philosophers are smart enough to understand that past placement success is a backward-looking measure, and that current faculty reputation, as it correlates with job placement, is a forward-looking measure.

I'm not sure about this. I would expect a department's reputation in 2011 to correlate at least somewhat with its reputation prior to 2011, so I would expect a (potentially indirect) correlation between placement in 2011 and reputation prior to 2011. I'm not sure why it matters when the recently-placed students started grad school. If my department is trying to place me now, I'd think that its current reputation is more important than whatever its reputation was 10 years ago. (I suppose it would be interesting to see whether PGR rank at the time of enrollment correlates with job-market success upon graduation, but the suggestion that Jennings should be doing that study rather than the one she did is too strong.)
And I just don't get this "forward-looking/backward-looking" stuff. Correlations are not inherently directional. Obviously the past is the past, and if you're looking to the past you're looking backward. But people look to the past in the hope of learning about the future all the time. It doesn't always work, but it's not nonsense. It seems to me to make perfect sense to investigate whether current "reputation," as the PGR attempts to measure it, correlates with overall placement record.
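For what it's worth, the kind of correlation at issue here is easy to state concretely. Here is a minimal sketch, with entirely invented numbers (not the PGR's or Jennings' actual data), of checking whether a better PGR rank goes with a higher placement rate:

```python
# Hypothetical illustration only: the ranks and rates below are made up.
pgr_rank = [1, 2, 3, 4, 5, 6]
placement_rate = [0.85, 0.70, 0.78, 0.60, 0.55, 0.62]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strongly negative coefficient would mean lower (better) PGR rank
# tends to go with higher placement rate.
print(round(pearson(pgr_rank, placement_rate), 2))
```

On these made-up numbers the coefficient comes out strongly negative, which is the pattern the question is asking about; whether the real data show anything like it is exactly what an analysis such as Jennings' is for.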
But maybe I'm all wrong about this. It's not as though I know what I'm talking about. So if I'm wrong, I hope some Smokers will set me right.
Third, her measure of placement success takes no account of the kinds of jobs graduates secure. 2/2 is the same as 4/4, a research university is the same as a liberal arts college, a PhD-granting department is the same as a community college. I know philosophers happy in all kinds of positions, but it's not information, it's misinformation, to equate them all in purporting to measure job placement.

This criticism strikes me as patently unfair. First, the additional data that would be required to control for these factors would be prohibitively difficult to collect and manage. Second, controlling for job type suggests an unnecessary value judgment about which jobs are best. Of course people are free to make those judgments, but I'd rather not see them reflected in an analysis of the placement data---particularly not at this preliminary stage. If someone were to do a breakdown of the placement data by job type, similar to the PGR breakdown by specialties, that would be fine and even welcome. Knock yourself out. But the idea that not doing so is "misinformation" is, like, not true.
Fourth, the placement rate is calculated nonsensically: comparing average placement, as incompletely reported on blogs, between 2011-2014 to average yearly graduates between 2009-2013 is equivalent, in most cases, to comparing two randomly chosen numbers, since many (maybe most) of those placed in 2011-2014 will have completed their degrees well before 2009 and well after 2013. This is so obvious that I'm mystified why anyone would think this is a relevant comparison.

Again, I just don't see how this is nonsense. The average yearly graduate figure tells you the number of job-seekers per year each program has recently produced; the average yearly placement figure tells you how many job-seekers per year each program has recently placed in a tenure-track job. In effect, it's a comparison of the department's recent graduation rate with its recent placement rate, and I think it makes perfect sense to make that comparison. Taking averages over several years will smooth over outlier years and compensate for the fact that a candidate's hire year might not be her graduation year.
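To make the arithmetic concrete, here is the kind of comparison I take Jennings to be making, sketched with invented numbers (not her actual data): average yearly placements over one window divided by average yearly graduates over another.

```python
# Hypothetical department; all figures invented for illustration.
placements_2011_2014 = [3, 5, 2, 4]    # reported TT placements per year
graduates_2009_2013 = [6, 4, 5, 7, 3]  # PhDs granted per year

avg_placements = sum(placements_2011_2014) / len(placements_2011_2014)  # 3.5
avg_graduates = sum(graduates_2009_2013) / len(graduates_2009_2013)    # 5.0

# Recent placement rate: placements per year relative to graduates per year.
placement_rate = avg_placements / avg_graduates
print(round(placement_rate, 2))
```

Note how the averaging does the work: a single bumper year of graduates, or a candidate hired a year or two after defending, barely moves either average, which is why the mismatch between the two windows needn't make the comparison random.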
I see why someone might want to see a straight comparison of graduates to tenure-track hires per year, but---as Leiter points out---a person might get their first tenure-track job well before or well after graduation. I see why someone might want to see a metric that strictly follows individual graduates, but doing so will raise problems in data-collection (departments often don't publicize it when their graduates are unsuccessful on the job market), and in indexing placement records to times (since, again, one's graduation year is often not one's year of first TT hire). (Of course, I don't know which comparisons Leiter would find acceptable, or if he had anything in mind at all. He doesn't suggest a better way to do it, so I'm just guessing.)
So, anyways, the comparison doesn't strike me as nonsensical, but maybe that's just because I don't know what I'm talking about. If that's how it is, I hope the Smokers will set me right.
I also don't see how the reference to NYU's placement record is instructive. Leiter complains that although NYU has "one of the best placement records in the world," it ranks only 26th in CDJ's analysis (a ranking that was revised to 14th after new data came in), which Leiter thinks is mediocre. In support of this, Leiter links to NYU's placement page. But, for one thing, the placement page doesn't tell the whole story of NYU's placement record---it shows how many people they placed (and where) without showing how many people they tried to place. And knowing how many people they put on the market every year is crucial to evaluating their placement record. (Besides, 26th doesn't have to be mediocre; it could be excellent if there was a large but tight group near the front. Which is one reason I don't love ordinal rankings.) And anyways, Jennings' spreadsheet indicates that NYU's placement record isn't as stellar as Leiter claims---most of their graduates get nice tenure-track jobs, obviously, but a substantial minority do not. You don't need a "perverse ingenuity" to generate that result; you just need to compare the rate at which they produce graduates with the rate at which they place those graduates into tenure-track jobs.
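To see why rank 26 needn't mean mediocre, consider a toy example (all numbers invented) in which thirty departments' placement rates sit in a tight cluster:

```python
# Invented illustration: ordinal rank can hide a tight cluster of rates.
# Thirty departments, rates spaced only half a percentage point apart.
rates = [0.80 - 0.005 * i for i in range(30)]  # rank 1 -> 0.800, rank 30 -> 0.655

rate_at_26 = rates[25]       # the department ranked 26th
gap = rates[0] - rates[25]   # how far it trails the top department
print(round(rate_at_26, 3), round(gap, 3))
```

On these made-up numbers the department ranked 26th trails the top-ranked one by just 12.5 percentage points, placing over two-thirds of its graduates; the bare ordinal "26th" conveys none of that, which is why the revised presentation in percentages is the better one.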
Now. I did not find the way the information was originally presented---as a comparison between the (ordinal) PGR rank and an ordinal "placement" rank---to be at all illuminating, and I'm glad she revised the post to present the information in terms of percentages. I think the focus our profession puts on ordinal rankings is pernicious, as is the fact that the PGR is principally organized in terms of them. But I think it is absolutely worth wondering whether whatever it is that the PGR measures is correlated with success on the tenure-track job market---as I indicated at the top of this post, I've been interested in this question for a long time---and I am grateful to Dr. Jennings for her work on this. And I appreciate her willingness to engage with her critics, to explain what she did and how she did it, and to revise her analysis in response to criticisms. To me, it seems like she has responded to her critics in exactly the right way.
So I'm not sure I see the need for such a hostile response on Leiter's part. Doesn't seem helpful. But what do I know? Nothing.