Thursday, February 26, 2009

Pardon the interruption

Sorry for the slow posting these days; I've been deathly ill for a week or so and unable to think straight for more than twenty minutes at a time (this cough syrup has me feeling leaned).

Rest assured, I'm trying to work on some exciting shit/posts/comics to get this place hopping again (incl. a "fuck the faddy nature of the profession and how it does a disservice to younger scholars" post).

Though, I must say, you all have been pretty damn good at entertaining yourselves. So, in that spirit, let this post serve as an open thread. Have at it.

--STBJD

Wednesday, February 25, 2009

Bride of PGR Minutiae

Obviously, there's a new PGR out. I'd like to sort of renew my complaint about the way the data is presented, in terms of ordinal rankings rather than in terms of mean scores. For example, the ordinal rankings from previous years are reported going back to 2002, but the mean scores are omitted. This is bad because, even assuming that the procedure for assigning numbers to departments measures anything, the ordinal rankings are just a quantity derived from the mean scores, and they carry far less information than the scores on which they are based. For example, we might notice that Texas at Austin has fallen from 13th in '06 - '08 to 20th in the current edition. But while this drop appears to be steep, it could have been caused by any number of things: a decline in quality at UT (obviously); an improvement by a number of neighboring departments with no decline at UT; or some combination of neighborly improvement and Texan decline. It turns out that although the seven-place drop was caused by a decline in UT's mean score, this decline - 3.6 in '06, 3.4 now - is in a range we have been led to believe is statistically insignificant. So, in effect, the report measured no change in the quality of UT's department.

In the current issue, NYU is ranked #1 (with a mean score of 4.9) and Rutgers #2 (with a mean of 4.6); although this is a slightly bigger difference than in the previous edition, we still have every reason to suspect that the .3 difference in their mean scores is insignificant. It falls within the range Leiter cites as insignificant, and both schools have a median score of 5, which means that at least half of the respondents gave them 5s - which suggests that we should regard them as tied. The numbers we have suggest that any difference between the two is too small to be measured by the report's methods. So I'd much rather see the mean scores than the ordinal ranks.
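
For anyone who wants to see the arithmetic, here's a toy sketch in Python. The department names and nearly all of the numbers are invented for illustration (only the 3.6-to-3.4 change and the 0.4 threshold mirror the figures discussed above); the point is just that an ordinal rank can swing by several places on a mean-score change that falls inside the range we're told is insignificant.

```python
# Toy illustration (hypothetical mean scores, not the actual PGR data):
# an ordinal rank can move a lot while the underlying mean barely moves.

INSIGNIFICANT_GAP = 0.4  # the threshold said to mark an insignificant difference

means_2006 = {"Dept A": 3.6, "Dept B": 3.5, "Dept C": 3.5, "Dept D": 3.5,
              "Dept E": 3.5, "Dept F": 3.5, "Dept G": 3.5, "Dept H": 3.5}
means_2009 = {**means_2006, "Dept A": 3.4}  # only Dept A's mean changes

def ranks(means):
    """Map each department to its ordinal rank (1 = highest mean)."""
    ordered = sorted(means, key=means.get, reverse=True)
    return {dept: i + 1 for i, dept in enumerate(ordered)}

r06, r09 = ranks(means_2006), ranks(means_2009)
for dept in means_2006:
    change = means_2009[dept] - means_2006[dept]
    if r06[dept] != r09[dept]:
        verdict = "insignificant" if abs(change) < INSIGNIFICANT_GAP else "significant"
        print(f"{dept}: rank {r06[dept]} -> {r09[dept]} (mean change {change:+.1f}, {verdict})")
```

On these made-up numbers, Dept A falls seven places even though its mean moves by only 0.2, which is the same pattern as the UT case.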

--Mr Zero

Saturday, February 21, 2009

The Central APA is also a disaster

The reports are beginning to drift in, and they suggest that the Central division APA meeting did not go well. The consensus appears to be that it was cold, expensive, and sparsely attended.

A lot of electrons have been spilled about what a bad idea it was to move it to February, but no one seems to have addressed this question: what was the point of moving it in the first place? Why did they bother?
 
--Mr Zero

Friday, February 20, 2009

Oh My God the JFP is a disaster

Oh my God, the JFP is a disaster. There are 34 jobs. Total. In the whole thing. Including web ads. Fuck.

Update: Among the highlights are Los Angeles Valley College's 5-5 plus committee work, promotion-by-seniority, AOC-everything piece of crap; Florida Atlantic's position as assistant director of the counseling center, AOS: must have a Ph.D. in clinical psychology (uh, wrong APA, Florida Atlantic); and Yale's fellowship at the University Art Gallery, affiliated with the department of coins and medals. We are doomed.

Late Update: That's Florida Atlantic, not Florida State; I thought it was weird that FSU was in Boca Raton. Sorry, and/or my bad.

--Mr Zero

Tuesday, February 17, 2009

PSA (Chicago edition)

On to the Central APA and all the glorious things to do. I wasn't able to quickly find a great list of recession specials, but the meeting is just a few blocks away from Grant Park, where Obama spoke on November 4th.

Of course there are some delicious deep dish pizza options (if you have personal recommendations let us know). 


Oh, yeah... and philosophy.

So, if you're off, have a good time!

-- Second Suitor

ps: it is the cold, windy city... bundle up.

Monday, February 16, 2009

PGR Minutiae II

Thanks to everyone who read and/or commented on the earlier PGR Minutiae post. I wanted to correct a couple of defects and incorporate some issues that came up in comments.

1. I presented the points in the order that seemed to flow the best, not in order of importance. In descending order of importance, they would go (2), (1), (4), (3).

2. I didn't really "sum up" or anything, but it seems to me that the four criticisms have a sort of net effect that is significant, even if, taken on their own, the problems don't seem that serious. The evaluative scale the rankings are based on is probably not particularly well suited to calculating averages—not that the resulting numbers are nonsense, but they don't mean exactly what we think they mean; there is no way to tell whether a given difference in mean scores is significant; and the ordinal scale that is the final product greatly exaggerates these differences.

3. I am inclined to endorse Zach Ernst's point about Leiter's sampling techniques. The issue is whether the sample is representative or not. In the comments, someone asks why you'd want the sample to be representative. You want the sample to be representative because you're trying to understand how the philosophical community sees the departments—it's a survey about reputations—and if the sample doesn't represent the community, the results won't represent the community's views. The best way to ensure representative sampling is to collect the sample randomly. This is probably not feasible, which is why Leiter makes use of the snowball technique. But it does not seem obvious to me that the group of respondents accurately represents a cross-section of the discipline—most of the respondents come from and teach in highly ranked departments, but not all "research-active" philosophers teach in ranked departments (some teach in unranked departments; some teach at SLACs), and not all of them graduated from top departments (some graduated from medium- or low-ranked departments; some graduated from unranked departments). If the advisory board is just going to invite people to participate, an effort should be made to invite philosophers from a wide variety of teaching and graduate-school backgrounds. The sample as it is currently collected appears to embody a judgment about what kind of philosopher will or will not have worthwhile opinions.
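
To make the representativeness worry concrete, here's a quick sketch, with every number invented: a toy population in which one subgroup rates a department more generously, and a sample that draws most of its respondents from that subgroup. Nothing here is meant to model the actual respondent pool; it just shows how the composition of the sample moves the average.

```python
# Toy sketch of the sampling worry (all numbers invented for illustration):
# if respondents are drawn mostly from one corner of the profession, the
# sample's average opinion can drift away from the community's average opinion.
import random

random.seed(0)

# Hypothetical community: 20% of philosophers tend to rate Department X
# around 4.5, the other 80% around 3.5.
group_a = [random.gauss(4.5, 0.5) for _ in range(200)]
group_b = [random.gauss(3.5, 0.5) for _ in range(800)]
community = group_a + group_b

# A respondent pool that draws 80% of its members from group A.
sample = random.sample(group_a, 80) + random.sample(group_b, 20)

print(f"community-wide mean rating: {sum(community) / len(community):.2f}")
print(f"skewed-sample mean rating:  {sum(sample) / len(sample):.2f}")
```

The sample's mean lands noticeably higher than the community's, not because anyone misreported their view, but because of who was asked.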

--Mr Zero

Saturday, February 14, 2009

Do your good deed for the day

Fellow Smoker CH directs us to this petition (there's also a lively discussion here) asking the APA to enforce their anti-discrimination policy in the case of those colleges and universities that:
require faculty, students, and staff to follow certain 'ethical' standards which prohibit engaging in homosexual acts.
Some things are bigger than losing jobs in the JFP, folks, so sign the petition if you are so moved by its stated purpose:
We, the undersigned, request that the American Philosophical Association either (1) enforce its policy and prohibit institutions that discriminate on the basis of sexual orientation from advertising in 'Jobs for Philosophers' or (2) clearly mark institutions with these policies as institutions that violate our anti-discrimination policy. If the APA is unwilling to take either of these measures, we request that the APA publicly inform its members that it will not protect homosexual philosophers and remove its anti-discrimination policy to end the illusion that a primary function of the APA is to protect the rights of its members.
--STBJD

Thursday, February 12, 2009

Harebrained thought of the day

So we all know that the CV repository at APA placement services is really a waste of time. Has anyone ever gotten a job (or even a real interview) from an application submitted through the APA? [Seriously, though, has anyone ever successfully made it to the on-campus stage this way?]

Crazy thought: with the Central APA/JFP scheduling debacle (in case you're not paying attention, you don't have  an interview at the Central APA because the second JFP is coming out during the conference) maybe the chances of picking up something on the fly at the Central APA are higher??

-- Second Suitor 

Monday, February 9, 2009

Holy shit!

Just when you think you have it bad, something like this comes along to give you some much needed perspective. Seems like Daniel Bennett, a doctoral student at Leeds University, had seven years of work he built up while "studying the rare butaan lizard" simply thrown in the trash. Granted, the work was literally a big pile of shit (77 pounds, in fact), and it was inadvertently thrown out after a "clear-out in his lab", but still, fuck... I think if someone threw out the few hundred or so pages I've accumulated on my dissertation and throughout graduate school, I'd handle it a little less well than Mr. Bennett, who seems to have some sense of humor about the whole situation:
"Whether it was the largest collection of lizard shit in the world is uncertain, but it certainly contained the only dietary sample from that little-known species Varanus olivaceus, and probably the most complete dietary record of any single population of animals in South East Asia. Its loss left me reeling and altered the course of my life forever."
Leeds University's offer of "£500 in compensation" for the mix-up got me thinking about what sort of compensation I would require if someone flushed my work down the toilet (you think that's how they disposed of it?). I mean, £500 is a lot of money for us in the States, but I think I'd shoot a lot higher than that, like funding for the next five years or a computer made out of solid gold. How about you? What's the price you'd put on your work?

--STBJD

Friday, February 6, 2009

PGR Minutiae

I've been thinking about the PGR, since we've got a new edition on the way in a week or two. I have a few worries about it. Most of these worries concern the manner in which the information is presented, though one is methodological. Also, for full disclosure, I am broadly in favor of the existence of the report, even if I am dissatisfied with some of the things it is sometimes used for. I found it invaluable when I was applying for grad school.

1. I worry about whether the rating scale the evaluators use contains the necessary structure to permit calculation of the mean scores that the ranking is based on. The issue is fundamentally one about how the prompts are related. You can use numbers representing features of objects to calculate a mean only if the numbers come from an interval scale, which means that the points represented by the numbers are equally spaced. A classic example of this type of scale is the five-point Likert scale (strongly disagree; somewhat disagree; neither agree nor disagree; somewhat agree; strongly agree), which often comes with a visual aid that represents the points as equally spaced. PGR evaluators use a six-point scale with the following prompts: 5: distinguished; 4: strong; 3: good; 2: adequate; 1: marginal; 0: inadequate. I wonder whether the points on this scale are evenly spaced. For example, intuitively, it seems to me that the distance between "distinguished" and "strong" might be greater than the distance between "marginal" and "inadequate." This is important because unevenness in the spacing of the points will distort, and undermine the reliability of, the mean scores that are based on them. (The first sketch at the end of this post is a toy illustration of how the spacing assumption can matter.)

2. I worry about the emphasis placed on the ordinal ranking of departments. Leiter, of course, specifically says to attend to the mean scores and not just to the ordinal ranking, but whenever anyone talks about the report, everyone refers only to the ordinal ranking and never to the mean scores. When, for example, we talk about a range of schools ranked on the report, we always talk about the "top 10" or the "top 25", and never "the threes" or "the fours." Attending to the mean scores paints a rather different picture of the report, I think. NYU and Rutgers are the only schools in the high 4s, with 4.8 and 4.7, respectively. The rest of the top 9 is in the low 4s (below 4.5), with Harvard, MIT, and UCLA all tied at 4.0. This means that only two schools in the US round up to "distinguished" and that MIT is merely "strong." The rest of the top 30 are all in the 3s. The mean scores suggest that the difference in quality between NYU and MIT is equal to the difference in quality between MIT and UMass; looking at the ordinal ranking, I would have expected MIT to be closer to NYU than to UMass—nothing against UMass, but MIT is ranked at #7 while UMass hangs out down at #24, which seems like a huge difference compared to the seven schools between MIT and NYU. That this difference is illusory is entirely the point. The "top-10" category is fairly arbitrary—there is no cohesive grouping of approximately ten of the strongest departments. The reality is that there are about two departments that are close to "distinguished" and then a relatively large number of departments—more than half the departments ranked by the report—in the neighborhood between "strong" and "good." I worry that the ordinal scale we all internalize exaggerates the differences between departments. If I were on the advisory board, I would encourage a move away from it. (The second sketch at the end of the post shows what a grouping by mean-score band might look like.)

3. I worry a little bit about the sensitivity of the mean-score scale. The mean scores are rounded off to a tenth of a unit, but the raters are only allowed to make half-point distinctions. This makes me wonder whether a difference of one tenth of a point can be regarded as statistically significant. Maybe it would be better to round off to the half-point, as the raters are asked to do. It might not be good to present information in a manner that is more precise than the manner in which it was collected.

4. Related to (3), I also worry about the fact that a bunch of relevant statistical data is missing. For example, there is no information on margins of error or standard deviations. This is important because, at heart, the PGR is a poll that is trying to measure aggregate faculty reputations, and all measurement procedures involve some level of imprecision. Information about margins of error and standard deviations would help us better understand that imprecision. In particular, it would help us understand which differences are trivial and which are significant, and what the levels of uncertainty in the scores are. (Leiter himself suggests that differences of less than .4 are insignificant, and he may be right. But it would be nice to know how he arrives at that figure.) The third sketch at the end of the post shows the kind of back-of-the-envelope calculation I have in mind.
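
Regarding point (1), here's a minimal sketch of the spacing worry, with invented ratings. The same rank-ordered judgments, mapped onto two different spacings of the category labels (both of which preserve the order), produce different means, and can even break a tie.

```python
# Toy illustration of the interval-scale worry (ratings invented):
# means computed over category labels depend on how far apart the labels
# are taken to be, even when the rank order of the labels is fixed.

# Raters' judgments for two hypothetical departments, on the 0-5 labels.
dept_a = [5, 5, 2, 2]   # polarizing: some "distinguished", some "adequate"
dept_b = [4, 4, 3, 3]   # uniformly "strong" / "good"

def mean(xs):
    return sum(xs) / len(xs)

# Spacing 1: take the labels at face value (equally spaced).
print(mean(dept_a), mean(dept_b))            # 3.5 vs 3.5 -- a tie

# Spacing 2: suppose the gap between "strong" (4) and "distinguished" (5)
# is wider than the gaps lower down; the order of the labels is unchanged.
respace = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 7}
print(mean([respace[x] for x in dept_a]),    # 4.5
      mean([respace[x] for x in dept_b]))    # 3.5 -- no longer a tie
```

Whether a stretched-out gap at the top of the scale is realistic is exactly the question; the sketch only shows that the answer matters to the means.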
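
Regarding point (2), if the data were grouped the way I'm suggesting, it might look something like the sketch below: departments bucketed by mean-score band ("the fours", "the threes") rather than listed by ordinal position. The NYU, Rutgers, and MIT means are the ones quoted above; the others, UMass's included, are placeholders I made up.

```python
# Toy sketch: present departments by mean-score band instead of ordinal rank.
# Only the NYU, Rutgers, and MIT figures come from the discussion above; the
# rest are invented placeholders.
means = {"NYU": 4.8, "Rutgers": 4.7, "Dept C": 4.3, "Dept D": 4.2,
         "MIT": 4.0, "Dept F": 3.9, "Dept G": 3.7, "UMass": 3.2}

bands = {}
for dept, score in means.items():
    bands.setdefault(int(score), []).append(dept)   # 4 = "strong", 3 = "good"

for band in sorted(bands, reverse=True):
    print(f"the {band}s: {', '.join(bands[band])}")
```

Presented this way, the eye is drawn to how crowded each band is, rather than to who happens to sit one ordinal slot above whom.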
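
Regarding points (3) and (4), here's a rough sketch of the kind of calculation I have in mind, with invented half-point ratings for two hypothetical departments. With a plausible spread of rater scores, the approximate 95% margin of error on a department's mean comes out to roughly two tenths of a point, which is exactly why tenth-of-a-point differences are hard to interpret without this information.

```python
# Toy sketch of the missing-statistics worry (ratings invented): compute the
# standard error and approximate 95% margin of error of a department's mean.
from math import sqrt
from statistics import mean, stdev

# Hypothetical half-point ratings from 20 evaluators for two departments.
dept_a = [4.0, 3.5, 4.5, 3.0, 4.0, 3.5, 4.0, 4.5, 3.5, 4.0,
          3.0, 4.0, 3.5, 4.5, 4.0, 3.5, 4.0, 3.0, 4.5, 3.5]
dept_b = [r - 0.5 for r in dept_a[:10]] + dept_a[10:]   # slightly lower overall

for name, ratings in (("A", dept_a), ("B", dept_b)):
    m = mean(ratings)
    se = stdev(ratings) / sqrt(len(ratings))   # standard error of the mean
    print(f"Dept {name}: mean {m:.2f} +/- {1.96 * se:.2f} (approx. 95% margin of error)")
```

On these numbers the two means differ by about a quarter of a point, with a margin of error of about two tenths on each, so even that gap is borderline; a 0.1 gap would be well inside the noise, which also bears on the rounding question in (3).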

--Mr Zero

Tuesday, February 3, 2009

Trying is the first step towards failure

I've been doing a bit of obsessing over some shit lately (in a constructive rather than a destructive way, which is key for anyone else out there in a similar mood), and I had recently decided that this whole 'if you don't get a tenure-track job sometime soon after you receive your Ph.D., you're a steaming pile of failure' path that we (and our peers, and our advisors) set ourselves on by entering graduate school is (to continue with my preferred imagery) a mountain of bullshit that no one has bothered to clean up, or even mention, after tossing us into it.

Lately, my conviction in this conclusion has been wavering. I mean, fuck, I would be a failure if philosophy, something I've devoted at least 10 years of my life (including undergraduate) to, something I felt like I was good at, were to suddenly disappear and I were left with 250 pages and at least 2 years of work on a topic that, apparently, no one in all of the philosophical world cares about; right?

But, just as I was about to give into this thought, fellow Smoker Yousaidsomething points me towards this entry over at Bitch Ph.D. ringingly endorsing this article in the Chronicle by Thomas Benton on why no one should pursue a Ph.D. in the humanities. You should really read both posts, but, in case you don't have time, here's Benton's little gem on the topic of failure:
If you cannot find a tenure-track position, your university will no longer court you; it will pretend you do not exist and will act as if your unemployability is entirely your fault. It will make you feel ashamed, and you will probably just disappear, convinced it's right rather than that the game was rigged from the beginning.
Nail. Head. Hit.

The moral: don't give in to the expectations that make us feel (wrongly) like failures if all doesn't go according to a plan others have constructed for us, start thinking about other things that might make you happy in case the very possible scenario of not getting a job happens, and heed Xenophon's timely advice:
Grad school has to be about the journey, because there's no guarantee that there will be a career, or even a first job, at the end of it. If you don't love the trip, it's not worth continuing on it.
--STBJD

Someone kindly requests that you take this survey

Latest and final update: Professor Jun informs us that the survey has been removed. So, move along, there's nothing to see here.

Late late update (partially edited to remove any priming effects for those willing to take the survey): By linking to this survey, we at the Philosophy Smoker do not endorse any of the work of Professor Jun or make any judgments about the value, importance, or quality of the survey. These are the types of things that Fellow Smokers and takers of the survey can decide on their own; they can then direct their problems, questions, or concerns with the survey to Professor Jun. And in the interest of collegiality and fostering constructive debate, I ask that you seriously consider doing so. I linked to the survey originally because, from its description, it seemed like an outlet for many of the same debates we've been having for the past few years, not because, without reading it, I thought it was the bee's knees.

Update: Professor Jun asks in the comments: "Can I ask that people who have comments about, critiques of, or suggestions for the survey send them directly to me at nathan jun [at] mwsu.edu instead of, or in addition to, posting them here? That would actually be helpful and constructive." Please do that for the good man, if you are so inclined, because comments are now disabled here to maintain good netiquette and help Professor Jun out. It's his survey, after all.

Professor Nathan Jun asked a few days ago that we link to this here survey, which:
[...] is part of a broader study of issues in the profession including, but not limited to, the influence of rankings and pedigree, employment and hiring practices, the status of women and minorities, and philosophical pluralism. Your answers will be entirely anonymous and it shouldn’t take longer than 30 minutes to complete the survey.
I haven't taken it yet, but judging by the response to the nice little debate we had going on here (that was originally started over there) perhaps some of you are licking your chops to answer some questions about 'the influence of rankings and pedigree, employment and hiring practices, [and etc.]'.

So, get all experimental up in that motherfucker and be sure to report back here.


--STBJD

Monday, February 2, 2009

Like waves in which you drown me shouting

I feel like I've been busy. I've been doing things. I swear that things have been done.

For the last semester (hell, the last year) I've been driving myself hard to try to become a reasonable job candidate. Even if it's not clear what counts as a good job candidate, there are things you have to do, things that you can work on and count as progress. My writing sample's better, I'm going to a conference, blah blah blah.  

But whatever that is, the focus, the drive, just knowing what to do next... whatever it was, I've lost it. I keep going to do work...and doing work...but it just doesn't feel like it's going anywhere.  

-- Second Suitor