Thursday, December 18, 2014

A permanent thread for info or questions about specific jobs

A lot of people in the comments seem interested in having a space to discuss or request information about specific jobs. If you're providing information, please cite your source where possible.

Here's a permanent thread for this. Perhaps we can use the other open threads for people to trade horror, success, or just plain weird stories; any hints they think might be helpful; strategies for dealing with the stress of the job market; etc.

In the future, after this post is no longer at the top of the page, you can find this thread in the sidebar. Here's a screenshot showing where to find it.



Wednesday, December 17, 2014

We turned 6 last week [+ New Job Market Thread]

This was our first post. Our stats are below.**

I cringe a little looking back at some past posts. I probably wasn't as funny/clever as I thought I was (Seriously, referencing The Shining in our first post? Real fresh, bro.) or as thoughtful as I should've been (I'll leave it to the reader to find those). And I probably got, like, too raw sometimes (again, for the reader to find). Such are the risks we run archiving our growth as bloggers, scholars, and people (?) on the internet.

Thanks to everyone who reads. I remain pleasantly surprised by our audience, our commenters (we've published over 22,000 comments and counting), and more generally by the little community that's popped up here.

Big-ups especially to our co-moderators, Mr. Zero and Zombie (once commenters themselves), who kept the blog running through some of the lean years and still keep the blog running more than I do (I've been seduced by microblogging). So happy to have them!

Okay. Okay. Who cares about this, right? Use this as a new job market thread. My stats, excluding the 20 PFOs I got from Wooster, a job I didn't even apply to (not really; though someone in the comments below said they got 4 identical PFOs from Wooster and counting; oof):
Current Position: Yearly; sorta secure(?); they'll keep me around if they can/the budget permits/the need persists (so it seems; they just want to keep it casual still, you know?).
Publications: Not enough to make me competitive for most jobs; folks coming out the last few years are really crushing it; keep it up (or knock it off? I'm torn).
Teaching experience: Plenty.
Applications this year: A handful.
PFOs this year: None official; one silent.
Interviews this year: None (but see applications).
Plan B: None. But I could see doing other things, finally. Especially if those other things don't require me uprooting my life every few years chasing the dragon.
-- Jaded, Ph.D. 

**6 years in, here are our stats (the first from Google, which only goes back to 2010; the second from Statcounter; click to embiggen):


Thursday, December 11, 2014

By popular demand: new job market thread.

Today's stats, thanks to philjobs.org's new date search feature (thanks again, guys!):

196 TT jobs listed between Aug 1 and Dec 11, 2014.

I count 101 fixed-term positions (postdocs, VAPs, fellowships) over the same dates. Of those, more than half -- 53 -- are postdocs.

For good measure, 46 tenured/senior positions advertised.

Seems to me there are more postdocs than there used to be, which is positive if they actually serve to transition philosophers into TT jobs (as in the sciences), and if philosophy can avoid the perennial-postdoc problem the sciences have.

~zombie

Wednesday, December 10, 2014

The 2014 PGR: Confidence Intervals and Graphics

As you have probably heard, the 2014 edition of the PGR went live earlier this week. The results had been extensively previewed, so there wasn't anything terribly surprising that I could see. One thing that was a little surprising was that the confidence intervals we were promised did not materialize after all; three types of graphical representations of the data appear instead. (Of course, this was announced a little ahead of time, too, so it wasn't exactly a surprise, either.)

Why no confidence intervals? A couple of reasons, according to Leiter. A) Given the design of the survey, in which not all evaluators evaluate all departments, there are several ways to calculate them, and they "did not want the precise method chosen to become a matter of pointless controversy." And B) properly informative confidence intervals should be rounded off to two decimal places, and this generates an accuracy-related mismatch with the PGR's long-standing practice of rounding to the tenths place, which is done in order to discourage "invidious comparisons."

I guess I kind of accept point (B), except that I don't see what the big deal would be about posting the more precisely rounded means, along with the accompanying confidence intervals, off to the side or on a separate chart, while retaining the customary averages rounded to the tenths place for the main rankings. I don't see how this would encourage invidious comparisons. You'd have numbers rounded to the hundredths, but you'd also have the confidence intervals right there.

Point (A) seems to me to be a non-issue. If there's more than one reliable way to do the calculation, pick one of the reliable ways—whichever one you want, as long as it really is reliable enough—and tell whoever doesn't like it to go fuck themselves. If it's reliable then it's reliable, and it's not like we're measuring the critical mass for weapons-grade plutonium. One method is probably as good as the next, and I'd imagine that the bootstrapping procedure Healy used on the 2006 data would be totally fine. (Of course, maybe I'm wrong about all of this, and if I am I hope one of y'all Smokers who knows more than me about this will set me straight.)

Furthermore, I think the survey-design issue that Leiter says gives rise to point (A) serves to underscore the need for confidence intervals. It's just not possible to understand or properly interpret the Report without them. Not all evaluators evaluate all departments. Some evaluators evaluate all or almost all of them, but some evaluate only a few. And, as Healy points out, "higher-ranking departments do not just have higher scores on average, they are also rated more often. This is because respondents may choose to only vote for a few departments, and when they do this they usually choose to evaluate the higher-ranking departments." (His 2006 analysis found approximately the same thing.) That means that, generally speaking, more evaluators evaluated the top departments than the rest of the field, and it explains why the confidence intervals for those top-rated departments tended to be narrower than the rest. That is, the size of the confidence intervals is not constant throughout the Report, and so a difference of 0.1 might be meaningful when it involves a top-ten department like Yale but not meaningful when it involves a top-30 department like Virginia.
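Since I brought up the bootstrap, here's a minimal sketch of what a percentile bootstrap of a department's mean rating might look like. To be clear: this is my guess at the generic technique, not Healy's actual procedure, and the ratings below are invented, since the PGR's raw per-evaluator data isn't public.

```python
import numpy as np

def bootstrap_ci(ratings, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval (default 95%) for a mean rating."""
    rng = np.random.default_rng(seed)
    ratings = np.asarray(ratings, dtype=float)
    # Resample evaluators with replacement, recomputing the mean each time.
    boot_means = rng.choice(ratings, size=(n_boot, len(ratings))).mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return ratings.mean(), (lo, hi)

# Two made-up departments with the same mean score (4.0) but different
# numbers of evaluators, mimicking a top department vs. a lower-ranked one.
heavily_rated = [4, 5, 4, 3, 4] * 4   # 20 evaluators
lightly_rated = [4, 5, 4, 3, 4]       # 5 evaluators

for ratings in (heavily_rated, lightly_rated):
    mean, (lo, hi) = bootstrap_ci(ratings)
    print(f"mean {mean:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The two made-up departments have identical means, but the one rated by five evaluators gets a noticeably wider interval than the one rated by twenty. That's the whole point: a flat threshold like 0.1 can't do the work that per-department confidence intervals would do.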

Now, I realize that I'm on record as being basically okay with looking to the confidence intervals for the 2006 Report and extrapolating/guessing about what they suggest about this year's edition. But i) I don't think doing that is close to ideal, and I was really looking forward to Healy's analysis of the 2014 data; ii) I think that it's okay to do that only if there's no more recent data available; iii) I realize that the 2006 intervals are only indirectly relevant to the 2014 edition, and don't have any direct implications in any specific case in 2014—just general trends, and then only suggestion, and definitely not anything close to proof; and iv) I'd really, really much rather just have confidence intervals calculated on this year's data—so then, you know, we'd know. (In retrospect, I think I could have been more clear about some of this in my post from last week, and I apologize for any confusion that might have caused.)

I do like that Brogaard, Healy, and Leiter have included these new graphical figures. I think that the histograms and kernel density plots are interesting. I do feel like they help me understand the ratings better. I do. But I don't agree with Leiter's claim that "these visualizations convey the necessary information in a detailed and accessible way." On the contrary. If you are trying to figure out what to make of the fact that (e.g.) UConn's score increased by a margin of 0.4 while MIT fell by 0.3 (which is a slightly smaller margin but takes place much higher up in the rankings), these visualizations are insufficient, and do not convey the necessary information. In order to understand what's going on there, you need confidence intervals calculated on 2014 survey data for each department, because sample sizes differ from department to department and tend to get smaller as you go down the rankings.

And so, while I appreciate why they don't want to invite "invidious comparisons" by posting rounded mean scores that are too fine-grained, I think that ultimately this is a misguided reason against calculating confidence intervals or including them in the Report. It seems to me that you need the confidence intervals in order to know which comparisons are invidious. And if past analysis is any guide, there's reason to suspect that differences of one tenth of a point are sometimes at least potentially invidious, and that this margin is more likely to be invidious the further down in the rankings one goes.

In closing, I continue to think that confidence intervals are a vital tool whose absence greatly impairs the PGR's usefulness, and I don't see any good reason not to include them.

Ok. I'm sorry about this. People have been asking in comments for a new thread, and I realize that this was not what you wanted. Last post about the PGR for a while. Promise. Soon I'll put together one of the "interview questions" posts we do every year.

--Mr. Zero

Thursday, December 4, 2014

Did The "Big Movers" of the 2014 PGR Actually Move? (No)

After the most recent PGR survey closed, Leiter posted some data about which departments improved most in the rankings. That is, which departments increased their ordinal rank most in comparison with the next-most-recent ranking, from 2011. But because the "data" is presented only in terms of ordinal rank, the sizes of these moves are highly misleading, and almost all of them rest on trivial differences in mean numerical scores. You can see this when you compare the mean scores for the 2014 survey (reported here for the top 20 and here for the rest of the top 50) with the mean scores as reported in the 2011 version of the Report.

According to Leiter, the biggest movers of 2014 are the following, along with their numerical scores from both the 2011 and 2014 versions of the Report (I omit Saint Louis University, which was not evaluated in 2011):

Yale University (from #7 to #5, occupying that spot by itself)
Yale 2011 mean score: 4.0
Yale 2014 mean score: 4.1 
University of Southern California (from #11 to #8, tied with Stanford)
USC 2011 mean score: 3.8
USC 2014 mean score: 3.9 
University of California at Berkeley (from #14 to #10, tied with others)
Berkeley 2011 mean score: 3.7
Berkeley 2014 mean score: 3.8 
University of California at Irvine (from #29 to #24, tied with others)
UCI 2011 mean score: 3.0
UCI 2014 mean score: 3.0 
Washington University in St. Louis (from #31 to #24, tied with others)
Wash U 2011 mean score: 2.9
Wash U 2014 mean score: 3.0 
University of Virginia (from #37 to #31, tied with others)
UVA 2011 mean score: 2.7
UVA 2014 mean score: 2.8 
University of Connecticut, Storrs (from #50 to #37, tied with others)
UConn 2011 mean score: 2.3
UConn 2014 mean score: 2.7 
Of the "big movers" that were included in the 2011 survey, only UConn's mean score has significantly improved. All of the others improved by a trivial margin of 0.1, except the University of California at Irvine, whose mean score stayed exactly the same.

The bulk of the rankings are densely packed and ties are common, which means that apparently substantial jumps in ordinal rank can be caused by disproportionately negligible changes in mean evaluator score, or, in the case of UC Irvine, by no change whatsoever. In the case of UCI, what actually happened was this: Indiana and Duke fell from 3.1 to 3.0, UMass and Ohio State fell from 3.1 to 2.9, and Colorado fell from 3.1 to 2.8. None of these departments changed by very much—two by 0.1, two by 0.2, and one by 0.3 (Leiter suggests that differences of 0.4 or less are unimportant)—but it was enough to cause UCI to jump five spots and create the illusion of a substantial improvement.
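To see the mechanics concretely, here's a toy calculation using just the scores quoted above. The six-department slice and the competition-style tie handling (tied departments share the higher rank) are simplifying assumptions on my part, not Leiter's actual tabulation:

```python
def ordinal_ranks(scores):
    """Competition-style ranking: a department's rank is one more than the
    number of departments with a strictly higher score; ties share a rank."""
    return {dept: 1 + sum(other > s for other in scores.values())
            for dept, s in scores.items()}

# Mean scores quoted above for the departments ranked around UC Irvine.
scores_2011 = {"Indiana": 3.1, "Duke": 3.1, "UMass": 3.1,
               "Ohio State": 3.1, "Colorado": 3.1, "UCI": 3.0}
scores_2014 = {"Indiana": 3.0, "Duke": 3.0, "UMass": 2.9,
               "Ohio State": 2.9, "Colorado": 2.8, "UCI": 3.0}

print(ordinal_ranks(scores_2011)["UCI"])  # 6: last within this slice
print(ordinal_ranks(scores_2014)["UCI"])  # 1: tied for first, score unchanged
```

Within this slice, UCI climbs five spots without its own score moving at all; the other five departments simply drifted slightly downward past it.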

Kieran Healy's analysis of the 2006 PGR data showed that "in many cases" differences of 0.1 were "probably not all that meaningful." This is the only time I'm aware of that any attempt has been made to perform this kind of analysis on Leiter's data, and although Leiter says Healy will be calculating confidence intervals for the 2014 edition, those calculations are unfortunately not yet available. But on the assumption that the 2014 numbers are similar to their counterparts from 2006, there is reason to doubt whether these differences of 0.1 or less represent actual differences—which means that almost all of the departments Leiter has singled out as "big movers" haven't actually moved at all. In all but one of the cases Leiter singled out, the 2014 survey didn't measure movement.

And so, as I have said before, there is a general problem with this kind of ordinal scale in that it fails to accurately represent the differences between ranked departments. As another example, the most recent data has NYU as the best-ranked department with a mean score of 4.8, which is better than #6-ranked Harvard and Pittsburgh by a margin of 0.8. That same interval of 0.8 also separates the sixes from UC San Diego, which comes in at #23. I, for one, find it impossible to look at the PGR and see these differences accurately. To my eye, the way the information is presented significantly understates the difference between NYU and Harvard/Pitt, and dramatically overstates the difference between Harvard/Pitt and UCSD.

Finally, I should say that I was glad to read that Kieran Healy will be calculating confidence intervals this time around. I think that information would be helpful. However, I bristle a little bit at the attribution of this idea to a session at the 2013 Central Division APA meeting; I raised this idea in 2009.

 --Mr. Zero

Wednesday, December 3, 2014

[Guest Post] APA interviews are morally impermissible (Again)

The blockquoted material below was originally published August 5, 2014. Use the comments as an open thread!

The job market is in full swing. Thanks to everyone who has responded to my survey about first-round interviewing practices (which is still open; the responses so far may give candidates useful information)! Of the approximately 40 schools that have responded to my survey, 35 or so are doing interviews via remote means or skipping first-round interviews altogether.

With those results in mind, here, again, is Asst. Prof. at a Canadian School's take on the moral impermissibility of APA interviews:
It’s the middle of the summer, so no one wants to think about searches for new tenure track hires. But now’s the time to talk about something important -- before those searches start. 
APA interviews are really expensive for job candidates. This isn't news, but it's worth doing the math again. Flights can easily run to $500 for candidates on the West Coast. If people are coming from the UK or Canada, it's closer to $800. I don't even want to think about Australia or Asia. Then there are hotel costs, which, even if you bunk with a bunch of friends in one room, will probably run past $100. So we're talking about $500, $600, or a lot more for candidates to go to the APA. 
That price might have been one thing in the olden days, when everyone got ten interviews at their first APA, and then got a job, and never had to deal with the job market ever again. Back then, the APA was a one-time cost. But that’s not the world we live in now. Now people spend three, four, or five years on the market before they get permanent jobs. They go to APAs where they have one interview -- a one-in-12 shot at a job. And then they do it again the next year. And then again the year after that, and the year after that. At that point, they’ve spent $2000 or $3000 just trying to get a job.
That bears repeating: candidates can easily spend well over $2000 going to APAs for interviews. 
For a grad student? For an adjunct? For some postdocs and VAPs? That is way too much money. It’s two or three months’ rent. It’s health insurance. Grad students, adjuncts, and other part-timers are the most economically marginalized, most economically vulnerable members of our discipline. To impose those costs on them is to impose on them a considerable hardship. 
Now, you could argue that in the olden days, there was just no way to avoid APA interviews. Search committees had to get a first look at people before they made up their minds about who to bring out to campus. That would be a bad argument for at least two reasons I can think of, but it’s an argument you could make. 
But now there’s Skype. Really. It’s a real thing and it works. I know, I know, it can be glitchy, and even when it’s not, it’s not the same as an IRL meeting. 
But how much better than a Skype interview is an APA interview? So much better that it justifies forcing some adjunct to spend $500 she could have spent on her kids’ Christmas presents? Or her health insurance? Or her rent? 
To recap: APA interviews impose a considerable economic hardship on the most economically vulnerable members of our discipline. And since there’s Skype, they impose that hardship for no reason at all. But to impose a considerable hardship on the weakest and poorest among us -- for no reason at all -- is an injustice. It is morally impermissible. 
That point deserves to be put in the second person. If your department is hiring this year, and if you let your department do APA interviews, you are committing an injustice. You are forcing economically vulnerable people to spend way more money than they can afford, in order to have a one-in-12 shot at your job. And you’re doing it for no good reason at all. That is a despicable thing to do. 
So what should you do? Easy. Don’t do APA interviews. Just refuse. Don’t wring your hands this year and think maybe you’ll skip the APA next time around. Don’t wait for the APA to come up with some new policy. Don’t wait for a few other departments to start skipping the APA before you do. Just do it yourself. Do it this year.
-- Jaded, Ph.D. 
 

Monday, November 24, 2014

Worst year ever?

According to PhilJobs, there are, as of today, 228 active job listings, of which 115 are for TT positions. I count 80 expired ads that are already past deadline, giving us a total of 195 jobs.* Compare that to last year's hiring numbers (again, from PhilJobs, so those are self-reported hires, and not likely to represent the actual number), which show 216 TT hires. That adds up to fewer jobs this year than last year. Possibly a lot fewer. And we are really, really past the point where a significant number of new jobs are going to appear, I should think. The torturously slow trickle is going to get slower. And then stop. First-round interviews are already being scheduled. And PFOs are already going out. (This is actually a PFO thread, since a Smoker requested one. But first I'm gonna do some complainin'.)

I still have a couple of applications to get done, but my numbers are very low this year. 17 applications total (although I'm being geographically very, very picky this year). Still, I applied for about 60 jobs last year, and I was being pretty selective then, and this year there are only about 70 jobs total in my AOS (broadly construed).

I don't remember the numbers for my first year -- the year everything went to hell in a handbasket -- 2008/2009. And we were still in the JFP days then, so getting an accurate count was near impossible, but I don't think it was this bad.

On the plus side, my impression is that there are a lot more postdocs and fellowships than in past years.

PFOs. I got one last week.**

~zombie

*I don't see a way to search PhilJobs for expired ads from the current job season without getting ALL 3,000+ expired ads, so it's possible there are more jobs that are already past deadline and expired. I count 80 such jobs going back to Aug 1, but make no warranties as to the accuracy of my eyesight and counting. Chalmers and Bourget: any chance of getting a search field added to limit searches by date or some such? Please?

** If your PFO indicates how many applications were received, please share that info.

Friday, November 21, 2014

In Support of Cheryl Abbate

Late update: John Protevi writes in to clear up some imprecisions in my original post:
[A] few things to correct. McAdams is an associate professor, not a full professor. And there were two students; one asked the question in class, another one pursued the matter after class with the recording and so on.
Thanks for clearing things up, John!

(Don't forget about Zombie's important post about interviews! You can use this as an open thread about the market, too.)

If you've been paying attention to the philosophy blogosphere, then you know that Daily Nous has a post up detailing a "political smear campaign" against a Marquette graduate student, Cheryl Abbate. According to Daily Nous, during a classroom discussion of Rawls, Abbate decided to head off a discussion about gay marriage that a student attempted to initiate. Outside of class, she had a conversation with the student justifying her management of the classroom (a conversation the student recorded and, it appears, lied about recording), and she is now being attacked for "censorship" in the classroom by a full professor at her own school.

Please consider signing this open letter by John Protevi in support of Abbate. And read this by Charles Hermes, who encouraged us to write a post on this topic and who nicely details why it's important to support Abbate.

Read the following if you need to get caught up (or any of the links above):

At the center of this campaign is a Political Science professor, John McAdams, who, it appears, has just emerged from a cryogenic freeze that started in the late nineties/early aughts, gnashing his teeth about the pernicious effects of "political correctness" and using terms like "gay lobby" without a hint of irony (so "trigger warning," if you click on this link).

SMDH.

After encouraging the student (if I'm remembering correctly) to record his conversation with Abbate, McAdams, emboldened by the memory of David Horowitz, posted snippets of it on his blog (without hearing Abbate's side of the story). McAdams criticized Abbate for her decisions about classroom management and accused her of being part of the vast left-wing conspiracy to silence all dissenting opinion, or at least to make conservatives feel uncomfortable voicing their opinions.

Again: SMDH.

This is rich, coming from a man whose Rate My Professor listing is littered with references to his conservative, right-wing political beliefs (and suspenders) [sic throughout]:
Okay teacher. His bigoted attitude caused some views to be imperiously ignored. Also, according to him, this class requires a thorough background in Economics, which is not a prerequisite for the course. Conform and go along with what he says, and you'll be fine.
I took Policy with McAdams. FABULOUS suspenders everday. HOWEVER- If you are not a member of the Ron Paul fan club, the college republicans, or you don't write for the Warrior, you will be pissed off at his straight-up economist's approach to public policy. NO social graces.
So, why are we not using these testimonials from students to launch a campaign against McAdams' classroom management style? Perhaps we should send these students to record their conversations with McAdams and then "report" those conversations on our blogs, expressing our worries about McAdams' inability to keep his political beliefs out of the classroom? No.

We understand that the classroom is a complicated place, with dynamics that are unique to each class; teachers have to make hard decisions about how to manage classroom time on the basis of those dynamics; and second-guessing teachers, especially graduate-student teachers (while refraining from second-guessing students), doesn't create a space in which teachers are able to do their jobs well, given those dynamics and the demands that university policies place on them.

If anyone is undermining academic freedom and chilling speech, it's John McAdams.

--Jaded, Ph.D.

Monday, November 17, 2014

Do you have any questions for us?

This is the question I dread, the capstone of the interview, where I am to show (I guess) my interest and enthusiasm for the job/students/school/department, but NOT ask any questions that I could have looked up myself by perusing the department's website.

I always default to some variation on: Tell me about your students. And everybody always says the same thing about their students. And frankly, I have found students to be more or less the same at every school where I've taught. They're diverse. They range in ability. Blah blah blah. I mean, I've never taught anywhere that the undergrads as a whole were just radically better, or worse, or different, than anywhere else.

That stupid question does not seem to have hurt me, so I'm kind of inclined to think that it's just a rote question everyone asks to finish up the interview, and that the answer doesn't matter (unless you massively blow it somehow).

So, open discussion here, as first-rounds approacheth: what questions do you have for them?

~zombie

Friday, November 14, 2014

Deep Thought for Friday

I fucking hate this god damn shit.

--Mr. Zero