Thursday, November 10, 2011

Some Things I Have Noticed While Perusing This Year's Job Ads

1. Gettysburg College wants someone whose Area of Specialization is philosophy of peace and nonviolence. That seems pretty specific. Is this something that a lot of people specialize in? Are there enough controversial philosophical issues surrounding peace and nonviolence to make this a worthwhile area of specialization? Why would a philosophy department at such a small school require a philosopher with such a specific, narrow specialization?

2. This year's award for best "is there an echo in here?" ad goes to University of Reading, Reading, Reading. (#61, 192W). Yeah, but where is this university located?

3. Saint Anselm College wants someone who specializes in "contemporary" and whose AOC is the (entire?) history of philosophy. I guess they want a real generalist. Someone with a broad background.

4. I see a couple of one-semester VAPs (e.g. Lyon College, #30, 192W). Why would anyone agree to that? I could see it if you already lived in Batesville, Arkansas, pop. 9,556. But you don't, do you?

5. As Zombie points out, and has been discussed in comments, the Cycorp ad is pretty awesome.

6. I think I picked up exactly one new application in 192/192W. I'll double-check later when I have more time, but that's piss-poor.

7. This sucks.

--Mr. Zero

147 comments:

Anonymous said...

Is the Reading thing like the "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo" thing? Are they both grammatically correct sentences?

Anonymous said...

Regarding #1: my guess is that they have an internal candidate, and are writing the ad narrowly so as to exclude all possible competitors.

Anonymous said...

This is just a guess, but if Gettysburg is a Quaker-affiliated school, that AOS might make sense. In a couple of the military school ads, I noticed an AOS request for military ethics, which also seems pretty specific (but also understandable, given the schools in question).

Anonymous said...

Re (1): Bernard Williams has an essay in Philosophy as a Humanistic Discipline (I think it might just be the title essay) where he says it might be good if people focused on particular phenomena of broad interest rather than disciplines or subdisciplines. For example (I think this might have been his example, but I'm not sure) someone might work on "artifacts" instead of mereology and/or metaontology. Such a philosopher would still do a lot of metaphysics, but presumably it would be intertwined with significant normative stuff, too. She would need to say a lot about teleology in answering both metaphysical questions about the nature of artifacts and practical questions about our thought and practices with respect to them.

This has always struck me as an interesting and attractive suggestion. "Peace and nonviolence" seems like a ripe candidate for it. A philosopher focused on these things would have things to say about just war theory (which apparently is indeed a hot topic), but these would inform and be informed by a whole bunch of related, and very interesting, questions for normative ethics at the individual level.

I don't know if this is in the neighborhood of what Gettysburg College has in mind, though. It'd be nice if it were. But I bet it's just that (a) they're probably a Quaker school or something, and/or (b) what Anon @ 10:22 said.

Anonymous said...

I agree with 10:22 - that was also my first thought.

But I'm ignorant concerning the rules here. Is it required that depts. post their jobs? Has any dept. hired someone without doing so? Is this cause for APA censure?

Mr. Zero said...

About the Gettysburg thing: it's always possible that I'm naive about this stuff. And I'm not saying that there aren't any interesting issues in the neighborhood of peace and nonviolence. But it seems to me that, as an AOS, peace and nonviolence is really narrow.

I mean, I could see writing a dissertation on peace and nonviolence--just like I could see writing a dissertation on the "humanity" formulation of the categorical imperative. But if you did that, you wouldn't describe yourself as a specialist in the humanity formulation, and you wouldn't expect to see a job ad looking for such a specialist.

And yes, departments are required to advertise their jobs. There's a law.

Anonymous said...

They may have to advertise. However, that has no bearing on them rejecting all applicants because someone already has the job sewn up. I hear that this happens all the time.

Anonymous said...

When small schools/programs advertise there is a different thought process than when big schools/programs do, even though the ads look similar.

So take the St. Anselm job. It seems funny that the entire history of philosophy, Plato through NATO as they used to say, could be an AOC. And at a big-time grad program it would be funny. There an AOC might mean a few publications or presentations in the area, or, for a younger scholar, a few grad seminars on the topic. But for a smallish school it means "could you teach ancient or medieval or modern or contemporary" on short notice.

Likewise, when Gettysburg asks for AOS: peace and non-violence, they are not expecting someone who 'works on that' the way a big school expects a logician when they advertise "AOS: Logic." They just want someone -- maybe an ethicist, maybe a political philosopher, maybe a hippie -- who could teach a class on it on short notice and be happy doing it.

Anonymous said...

12:04,

I could see that if "Peace and Non-Violence" were the desired AOC rather than the AOS. But given that it's the AOS, and that, from looking at Gettysburg's dept., they do seem to care about research as well as teaching, I'm guessing they DO want someone who "works on" issues of peace and non-violence. Perhaps they are interested in developing some kind of undergraduate specialization or institute or something, though if that were the case, you'd think the ad would mention it.

Relatedly, the other job they advertised in the October JFP was also strangely specific, seeking someone who does social and political philosophy from a continental standpoint, with "competency from Husserl to Zizek." Does Husserl even have a political philosophy? Strange.

Perhaps they just have an unusually specific sense of what kind of philosopher they want to hire.

Anonymous said...

Lord knows the military schools need someone to teach them military ethics.

Anonymous said...

My guess on #1 is that the line was created because someone with lots of money wants such a post. Perhaps the board of trustees, or a large private donor, decided this should exist. At my PhD university, the board of trustees wanted a tenure track hire to focus on human rights, so they created a line for that purpose.

Anonymous said...

This is (only) my second year on the market. Last year took a lot out of me. I feel like my motivation is totally sapped this year. I've sent out all my Oct. JFP apps, and I picked up three in the Nov. JFP, and I can't even bring myself to sit down right now and wrap up these three applications. I feel entirely demoralized and apathetic.

Zero - how have you managed to forge ahead and keep your head up for this long? I admire you, sir.

Anonymous said...

I took the St. Anselm ad to be looking for someone who does something contemporary and can teach some history of philosophy, without a strong preference, in either case, as to which.

A lot of peace studies departments are forming, and there might be funding available to bring in more people who can teach in them, but they might want people from several departments to be in that interdisciplinary program, so they're advertising here for a philosopher who does some work on such issues. I know of a position where there was funding earmarked to hire a person who did work on environmental philosophy (it was listed as an AOS), and they hired someone who had one article in it, was willing to teach an undergraduate course in it, and whose work was mostly in other areas of ethics.

Anonymous said...

Notice that Gettysburg currently has two VAPs, one of whom specializes in social and political philosophy from a Continental perspective, the other of whom specializes in the philosophy of peace and non-violence. Coincidence? I think not.

Anonymous said...

Gettysburg has a Peace & Justice Studies minor. It didn't take long to find that, either; you'll find it immediately if you go to their Philosophy Dept's webpage.

Anonymous said...

3:25-
Are you telling me I spent all this time refining my AOSs in Husserlian through Zizekian political philosophy as well as the Philosophy of Peace AND Non-violence, only to have insiders screw me?

Anonymous said...

@ 4:39:

3:25 here. Yes, that's what I'm telling you. Sorry to disappoint. ;)

Rex-158 said...

Anon 3:39 is right: Gettysburg College has a Peace and Justice Studies Program. There are a couple of these out there. There are also a couple of philosophers who study these issues. Anyway, here it is:

http://tinyurl.com/86t25ex

CTS said...

Gettysburg has a surprisingly big - in terms of students - phl program. One of their sub-areas (sorry, cannot recall if this is a specialization within the major or something else) is justice/peace studies.

And, yes, there are people who specialize in subfields that would fit the AOS: some people in phl of law, phl of international law, political philosophy, international political philosophy, and some subbranches of the phl of psychology.

In fact, this is a growing area of interest in many liberal arts colleges.

zombie said...

Gettysburg:
"Peace and Justice Studies is a multidisciplinary minor that explores the causes and nature of conflict and war, the connections between violence, terrorism, war and social life, and models of peacebuilding, healing and reconciliation in the resolution and transformation of conflict. Students who minor in Peace and Justice Studies are encouraged to explore opportunities relevant to Peace and Justice Studies through fieldwork, service learning, internships and study abroad."

Cool beans.

zombie said...

I think "Reading, Reading, Reading" is just a job description. Like, for a math job it would be "Multiply, Divide, Carry the one."

Anonymous said...

In regards to the Saint Anselm job, I'm just glad to be seeing a whole lot less of this sort of thing this year than last. Save two, every job I was eligible for last year had 'history of philosophy' as an AOS. And it isn't as though I wrote my dissertation on some obscure philosopher no one reads anymore. I wrote on Kant. Thankfully the market seems to have corrected this year.

Built to Spill said...

Zombie:

Don't forget to carry the zero.

Anonymous said...

Alright, not that I have a shot at getting an interview at Brown, but the deadline is in Dec. and they're up on the wiki as having scheduled first rounds?

Jamie Dreier said...

Brown has definitely not scheduled any interviews.

(I'm not on the SC, but I'm sure of this.)

Rebecca Kukla said...

You never know what donors will want. It is entirely plausible that someone gave Gettysburg a bunch of money to use on faculty working on peace and nonviolence. That doesn't seem weird to me at all. And it could explain the VAP without turning the ad into an inside job.

Anonymous said...

Look, I don't think there's anything weird about the Gettysburg ads at all. They currently have two VAPs working in EXACTLY those areas. The VAP lines have probably been converted into TT lines, and they probably intend to move the current VAPs into the TT positions -- but they are still required by law to advertise the TT positions. I've seen this kind of thing before more than once. Nothing odd about it at all.

Anonymous said...

There's no requirement that jobs be advertised. Most senior hires aren't advertised. There might be rules that, if you advertise, you need to meet certain standards. In most industries the bulk of hiring is done based on who you know.

zombie said...

I think there are federal requirements for advertising jobs, to satisfy immigration and EEOP (and maybe other) regulations. Since many universities receive federal funding, they have to be able to show compliance.

I don't know what those requirements say about "opportunistic" senior hires. But there are certainly ads for open rank positions.

Anonymous said...

@ 7:12

Nothing odd about it, but there's surely something wrong about it. Namely, it's not a job competition...the search is wired, which is nice when you're the beneficiary, but not so nice when you're an unemployed philosopher who would like a shot at getting a job at Gettysburg.

Anonymous said...

@ 11:09

I didn't mean to suggest that I condone this practice. I don't. I think it sucks. My point was just that it happens, and it probably explains the Gettysburg ads.

Anonymous said...

Since people keep hypothesizing:
(1) If you want to get a green card for someone who is not a US citizen who doesn't already have a green card, you must hire them with a search that meets certain kinds of publishing criteria.
(2) Schools have affirmative action/diversity plans. Virtually all of these plans PROMISE the gov't that they will conduct national searches for all tenure lines.

zombie said...

I don't see what's wrong with hiring a VAP for a new TT line. The VAP is there, has presumably demonstrated his or her qualifications, has proven to be someone the dept likes. This doesn't strike me as different from a company promoting from within. If you earn the promotion by your demonstrated capability, you deserve it. Same with VAP to TT. It's not like you get a VAP job by just walking in off the street -- they are also the result of a national search.

What's a little smelly about it is that the job is advertised as if others might have a legitimate shot at it, but as has been noted, this may just be forced compliance with internal HR or federal regulations. It is not wrong for a dept to decide, after being told to consider other candidates, that the one they've already got is preferable.

Or am I missing something here? Is it that the false hopes of other candidates are harmful to them? It doesn't strike me as more harmful than being a VAP who has put in the work and then gets passed over for the promotion (always the bridesmaid, never the bride...)

Anonymous said...

"What's a little smelly about it is that the job is advertised as if others might have a legitimate shot at it, but as has been noted, this may just be forced compliance with internal HR or federal regulations. It is not wrong for a dept to decide, after being told to consider other candidates, that the one they've already got is preferable."

That's assuming, of course, that the department actually CONSIDERS the other candidates. I'm not sure that's the case in these kinds of situations. What's really going on is that the decision has already been made, and the search is pro forma. This gives outsiders who apply for the job the false impression that they are actually being considered fairly and impartially for the position.

The problem, of course, is that candidates aren't in a position to know for sure whether a given search is legitimate or merely "staged" for compliance purposes (except after it is complete, perhaps).

zombie said...

"The problem, of course, is that candidates aren't in a position to know for sure whether a given search is legitimate or merely "staged" for compliance purposes (except after it is complete, perhaps)."

Granted. As I said, it may be misleading. I'm just not sure it's significantly more misleading than any other job ad where, say, the SC is only going to seriously consider your application if you're from an Ivy but the ad says nothing about such hidden "qualifications." (Hey there, Lou Marinoff! Haven't forgotten you over there at the "Harvard of the Proletariat." http://www.insidehighered.com/advice/2009/08/31/marinoff)

I stand by the claim that there's nothing wrong with preferring a VAP who is a known quantity over an unknown stranger. It's less suspect than other hiring biases because the VAP does something to earn the promotion.

zombie said...

That url is: http://www.insidehighered.com/advice/2009/08/31/marinoff

in case you're interested in knowing why you're not good enough to teach at CUNY.

Anonymous said...

Does anyone know if Lou Marinoff is getting rich on that philosophical counseling business? If so, how rich?

Anonymous said...

On the topic of speculating about inside hires, did anyone notice that Colby College, who is advertising for an AOS in Environmental Ethics and an AOC in Continental (and emphasizes that candidates should be an "exact fit" for these criteria), happens to have a VAP with those exact areas of specialization and competence? Curious.

Rex-158 said...

Re: philosophical issues surrounding peace and non-violence:

http://peacephilosophy.org/

Anonymous said...

I don't understand why anyone would assume that current VAPs in the hiring department have a leg up on the competition for a tenure-track job. VAPs were hired in a completely different market the previous year, typically with no thought of their hirability on the tenure track. In the cases that I'm familiar with (from both sides), being a VAP means having a leg *down* on the competition (if that metaphor can be reversed!). When I was a VAP I got three t-track offers from better departments but *not even an interview* from my home department. (There was talk of a "courtesy" interview that never materialized. They clearly didn't regard me as a potential hire and were amazed at my success at other departments.) And now from the other side I've *never* seen VAPs being groomed for t-track jobs. It probably does occasionally happen, and I have no particular reason to think that Gettysburg is not an exception. But in my experience the rule is that VAPs should assume they'll be moving on and that others should not fear competition from VAPs in the hiring department, even if those VAPs perfectly match the hiring criteria.

Anonymous said...

Also regarding a VAP who fits the job description, related to Zombie's remarks. The VAP is a known quantity, and is not going to be judged in terms of "promise" - basically, he or she can't bullshit. I've seen this work in favor of and against VAPs - if the dept likes what they see in the VAP, it's a huge advantage, perhaps indefeasible, but if they don't (or even if a minority of the faculty don't), there's no way in hell that person will get the job.

I also think people here are overestimating the coherence and rationality of the hiring process. So many crazy factors go into making a hire, and there are so many competing interests at play among the faculty (many to do with their relations to each other, with the candidates judged in terms of those relationships), that I'd suspect the kind of complete agreement people think happens in advance of publishing an ad to make such searches "fixed" is very, very rare.

It would also be pretty easy to check how many internal hires actually take place - look at the Leiter lists, or this year just keep track of how many jobs appear to be fixed, and then follow up on them at the end of the season. I think the number will be low.

Anonymous said...

I was a VAP who was hired for a TT job at the same school, so I am surely biased, but I don't see much wrong with internal hires like this. There was a nationwide search for the original VAP position, which I was told later had 100+ applications (not a big number by today's standards, but not insignificant), and I had to go through the same interview process as TT hires, phone interview, on-campus interview, teaching demonstration, etc.

It then turned out that the department got a TT line my 2nd year there. At that point, I had a pretty good track record of student evals, publishing while teaching there, and even being willing to pitch in on committee work. The chair argued to the administration (successfully) that they knew they wanted to hire me and shouldn't have to go through the motions of a full-on search that wouldn't really take any external candidates seriously.

I'm not sure what's unfair about this. I get why it's frustrating from the applicant's point of view (I was on the market for 4 years), but I also didn't appreciate wasting time applying for jobs that were clearly earmarked for the internal candidate.

Anonymous said...

@ 10:07

I think it's only unfair in those instances where outside candidates are led to believe they have an equal chance of getting the job. But then again, maybe the point of highly specific advertisements (such as the Gettysburg ads) is to provide a tacit signal to outside candidates that the job(s) are earmarked and so they shouldn't waste their time applying for them!

zombie said...

"I think it's only unfair in those instances where outside candidates are led to believe they have an equal chance of getting the job."

By that standard, all job ads are unfair. It's never the case that all applicants have an equal chance of getting the job. Even assuming a level playing field, a handful will make it to the campus interview (where they might all have an equal shot) but most won't even get to play for any given job search.

Anonymous said...

Zombie, let's not pick nits. I chose my words poorly, but I think you understood what I meant.

Anonymous said...

The legal requirements can make a difference here in altering the course of a pro forma in-house hire. My own tenured position was obtained by beating out such a candidate. I have also chaired a committee which presumed that our in-house candidate would get the job--but that candidate was beaten out by an obviously better one. If mandated procedures are followed, then it's not a given that an in-house candidate will succeed. I can't say that's the case for senior hires, however, where I get the sense that some of those are tailored to one candidate.

Anonymous said...

I agree with 3:56, and would encourage people to apply. Sometimes being a VAP is an advantage, when it's a case of the department wanting to hire the internal candidate. Sometimes the internal candidate isn't desired. Sometimes the internal candidate is excellent, they really like what they see, and they still decide they prefer unknown potential to known excellence. Sometimes, even if they like their VAP, they know their VAP is likely to get other offers, and so the search is real even if they'd be happy hiring their VAP.

The advice not to borrow trouble is wise here, I think.

Searchy said...

If you all don't mind, here's a little advice about applying. I'm currently serving on a search committee for a TT person in philosophy. I am at a non-R1 school and my department really cares about quality teaching. Here's my advice: when we ask for sufficient evidence of teaching effectiveness, don't cherry-pick your evaluations. We have received so many applications that give just a small handful of individual student evaluations (~4-5). Please give your evaluations for at least an entire course, preferably two, and preferably from your two most recent courses. We've even had people submit evaluations from only 2008 or 2009, even though they have taught plenty since then. Why? Beats me.

Think about it this way: if you don't give us enough to judge whether you really are an effective teacher now, why would you make our short list?

I say this so that some of you can improve your applications.

Asstro said...

Searchy:

I'm a fac member at an R1 school, and thus have been involved in (though never chaired) faculty searches. We tend not to focus too heavily on the teaching dossier, though we do, of course, look at it.

I also, however, advise grad students on their job apps. Can you say a bit more about how you might handle a file that included non-cherry-picked student evaluations? At least at my university, that'd result in an incredibly fat and unnecessarily heavy dossier. Are you suggesting that the applicants should include the actual evals, warts and all? Or are you suggesting instead that they also include, in their teaching dossier, some acknowledgement of the areas in which they have been criticized and will need to grow?

Anonymous said...

One thing I find frustrating is how search committees give a vague description of what they want, like "Evidence of teaching effectiveness" and then complain when they don't get exactly what they really want--like two sets of your most recent course evaluations.

I understand why a search committee might want those things. I could also understand why a search committee might not appreciate my sending them 60 evaluations. I can also understand why someone might send their evaluations from a few years ago--maybe the classes they taught back then were on topics that the candidate thinks SCs will be excited about. Maybe the candidate did an extra good job in a particular course from a few years ago and wants to highlight that. Maybe the candidate has a few stray unfair or nasty comments in her otherwise stellar recent evaluations, but would prefer not to send the nasty comments along if the ad doesn't specifically ask for them.

Again, I understand the reasons for wanting two sets of the candidate's most recent evaluations. But I can also understand why a well-meaning candidate might think she's putting her best foot forward by doing something else. If the SC wants something specific, why not just ask?

Xenophon said...

Searchy, thanks for the advice. Let me ask you an eternal question: how do you read student evaluations? A lot of people swear by them, and a lot are really skeptical of their value. Do you take them at face value, counting all negatives against all positives, or do you look at the comments to get a look inside the classroom, to try to see the teacher behind the evals? (I know the latter is hard to do -- is it possible, or not worth trying to accomplish?)

Anonymous said...

Here's a question:

When I send recommendation letters, I send them from people who I know have positive things to say about my work. I don't include letters from faculty who have been less-than-impressed with me.

When I list publications, I do not list papers that were rejected from journals, or, for each publication, the journals that rejected it before it found a home.

When I send a writing sample, I send my best, most polished work. I don't send along with it my most hand-wavy, poorly argued paper.

So there's a lot of cherry picking that goes on when sending in an application. It's cherry picking that can aptly be described as putting yourself in the most positive light possible. Why, then, should it suddenly seem weird when applicants don't include nasty evaluations from students, or send evaluations from their most successful class (rather than from their most recent class), and so on?

Anonymous said...

FWIW this is what I do:

I give a summary of all teaching evals with averages of scores on important questions, such as "Is the prof an effective teacher?" Under that I include some selected good comments along with some bad. My teaching portfolio is 39 pages long just with that included. I think this is a mixed optimal strategy. I also have a table of contents for my teaching portfolio since it is so long.

Searchy said...

Yes, I knew I was probably going to open a can of worms.

Asstro: If the applicant happens to send all individual student evaluations, thus creating a large dossier, so be it (our process is online too, so big files don't matter as much). I want to see the quantitative and the qualitative. If our number one need is a great teacher, then I need to have ample evidence that the person is in fact a great teacher. NB: usually the quantitative evals are already summarized, but one can also type up and condense the qualitative results as some applicants are beginning to do.

Xenophon: A picture is formed by looking at the quantitative evals, qualitative evals, teaching recs, and the applicant's cover letter. I put much greater weight on the evals since anyone can write a glowing rec and craft a good cover letter.

Listen, this is not an exact science by any means, but the notion of evidence has its place. When we make discriminations separating the good from the not-as-good teachers, we need evidence to make that cut. Thus, if one cannot clearly show that one is a good teacher, he or she will not make that cut. Plain and simple.

Anonymous said...

8:57,

I'd imagine we send complete evaluations because those are the most informative.

Unless you've read your rec letters you don't know what they say or the amount of nuance they may contain as regards your abilities. There is, of course, a trend toward rec letters being purely glowing AND, as evidenced by SC members on this very forum, a trend toward discounting rec letters for that reason.

Re: your work. You might send in your best writing sample, but that doesn't yet make it a good writing sample. The writing sample itself is an extremely useful gauge of who you are as a philosopher, because even your best paper still comes warts and all. At least mine do.

If everyone just included their positive student (or faculty) evaluations of their teaching, then the teaching dossier would become essentially worthless as an indicator of your teaching. Seeing what students have liked and what they didn't like is what best represents who you are as a teacher of philosophy. If you consistently have seriously negative evals, then that is something pretty deeply informative, and it should be taken seriously both by you (to improve your own teaching) and by an SC (to make a more informed judgment).

At the very least, if we are not including a full set of student written responses, then 1) that should be indicated in the teaching dossier and 2) the statistical data on the evaluations should be given as well.

I don't see any reason why I wouldn't include this unless I was either a bad teacher (which sucks for me) OR had a really obnoxious bad student review, in which case all of my other good reviews would clearly show it to be an aberration. If it was really serious (alleging really bad things) I could even address it in the dossier itself.

Anonymous said...

The amount of weight some people here seem to place on student evals for assessing teacher quality is horrifying, given the way such instruments track factors irrelevant to quality of teaching (gender, race, nationality, etc.). I don't have the studies handy, but someone out there probably does. Why not just look the candidate up on ratemyprofessors (which, if I remember rightly, someone did a study showing accurately tracks student evals)?

Anonymous said...

"I put much greater weight on the evals since anyone can write a glowing rec and craft a good cover letter."

And it's easy as hell to get good student evals. Getting good evals is easy; getting good evals while also holding your students to high standards is the tricky part.

When I'm on a search committee, I read for the bad evals; they tell me so much more than the good ones. Good evals are worthless. But bad evals that complain of difficult grading, lots of writing, and standards being too high paint a very different picture than evals complaining of the instructor being vague, unclear, and late to class.

Anonymous said...

Yes, the cherry-picked writing sample might not be perfect, and the cherry-picked recommenders might not say exclusively positive things. But those things might be true of cherry-picked comments as well. The point is just that while I am not expected to get a letter from the professor who has the worst opinion of me, and I am not required to send my worst work along with my best, and I am not required to list journal rejections along with acceptances, in this one instance I'm expected to send my worst evaluations along with my best. It's just surprising that so many people seem to assume a different set of rules for teaching evals.

As for whether it would be informative to send cherry-picked evals, that might just depend upon how you cherry pick them. I would have thought it would be annoying to send a giant stack of evals, many of which say little more than: "The teacher rocks!" or "This class was scheduled for too early in the morning." Cherry-picking comments that actually say something substantive, like "The teacher did X, Y, and Z for me" or "I liked that he did X, and Y, but didn't like that he did Z" might actually be informative. And it wouldn't require sending 60 pages worth of stuff.

My point isn't that I would mind sending all the evals. Just that when it isn't specified that the entire stack is to be sent, a person could reasonably come to a different conclusion about what would constitute helpful evidence of teaching excellence.

Anonymous said...

10:56,

I see what you're saying BUT there are, I guess, two different concerns here and two different ways of addressing them:

1. If the worry is sending too many pages in the teaching dossier, then this much can be remedied merely by condensing the material and not literally sending in every student evaluation. Typing up all of my student comments from the last 5 courses I've taught (several hundred students) fits them all into about 8 pages, give or take. That, coupled with a teaching philosophy, statistical data on each of my courses, and all of my recent syllabi, puts my teaching portfolio at around 30-odd pages. I could shorten that by including fewer syllabi and courses (as the years go by I replace older courses with newer), but I find that to be a condensed, useful, and tolerable amount of information for an SC to read through, since the guesswork has been taken out of it.

2. If the concern is that including all of your student comments is not the best way to represent your teaching excellence, well, there's some truth to that. If you have teaching awards or attend or participate in pedagogical conferences, those are also good indicators. However, were I on an SC that cared deeply about teaching effectiveness (I ask you to imagine the same), I would be skeptical about someone who cherry-picks her or his evals. I would worry about what they didn't want me to see. Of course, if the rest of the dossier is glowing, then this might not be an obvious cause for rejection, but it might (at least for me) lead to a question during an interview. Still, if you teach a class with 50 students and I only see 8 or 9 comments, then, assuming all the students had to fill out an eval (something which, if not true, should be stated somewhere), I would wonder what happened.

How's that for a compromise?

Anonymous said...

Searchy, I disagree. I think the important thing is to make sure that the evaluations are informative. That has little to do with whether you hand the committee a complete stack of comments. At my old employer, student evaluations were very detailed, requiring comments for each question given a numerical score, so sending complete comments would have meant sending 75 double-sided pages along with my application.

I don't think you want that. (FWIW, my colleagues at a SLAC didn't. "Don't make us figure out what is relevant; we have another 300 applications to look at.")

I also think it's completely reasonable for an applicant to want to remove obscene and rude comments. While I'm sure most search committees would believe that they've discounted the effects of an ethnic slur or commentary on physical appearance or abusive language on their evaluation of an applicant, I think there are reasons to think that implicit bias is powerful, and that an early comment saying "she's popular because of her giant tits"/"he's obviously an affirmative action minority hire" will color the rest of the evaluations.

So, I summarized it in a chart comparing my numbers on all ten of the main questions to the institution's average numbers. I explained how our evaluations work. I had my chair observe my teaching and write a letter. I included some of the more verbose and informative comments -- about ten, I think. The evaluations portion of the teaching dossier, including the numerical data: 1.5 pages, total. It worked for me (several interviews, t-t job at a teaching school) in last year's market.

I think it is important to give evidence that one is a good teacher, but I think that my way gave more evidence than merely handing the committee a stack of evaluations would have.

Anonymous said...

To those on search committees who want to see complete comments, but realize this means the candidates have to type them up themselves (as 11:12 describes) - do you just assume that the candidate accurately transcribes the comments? Have you ever followed up and asked for the original (perhaps hundreds) evaluations? Why would you take the candidate at his or her word, especially if this is such an important part of the file?

Xenophon said...

I've always been curious about the expressions "evidence of teaching competence," "evidence of teaching effectiveness," and "evidence of teaching excellence." Should candidates read anything into a department's choosing one phrase over the other?

(1) Is there an implicit rank ordering indicating how much departments (or universities) think they evaluate teaching quality?

(2) Is the choice of one phrase over another typically mandated by the university, or do departments debate wording when writing job ads?

(3) Does anyone else think that "evidence of teaching excellence" is a little fishy? Are all your professors really excellent teachers? Or is it like when a university mandates that ads say the school is a learner-centered institution? Certainly that's an empirical question, not one that can be determined by the fiat of some administrator, yet if you asked for evidence to back up the claim then you'd probably get either a confused look, or a pissed off administrator/SC.

Anonymous said...

Adding to Anon 11:39-

It's possible that the reason some candidates include evals from years ago rather than the class they taught last semester is that the evals from an earlier semester are genuinely more informative. Sometimes I get an entire stack of evals and say: "Really? There's nothing in here. Not good, not bad. It's just all worthless." So I wouldn't be inclined to send a worthless stack in, and would be more inclined to send in comments from a semester where the students provided genuinely informative feedback. It's cherry picking, AND it's more informative.

I also agree that it's reasonable to want to remove abusive comments. I don't think that removing nasty and abusive comments renders the rest uninformative. It *might* if all the comments say is "X was awesome!" But if a lot of comments mention that the applicant was always available outside of office hours, or gave extremely helpful feedback, then you've got some important information there. And you got it without the applicant having to include comments that exist for no reason other than to embarrass or belittle her.

zombie said...

I never included student comments in my teaching dossier. To do so (without cherry-picking) would mean 30+ additional pages for each course. Since most students don't write comments, I'd send a lot of blank pages, just to prove I hadn't cherry-picked. (I always laugh when I get a document and a page is printed with "This page left intentionally blank.") And many comments don't show anything about my teaching. I found most of them either said they loved me, or that my class was too hard. I had a student once write that I resembled some character from a science fiction TV show.

I always sent a complete set of the data sheets, for every course I ever taught. My teaching dossier was well over 20 pages.

Second, I was at a new faculty meeting a couple weeks ago, and one of the senior faculty recounted an incident from another university where he taught. A student wrote, in four-inch-tall letters, an extremely derogatory slur directed at a female faculty member. He also noted that female instructors consistently scored several percentage points lower on their student evals than male instructors teaching the same courses. The two incidents combined caused the university to reconsider the student evals, and the weight they gave them in tenure decisions.

These comments, I guess, are directed at Searchy, or others who give a lot of weight to student evaluations. How do you establish that positive student evals (if that is what you are looking for) correlate with good teaching?

Searchy said...

omg. People need to chill a little. Remember that my basic advice was to give sufficient evidence of teaching effectiveness. I happen to value complete sets of teaching evals, but if, as one commenter described, you provide a fairly concise but informative analysis of your student evals, that would be fine with me. The point is that for a school that values teaching (refer to the job ad to find out), you have to try to convince the SC that your teaching is good. And when we have 200+ applicants, if you fail to come close to providing that evidence, that's a serious ding.

Zombie: we are well aware that women and minorities tend to score lower on student evals. We take that into serious consideration when evaluating the teaching effectiveness of a candidate. That doesn't mean that student evals are worthless. Remember that I said that student evals help to paint a picture which is comprised of other factors as well.

But I guess for all of you who are not liking what I'm saying, what is your advice? Don't send any evals and rely solely on what your recommenders say? Send only severely cherry-picked evals? Both are inadequate by my lights.

zombie said...

"But I guess for all of you who are not liking what I'm saying, what is your advice? Don't send any evals and rely solely on what your recommenders say? Send only severely cherry-picked evals? Both are inadequate by my lights."

Searchy, I understand the problem here, and I'm sure others do too. I also know that many of us who understand that our employment may depend on student evals feel a lot of frustration about the seemingly increasing weight given to them. And some of us (myself included) wonder if they have any value at all, particularly when the anonymity of the student means the eval is not linked to the student's performance in class (Did they show up? Did they do their assignments? Did they get a D?) and removes some needed perspective.

I think it would be more valuable to see not the data, but actual student comments, from all students. But since many don't write comments, there is no way to get the perspective of an entire class. So you get extremes. I'm sure you know all this.

So I'm curious why you think the evals actually give you reliable information about someone's teaching ability, and how much weight they have compared to other factors. And since the evals are relatively new (we didn't do them when I was an undergrad), how did SCs evaluate potential teachers back in the old days?

Also, I'm sure many here would like to hear from you, as someone who has served on an SC, about the other "evidence of teaching excellence" you consider and how that factors into your decisions.

Anonymous said...

Summer before last I taught a small class with about 10 students. The student evaluations my department uses are not exactly user-friendly. There is a series of questions, and you then rank your instructor on a scale from 1 to 6. In some instances 1 would be the best score; in others, 6, because what 1 and 6 mean changes from question to question. My evaluations from that class contained 9 marks at the highest level (1 or 6) for each category and 1 mark at the lowest level for each category. I've frequently wondered if I had one student who was really unimpressed with my teaching or one student who couldn't figure out the confusing evaluation instructions.

Anonymous said...

Usually, I send a one- to two-page summary of my student evaluations and a teaching statement. From time to time, I send one or two sample syllabi. I've always tried to keep my teaching portfolio under ten pages. Perhaps I'm in the minority.

Anonymous said...

"I've frequently wondered if I had one student who was really unimpressed with my teaching or one student who couldn't figure out the confusing evaluation instructions."

It's funny how you're not wondering if the 9 who gave you good marks were ass-kissers, or thought you were cute.

Because, as we all know, good evaluations are a sign that we did our job right, but bad evaluations must somehow be discounted.

Anonymous said...

If departments want an honest, comprehensive view of our teaching, and are going to rely on student evaluations to get such a view, then they should be explicit about what they want to see--"we would like to see all your evaluations. Please do not send only a limited sampling of student comments." As it stands, they ask for "evidence of teaching excellence" or "evidence of effectiveness as a teacher." Shitty evaluations are not evidence of these things. At best (for applicants), they are meaningless. At worst, they are evidence of ineffectiveness and non-excellence as a teacher. Philosophers love to be literal, and everyone needs bread. Thus, when asked for evidence of excellence, it's not at all obvious why anyone should send evidence of ineptitude (assuming there is such evidence).

Anonymous said...

@9:35

The thought that the 9 couldn't figure out the instructions crossed my mind, but I think that only makes my larger point clearer. Whatever data one collects from student evaluations is indicative of practically nothing concerning one's teaching. In my case it could mean that I had 9 students who can't read directions or 9 students who were sufficiently swayed by my raw physical attractiveness to rate me highly or 9 students who felt bad for me and feared that bad evaluations might lead to me losing my job or 9 students who mistakenly believe that evaluating me highly might boost their grades. Who knows why students pick the scores they pick? And what is any of this really telling a search committee about my teaching?

Let's assume that I am a rotten teacher. My students all despised me, but only 1 of the 10 could figure out how to properly fill out the evaluation form. The end result is still that I have a set of evaluations saying that 90% of my students thought I was top notch. Hell, what's to stop me from inventing an entirely fake set of evaluations indicating that I am the best philosophy instructor known to humankind? The larger point is that, good or bad, student evaluations just don't seem to indicate much of anything. It surprises me that anyone takes them seriously.

Christopher Hitchcock said...

Anonymous 11/14, 4:13 AM said:

"My evaluations from that class contained 9 marks at the highest level (1 or 6) for each category and 1 mark at the lowest level for each category. I've frequently wondered if I had one student who was really unimpressed with my teaching or one student who couldn't figure out the confusing evaluation instructions."

This is exactly how the myth that Einstein did badly at school started. He earned mostly 1's on a 6-point scale, where 1 was the best grade. But the system was changed shortly afterward so that 6 was the best and 1 a fail. So anyone looking back at his report card would think that he failed.

Anonymous said...

If what 12:51 says is true, then we shouldn't take anything as indicative of teaching effectiveness. Peer reviewers might just write a good review because the reviewer likes the person and wants the person to get a job. Who knows what student evaluations really track? Hell, who really knows what good teaching is anyway?

What a load of BS. This is skepticism taken way too far. Healthy skepticism is one thing, but that's not what's happening here. Philosophers either want absolute precision or nothing at all. You are so uncomfortable with imprecise, though helpful, measures.

Anonymous said...

Searchy said: "The point is that for a school that values teaching (refer to the job ad to find out) you have to try to convince the SC that your teaching is good."

Actually, this isn't quite right. You can't simply refer to the job ad to find out. I know for a fact that one of the schools advertising this year who asked for "evidence of teaching excellence" does not value teaching. (E.g. the administration places no weight at all on any teaching metrics when reviewing for tenure.)

Anonymous said...

Ok, then I propose that everyone refrain from sending any evidence of teaching effectiveness to anywhere. Even if they ask for it, they might not even care. That will probably improve my chances when I do send something.

Anonymous said...

So much for the 'Harvard of the proletariat': http://itisonlyatheory.blogspot.com/2011/11/city-university-of-new-york-to-turn.html

Anonymous said...

@1:54

I'm not sure how it follows from my complaints about student evaluations that we shouldn't take anything as indicative of teaching effectiveness. Just because I think having students fill in bubbles or circle numbers and write optional comments isn't an effective method of evaluating teaching, it certainly doesn't follow that teaching can't be evaluated. It doesn't even mean that a more effective student evaluation system couldn't be developed (though I do think that a lot of students tend to base their evaluations on things I think have little to do with good teaching).

What I am saying is that I think the current student evaluation system (at least the ones at schools where I have taught) is not a good indicator of teaching ability. I am surprised to hear that anyone takes them seriously in this regard. Were I on a search committee I don't think that I would pay much attention to them (and I can't imagine requesting/desiring more of them as someone in an earlier comment did).

And, to stave off ad hominem attacks, this isn't me being annoyed because my teaching evaluations are bad. My evaluations are good, but they certainly haven't convinced me that I'm an effective teacher.

Anonymous said...

I'll just add that this thread about evals is indicative of why teaching demos are becoming de rigueur as part of the interview process at any institution that pays even lip service to the importance of teaching. I've seen a number of jobs won and lost at the demo.

wv: sundie, dessert to die for

Anonymous said...

I'll just add that this thread about evals is indicative of why teaching demos are becoming de rigueur as part of the interview process at any institution that pays even lip service to the importance of teaching.

How is a teaching demo any better?

"Furthermore, other methods of evaluating teaching effectiveness do not appear to be valid. Ratings by colleagues and trained observers are not even reliable (a necessary condition for validity)--that is, colleagues and observers do not even substantially agree with each other in instructor ratings." http://home.sprynet.com/~owl1/sef.htm

Congratulations, you've replaced one unreliable method with another.

Xenophon said...

As I recall, Einstein's grades were good but a little uneven. And he did rather poorly in Italian. Or are you talking about his grades at ETH?

CTS said...

@11/14, 5:19:

We don't need to substitute demos for all other forms of 'evidence.' They simply add another element - one judged by the SC and the students at the interviewing institution.

At my SLAC, we ask for evaluations by students and/or colleagues and a teaching demo.

CTS said...

@November 13, 2011 10:48 AM:

This is [one] reason I find the entirely fill-in-the-circle evaluations so problematic.

For those whose institution[s] only use these Scantron things, I recommend a prepared (i.e., printed out) set of questions requiring narrative responses. You can send copies of these along with the Scantron results.

By the way, I think these problems illustrate the benefits of having a teaching demo for the on-campus visit. They might even encourage those interested in positions in which teaching is a priority to get a tape of a class - so as to get past the first cut.

Carolyn Suchy-Dicey said...

Searchy, what would you recommend to the applicants who may have already sent along select, positive comments? Would you advise them to now send either the entire set from the most recent class or a typed document of all their evaluation comments after the fact (to those departments they suspect might feel similarly)? Or does your advice mostly pertain to those who have not yet sent applications? Also, do you think it might be a disadvantage at other departments to send the full set if the norm is to send only positive evaluations? Or do you think your sentiment is widespread?

Searchy said...

Dear Carolyn,

I don't know how widespread my sentiments are. Clearly not everyone here agrees with what I wrote, but at base, my goal is to ascertain as best as possible the teaching effectiveness of the candidate I am reviewing. Nothing is perfect, but many things can begin to create a picture. And student evaluations do do that. And more complete student evaluation sets do a better job creating that picture.

In fact, and this is not quite on your questions yet, I value student evaluations because I do value the students' views. It's a little disheartening how quickly people here dismiss the views of the students they are teaching. Give students credit. My students are quite reflective and know what they want out of an education, and I'm not at an SLAC. With the growing standard of letter writers only writing absolutely glowing recs, I'm beginning to trust student evals more.

So what do I say to those who only sent a small sample of student evals? I don't know. That's really your call. If it were me, and the application due date has not passed, I'd send a more complete packet. You need to think from your potential reviewer's standpoint. Ask yourself if what you sent is enough to give a good, fair snapshot of your teaching. I couldn't care less if an applicant sent more materials at a later date, but before the due date. Even if it was after the due date, but before I reviewed the file, I wouldn't care; I'd include it. But chances are I wouldn't review new stuff after I have reviewed the file [and I should note that I only review files after the application due date].

I do wish you the best of luck.

Anonymous said...

At my SLAC, we ask for evaluations by students and/or colleagues and a teaching demo.

Congratulations, you've now got two unreliable procedures. What reason do you have, in light of the empirical evidence, that having both will remotely track actual teaching ability, whatever that may amount to?

I can believe that jobs are won and lost at the demo. I don't yet see the rationale.

Anonymous said...

I have actually taken an informal survey of my students a few times to try to get a sense of their attitudes toward evals. Most of them assume that the evals will never be used for anything. Lots assume that I won't even read them. Most assume that I am the only one who will read them.

All of them say that they sometimes take a particular evaluation seriously as they write it. Very few take most of them seriously. Lots of them say that they just use the evals to let off steam, and that it would change their eval significantly were they to know that the eval could affect the teacher's life prospects.

There is near universal surprise and disgust when I mention that evals are sometimes taken seriously in hiring decisions.

So, before placing any weight on the evals in aggregate, I definitely think SCs should look for some evidence that the people who write them are even taking them very seriously.

Anonymous said...

Searchy -

I'm not dismissive of my students' views, but I know that student evaluations often can be a reflection of things other than "teaching effectiveness". At ratemyprofessors, almost every professor/instructor at my university who has a higher "overall quality" rating than I have is also rated higher on the "easiness" scale. I wonder why?

Anonymous said...

I had a colleague in another department do a study trying to correlate higher student evaluation scores with higher student grades. He was sure there was a strong correlation. After he finished studying 4 or 5 departments, he found no correlation. He has since dropped his insistence that they are connected.

Carolyn Dicey Jennings said...

Searchy (sorry, in the process of a name change), thanks for the helpful comments. I think that evaluations can be informative. In fact, I normally do a midterm assessment to see if my students think the class is going well. On the other hand:

1. Individual reviews can be misleading: I once had a student try to psychoanalyze me in a review (something like, "she probably only grades hard to separate herself from us because she is a graduate student"), and I have had many other students say things that were lazy ("great"), ill-thought-out ("she is the only reason I know anything"), or just off-topic ("have a good vacation"). Luckily, all the error-ridden reviews I just mentioned are from happy students. Add to these the ill-thought-out mean reviews (your D student calling your class "superficial" even though he had no idea what you were talking about) and other such off-the-mark comments. All methods of evaluation have flaws, and so long as search committees keep these in mind the full-disclosure method might work.

2. Comparing sets of reviews is like comparing apples and oranges unless one knows the average grade, how the reviews were collected, etc.: I once TF'd a class with someone who gave an A average on the first logic assignment, against my D average, because he "did not get it, so how can I expect the students to?" Most of my students never forgave me, and I got my lowest evaluations in that course. His grading policy was "C+ or higher for anyone who makes an effort," he would high-five students as he walked into the room, and he would have to look at my answers in order to do the grading. His reviews were great, and he confided to me that he uses strategies like that all the time (TF with a bad teacher, or grade easier than the other TF, and you get great reviews).

2 is not something that search committees are likely to scope out, and where does that leave the teachers who are effective but who don't game the system? I don't think this is mere skepticism. What evaluations can catch are people who don't really care about teaching and think it is a waste of their time. Beyond that, one would need access to a lot more information to make crosswise comparisons, or one would risk leaving out anyone whose commitment to teaching does not stop with a commitment to good reviews (and may even conflict with that goal).

I think observing a class gives one a really good idea of how someone works as a teacher. This method is at least more direct. In getting from the 200 down to those 3 or 4, I just hope that departments that do care about teaching don't pay too much attention to any one possibly flawed bit of evidence.

Anonymous said...

I had a colleague in another department do a study trying to correlate higher student evaluation scores with higher student grades. He was sure there was a strong correlation. After he finished studying 4 or 5 departments, he found no correlation. He has since dropped his insistence that they are connected.

Did he control for other known variables like race, gender, and looks? If not, this "study" is worthless.

Anonymous said...

@Carolyn and 4:10

My thoughts exactly. I've been in situations similar to the ones that Carolyn describes. In my experience students just don't take the evaluations very seriously. I think that students are capable of providing useful feedback, but with very rare exceptions they just choose not to. And why should they? There is no real incentive for them to put any effort into the evaluations. Honestly, that's where I think the system should change if we want to take student feedback into account.

Anonymous said...

I wonder about the extent to which economic considerations (perhaps for lack of a better word) affect how much stock search committees put in student evaluations. Departments compete for funds, sometimes on the basis of how many majors they produce, how many students choose their classes to meet distribution requirements rather than other classes offered by other departments, etc. I remember a professor I had as an undergrad who promised the (very large) class that so long as you attended the review session the day before the exam, you'd leave the class with a B (because the review session was really a session where the answers to the next exam were given out by him). He was very popular (my undergrad school was heavily populated by mediocre students who were there to party). Given the numbers he could draw, his doing what he did had a kind of value for his department and for the school (i.e., lots of "satisfied customers").

My sense is that in many places, departments want to see lots of full classrooms and lots of majors in their subject because these things help them survive. Teachers who earn a reputation for being tough graders who expect a lot are unlikely to fill classrooms the way "cool" professors (i.e., easy Bs) are. I suspect that departments might have to justify low enrollment but not high enrollment, and I wonder if there's a correlation between high enrollment and easy grading, and between the latter and "good" evaluations. In short, to what extent are search committees looking at "evidence of teaching excellence" as a proxy for high enrollment, "satisfied customers", etc.?

Anonymous said...

From what I gather from many of the comments here, one cannot trust evaluations because they are more indicative of a teacher's popularity than anything else.

Think about this, however. If I were hiring a new faculty member, I would want that person to be popular with the students. I'd want someone who could draw students into our classes and perhaps into our major. As long as there was also good substance being taught in the classes and grading was fair, popularity is a good thing.

Anonymous said...

I'm as hard-assed a grader and teacher as they come. And I never have a difficult time getting top evaluations. Other hard-asses I know -- some of whom have won teaching awards -- are similarly popular.

I've heard many instructors whimper and whine about how all the students want is an easy A, only to find out from their students later on that those instructors can't teach worth shit, and don't seem to care. So I take those claims with a grain of salt.

Has anyone ever done a good study showing that generally poor student evaluations for allegedly difficult courses might actually be motivated by the fact that the course is difficult _because it's taught by a classroom fuckup who hasn't got a clue how to get people to learn philosophy_? If not, then all alleged 'studies' that show a correlation between poor ratings and demands of academic rigor should be questioned.

But of course, there are no such studies. There's too little actual interest in finding out whether philosophy instructors are actually doing their jobs in the classroom. The contempt for the pedagogical needs of students is de rigueur in most discussions of the topic I've seen among faculty members.

Moreover, any time (as in this thread) it's suggested that a SC should take _some_ pains to figure out whether the person they hire knows how to teach, a million whiny wankers who couldn't teach their way out of a wet paper bag throw up a million different objections to why this or that methodology is useless or worse ('Hey, has your method really taken into consideration the fact that nobody can prove the existence of the external world? Then you can't really know whether the person is really teaching the class, can you? Therefore, don't bother asking for a teaching demo or evaluations'.)

I've long thought that we should cull those who take the 'Fuck You!' approach to students out of classroom settings, and place them securely in institutes wherein they won't be skimming off the trough of tuition fees and other money earmarked for _teaching_. Why has nobody started a revolution along these lines?

BunnyHugger said...

Sorry to be a bunny-come-lately to this post, but I'll point out that a college I was formerly affiliated with had an interdisciplinary Peace Studies department which housed people with co-membership in other departments. It was a Quaker college.

Mr. Zero said...

I realize that perhaps the time to have posted this comment has passed. But just to be clear, my quibble with the Gettysburg ad is not that peace and nonviolence is an uninteresting or unfruitful area of philosophical study; it's that it's too specific for an AOS. I think it's clear that I'm right about this, but if it's not, here's some more evidence: a bunch of people have reacted to the ad, in conjunction with the fact that the department already has a VAP or two with that AOS, by inferring that the ad was tailored to them and that the search isn't on the level.

On the other hand, I did ask in an incredulity-expressing way why a department like that would want to be so specific, and I appreciate the people who explained why.

Anonymous said...

Mr. Zero 11:32
The restriction is probably linked to funding.

Anonymous said...

For those of you complaining that students do not take your evaluations seriously: why don't you tell them that you take the evals seriously and that you really care about what they have to say? I always tell my students this (while reminding them that the evals do not get read until after the quarter is over), and I consistently get most of them writing comments (and not just filling in bubbles).

A little initiative on your part might go a long way toward changing student behavior.

empirical philosopher said...

Has anyone ever done a good study showing that generally poor student evaluations for allegedly difficult courses might actually be motivated by the fact that the course is difficult _because it's taught by a classroom fuckup who hasn't got a clue how to get people to learn philosophy_? If not, then all alleged 'studies' that show a correlation between poor ratings and demands of academic rigor should be questioned.

There are studies that investigate *within-class* variation in grades, finding that students who get lower grades rate the professor worse than students who get higher grades *in that class*. There are also studies showing that when a professor lowers the curve in her class, her evaluations go down. And a Cornell psych professor taught the same course, same content, same textbook, in successive semesters, but got lessons in presentation style in between. His evaluations went up dramatically; students thought they had learned more, and they rated the textbook much more highly. But in fact there was no difference in performance on final exams from semester to semester.

But of course, there are no such studies. There's too little actual interest in finding out whether philosophy instructors are actually doing their jobs in the classroom.

This is false. There are such studies. Only you have to be interested enough in actual evidence to find them, rather than just making shit up.

Anonymous said...

Excellent point, 1:26.

Here's my two cents' worth of contribution to this discussion:

Last year, I got invited to an on-campus interview that I later discovered was an excuse for hiring the internal candidate. The interview process involved a teaching demonstration.

Later, at an undergraduate conference, I met one of the students (a senior) who had attended my teaching demo. He told me that he was astonished that the internal candidate won the position, since (as he put it) the internal candidate was "like Ben Stein in _Ferris Bueller's Day Off_". This student was obviously very interested and dedicated, and is at that school completing his BA this year.

The student told me that he would gladly have taken a course with _any_ of the candidates aside from the one who secured the position, whom he will now avoid on the basis of the teaching demo.

Now, here's a question for those who think that students can't be trusted to know good teaching when they see it and/or that teaching demos should not be used: do you really have enough contempt for the intelligence of a student like that to hold that his views are utterly worthless in determining who should be hired?

Beyond that, do you have enough contempt for your _own_ ability to distinguish quality teaching from garbage teaching to conclude that, if you had witnessed such teaching demonstrations, they would not have been genuinely informative to you?

I'm seriously asking. Thanks.

Anonymous said...

anon 1:26:

After I conducted my little survey, I started telling them to take it seriously. Still, this suggests to me that I shouldn't really be taking other people's evals seriously, unless they're all similarly impressing the importance of evals upon their students.

umble said...

Hi 1:54.

I guess I *do* have that 'contempt' for my own ability. Only I would have called it 'humility' rather than 'contempt'. But why mince words?

Anonymous said...

Just to be clear, Umble:

If you were on a SC, and you saw one teaching demonstration that resembled Ben Stein in _Ferris Bueller's Day Off_, and other demonstrations that seemed great to you -- and if your top undergrads told you their own strong feelings about it... you would _not_ find that helpfully informative?

Not helpfully informative to the point where you would say, after witnessing such an apparent disparity in teaching ability, "Geez, next time let's not even have a teaching demo: totally worthless!"?

Could you please confirm that you're really committed to such a radical position?

Umble said...

No, 3:34, you've misunderstood. Maybe you didn't ask the question you meant to ask.

Here are two different questions:

1. What would I do in such a situation?

2. Am I in fact confident that in such a situation I would gain useful information?

I was answering the second question, because that's the one you asked. That is the question about the confidence I have in my own abilities (or humility, or contempt, whatever).

Look, the evidence is pretty powerful: people are much worse at judging things like this than they think they are. Knowing about that evidence heightens my humility.

It's quite likely that I, like most other people, would think to myself, "Wow, I just got some awesome useful information!" (That's my answer to question 1.) And quite likely that I would be misjudging.

3:34, let me ask you a question. Suppose you witnessed a really attractive, well-dressed person teach a class, and then a kind of schlumpy, unattractive person teach a class. Do you think you would be more impressed by the attractive person's abilities?

Anonymous said...

@9:57am

'Hey, has your method really taken into consideration the fact that nobody can prove the existence of the external world? Then you can't really know whether the person is really teaching the class, can you? Therefore, don't bother asking for a teaching demo or evaluations'.

You do teach your students something about straw men, right? It's one thing to be unable to prove the existence of the external world; it's another to be unable even to find agreement (necessary for reliability) on what constitutes good teaching in an observation.

Anonymous said...

As long as there was also good substance being taught in the classes and grading was fair, popularity is a good thing.

But the "as long as" clause is precisely what people are questioning.

Anonymous said...

OK, Umble: let me rephrase my question.

Suppose you see the same two teaching presentations I just mentioned: on the one hand, the guy who drones on and on, never looks up from the thing he's reading, and stumbles around without apparently getting anywhere; on the other, the person who is engaged and interesting, and who presents a set of ideas that strikes you as clear, well-thought-through, and leading to a novel, attention-worthy conclusion, all explained with the utmost clarity.

Suppose, also, that (as before) the students who attend the teaching demos say that they would much prefer to take courses with the second candidate.

Also, imagine that the first candidate received terrible student reviews in all his classes, but the second received stellar reviews in hers (with students explaining in depth precisely how her teaching style helped them produce great papers, etc.).

Under those conditions, do you maintain that no genuinely worthy and even dimly reliable information was conveyed about which of the candidates is a better teacher of philosophy? To the point that one may as well not have such teaching demos at all?

Thanks.

Anonymous said...

So, here's the reality that slaps you in the face when you hit the job market: the stuff that is deemed important while you are in grad school (doing well in coursework, impressing your profs, writing a good dissertation, etc.) isn't particularly important in the world of the job market. Very few applications ask to see your transcripts, so your grades don't really matter, and, yes, you have to impress three or four profs enough that they can write the glowing recs that everybody has (and which are, as a result, almost meaningless). And sure, if your dissertation *really* kicks butt (plus you have a pub and conference presentations), then you might get a research job on this basis alone, but that scenario is somewhat rare.

For very many, getting a job depends on excellence in precisely those areas that were marginalized while you were in grad school: teaching evaluations or awards, having an AOS or AOC in some underrepresented area that most top research departments don't care about and offer no support for, and having the political "document savvy" to negotiate the bizarre world of academic applications. In my experience, at least, teaching is not emphasized in graduate school. It's talked about from time to time, and resources are available for improvement, but the attitude is that it's secondary. You have a second-year review of your academic work, but there is never any official review of your teaching.

There's no point in being bitter about it; you just have to adapt to the market as best you can and move forward. But it seems to me that most programs could do a much better job of making grad students aware of, and prepared for, the reality. I got the "it's really competitive out there, so your academic work better be brilliant" speech, but this assumes that one is exclusively targeting research jobs, and those are only part of the market. Maybe some departments already do this, but there ought to be some required practicum courses: one on teaching, and one on the idiosyncrasies of the application process. By sheer luck, I've got some AOCs in underrepresented areas, and my teaching evals are pretty good (but I would have worked much harder to make them better had I known how important they would turn out to be). This has helped a lot to add to the list of positions I can reasonably apply for, but I've still been really struck by the disconnect between what you are expected to focus on as a grad student and the reality of the job market.

Cardinal Monday said...

To all you skeptics about the ability of evaluations, assessments or naked-eye observations to give any clue about teaching ability:

Do you hold to a similar skepticism about the ability of philosophers to determine whether another philosopher has produced good research?

When you interview a candidate who smoothly answers your questions about his or her research potential, do you suspend judgment on the grounds that some irrelevant factors may be coloring your judgment?

How about when you read a candidate's writing sample? Do you lose sleep worrying that the clear writing style might be blinding you to a poor philosophical ability, or vice versa?

Have you ever considered that publication in this or that top journal might, to some extent, be a function of some of the same factors that might buffalo _you_ when you read the writing sample?

Are you aware of the studies showing that this sort of influence is possible? Do you care about those possibilities, to the point of saying that publication record is an irrelevant distraction in a job application?

Just wondering how even-handed you are with the skepticism.

Umble said...

8:22, etc.,

I have three things to say.

1. I think the correlation between perceived ability and actual ability is low, not zero, so if your standards really are as low as “even dimly reliable information”, then sure, you probably do get that.

2. On the other hand, I hope that if we are ever in that circumstance together, you would be willing to entertain the possibility that the candidate we loved seemed “engaged, interesting” to us for some reason we didn’t notice; maybe she smells really good, for instance, or her pupils were dilated. I’m not making this up; that kind of thing really does happen. (You can read about it in some of John Doris’s work, for example.)

3. That said, I think some other commenters have made a good, albeit somewhat cynical point, namely, that maybe what a department should be looking for is not a good teacher (imparts wisdom) but a popular teacher (fills classes). If that’s what we’re looking for, then I’m quite willing to trust our “blink” judgments.

I notice that you did not answer my question.

empiricist philosopher said...

Cardinal Monday, do you understand the difference between evidence, on the one hand, and making shit up, on the other?

"Are you aware of the studies showing that this sort of influence is possible?"

I am not. Please cite the studies.

I think it's really interesting that so far, everyone on one side of this dispute is interested in evidence, and everyone on the other side is using the powerful philosophical method of having intuitions about imaginary situations.

Anonymous said...

@empiricist philosopher

obviously this just shows that empiricist philosophers are shitty teachers, and rationalist philosophers who vigorously intuit their way to truth about human nature are awesome teachers.

Anonymous said...

How about when you read a candidate's writing sample? Do you lose sleep worrying that the clear writing style might be blinding you to a poor philosophical ability, or vice versa?

Actually, I do worry that sometimes good prose is mistaken for good philosophy in a way that systematically disadvantages philosophers for whom English is a second language. Any reason why I shouldn't worry?

Anonymous said...

Umble: in answer to your previous question, sure, I accept that my impression (like everyone else's) of someone's ability at doing something -- whether publishing, teaching, or building a fence -- is probably influenced by several irrelevant factors like the ones you mention.

The reason I didn't answer that question before is that it seemed to rest on a misunderstanding of my point. I wasn't saying that no such biasing factors exist: I was saying that, despite the presence of such biases (which we should all do our best to minimize and factor in), they do not make teaching demonstrations or examinations of candidates' teaching reviews worthless. I'm glad you agree with that, as it now seems.

Anon 9:14, I'm not sure where you're getting your information, but I can tell you that for the most part teaching evaluations and teaching awards do _not_ trump other components of an application package. I personally know several people with neither teaching competence nor evidence of it who have landed plum jobs when the SCs overlooked that gap owing to the 'strong research potential' component. But I haven't heard of a single case in the past decade of someone who won a teaching position (other than at community colleges, which seem to care more about collegiality and conformity than excellence in teaching _or_ research) without having either a pedigree or some publications.

SCs even at SLACs want to attract the sort of money that comes with strong publications, if they can get it. Sure, they care about teaching excellence. But if they get a number of candidates who seem to be good teachers -- and they will -- they will go with the one with the earmarks of a big future publisher and grant-winner (pedigree, publications, and letters from famous professors raving about how much great stuff the candidate is likely to publish) over the outstanding teacher in the pack, any day. If you don't believe that, have a look at the faculty lists even at the SLACs advertising in JFP.

Cardinal Monday said...

empiricist philosopher,

I was being somewhat tongue in cheek: my point was that, as far as I've seen, there _are_ no studies examining the extent of irrelevant bias in anyone's assessment of the worth of publications, research projects, ability to explain one's work on the spot, etc. However, everyone and his/her dog knows all about the studies showing bias in student reviews of teaching, peer reviews of teaching, etc.

Either there is a serious discrepancy in sorts of studies done, or else there is a serious discrepancy in the sorts of studies people like to talk about and publicize.

And yet, it seems straightforward that the same sorts of bias would be present in both cases. Why would it be ridiculously easy for people with dilated pupils to fool SC members in their teaching demos, but not in their discussions about their published work? And so on.

Presumably, the reason for this is not hard to come by: the game is, as any sane and attentive person should acknowledge, completely dominated by research over teaching; and so the people who have and feel entitled to the greatest degree of career success are well-trained at research and not at all at teaching (please think, for a minute, of how much training your graduate program gave you in teaching as compared with research). Moreover, those who feel the most entitled to win the spots are those who feel little or no serious moral qualms about doing a bad job teaching: they tend to blame their students for not learning if there are any problems, rather than blame themselves for not teaching well.

And, not surprisingly, when such people who are either bad or insecure about their teaching see even the _spectre_ of concern over teaching on the horizon, they go into defensive mode and need to throw these studies around to help ensure that their poor abilities in the classroom (and/or those of the rest of their in-group) will not be detected or cared about, regardless of the negative effect that is likely to have on students.

Anonymous said...

To Anon 6:53:

Yes, you should certainly worry that you may be biased against people's philosophical ability on the grounds of familiarity with English.

As you can see, I didn't intend to argue against that: I just wanted to point out that only a _fool_ would move from that sensible insight to the conclusion that we ought not look at writing samples at all, simply because we can't control for bias.

And yet, a parallel (and equally foolish) argument to this reductio seems to be made in the case of teaching evaluations/demos. I was merely pointing that out.

Anonymous said...

my point was that, as far as I've seen, there _are_ no studies examining the extent of irrelevant bias in anyone's assessment of the worth of publications, research projects, ability to explain one's work on the spot, etc.

There you go: http://www.sciencedaily.com/releases/2007/10/071002151837.htm

Why would it be ridiculously easy for people with dilated pupils to fool SC members in their teaching demos, but not in their discussions about their published work?

But people do think biases exist in discussions about research too. Have you not been following the various threads about the worthlessness of interviews that have been cropping up over and over? Antony Eagle's write-up provides good references.

And, not surprisingly, when such people who are either bad or insecure about their teaching see even the _spectre_ of concern over teaching on the horizon, they go into defensive mode and need to throw these studies around to help ensure that their poor abilities in the classroom (and/or those of the rest of their in-group) will not be detected or cared about, regardless of the negative effect that is likely to have on students.

I just don't see how this follows. You might think it's in fact a sign of a candidate's care about teaching that she has bothered to investigate the cognitive psychology behind learning and the validity of assessment instruments.

Perhaps some philosophers can rigorously intuit the methods of good teaching. I think it's better to look at the available evidence and figure out appropriate methods (often through iterations). To do so, you gotta decide what's good evidence and what's not.

empiricist philosopher said...

And yet, it seems straightforward that the same sorts of bias would be present in both cases. Why would it be ridiculously easy for people with dilated pupils to fool SC members in their teaching demos, but not in their discussions about their published work? And so on.

But some of us do think that interviews are also very low value for assessment of philosophers. I take it that's what "their discussions about their published work" means.

Again, this isn't something that you can work out a priori, by thinking of lots of interesting examples and rendering your intuitive verdicts. I thought your phrase, "need to throw these studies around", was very telling. The people you slander in the rest of your comment could just be people who (like me) are interested in evidence.

Anonymous said...

Holy shit! I didn't realize that empirical data was the only kind of evidence in existence! Too bad for you mathematicians. That attitude scares the bejesus out of me...

YFNA

Anonymous said...

On another note: I do think the general questions are somewhat useful. They are often phrased comparatively, so you can get a sense of how you measure up. Of course there are all kinds of factors, but that's the point: all things considered, are you an effective teacher? Students will ask themselves whether they enjoyed the class, were engaged by the material, whether the prof presented it in an accessible manner, and whether they learned anything. Don't these speak to your overall effectiveness as a teacher?

Also, about the popularity thing: popularity is an important factor in being an effective teacher, since the more popular you are, the better your chances of reaching students, right?

YFNA

Anonymous said...

I didn't realize that empirical data was the only kind of evidence in existence! Too bad for you mathematicians.

Are you actually claiming that the psychology of human learning, teaching, assessing, judging, etc. is anything like mathematics? If you can make a case for that, then I'll accept your a priori evidence.

Of course there are all kinds of factors, but that's the point: all things considered, are you an effective teacher? Students will ask themselves whether they enjoyed the class, were engaged by the material, whether the prof presented it in an accessible manner, and whether they learned anything. Don't these speak to your overall effectiveness as a teacher?

I take it that all-things-considered really means all-things-***relevant***-considered. I also take it that we pre-theoretically think things like gender and race and hotness are irrelevant. However, things like gender and race and hotness do affect student evaluations. Students may not consciously ask themselves how hot the teacher was, but it certainly shows up. So, the claim here--to repeat, since you apparently missed out on half of the thread--is that student evaluations don't reliably track overall effectiveness as a teacher.

(No doubt someone will soon chime in and say that being a white man really makes you a more overall effective teacher.)

Cardinal Monday said...

I see now that I should have made my overall point more clear: sorry.

I certainly have been following the discussions of the worthlessness of interviews. I agree that, if the point being argued for is not that teaching demos are a bad part of the interview process but rather that there should _be_ no interview process, those making that point are at least not guilty of a blatant inconsistency.

However, what about bias involved in assessing the worth of philosophical writing samples (which SCs and also journal editors do)? Isn't there a worry that one can't rule that out?

Before you answer, let me explain where I'm going with this. Right now, many SCs look at a combination of the following:
a) Research;
b) Pedigree;
c) Grant-securing abilities;
d) Letters;
e) Teaching evaluations;
f) Teaching demos;
g) Research presentation; and
h) 'Performance' in an extended interview.

So, let's imagine that we scrap e-h as being too vulnerable to bias. Strictly speaking, as others have noted, we would have to knock out letters also for the same reason. So now, we're down to:

a) Research;
b) Pedigree;
c) Grant-securing ability.

There are a number of problems with that. One, as I mentioned, is that it suspiciously rules teaching ability completely out of consideration (which is very convenient for the people who have no moral consideration for the interests of students, but _is_ actually a required and important part of more or less any job). Another problem is that bias seems to play a big role in those three factors as well. Moreover, it is rather elitist in an unhelpful way: graduate admissions committees make errors just as search committees do, and b) and c) are closely tied to a).

So: I think the decent solutions are either to make the hire at random, or else to find the best way possible to measure research _and_ teaching potential. Are there problems with many of the popular methods of measuring teaching potential/performance? Sure. That's mostly a function of the structure of academic life (nobody gets to watch one another teach, it would be considered rude to take someone to task for bad teaching but not for bad reasoning in a paper, etc.).

I'm perfectly happy for people to suggest that teaching demos or teaching evaluations be replaced _with some other method of gauging teaching ability/potential_. But if people just want to snipe at all the components of an application package that touch on teaching and leave intact all the research and pedigree stuff, we have to ask where they're coming from.

Anonymous said...

I realize that there are non-relevant factors, but they infect the whole damned thing. I was just addressing someone's claim that the general questions are useless. Your point is orthogonal to that.

YFNA

empiricist philosopher said...

Cardinal Monday, I can't tell if you're joking, or doing the tongue-in-cheek thing again.

It sounds like you are claiming that the same problems that arise from taking teaching evaluations seriously will also arise from taking seriously the evaluations of a candidate's research that the SC reads and the ones it makes on its own. This strikes me as extremely implausible, but I am quite happy to look at any evidence.

I'm getting pretty tired of your repeated, utterly baseless charges and insinuations that people like me have some ulterior motives, so from now on I'm just going to ignore those.

YFNA, I think you must be joking about the mathematics thing. Please?

Finally: I just noticed that I called myself "empirical" one time and "empiricist" thereafter. It's the same author. I just forgot.

Anonymous said...

I feel like I am the same person as the empiricist philosopher, but I'm not.

I realize that there are non-relevant factors, but they infect the whole damned thing. I was just addressing someone's claim that the general questions are useless. Your point is orthogonal to that.

Well, whether the general questions are useless depends on whether the responses to them track what they're asking for. If non-relevant factors "infect the whole damned thing", then the responses don't track the intended target. So, yeah, those questions would be useless.

Where is the orthogonality in that?

Anonymous said...

Dworkin once argued that race could count as merit in criteria for law school admissions. In part this was because of the effects that one's race may have on the academic environment of which one is a part.

Why isn't hotness a merit for effective teaching? If being hot means students come to class more often, pay more attention, and are more engaged in class...then why shouldn't that count in one's favor? Isn't that exactly what we want our students to be doing in class?

Cardinal Monday said...

Empiricist philosopher:

Why is the burden on me? I just pointed out that it would be natural to imagine that biases might well affect the assessment of writing samples just as they affect the assessment of teaching demos, etc. In fact, others in this thread have admitted as much. Now, you might disagree and claim that no such bias exists (while maintaining that the first kind does). In that case, it seems to me the onus is on you to show why you are justified in arguing for the worthlessness of only one kind of assessment.

As for the insinuations you complain about, I can only wonder what other conclusion one can reasonably draw. We are in a system in which top researchers get paid several _times_ more than top teachers, who work at least as hard; in which everyone talks about 'research opportunities' and 'teaching loads', and never the other way around; in which it is commonplace for students to have their educations ruined by departments shoving grossly unqualified teachers into the classroom so long as those teachers have research expertise, while no expert at teaching is thereby given the right to ruin good philosophy journals by publishing garbage articles solely on teaching merits; in which one's research output is the topic for general discussion and helpful advice, but one's teaching is almost never observed by anyone; and so on.

And in this already heavily biased climate, in which those of us who teach conscientiously are treated as second-class citizens, what is the loud and constant reaction when it is suggested that SCs might give _some_ attention to teaching? It's an utter disparagement of any such attempts: a trashing of any SC that dares to use any presently available method of assessing teaching, _without making a single positive comment about how to measure it instead_.

You tell me: what's the _best_ interpretation of that?

We've all seen people who cry about being asked to teach better (though nobody seems to know of a single case of a faculty member being dismissed for poor teaching), whose teaching in turn seems to suck the big one.

Those who are interested in quality teaching ought to provide some alternative method of evaluating it if they don't like the present ones. Those who aren't interested in it should not be permitted in front of a classroom -- ever -- and should stop leeching away tuition money, etc. from those of us who earn it.

It's that simple.

Anonymous said...

Why isn't coming from a rich family a metric for doing important research? People from poor families are unlikely to have attended good schools, which is a major determiner of success later on.

Also, people from the upper middle class or higher tend to be good at playing golf, to know their wines from one another, etc. Those are excellent skills to have when one is trying to get alumni and others to donate research funds. Why should a department hire a TT faculty member who won't be able to do that effectively?

Anonymous said...

If being hot means students come to class more often, pay more attention, and are more engaged in class...then why shouldn't that count in one's favor?

Again, here you're conditionally assuming that there's a causal link between hotness and learning. Is there any reason to think that's true?

Don't straw man this. No one objects to the conditional. People just don't think the empirical evidence provides any reason to think that the antecedent is true.

Anonymous said...

Those who are interested in quality teaching ought to provide some alternative method of evaluating it if they don't like the present ones.

There are alternative methods. They are just far more time-consuming, so they are not done systematically. For example, one might track how the students someone has taught perform in upper-level courses afterwards. The hard thing is getting enough data to screen out the noise.

Look, I agree with you that the university doesn't pay enough attention to teaching. That's why genuine assessments of teaching effectiveness aren't done. But the fact remains that current instruments are unreliable. Sure, committees can use them because that's all they've got, but then they should at least be aware of the problems. Judging by the various a priori responses, people simply aren't aware of the problems, or think they can intuit their ways out.

empiricist philosopher said...

Cardinal Monday,

I don’t know what burden you mean.
And I can’t see what interest there is in what is “natural to imagine”.

Maybe you could say a little more specifically what you mean when you say that “biases might well affect the assessment of writing samples.” Just give some imaginary example – I know there is no way in hell you are going to present evidence.

As for the insinuations you complain about, I can only wonder what other conclusion one can reasonably draw.

I’ve already told you. But you find it so utterly alien that someone might be interested in actual evidence that you can’t believe I’m not lying.

zombie said...

"Dworkin once argued that race could count as merit in criteria for law school admissions. In part this was because of the effects that one's race may have on the academic environment in which they are a part of.

Why isn't hotness a merit for effective teaching? If being hot means students come to class more often, pay more attention, and are more engaged in class...then why shouldn't that count in one's favor?"

Seriously?
Race should count as merit because race has historically been used to oppress, because being a person of color kept you out of the best schools, the upper classes, the country clubs, etc. Counting race as "merit" is intended to compensate for the unfair disadvantage of not being white.

Aside from being subjective in a way that race is not, being "hot" is not a disadvantage, especially if, as you say, it actually makes you more popular. Moreover, being an "effective" teacher is more than just getting students to attend class to bask in your hotness. You can get the students in class, totally hanging on your every word, and still be a crap teacher.

Anonymous said...

I just want to understand, again, the reasoning behind the claim that the person who has gone out and looked for empirical evidence regarding teaching, learning, and assessment is judged to be less interested in teaching than the person who intuits real hard in their armchair and thinks up lots of imaginary scenarios.

reason rocks! said...

7:43, I don't understand why you hate math so much. One might naturally imagine that you are a defensive elitist. This thought experiment has laid the burden on you now.

Anonymous said...

To 7:43 -

I think that's already been made clear. People who really care about hiring a good teacher will not just say 'Well, the familiar ways of determining whether someone is a good teacher seem flawed because of this or that reason; therefore, we should make no effort to determine it at all'. Rather, they will propose better methods and urge that those methods be put in place.

More generally, any time X is assessing Y for a job whose main function is to Z, and W proposes that X should eliminate from the assessment criteria all methods (including direct observation) of determining whether Y is a good Z-er, then W is either taking the piss or else a moron.

I took that to be obvious, so I didn't state it.

another empiricist said...

Hi 8:37,
Has anyone actually proposed what W in your amusingly variable-laden sentence proposes?
Or maybe I've misunderstood. Maybe you were only suggesting a thought experiment. If so, I withdraw the question: carry on!

Anonymous said...

That's certainly my impression, 9:49. But if I'm mistaken and I'm attacking a straw man, I'll withdraw my comment with apologies, provided that all those who have been advocating the elimination of teaching demos, examination of teaching reviews, etc. from the search process confirm that they actually meant only that they wish those criteria to be _replaced_ at once with some other criteria _that are also geared toward assessing quality teaching (or quality teaching potential)_.

Anonymous said...

I didn't think people were arguing for elimination. Rather, people were advocating for the *recognition* that the standard measurement instruments are problematic. Committees should be aware of this. Even if no better instruments present themselves, recognizing that there's a problem is an important first step. Moreover, there should be some institutional procedures to account for standard race, gender, etc. biases.

that other empiricist again said...

11:19, would it be too much to ask which comments you have in mind?

I hope not!

Cincinnatus C. said...

@ 7:11:

I'm familiar with something like the idea of "counting race as merit" from Stanley Fish rather than from Dworkin, so Dworkin may be saying something different (although it doesn't sound like it from the earlier comment), but Fish's idea at any rate is not that "counting race as 'merit' is intended to compensate for the unfair disadvantage of not being white"; it's that having people of "different" backgrounds in a department makes that department better, because it sucks for students to have departments that are dominated by white people, men, or whoever (and it sucks especially, but not only, for students who are themselves not white, male, or whatever).

I think the original comment pulling in hotness along these lines is worth thinking about. But thinking along with Fish, it might suck for students to have only hot professors.

(For that matter, it might suck for students to have only *good* professors.)

Anonymous said...

@8:07

I meant Stanley Fish; thanks for the correction. I'm essentially in agreement with your reconstruction of the argument.

Cardinal Monday said...

Empiricist philosopher,

I guess you ain't been listening. My point was that there don't seem to _be_ any studies, or at least not any that are talked about, examining whether biases similar to those that skew people's perceptions in interviews and teaching evaluations will also affect examinations of writing samples; and that some explanation is needed for that lack of research direction. As I explained, the burden is on people who insist that looking at writing samples, publication records, and pedigrees is not significantly biased to the extent that watching a teaching demonstration (say) is, to show that the evidence supports this view. And one cannot do that simply by citing the original studies under discussion.

It follows from a pretty basic principle of reasoning, actually. If A and B are considered to be candidates (people, institutions, methods of inquiry, whatever) and then someone points to a study showing a problem with A, then one is _only_ justified in eliminating A and leaving B _if_ one can show that the problem with A doesn't also affect B.

Also, I think you should reconsider the accuracy of calling yourself an 'empiricist philosopher'. You yourself purport to know what I find alien and what I can't believe. But you are _wrong_ on both counts. Why speculate when you have no fucking clue what you're talking about? Do you, perhaps, dimly understand that one can't just make stuff up when one doesn't know?

I'm still waiting for you to meet the burden of proof by providing the _empirical_ evidence that no important biases affect writing sample evaluation, evaluation by pedigree, or evaluation by publication record.

Do you have such evidence to support your assertion? Or are you, perhaps, not really an empiricist at all?

empiricist philosopher said...

Dear Cardinal,


I guess you ain't been listening.


Hm. Let's see...

You said it would be natural to imagine that biases might well affect the assessment of writing samples just as they affect the assessment of teaching demos, etc. I then asked you to say more specifically what you meant. Your response is:

My point was that there don't seem to _be_ any studies, or at least not any that are talked about, examining whether biases similar to those that skew people's perceptions in interviews and teaching evaluations will also affect examinations of writing samples; and that some explanation is needed for that lack of research direction.

I don't see how this is responsive. Maybe you weren't "listening".

As I explained, the burden is on people who insist that looking at writing samples, publication records, and pedigrees is not significantly biased to the extent that watching a teaching demonstration (say) is, to show that the evidence supports this view.

Yes, that's what you said. And I said I didn't know what 'burden' you meant. As far as I can tell, 'burden of proof' is bullshit in this context. It's not that you've got the burden wrong -- the problem is that it doesn't mean anything here.


It follows from a pretty basic principle of reasoning, actually. If A and B are considered to be candidates (people, institutions, methods of inquiry, whatever) and then someone points to a study showing a problem with A, then one is _only_ justified in eliminating A and leaving B _if_ one can show that the problem with A doesn't also affect B.

I think I must be unfamiliar with the basic principle of reasoning you have in mind. Please say what it is. (To be clear: I do not believe that what you said follows from any basic principle of reasoning. I don't expect you to provide a basic principle and a derivation. I'm just calling you on your bullshit.)

Also, I think you should reconsider the accuracy of calling yourself an 'empiricist philosopher'. You yourself purport to know what I find alien and what I can't believe. But you are _wrong_ on both counts. Why speculate when you have no fucking clue what you're talking about? Do you, perhaps, dimly understand that one can't just make stuff up when one doesn't know?

Empiricists freely admit that their hypotheses are sometimes wrong; I certainly do. But you've given no reason at all to make me think I was wrong.


I'm still waiting for you to meet the burden of proof by providing the _empirical_ evidence that no important biases affect writing sample evaluation, evaluation by pedigree, or evaluation by publication record.

And, as I've said, I think your 'burden of proof' is bullshit.

Do you have such evidence to support your assertion? Or are you, perhaps, not really an empiricist at all?

Which assertion? Please quote it.

You won't. That would be dangerously close to providing evidence.

Cardinal Monday said...

'Empiricist' 'philosopher',

It's becoming increasingly clear that, beyond your puerile rhetoric, sneers, and jokes to the effect that I don't care about evidence, you actually care less about evidence than I do. I hope none of the readers of this thread are fooled by the title you give yourself into thinking otherwise.

I accept, as you do, the evidence that indicates the presence of bias in the teaching-oriented components of evaluation. So that is not where you and I differ.

Where we differ, rather, is that you don't seem interested in asking whether a similar demand is met for the research-oriented components of evaluation. I've asked whether you have any evidence that there is no such bias. You've declined to present it.

Now, to respond to your two challenges:

1) What assertion were you making, and will I quote it back to you? Here it is: "everyone on one side of this dispute is interested in evidence, and everyone on the other side is using the powerful philosophical method of having intuitions about imaginary situations."

Now that I've shown it to you, and explained once again where the claim seems to be wrong, please be so good as to withdraw or justify it.

2) Your challenge that I justify my principle: sorry, I took it to be so obvious that it didn't need justification. But, here goes:

Consider for a moment a claim of the following sort: "Women are well known to succumb to severe biases that make them ineligible to be fair judges."

On what grounds would you hold it to be justified for someone to make such a claim? Presumably, it would only be justified if women had that sort of bias _and men didn't_. Right? Please let me know whether you are still with me now. I hold that it would be highly immoral to make such a declaration if women had no worse bias than men did. Even if the statement were strictly speaking true, it would be horribly misleading as it stands, since it implies that women have a _worse_ bias than men do.

You agree so far, I hope?

OK. Now, suppose that the information in question came from a huge number of well-conducted studies in which _only_ women were tested. For some suspiciously odd reason, no tests were done on men at all. The people who ran the tests had, for reasons unknown to us, no apparent interests at all in checking to see whether men had a similar bias.

Do you think that it would _then_ be fair to make the claim, with the insinuation (already explained) that men don't do as well, merely on the grounds that the test didn't _investigate_ the biases of men? Would you approve of people who barred women from consideration for a particular position merely on the grounds that the study showed that women, but not men, are biased?

I hope you agree that that would be an unfair thing to do. Until the further studies about men came through, there would be no good basis for thinking that men are any worse than women in terms of bias. If you disagree, I'm happy to investigate this with you. But I take it you can now see the obviousness of what I am saying.

And as with women vs. men being biased, so with teaching reviews vs. research reviews.

For a true empiricist, there is only one conclusion: reserve judgment until one has done or seen the research on _both_ sides. But nobody has done that other research. You understand now?

Anonymous said...

But nobody has done that other research. You understand now?

You keep saying that, but it's just not true. I gave you a link earlier. There's a huge literature on the (in)effectiveness of peer review.

I think there's something to the point that there are biases in evaluating both research and teaching. I happen to think it's worse in teaching. But stop saying there's no research done on bias in evaluating research.