Wednesday, August 27, 2014

Avoid these grad (and career) traps!

Justin at DailyNous (still love that pun) points our attention to a post he shared last week by Daniel Silvermint about potentially-paralyzing, self-sabotaging thoughts many students encounter in grad school. Daniel called them Grad Traps; he writes:
I’m helping out at my department’s orientation for new grads...and I wanted to distribute a list of Grad Traps, or ways in which we burden ourselves early in our careers with thoughts and habits that make work and life harder....When such traps go unacknowledged, grads have an incentive to hide and conceal their struggles, for fear of being considered not as good as others. But if these kinds of traps are both common and avoidable, then an environment that openly acknowledges them is worth having.
Starting grad school? Read them. In the middle of grad school? Read them. Early or mid-career? Read them. I did, and I quickly recognized that I'm still prone to falling into a few of the same traps (#s 15 and 16 especially; getting better, though, getting better).

While I survived grad school without this list, the surviving might've been easier with it. If you have anything to add, you can comment over at the original post.

-- Jaded, Ph.D.

Sunday, August 10, 2014

On the supposed drawbacks of Skype interviews (an inference built on anecdata)

In light of the previous thread, it seems like a good time for this.

I had four video interviews during the last job season. Three by Skype, and one by another video thing. Adobe something. They were all quite different experiences, which got me to thinking about some of the alleged shortcomings of Skype interviews. (For the record, two Skype-ing schools gave me the option to interview in person at APA, and the others conducted only video interviews.)

1. Non-Skype (no APA). Pain in the ass set-up involving downloading new software and going through a whole rigamarole, with a clunky, non-intuitive interface. Which, everybody has Skype, so why bother? It was no better than Skype in terms of image/sound quality. The chair explicitly asked interviewees to use headphones (supposedly to avoid echo and feedback), so I really couldn't hear myself speaking very well.

The committee sat in a U around a rectangular arrangement of tables, and the camera was at the opposite end of the room (the candidate's end, as it were), so they were all fairly tiny. Because of some glitchiness with the initial connection (no sound, etc.), I was afraid to switch to full screen lest I disturb the delicate balance (plus my camera is on my laptop, so even full screen is not that big), so the tiny Lego people problem was compounded by that. Given how persnickety their demands on candidates were, you'd think they could have had a better set-up on their end. As for the interview itself, it was lousy. Boring, bored, rote questions, one per committee member, with no follow-up questions. None. It was like they were going through the motions (and maybe they were. Maybe they just weren't that into me as a candidate to begin with). Results: They sent me a PFO a few weeks later, so I give them marks for punctuality.

2. Skype. (APA option) Very professional set-up with a guy running the camera in a conference room clearly set up for video conferencing. Committee sitting on one side of a long table, facing the camera. Camera zoomed in on each member as s/he spoke, making the whole eye contact thing much easier, and making it feel like more of a conversation. One SC member showed up late, but managed to join the conversation. Friendly SC; chair started off by praising my writing sample (nice!). Good, chatty interview with interesting questions.

This is the right way to do Skype, if you ask me, and not qualitatively worse than an in person interview. But not all schools have this kind of set-up available. Mine sure doesn't. Results: Campus visit.

3. Skype. (APA option) Technical difficulties, with committee actually conducting the interview from APA, on a laptop, all crowded around a hotel table. Technical glitches, fuzzy picture, freezy screen (probably due to typically lousy hotel wifi), hard to hear or see clearly at times, a couple of people slightly off-camera and leaning in (which was kind of funny, and gave the whole thing a more freewheeling feel). Nonetheless, it was a good, engaged, friendly committee asking great questions, and a really good conversation. Everyone cool with the fact that technical problems happen, and sometimes you have to repeat yourself. Results: Campus visit.

(Fourth search was temporarily suspended, so nothing to report, except an uneventful Skype interview that went well, I think.)

So, yeah, the technical issues that can be a problem with Skype do happen, although it seems they can be remedied by a professional set-up and a good connection. But even when you don't have that, and there are glitches, it's still possible to have a good interview. Both of my Skype interviews were, I would say, equally good despite substantial differences in the technical set-up and tech quality. From my limited and admittedly anecdotal experience, and contrary to the conventional wisdom, I don't think the technical difficulties result in an overall disadvantage for Skypees as compared to in-person interviewees. I infer this from the fact that both schools I Skyped with also did APA interviews, and I assume that getting a fly-out means the Skype interview went well. And given the financial and time costs associated with going to APA, Skype is hands-down the better way to go for job candidates. I appreciated having the option of doing the interviews via Skype. I've had far, far, far worse experiences with in-person interviews at APA.

That said, there's a benefit in doing some extra prep to optimize the Skype experience. My prep: My laptop has a good camera, but I bought a good-quality USB microphone for optimal sound (I record videos for my online courses, so I wasn't buying it exclusively for interviews). It's possible some schools have decent USB mics available; I asked, and mine did not. I interviewed from my office, where I have a reliable ethernet connection (I recommend this over wifi, b/c Skype is not very forgiving of wifi fluctuations). I brought a small lamp from home and put it on my desk to improve the lighting (my office has only overhead fluorescent tubes). I propped my laptop on a thick book to improve the camera angle, checked the background (all books), and uncluttered my desk and office enough to hint that I'm productive, not a disorganized mess surrounded by visual junk. I printed out the names and photos of the committee members and tacked them to the wall behind my laptop, so I could look at them without noticeably looking away. I also put a few notes for myself up there. I tested the image/sound/background by Skyping with a friend beforehand. And the usual stuff -- dressing as if it were an in-person interview, maintaining eye contact with the camera (not the screen, which is hard to do!), learning as much as I could about the school, department, and committee, etc. Also, on the assumption that the committees might be doing multiple interviews in a day, I picked interview times shortly after lunch, or (second choice) shortly after breakfast, because of this study.

I suspect, and hope, that more and more interviews will be conducted via Skype or somesuch (but not that Adobe crap), which is a good thing. Chime in here if you have anecdotes of your own.


Tuesday, August 5, 2014

APA interviews are morally impermissible

Since we're already talking about the job market (I'm with Zombie, holy crap!), Asst.Prof. at a Canadian School writes in to remind us of the significant hardships APA interviews impose on job seekers. There's not much for me to add; I wholeheartedly agree (and I aired my views last APA go-around). Here's Asst.Prof. at a Canadian School's take:
It’s the middle of the summer, so no one wants to think about searches for new tenure track hires. But now’s the time to talk about something important -- before those searches start.

APA interviews are really expensive for job candidates. This isn’t news, but it’s worth doing the math again. Flights can easily run to $500 for candidates on the West Coast. If people are coming from the UK or Canada, it’s closer to $800. I don’t even want to think about Australia or Asia. Then there are hotel costs, which, even if you bunk with a bunch of friends in one room, are probably going to run past $100. So we’re talking about $500, $600, or a lot more for candidates to go to the APA.
That price might have been one thing in the olden days, when everyone got ten interviews at their first APA, and then got a job, and never had to deal with the job market ever again. Back then, the APA was a one-time cost. But that’s not the world we live in now. Now people spend three, four, or five years on the market before they get permanent jobs. They go to APAs where they have one interview -- a one-in-12 shot at a job. And then they do it again the next year. And then again the year after that, and the year after that. At that point, they’ve spent $2000 or $3000 just trying to get a job.

That bears repeating: candidates can easily spend well over $2000 going to APAs for interviews.

For a grad student? For an adjunct? For some postdocs and VAPs? That is way too much money. It’s two or three months’ rent. It’s health insurance. Grad students, adjuncts, and other part-timers are the most economically marginalized, most economically vulnerable members of our discipline. To impose those costs on them is to impose on them a considerable hardship.

Now, you could argue that in the olden days, there was just no way to avoid APA interviews. Search committees had to get a first look at people before they made up their minds about who to bring out to campus. That would be a bad argument for at least two reasons I can think of, but it’s an argument you could make.

But now there’s Skype. Really. It’s a real thing and it works. I know, I know, it can be glitchy, and even when it’s not, it’s not the same as an IRL meeting.

But how much better than a Skype interview is an APA interview? So much better that it justifies forcing some adjunct to spend $500 she could have spent on her kids’ Christmas presents? Or her health insurance? Or her rent?

To recap: APA interviews impose a considerable economic hardship on the most economically vulnerable members of our discipline. And since there’s Skype, they impose that hardship for no reason at all. But to impose a considerable hardship on the weakest and poorest among us -- for no reason at all -- is an injustice. It is morally impermissible.

That point deserves to be put in the second person. If your department is hiring this year, and if you let your department do APA interviews, you are committing an injustice. You are forcing economically vulnerable people to spend way more money than they can afford, in order to have a one-in-12 shot at your job. And you’re doing it for no good reason at all. That is a despicable thing to do.

So what should you do? Easy. Don’t do APA interviews. Just refuse. Don’t wring your hands this year and think maybe you’ll skip the APA next time around. Don’t wait for the APA to come up with some new policy. Don’t wait for a few other departments to start skipping the APA before you do. Just do it yourself. Do it this year.
Abolish APA interviews.

-- Jaded, Ph.D. 

Monday, July 28, 2014

Holy crap. New TT jobs posted already.

Missouri's got an early deadline of October 13; San Diego's is November 7.

So, although I've kinda been anxious to see what jobs there will be this year, I'm really not ready to think about submitting applications yet. In July. When I'm grinding through a bunch of papers/chapters, and just starting to think about fall semester syllabi.

But I guess it's time to start thinking about that.


Tuesday, July 15, 2014

Colorado's Best Practices and Collegiality

This thread at Leiter, dealing with Spencer Case's criticism of the Colorado Best Practices document, contains a pretty interesting discussion of the question that always seems to come up in this context: whether it's ok to disparage the subfield of feminist philosophy as a whole, and if not, why not? The most interesting action starts at comment #33 and takes the form of an exchange between "thefinegameofnil" and "slacprof."

thefinegameofnil asks us to consider the hypothetical(?) case of a philosopher named Sally who
... arrives at the considered opinion that feminist philosophy isn't a fruitful research program, and that philosophy is better served by allocating its limited resources to other sub-disciplines. [...] On that basis, Sally speaks openly and dismissively of feminist philosophy's ability to advance philosophical understanding to her colleagues, she's generally against her department hiring philosophers working in feminist philosophy, she doesn't think that courses in it should be offered on a regular basis, the NEH should fund other work, etc. Sally clearly runs afoul of the APA Colorado Report's Orwellian suggestion that those who "have a problem with people doing...doing feminist philosophy...should gain more appreciation of and tolerance for the plurality of the discipline. Even if they are unable to achieve a level of appreciation for other approaches to the discipline, it is totally unacceptable for them to denigrate these approaches in front of faculty, graduate or undergraduate students in formal or informal settings on or off campus."
Whether I have any objection to what Sally is doing in this story depends substantially on what, exactly, I am supposed to take Sally to be doing in this story. Suppose she's in a faculty meeting the purpose of which is to settle on an AOS for an upcoming tenure-line hire, and she argues that the department should not advertise for a specialist in feminist philosophy because, in her informed opinion, that subdiscipline is not a fruitful research program and a specialist in it is less likely than specialists in other disciplines to advance philosophical understanding, and stuff like that.

If that's what she's doing, I can't see any problem with it. It seems to me that we philosophers ought to be free to decide for ourselves which philosophical projects and methodological approaches are interesting, worthy of attention, and/or potentially fruitful, and ought to be free to express those decisions to our colleagues. That seems right to me.

But if that's what she's doing, I'm not sure I see how Sally's behavior runs afoul of what thefinegameofnil calls the "Orwellian Suggestion." It's true that the Orwellian Suggestion tells Sally to gain more appreciation of the plurality of the discipline, and that Sally has not managed to do this in spite of what we are clearly meant to see as a good-faith effort to do so. That might indicate that Sally has violated the Orwellian Suggestion.

But the Orwellian Suggestion also gives advice for what to do in that case: she should refrain from denigrating those approaches in front of colleagues or students. To me, this tells against reading the Orwellian Suggestion as a categorical and unconditional order to appreciate feminist approaches to philosophy, tout court. If that's what it was, it would just say, "do x," instead of, "do x, but if you can't do x, at least do y." So, Sally's saying in a faculty meeting that she thinks some subdiscipline or approach isn't super fruitful and that, since tenure lines are precious, we should spend this one on someone who will engage in a more potentially fruitful research program strikes me as possibly consistent with the Orwellian Suggestion, depending on the specifics.

But if, on the other hand, she says all that stuff in a way that is literally openly dismissive, I find the intuition that she's not being at least a little bit of an A-hole harder to sustain. It seems to me that she should be willing to at least consider the idea, even if she ultimately thinks that the subfield is worthless and that hiring someone who works in it would be terrible. It seems to me that she shouldn't just dismiss it. She should be willing to engage with it, and to explain to her colleagues how she came to make the judgement she made and why she thinks they should share it. (In fact, it seems to me that the details of the story make it clear that Sally is not being dismissive, even if that word is used to describe her behavior.) If she's not willing to do anything other than be dismissive, then I think she's not living up to her obligations to her colleagues. There's some suggestion on the floor to hire in this or that AOS, and she doesn't think it's a good idea. She doesn't have to enter into the discussion at all if she doesn't want to, but if she does enter it, then I think she owes her colleagues more than just dismissiveness. She owes them a thoughtful explanation.

And it seems to me that this obligation is even more clear if Sally already has colleagues who work in feminist philosophy. If Sally is openly dismissive of a subfield in which her colleagues specialize, and she is dismissive in this way to those colleagues—rather than being, say, engaged in an informed way but ultimately skeptical, or neither engaged nor dismissive—then it seems to me that Sally's department has a real collegiality problem, and that Sally's behavior is a contributor. So, while I would not say that I endorse the Orwellian Suggestion unhesitatingly or in full, it seems to me that it definitely points in the right direction.

What's more, the language of the actual Best Practices document is somewhat softer than that of the Orwellian Suggestion:
2. Students and faculty should be open-minded and cultivate a wide interest in philosophical work, investigate and not disparage areas of philosophy or other disciplines with which they are not familiar. We encourage people to be respectful of those working mainly in other areas of philosophy. Constructive criticism is an important source of progress in philosophy, but it is generally better to focus criticisms on particular arguments and theories rather than whole areas of the discipline, which typically contain a wide variety of work. And we should always avoid raising criticisms that could be construed as an invidious personal attack by any reasonable person—especially in public contexts.
This doesn't say that one must actually develop an appreciation of the plurality of philosophical approaches; it just says that one should be open-minded and cultivate a wide interest in philosophy. That sounds exactly right to me, and it seems to me that Sally is described as having followed that advice. It says that one shouldn't disparage areas and disciplines without being familiar with them, but that's consistent with Sally, as she is described, "disparaging" feminist philosophy, since she is described as being highly familiar with it. What's more, the "non-disparagement clause" is accompanied by a caveat stressing the importance of constructive criticism. It admonishes us to remain respectful, but that's true. We should remain respectful. It counsels us to avoid raising criticisms that could be construed as an invidious personal attack by a reasonable person (I'm not entirely sure how to parse the 'any' in that sentence), but that's true, too. If you have a criticism, you should try to avoid raising it in a way that could make a reasonable person see it as a personal attack designed to make them angry. To me, that seems like Personal Interaction 101. But it also says, fire away. To me, that seems right.

So even if the Orwellian Suggestion is unacceptable (and although I don't read it that way, I see how a reasonable person could), it seems to me that it has been superseded by what I would describe as a nice piece of concrete, sensible advice about how to get along with one's colleagues. It seems to me that departments where this advice is not followed---in which colleagues are openly dismissive of one another's work and of the subfields in which that work is situated, and in which they are dismissive in this way not only with one another but in front of one another's students---are likely to be unpleasant places to work (depending on the frequency and severity with which it occurs).

--Mr. Zero

Friday, July 4, 2014

On the Recent Leiter/Jennings Dustup

Five years ago, I wondered whether Brian Leiter's contention that a department's PGR rank correlates well with its job placement record was really true. Recently, over at NewAPPs, Carolyn Dicey Jennings attempted to run the numbers and was met with something of a hostile reaction from Leiter. If I'm honest, though, I'm not sure I see why such hostility was necessary. He airs four main criticisms:
First, by her own admission, the data is incomplete (indeed, woefully incomplete in some cases I know about).
Of course, she was up-front about the incompleteness, and the up-front admission of incompleteness was accompanied by a request for additional data, and she has updated her analysis in light of the additional data. It really seems fine to me to run a preliminary analysis on incomplete data, and then publicize it in the hopes of generating more data (and a discussion of your findings). Of course it would be pretty bad to publicize a preliminary analysis without mentioning that it was preliminary, but Jennings didn't do that.
Second, no one would expect a department's reputation in 2011 to have any correlation with its placement prior to 2011, but almost all the placements recorded by Prof. Jennings are from students who would have started graduate school between 2000 and 2005.   I would think philosophers are smart enough to understood that past placement success is a backward-looking measure, and that current faculty reputation, as it correlates with job placement, is a forward-looking measure.
I'm not sure about this. I would expect a department's reputation in 2011 to correlate at least somewhat with its reputation prior to 2011, so I would expect a (potentially indirect) correlation between placement in 2011 and reputation prior to 2011. I'm not sure why it matters when the recently-placed students started grad school. If my department is trying to place me now, I'd think that its current reputation is more important than whatever its reputation was 10 years ago. (I suppose it would be interesting to see whether PGR rank at the time of enrollment correlates with job market success upon graduation, but the suggestion that Jennings should be doing that study rather than the one she did is too strong.)

And I just don't get this "forward-looking/backward-looking" stuff. Correlations are not inherently directional. Obviously the past is the past, and if you're looking to the past you're looking backward. But people look to the past in the hope of learning about the future all the time. It doesn't always work, but it's not nonsense. It seems to me to make perfect sense to investigate whether current "reputation," as the PGR attempts to measure it, correlates with overall placement record.

But maybe I'm all wrong about this. It's not as though I know what I'm talking about. So if I'm wrong, I hope some Smokers will set me right.
Third, her measure of placement success takes no account of the kinds of jobs graduates secure.  2/2 is the same as 4/4, research university is the same as a liberal arts college, a PhD-granting department is the same as a community college.  I know philosophers happy in all kinds of positions, but it's not information, it's misinformation, to equate them all in purporting to measure job placement.
This criticism strikes me as patently unfair. First, the additional data that would be required to control for these factors would be prohibitively difficult to collect and manage. Second, controlling for job type suggests an unnecessary value-judgment about which jobs are best. Of course people are free to make those judgments, but I'd rather not see them reflected in an analysis of the placement data---particularly not at this preliminary stage. If someone were to do a breakdown of the placement data by job type, similar to the PGR breakdown by specialties, that would be fine and even welcome. Knock yourself out. But the idea that not doing so is "misinformation" is, like, not true.
Fourth, the placement rate is calculated nonsensically:  comparing average placement, as incompletely reported on blogs, between 2011-2014 to average yearly graduates between 2009-2013 is equivalent, in most cases, to comparing two randomly chosen numbers, since many (maybe most) of those placed in 2011-2014 will have completed their degrees well before 2009 and well after 2013.  This is so obvious that I'm mystified why anyone would think this is a relevant comparison.
Again, I just don't see how this is nonsense. The average yearly graduate figure tells you the number of job-seekers per year each program has recently produced; the average yearly placement tells you how many job-seekers per year each program has recently placed in a tenure-track job. In effect, it's a comparison of the department's recent graduation rate with its recent placement rate, and I think it makes perfect sense to make that comparison. Taking averages over several years will smooth over outlier years and compensate for the fact that the candidate's hire year might not be her graduation year.

I see why someone might want to see a straight comparison of graduates to tenure-track hires per year, but---as Leiter points out---a person might get their first tenure-track job well before or well after graduation. I see why someone might want to see a metric that strictly follows individual graduates, but doing so will raise problems in data-collection (departments often don't publicize it when their graduates are unsuccessful on the job market), and in indexing placement records to times (since, again, one's graduation year is often not one's year of first TT hire). (Of course, I don't know which comparisons Leiter would find acceptable, or if he had anything in mind at all. He doesn't suggest a better way to do it, so I'm just guessing.)

So, anyways, the comparison doesn't strike me as nonsensical, but maybe that's just because I don't know what I'm talking about. If that's how it is, I hope the Smokers will set me right.

I also don't see how the reference to NYU's placement record is instructive. Leiter complains that although NYU has "one of the best placement records in the world," it ranks only 26th on CDJ's analysis (a ranking revised to 14th after new data came in), which Leiter thinks is mediocre. In defense of this, Leiter links to NYU's placement page. But, for one thing, the placement page doesn't tell the whole story of NYU's placement record---it shows how many people they placed (and where) without showing how many people they tried to place. And knowing how many people they put on the market every year is crucial to evaluating their placement record. (Besides, 26th doesn't have to be mediocre; it could be excellent if there were a large but tight group near the front. Which is one reason I don't love ordinal rankings.) And anyways, Jennings' spreadsheet indicates that NYU's placement record isn't as stellar as Leiter claims---most of their graduates get nice tenure-track jobs, obviously, but a substantial minority do not. You don't need a "perverse ingenuity" to generate that result; you just need to compare the rate at which they produce graduates with the rate at which they place those graduates into tenure-track jobs.

Now. I did not find the way the information was originally presented---as a comparison between the (ordinal) PGR rank and an ordinal "placement" rank---to be at all illuminating, and I'm glad she revised the post to present the information in terms of percentages. I think the focus our profession puts on ordinal rankings is pernicious, as is the fact that the PGR is principally organized in terms of them. But I think it is absolutely worth wondering whether whatever it is that the PGR measures is correlated with success on the tenure-track job market---as I indicated at the top of this post, I've been interested in this question for a long time---and I am grateful to Dr. Jennings for her work on this. And I appreciate her willingness to engage with her critics, to explain what she did and how she did it, and to revise her analysis in response to criticisms. To me, it seems like she has responded to her critics in exactly the right way.

So I'm not sure I see the need for such a hostile response on Leiter's part. Doesn't seem helpful. But what do I know? Nothing.

--Mr. Zero

Thursday, July 3, 2014

Should we do job talks at on-campus interviews?

A while back, Colin Marshall (UW-Seattle) initiated a discussion over yonder about the sometimes-central role that job talks play in the interview process. The discussion didn't really get going, so he wrote in to us -- we who are more familiar with "the horrors and inequities of the market," as Colin put it -- and said:
[M]ost search committees seem to think that the traditional job talk is a perfectly fair way to evaluate job candidates. After going through the market three times and watching various friends' experiences, I'm pretty sure it's not remotely fair (especially for introverts). It would be great to hear from people who have recently been on the market whether they think that job talks should be the norm, and whether there are other things that job committees should consider doing instead.
His original question was the following:
Almost every department I know of gives the job talk a central role in hiring decisions, but I'm wondering whether the traditional job talk really deserves to be sacred while other aspects of the hiring process are changing.

My main reason for skepticism is that I know a number of young philosophers who are (a) great researchers, (b) great teachers, (c) great members of the profession, and (d) great departmental citizens, but who, for various reasons, aren't great at presenting their research to a room full of judgmental strangers, most of whom are non-specialists. The latter skill isn't a bad one to have, but it's surely much less important than (a)-(d). Yet in the traditional job talk, this latter skill is what's privileged, and often used to make judgments about (a)-(d). That seems like a recipe for false negatives.

So here's my question: what alternatives to the job talk have hiring departments tried for campus visits, and are there un-tried alternatives we should consider? I have a hunch that our profession could do much better.
I like giving talks, but that just might be because I feel like I'm really good at giving them. In fact, giving talks is probably the one philosophical skill I feel I've mastered (the content, on the other hand...). Though I've only had to give two or so job talks, they seemed to go pretty well.

I've also had on-campus visits to other schools that did research sessions -- passing out a paper beforehand and being asked questions about the paper for an hour or so like a mini-defense; the horror! -- and I've bombed; just did an outright terrible job.

And for my current position (VAP), I didn't have to do a job talk at all. Instead, I was only interviewed over Skype, which I will never fail to plug as the most equitable, fair way for departments to do first-round interviews of graduate students, adjuncts, or other members of the profession who do not have travel budgets, but who do (likely) have internet connections.

And while I might prefer giving talks, I probably agree with Colin that they might be especially noisy for making hiring decisions (like so many other parts of the interview process). Perhaps we might do better.

Any thoughts about interviews and what to do instead?

-- Jaded, Ph.D.

Thursday, June 19, 2014

Citing Your Own Unpublished Work

In comments here, anon 8:30 asks:
Short version: Can you cite your own unpublished work? 
Longer version: What do you do when you're writing a paper and want to refer to another paper you've written that goes into more detail on a certain point or supporting argument that you don't have time to address at length in the current paper -- but that other paper is unpublished? Are you just screwed?
Short version: yes, you can cite your own unpublished work.

Longer version: even if it looks bad, it won't look bad to anyone, since no one will probably read your paper. Just kidding, kind of. But seriously, I think it's basically ok. At least, it is as long as the thing you cite is eventually published. If I'm reading your 2014 paper in 2014 and I see you do this, I might hold it against you a little, or I might not. Either way, I probably wouldn't think it was a big deal. If your 2014 paper got published, your unpublished paper will probably eventually be published, too. (Of course, I would have more confidence in this inference the farther along you are in your career.)

And if I'm reading your 2014 paper in 2020, and I see that the unpublished paper came out in 2016 (accounting for longish review times and journal backlogs), I wouldn't care at all. I'd figure you'd been working on the two things at the same time, and the one in front of me happened to come out first. No big deal.

However, if I'm reading your 2014 paper in 2024 and I see that the unpublished paper never came out, I might come to have doubts about the substance of that paper. I don't know if those doubts would infect my opinion of the paper in front of me or not. Maybe. Maybe not. Depends on how crucial the point in the unpublished paper is to the 2014 paper, and, like, whether I'm in a good mood, and stuff like that.

One thing I'd be cautious about, though, is that I've had papers change substantially over the course of the refereeing process. It would be a bit of a bummer if the published version of the paper you're citing didn't connect with the 2014 paper as well as the unpublished version did.

Additionally, I'm pretty sure I've seen big-name people do this. At least, I'm pretty sure I've seen Mark Schroeder do this. But maybe he's earned the right in a way that we under-laborers haven't. And maybe I'm wrong about this whole thing.

What say you, Smokers?

--Mr. Zero

Tuesday, June 10, 2014

I wish to register a complaint...

I was thinking about a follow-up to the last thread, on your pet peeves re: journals, journal editing practices, etc. But there's this: SciRev has a database collecting data/reviews of journals. The info on philosophy journals is pretty sparse, but you can populate it with info on turnaround times, number of reviews, etc.

Providing this info can, of course, be useful to your fellow philosophers. So do it. Inconveniently, you have to register to review journals, but you can see the ratings without registering, if you're okay with being a free rider.

You can also complain anonymously here. Or heap praise upon the virtuous, as the case may be.


Thursday, May 15, 2014

Review round-up: The good, the bad, and being helpful.

I've had some mixed experiences with peer review this month. The bad: a journal that held my paper for three months, then returned it saying it was "inappropriate" for the journal, with absolutely no reviewer comments. Which is bullshit. It's not "inappropriate" (at least not as I understand that word) for the journal, as they happen to have a special issue coming out on the very same (general) topic. (There was no public CFP for that special issue: the papers included are really, really, really obviously invited, all bigwigs who write about it all the time.) Methinks "inappropriate" in this case means, "you were not invited to write a paper for our special issue, and we don't need your stinkin' anonymous nobody paper." Which, you know, they could have told me in a lot less than three months, so I could have moved on with my stinkin' paper.

Which reminds me of a paper that got outright rejected seven times in seven days -- I appreciate that the journals were clearly not interested and said so in a timely manner. (It was finally accepted.)

The good: I got a paper back last week with some of the most helpful, most detailed comments and suggestions for revisions I've ever received, which I am quite sure will really make the paper better. In re-reading my paper, I can clearly see what the reviewers meant, and how their suggestions can be implemented. Plus, the reviewers understood what I was doing, and their suggestions were in the spirit of making that more effective. My sole complaint there is the journal's use of AMA style (numbered references), which is a total pain when you're making revisions that will require reordering and renumbering all the references. (Maybe there's a way to make that happen automatically, and maybe I need to finally learn how to use Scrivener, but I haven't yet had time to do that. Kinda busy trying to write papers.)

So, it made me think about my own reviews, how much time I spend on them, and how well I write them. One issue for me is that I don't have a lot of time, but I do agree to review papers a few times a year because it's part of the process, and a contribution to the profession. But maybe I haven't thought enough about how much my reviews might assist authors (rather than journals), especially authors like me: junior, having to crank out a lot of work for tenure, working in virtual isolation from peers, and without adequate support systems in place to get useful feedback on papers. (I mean, I have friends in philosophy, but there's only so much I want to impose on them. My colleagues would not be much help.)

I've reviewed some truly terrible papers, and some pretty good ones, and I think hard about my judgments on their publication-worthiness. But now I'm thinking my reviews could use some improvement, and that I should focus more than I have on being helpful to the authors.