Friday, October 14, 2011

'Publish or Perish' Should Perish

I hate the 'publish or perish' mantra of academia.  I really do.  To me, it shifts the focus away from doing great work and waiting until it's truly ready for public consumption, and instead stresses us out as we scramble to keep our publication records up to snuff.

[Comic: "P is for Produce," from "Piled Higher and Deeper" by Jorge Cham, www.phdcomics.com]

I particularly dislike the mantra these days because it's so easy to document what we're up to online.  If committees for scholarships, tenure, etc. want to see whether we are doing good research, they could in theory find out in other ways: from our discussions, blogs, websites, and other activity online (assuming, of course, that we researchers got better at taking advantage of such media).  The full process, including the failures, could be captured.  Granted, it's not as official as a peer-reviewed paper, and false information could be spread, but in some ways it's a fuller and more genuine glimpse into someone's research ability, whereas published papers come to represent only the most polished work possible.  Perhaps if we wanted this to become a standard, we could find ways to ensure the information available outside of published work is useful and trustworthy.

I certainly hear from many others who dislike the current state of things as well.  But, as they point out, it's how you have to play the game in academia.  True, but is generating knowledge for the purpose of playing the game really going to result in the best outcome? And is the length of someone's publication list really a good indication of the value they are bringing to the world?

I'd love to see a fundamental shift in our thinking when it comes to publishing.  I do believe that the process of peer review and the ability to share the outcomes of our research are important, but I wish the emphasis were more on high-quality results than on the insatiable need to just get something out there.

Do you feel the same way? How would you change the system if you could change anything?

20 comments:

  1. I think there is a place for peer-reviewed academic publications, but it needs to change from its current form. I'd love to see us make better use of modern publication technology. It's shameful that we're sending around for-print PDFs with bad metadata as the de facto documentation format. You can't even embed proper machine-readable citation data in them.

    Secondly, we need to start accepting that we have to provide running code. Providing at least a runnable demo of your code (with instructions) should be mandatory. I'd love to see a streamlining of the academic publishing scene, with fewer journals and conference proceedings and more "aggregators" that pull together blogs, wikis, and code repos so researchers can better talk about and demo what they're currently working on.

  2. Totally agree - the way we publish needs updating as well. That's probably a whole other rant. ;)

  3. Someone just posted this: http://sciencecodemanifesto.org/

    It goes with Basu's comment and with the need for publish or perish to die. I suspect a lot of us don't want to release our "research code" because it's not really up to snuff. It was just barely good enough to get that paper out the door before being forgotten again.

  4. I pretty much agree with what Basu said. Publication needs to go beyond the idea of "for-print" PDFs, as some ideas just can't be conveyed through static, "wall of text" documents.

    I'm all for running code demos, too, but this might cause issues when patented/copyrighted work is at stake.

  5. I suppose we could start with the code that wouldn't cause such issues - some is better than none. ;)

  6. I agree the current peer-review system needs work, but I don't know that I agree with many areas of this post.

    First, if scholarship/hiring boards had to read every blog post/research report of an applicant and judge it on its scientific value, then NSERC applications would take years to process. Having your research published is a form of validation of its scientific quality, with the conference/journal program committee/editorial board vouching that it meets certain quality standards. Anyone can write blog posts on scientific topics, but without review, who knows whether it is real science or junk.

    Secondly, especially for scholarships, publications are the only way to differentiate between great students. Most grad students at good schools will have very high grades, be smart, and be good workers. Publication records help indicate which ones are good at performing and documenting science.

    A scholarship agency needs to be able to decide who gets its money and wants to make sure the research produces results that others find interesting. They don't want to spend $15K per year for 4 years and have the person only produce 1 or 2 papers. It makes the funding agency look bad when they give that much money to a Ph.D. student who doesn't produce, when they could have given it to someone who had 2-3 publications from their Master's degree and a dozen more during their Ph.D.

  7. Definitely valid points. I'm hoping that even if the first idea I threw out there isn't necessarily the best that it can start discussion. Perhaps some sort of standard that is more practical would need to develop over time.

    It's not that I dislike the idea of judging research by publications in and of itself; it's that there is just too much pressure to get the numbers up, with less regard for quality. SudburyJay's last paragraph kind of summarizes this for me. It makes sense that scholarship agencies want to reward people who produce, but what if the person with four papers rushed to get those out, knowing that this is how scholarships are awarded, while the other took more time to ensure higher quality, resulting in only two? It's not necessarily the case that the person with more papers is less conscientious about quality, but like the grade inflation that's been happening lately in high school and maybe even undergrad courses, things seem to escalate when the culture dictates that you have to publish as much as the next guy to keep up.

    So I guess what I'm saying is that maybe even a cultural change would be enough, though it does often seem that some kind of structural change is needed to help change the culture.

    Then again, maybe this is the viewpoint of someone who has only played the game for a few years. ;)

  8. I agree that people shouldn't publish bad/rushed work, but I think that many people are starting to wise up to that.

    If you are applying for a major scholarship or award, the people reviewing it will likely be in your field and know which conferences have good reputations. From what I have heard, they may even penalize you for publishing in junk conferences. For example, in AI I know that many (but not all) of the IEEE-sponsored AI/machine learning conferences are essentially junk, so you would get far less credit for publishing there compared to the conferences sponsored by or affiliated with AAAI, or the major sub-field-specific conferences.

    That being said, if a full-time grad student can't produce at least a single paper per year at a good conference, then I think something is very wrong. If someone halfway through a Ph.D. (with a previous Master's) doesn't have at least 4 publications (one per year) at good conferences, that is a huge red flag. It makes the outside observer think they either aren't producing or are getting rejected a lot (poor writing/communication skills).

  9. Hard to say... I've been rejected lately despite good reviews which included commentary on the paper being well written. And I have some really interesting longer term projects on the go that won't produce papers for a little while. So while I personally don't have the number of papers you're suggesting, I'm not sure it tells the whole story in my case. Maybe it's because I'm in a more applied area right now...?

    (Disclaimer: I am not claiming I am the best researcher out there, and as such, I'm not really thinking so much of my own experience when discussing this topic. But your comment did make me reflect on my own situation.)

    In any case, if people are wising up to the idea of not rushing, that does make me happy.

    Thanks for the great discussion so far!

  10. Interesting topic for sure. Two things I notice:

    1) I expect you are submitting to good conferences with low acceptance rates, so if you are getting good reviews, the accepted papers must be getting excellent ones. If and when you do get accepted, I would assume the conference proceedings are well read and therefore well cited (which is something you can list on applications). But if only excellent papers are being accepted, and your student colleagues who also publish there are publishing many papers, that might undercut your quality vs. quantity argument.

    2) As you said, you might not be making your contributions clear enough. Are you submitting to theory-focused conferences with an application paper (or vice-versa)? Are you making enough contributions in your paper (do your reviews score you low on novelty and importance)? Maybe it is the other way and you are trying to pack too much into a conference-length paper? Maybe you are not going into enough detail to convince the reviewers of your work. In other fields I would recommend submitting to a journal instead, but in computer science journals are often MUCH easier to get published in than good conferences.

  11. The same could pretty much be said about anything. A few tweaks to the system here or there could improve matters. Generally speaking, a more socially conscious spreading of the wealth, done in a way that eliminates the incentive to game the system, would stimulate more activity through which the truly spectacular work could shine, instead of being so restrictive that the lesser work gets eliminated. That course on genetic algorithms is probably showing through right now, but that's basically the point: reduce competitive pressures to facilitate more diversity.

  12. Yup, in my case there is definitely an issue of aiming really high, and perhaps a little bad luck. I'm fine with that, especially because the work improves with each attempt and I want it to be the most useful it can be, but it does result in a gap in the publication record. It probably looks like a red flag (perhaps fairly, perhaps not). In the long run I'll be better off, but because of the need to publish that magic paper a year (or whatever the accepted number happens to be), I would have been better off in the short term taking a less risky route in terms of venue choice and even where I put my time.

    I guess that's one of the issues to me - if we know we need to publish something at regular intervals, we might take fewer risks and choose projects that are more likely to succeed. Yet the riskier stuff might have the potential for even more impact.

    It would be really interesting to conduct a survey among both student and career researchers that subtly reveals publishing strategies. I'd be really curious to see how much influence the publish or perish mantra actually has, and in what ways.

  13. There's also the issue that perhaps a lot of what we call "Computer Science" doesn't really fit into the conventional framework for the sciences, and hence publication count is the wrong metric. Personally, I'm a hacker at heart, and though I'm a grad student, I view my papers as a by-product of writing interesting code. I make cool software and then write a paper about it so that people don't have to go grubbing about in my code to figure out what it does and why it's important. Paul Graham, in his Hackers and Painters essay, points out the impedance mismatch between hacking and academia very well.

  14. Oli, while theoretically spreading the research wealth might be nice, I also think that would punish truly great researchers who publish lots of very solid research. It would be hard to entice top minds to undertake graduate degrees if they could only get a $2,500 scholarship instead of $15,000-$20,000 per year.

    Basu, I would almost say the opposite. I would say Computer Science is very much a science, but oftentimes university departments are more applied computer science/engineering focused (in both their undergrad programs and research). But I still think that if your software is true research, then it solves some problem that was previously unsolved (a novel algorithm, an improved algorithm, results from applying it in a more complex domain, etc.) and would fit quite well in traditional science. That being said, you might need to extract the interesting part of your software (the scientific contribution) and present/test it outside your software.

  15. Gail, I find that is the reason most successful researchers have several tracks of study ongoing at any one time. That way, if one proves not to be working out, there are still others producing meaningful results that can be published. Of course that requires balancing more parallel work, but such is the cost of having that safety net.

  16. I would say that the top researchers deserve more money (spreading it around more evenly has never been something I agree with), but I would probably say that how we define who the top researchers are is an underlying issue I've been thinking about in this discussion. I obviously don't have a clear answer, but I am still feeling that a "by the numbers" approach on its own isn't necessarily the best way (even if it's the best we've got now). Ah well, onward and upward! :)

  17. Human values aren't what they should be. Our value system seems to be just a small set of drives and aversions. Don't expect scientists to be much better. We are just great apes after all.

  18. Standards vary a lot from field to field, even within CS. The CS publication system has been anomalous in academia for the past 30 years (no other field values conference papers as highly, and few others have journal delays as long).

    Quality is more important than quantity, but is harder to measure. Even citation counts (which measure how influential a paper is, not how good it is) take years to gather enough data, and CS researchers are notorious for not citing other people's work properly, so the citation counts in CS are particularly bad measures.

    If I were hiring, I'd favor a recruit with one really good paper over one with a dozen mediocre ones, as long as the paper was recent enough not to look like the only good thing the person would ever do.

  19. Interesting point about citation counts. I didn't realize that was such a problem in CS.

