Wednesday 17 February 2016

My commitment to open science is valuing your commitment to open science


tl;dr – to be shortlisted for interview, candidates for all future post-doctoral vacancies in my lab will need to show a track record in open science practices. This applies to two posts I am currently advertising, and to all such positions henceforth.

Twitter never ceases to amaze me. The other day I posted a fairly typical complaint about publication bias, which I expected to be ignored, but instead it all went a bit berserk. Many psychologists (and other scientists) are seriously pissed off about this problem, as well they should be.

My tweets were based on a manuscript we just had rejected by the Journal of Experimental Psychology: Applied because the results were convincingly negative in one experiment, and positive but “lacked novelty” in the other. Otherwise our manuscript was fine – we were complimented on tackling an important question, using a rigorous method, and including a thorough analysis.

But, of course, we all know that good theory and methodology are not enough to get published in many journals. In the game of academic publishing, robust methods are no substitute for great results.

The whole experience is both teeth-grindingly frustrating and tediously unremarkable, and it reminds us of three home truths:

1) That this can happen in 2016 shows how the reproducibility movement still exists in an echo chamber that has yet to penetrate the hermetically sealed brains of many journal editors.
2) To get published in the journals that psychologists read the most, you need positive and novel results.
3) This is why psychologists p-hack, HARK and selectively publish experiments that “work”.

So what, I hear you cry. We’ve heard it all before. We’ve all had papers rejected for stupid reasons. Get over it, get over yourself, and get back to cranking the handle.

Not just yet. First I want to make a simple point: this can’t be explained away as a “cultural problem”. Whenever someone says publication bias is a cultural problem, all they are really saying is, “it’s not my problem”. Apparently we are all sitting around the great Ouija board of Academia, fingers on the glass, and watching the glass make stupid decisions. But of course, nobody is responsible – the glass just moved by itself!

Publication bias isn’t a cultural problem, it is widespread malpractice by senior, privileged individuals, just as Ted Sterling defined it back in 1959. Rejecting a paper based on results is a conscious choice made by an editor who has a duty to be informed about the state of our field. It is a choice that damages science and scientists. It is a choice that punishes honesty, incentivizes dishonesty and hinders reproducibility.

I’m a journal editor myself. Were I to reject a paper because of the results of the authors’ hypothesis tests, I would not deserve to hold such a position. Rejecting papers based on results is deliberate bias, and deliberate bias – especially by those in privileged positions – is malpractice. 

How to change incentives 

Malpractice it may be, but publication bias is acceptable malpractice to many researchers, so how do we shift the incentives to eliminate it?

Here are just three initiatives I’m part of which are helping to incentivize open practices and eliminate bias: 

Registered Reports: many journals now offer an article format in which peer review happens before data collection and analysis. High-quality study protocols are then accepted before research outcomes are known, which eliminates publication bias and prevents many forms of research bias. To date, more than 20 journals have joined the Registered Reports programme, with the first ‘high-impact’ journal coming on board later this year.

TOP guidelines: more than 500 journals and 50 organisations have agreed to review their adherence to a series of modular standards for transparency and reproducibility in published research. For background, see our TOP introductory article. 

PRO initiative: led by Richard Morey of Cardiff University (follow him), this grassroots campaign calls for peer reviewers to withhold comprehensive review of papers that fail either to archive study data and materials or to provide a public reason for not archiving. You can read our paper about the PRO initiative here at Royal Society Open Science. If you want to see open practices become the norm, then sign PRO.

Registered Reports, TOP and PRO are much needed, but they aren’t enough on their own because they only tackle the demand side, not the supply side. So I’m going to add another personal initiative, following in the (pioneering) footsteps of Felix Schönbrodt. 

Hiring practices 

If we’re serious about research transparency, we need to start rewarding transparent research practices at the point where jobs and grants are awarded. This means senior researchers need to step up and make a commitment.

Here is my commitment. From this day forward, all post-doctoral job vacancies in my research group, on grants where I am the principal investigator, will be offered only to candidates with a proven track record in open science – one which can be evidenced by having pre-registered a study protocol, or by having publicly archived data / materials / code at the point of manuscript publication.

This isn’t me blowing smoke in the hope that I’ll get some funding one day to try such a policy. I’m lucky enough to have funding right now, so I’m starting this today.

I am currently advertising for two four-year, full-time post-doctoral positions on my European Research Council Consolidator grant. The adverts are here and here. Both job specifications include the following essential criterion: “Knowledge of, and experience applying, Open Science practices, including public data archiving and/or study pre-registration.” Putting this in the essential criteria means I won’t be shortlisting anyone who hasn’t done at least some open science.

Now, before we go any further, let’s deal with the straw man that certain critics are no doubt already building. This policy doesn’t mean that every paper published by an applicant has to be pre-registered, or that every data set has to have been archived. It means that the candidate must be able to point to at least one instance in which an open practice has been achieved.

I also realise many promising early-career scientists won’t have had the opportunity to adopt open practices, simply because they come from labs that follow the status quo. We all know labs like this; I used to work in a place surrounded by them (hell, I used to be one of them) – labs that chase glamour and status, or that just don't care about openness. It’s not your fault if you’re stuck in one of these labs. Therefore I’ve included a closing date of April 30 to give those who are interested time to generate a track record in open science before applying. Maybe it's time to test your powers of persuasion in convincing your PI to do something good for science over and above furthering their own career.

If you’re a PI like me, I humbly invite you to join me in adopting the same hiring policy. By acting collectively, we can ensure that a commitment to open science is rewarded as it should be.

10 comments:

  1. Interesting policy! Let me know how this works and I may do it next time I'm hiring... :)

    Anyway, I know I'm preaching to the choir but your story (and your rant) are precisely why we need to change the publication system. I can't tell you how tired I am of sending perfectly (or not so perfectly) crafted manuscripts from journal to journal, getting rejected at every turn for lack of general interest or novelty. Usually the focus in the crafting is on beauty and accessibility rather than the actual meat of the science. It is the wrong focus, generating bad incentives – and above all it's a massive waste of everybody's time.

    I get that there is a place for streamlined stories a general audience may want to read. I don't read all the methods and supplemental materials in astronomy or particle physics papers. But that doesn't mean they shouldn't be there for experts to read.

    Manuscripts should be reviewed on the science first. Suggest revisions if the science can be improved or something needs to be clarified. Reject them if there is something fundamentally wrong. But for God's sake stop this nonsense about novelty and general interest. Why should anyone care about general interest before you have checked that the science is even sound?

    When a thorough manuscript with robust results (whatever they may be) has passed review, then by all means discuss the general interest and write snappy summary papers about it to be publicised in glamour journals. But that should happen after the science, not before!

  2. Thank you for all your efforts!

    I was just looking at the Registered Reports format and the TOP guidelines and was wondering if a separate section called "publication bias" (or something like that) could be a welcome addition to the 8 standards of the TOP guidelines. The Registered Reports format could then be put at level 3 (just as it is now under the heading "replication", only it would then also relate to "new" studies/findings).

    Would that make any sense, and more importantly, could that increase the chances of journals adopting the Registered Reports format?

    Replies
    1. Thanks - this is an excellent idea. The nice thing about the TOP guidelines is that they are an ongoing project, and will be revised on a regular basis. I will feed your suggestion into the next round of discussions.

    2. I was just curious whether you have gotten any news/feedback on this yet?

      Kind regards.

    3. The situation at the moment is that we are setting the committee membership for the next round of TOP discussions. I like the idea of a Publication Bias standard and will definitely raise it, among others. Not sure of the exact timeline yet, as the next round of discussions hasn't been scheduled (and most journals have yet to implement their TOP levels from the first round), but I or the COS will update as soon as there is further news.

  3. A fourth initiative? Deposit a manuscript with a "preprint" server such as bioRxiv or arXiv when it is submitted to a journal? One of the problems with editors is that they presently control access to a club we want to be part of. Online "preprints" dilute some of that control. Moving "publication" farther from journal control also accelerates dissemination, which should help progress in general. "Preprint" availability may net more numerous and extensive (and more useful?) peer reviews into the bargain.

    (Quotation marks in honor of Michael Eisen, who loathes these terms if his Twitter feed is anything to go by.)

  4. "Preprints" (and I agree with the quotation marks - printing is largely irrelevant) are basically the crude form of the publication system I would like to see. In one of our preprints we included the response to previous reviewers. I'd like the whole review process to take place in public with those preprint versions.

    Replies
    1. I plan on uploading to the arXiv (mine are physics-y methods papers) a copy of every paper on which I'm either first or senior author as the manuscript heads to a journal. But I have already been an author on arXiv manuscripts that we didn't and won't send to a journal, and on one particularly delicious occasion we simply stopped corresponding with a journal when, months/years after the arXiv submission, we were still being asked "What problem?" and "What is the solution you are offering me?" by separate reviewers. We left them to it.

      The paper is "out" and doing its necessary work in public. For sure, it doesn't get cited like a real paper (yet), but it's there and it can't be ignored forever. More importantly, it contributed to the debate on in-plane acceleration in a timely fashion. (Blogs do something quite similar.) That was the determining factor for me. I didn't (and don't) need the citation (lucky me!), and as a group we had moved on to other, more pressing problems and didn't need the distraction of farting about answering opinions-masquerading-as-criticisms in reviews. My god it felt good knowing that our work was already out there. Ha!

      (Readers, if you haven't tried a "preprint" server as your manuscript gets submitted to a journal, give it a go. It is the most liberating scientific experience I can remember having. Really.)

    2. I agree with that final sentence. I didn't expect to get so much liberating joy the first time I uploaded a preprint. Even though nobody comments on them it feels good to know it's out there and nobody can ever claim it's not "novel" or that you stole their idea or whatever.

      My main issue with the wild west of preprints only, with no submission to an organised review process, is that it opens up a can of scientifically crappy worms. If we abandoned traditional peer review completely and everybody just uploaded all their preprints, the world would be swamped with random nonsense. Very few preprints receive public discussion, at least in my field.

      In my vision, preprints are the manuscripts. There is an academic editor who invites expert reviewers. The authors may also solicit the involvement of a glamour journal editor if they think the work warrants it, but it's not necessary. Conversely, glamour journal editors may themselves become aware of the manuscript and choose to request editorship.

      In addition to invited reviewers, anyone else can write a review too (or perhaps only qualified experts - that point is still a matter of debate). After sufficient review has taken place, the manuscript gets "accepted" and indexed as part of the scientific record. However, post-acceptance peer review is always ongoing; it just isn't about acceptance at that point (although it can still lead to correction or retraction where necessary).

      At this point the authors may again wish to solicit glamour journal involvement, or glamour journals may seek to re-publish studies they feel are of particular interest and have received positive reviews. The authors can then write a snappy summary paper for that journal, published for the general-interest audience - with the actual long specialist manuscript linked to it in much the same way that supplementary information is now linked to high-impact articles.

    3. Yes, I pretty much share your vision, Sam. I am too pragmatic to expect that journals, and glam journals in particular, will lose any of their glossy appeal, let alone disappear, in my lifetime. I'm only an untrained amateur psychologist, but my powers of observation suggest that as long as people are involved, cachet, egos, rankings, etc. will remain a large (likely driving) part of the equation. I'm not about to tilt at this windmill! Instead, with an arXiv submission I am simply acting in my own best interests as well as expediting distribution. Conferences once filled this niche; I tend to view online preprint servers as today's equivalent of a conference abstract: may or may not be reviewed, may or may not be accurate, etc. But it is public! (Anyone who doesn't believe this statement, check with a patent lawyer for their definition of "public domain.") And it means the pressure is off re. traditional journal publication.

      As for a wild west of online-only preprints, sadly this is going to be a cost of doing business, just as junk email is a cost of email, ads are a cost of viewing websites, and so on. What is likely to happen is that we will apply the same sorts of rankings and filtering here as we do in journals. The arXiv checks for basic criteria before acceptance, and anyone who uploads there knows that several experts are likely to glance at a manuscript even if they don't fully review it or comment on it. This acts as a filter similar to journal submission. There are also online-only journals such as The Winnower. As these new vehicles mature they will gain reputations, regular audiences, etc., and thus a system should evolve in which certain sources are more likely to be trusted than others.

      My plan is to use repositories like The Winnower and arXiv-only submissions as a middle ground between a blog post and a full journal submission. I have several pieces of work that people will want to see, but I don't have the motivation or the time to generate the sample size or perform the sort of truly rigorous experiment that would pass review in a trad journal. But people are going to want to point at this work because it will give them more justification for certain commonly held views than they have at present, which is often nothing but historical precedent (that can be... wait for it... wrong!). If my work is proved incorrect then so be it! At least it prompted a more considered evaluation! Either way the field should benefit.

      So there you have it. Much like Chris, but in a slightly different way, I have simply changed the way I do business when it comes to publications. I have assessed the most important criteria to me and found a way to ensure they are fulfilled. While I don't have the same strong opinions on code/data sharing or HARKing as Chris, I do very much understand the feelings of needing to do something, and of being in a position to do it. Those of us in a position to act on our proffered opinions on the future of publications should do it. Helping junior and less secure scientists to act, too, is a stellar way to proceed.


