Thursday, 29 September 2016

“Methodological terrorism” and other myths

Most readers of my blog will be aware of Professor Susan Fiske’s leaked letter to the APS Observer last week. In her article, Fiske delivers a blistering invective on the dangers of social media, warning that scrutiny of psychological research by “online vigilantes” and “self-appointed data police” is destroying careers. As a consequence of what she describes as “methodological terrorism”, Fiske warns of “graduate students opting out of academia, assistant professors afraid to come up for tenure, mid-career people wondering how to protect their labs, and senior faculty retiring early”. Her solution? Scientific critique should be restricted to moderated groups and kept behind the walls of peer reviewed academic journals.


Fiske refuses to identify the “terrorists” because apparently that’s unprofessional (though see this argument from Neuroskeptic as to why it is vital if such allegations are true). This leaves her in an unenviable Trumpian territory of delivering a broadside against an entire community based on what “people have been saying”. Lots of people, no doubt. Good people.

The whole premise is so flimsy that it would probably have been ignored if not delivered by someone so high ranking.

At the time the article leaked I was at a conference in Germany that Fiske was also attending. I immediately emailed her, pointing out what I saw as some of the problems with her argument, and I invited her to join our session the next morning to publicly debate the issue. She declined. 

Two days later, PhD student Anne Scheel had the courage to stand up in front of a packed auditorium and challenge Fiske directly about her “terrorist” charge (if you’re not following Anne, go do that now). In response, Fiske claimed that everyone overreacted. Apparently, she just used that term to get people’s attention, but says she would still defend the “conceptual basis for that wording”.

Oh irony.

Now you may well ask, what evidence is there that Fiske's terrorists even exist in sufficient numbers to warrant such a panic? The answer is: not much. Since 2015, UCL neuroscientist Sam Schwarzkopf has operated a blog inviting those who believe they have been bullied by methodologists or the open science movement to speak up, anonymously if they prefer. How many victims have come forward?

Not one. 

So what’s really going on here? The truth is that we are in the midst of a power struggle, and it’s not between Fiske’s “destructo-critics” and their victims, but between reformers who are trying desperately to improve science and a vanguard of traditionalists who, every so often, look down from their thrones to throw a log in the road. As the body count of failed replications continues to climb, a new generation want a different kind of science and they want it now. They're demanding greater transparency in research practices. They want to be rewarded for doing high quality research regardless of the results. They want researchers to be accountable for the quality of the work they produce and for the choices they make as public professionals. It's all very sensible, constructive, and in the public interest. 

Placed in this context, Fiske’s argument – as inflammatory as it is – is actually quite boring and more than a little naive. Social media is democratizing science, and nobody, not even the most powerful among us, can stop its inexorable rise. Does anyone really think that Pandora’s box can be closed, returning to the halcyon days of the 1990s (or earlier) when critique of science happened behind closed doors among selected elites?

What’s happened instead is that technology has empowered the new generation to speak freely and publicly on scientific issues, to critique poor quality science and practices, to bust fraud, and to break the bounds of peer review. Twitter in particular has shaken the traditional academic hierarchy to its core. On Twitter a PhD student with thousands of followers suddenly has a greater voice than an Ivy League professor who might have no social media presence at all.

If the traditionalists want to preserve their legacies (and they are worth preserving) then they would do well to remember who they truly serve. Psychology does not exist to support the interests of researchers seeking to command petty empires and prestigious careers, less still to protect the reputations of those who have already “won” in the academic game of life. Psychology has but one master: the public. That’s why psychology featured in over 400 “impact case studies” in a recent audit of UK science, and it’s why millions of dollars, pounds and euros are being invested in new opportunities for all scientists, including psychologists, to do more transparent and robust research.

We remain forever accountable to that public for the quality of research we produce. Those who are prepared to critique science – and scientists too for their choices – are performing a public service that warrants our appreciation not condemnation. To Fiske’s “terrorists” I therefore say thank you. And to Fiske herself, who complains about the tone of scientific debate while calling it terrorism, I say: physician heal thyself. 

For other responses to Fiske, see these great posts by Andrew Gelman, Ana Todorović, Xenia Schmalz, Neuroskeptic, Andy Field, and Tal Yarkoni.

Friday, 19 August 2016

Registered Reports for Qualitative Research: A call for feedback from humanities and social science researchers

tl;dr – in 2017 we are expanding the Registered Reports publication model into qualitative research for the first time. Since we’ve never done this before, we’re very interested to hear from qualitative researchers about how we can make the format attractive, and what obstacles or drawbacks we might encounter with pre-registration of qualitative studies.

Registered Reports are developing some serious momentum. What began in 2013 as a radical (and somewhat controversial) idea to curb bias in published research is rapidly becoming a standard addition to the publishing landscape. Twenty-eight journals have now signed on to offer the format, with more in the pipeline, across a wide range of social, life and physical sciences.

For the uninitiated, Registered Reports are a form of empirical publication where the peer review happens, in part, before the results of studies are known (full details here). Proposals that are scientifically robust, and which receive positive peer reviews, are then provisionally accepted for publication. Once the research is complete, the authors resubmit their manuscript with the results and discussion; it is re-reviewed and ultimately published, provided the authors adhered to their protocol, interpreted the evidence reasonably, and met any pre-agreed standards for assuring data quality.

The great strength of Registered Reports is the way the format tackles bias. By accepting studies in advance of results, it prevents publication bias – the tendency for journals to selectively publish results that are considered clear or attractive. And by requiring authors to pre-specify their research hypotheses and analysis methods, it also greatly reduces the capacity for biased reporting by authors.

As the popularity of Registered Reports has increased, so has its scholarly reach: from the initial launch within psychology and neuroscience, the format has expanded into political science, biology, and even the physical sciences of physics and chemistry. But in every area Registered Reports have so far ventured, it has been under the banner of quantitative research.

To lay my cards on the table, I'm a firm believer in the value of qualitative research. I gained a deep respect for it after being involved in this study, where we used qualitative methods (specifically, grounded theory) to explore the way parliamentarians use science and evidence in decision-making. Qualitative research can generate remarkably rich information, and in some ways it is more transparent than quantitative research. For instance, the qualitative researcher explicitly acknowledges their own bias, and – in our work at least – we publicly archived all of our data to enable qualitative replication.

Publication bias also appears to be just as much a problem in qualitative research as in quantitative research. As the authors of this study conclude:

"This suggests a mechanism by which "qualitative publication bias" might work: qualitative studies that do not show clear, or striking, or easily described findings may simply disappear from view. One implication of this is that, as with quantitative research, systematic reviews of qualitative studies may be biased if they rely only on published papers."

If publication bias is as much a problem for qualitative research as quantitative research, then there is clearly room for Registered Reports to become a popular option among qualitative researchers. This opens the door for Registered Reports to be taken up by fields such as anthropology, cultural studies, criminology, sociology, and beyond. In 2017 we will be launching Registered Reports in at least one journal that publishes papers within these fields, among other social sciences and humanities.

All that said, I am far from being an expert on qualitative methods, so if you are a qualitative researcher I would greatly appreciate your input into the design of a qualitative Registered Reports model. We need your help to get this right and to make it useful.

I am interested in any feedback, whether positive or negative, and in a completely open-ended way. You can email me with any thoughts or leave a comment below this post. Anonymous comments are welcome. I am also interested in the views of qualitative researchers on certain specific issues, particularly: 

-- Do you believe that qualitative research within your field suffers from publication bias?

-- Have you ever had a paper rejected because, independently of the methods, the qualitative outcomes were judged to be insufficiently interesting, novel or conclusive?

-- Would you consider using Registered Reports as a submission option if it were offered at your preferred journal?

-- Which journals would you like to see Registered Reports offered within?

-- What do you see as the main benefits or drawbacks of Registered Reports in qualitative research?

Sunday, 26 June 2016

Screw Brexit – but thank you to all my European friends and colleagues

It feels like someone has died.

I’m struggling to focus on work, and in a way I can ill afford right now. I’m struggling to engage in the most mundane conversations, especially with strangers. I feel numb and angry and sad all at once. I’ve never experienced depression but I guess this is similar.

Projects that dominated my thinking on Thursday morning now appear as the tiniest specks of dust in the rear view mirror. In their place is my children’s future, once full of hope as part of a leading European nation, now diminished as a small island walled off by fear and intolerance. A murdered MP. An economy in crisis. A rudderless government, set to be led by a mercurial psychopath who believes in nothing but himself. A far right that hates all non-white humans and is bent on grasping ever more power. A UK set on a path of Balkanization. A European peace project battling to survive.

We have lanced the boil of hatred and xenophobia in this country and the poison is already destroying everything the world once admired about us. Racists and bigots now feel empowered to “say it how it is” – the ultimate badge of the irredeemable arsehole. Dragging their knuckles from their basements and white vans, they feel entitled to accost people on the street, telling them to “fuck off home”.

I have to say that Nicola Sturgeon’s proud defence of immigrants and rejection of bigotry brought a genuine tear to my eye. I wish she were in charge of more than Scotland. Her clear plan for what must happen next in Scotland is thoroughly professional and considered. THAT is the Britain I believe in. We are already considering moving there, and I think a lot of people living in England and Wales will do the same.

Compare Sturgeon’s response to the cadaverous Boris Johnson, now realising he is the proverbial dog at a keyboard. The great British electorate threw the Leave campaign a treat and now they have to type an opus while the economy founders. Sure. And while we’re at it, I’m going to ask my cat to drive my son to his swimming lesson, change a flat, and pick up the shopping. Boris Johnson hasn’t got the faintest fucking clue how to fix this mess because he never thought he would win in the first place.

The Welsh are an even greater mystery. Here is a country that has benefited enormously from EU funding and migration, and STILL it voted to leave, surrendering its economic future to what will likely be a generation of Conservative rule from Westminster. Turkeys voting for Christmas doesn’t even come close to describing this act of self destruction. It’s more like turkeys cutting their own heads off, rolling themselves in duck fat, wrapping themselves in foil, and thrusting themselves stump-first into a 200-degree oven. Perhaps they felt they have nothing to lose. Perhaps they just didn’t give a shit and thought “Hey, this oven looks like a change of scene, and that must be good, right?” 

The psychology of Brexit is complicated, but not that complicated. People who have been left behind by society and basically distrust anyone who isn’t the same as them were fed a diet of pernicious but convincing-sounding bullshit by politicians who thought (quite correctly) that a large chunk of their audience would either be unwilling or unable to challenge it, at least until it was too late. The remain camp had a simpler but less enterprising argument. They tried to convince us not to drive our car over the cliff. An endless parade of experts, politicians and business leaders warned us that the rocks at the bottom would kill us. “Seriously, this is a bad idea”, they all said in unison. But rational concerns over impending self destruction became badged as Project Fear. “We’re sick of experts”, snorted Gove, as the car we are all travelling in went hurtling into the abyss.

It’s easy to ignore experts when you hate what they are saying. And there is nothing easier than believing in bullshit you already agree with. In psychology we call this confirmation bias. We have psychological terms to describe all of these behaviours, but really this all boils down to a few simple facts. It is easy to reject people who belong to an outgroup ("them"). It is easy to find evidence that agrees with our preconceptions. It is easy to embrace ignorance and groupthink. These are all easy because they require no cognitive effort, only emotion. Acting by impulse is straightforward and dangerous and exactly what the Leave campaign wanted us all to do.

When I moved here ten years ago from Australia, my new home seemed like a progressive, quirky, outward-looking country. A nation that embraced (albeit sometimes reluctantly) its connections with Europe and the rest of the world. I was excited to discover the mother country – the land where my father’s side of the family had come from. I was proud to gain my UK citizenship in 2014. I was particularly proud that the front page of my UK passport declared me to be a citizen of the European Union. I had suddenly joined a large, inclusive, and dynamic community – one that my children would one day be able to join too.

Now that is gone and I fear all of society will suffer for it. The hit to science and medicine – my profession – will be extremely harsh. The warnings were loud and clear. We gain billions more in funding from the EU than we contribute, which is one reason why the UK leads in science. I imagine a lot of Leavers don’t care very much about science, but they will notice the impact on one part of their lives that touches everyone.


Cancer research – perhaps the only kind of science that Sun readers give a shit about – will be seriously damaged. If you voted to leave the EU, you voted against cancer research. It’s that simple.

I am angry about this because I feel that my children’s futures, and even their health, have been betrayed by a nasty and bigoted middle England, who in turn feel (perhaps rightly) betrayed by everyone. And they will be betrayed again because, apart from the act of leaving the EU, none of the promises they were fed by Johnson, Gove or Farage will ever happen. There will be no economic boom. There will be no extra billions for the NHS. There will be no sizable reduction in immigration. If your life sucks now, it will still suck. Maybe more.

Young people have also been betrayed by the rich baby boomers who, for no reason in particular, voted to leave a peaceful union that their own parents fought and died to create. The stupidity and selfishness of this decision is staggering.

But I also feel thankful. I’m thankful to every continental European I work with. I want you to know that I feel ashamed by this referendum result and I am sorry that it has made you feel unwelcome in this country. This referendum does not speak for millions of UK citizens who value you enormously. To me you are invaluable. Science is a global endeavour and I have the privilege of friends and colleagues from an extraordinary array of countries in Europe and beyond. My research would not be possible without you. Thank you for your different world views. Thank you for your fascinating languages. Thank you for your food and art and culture. Thank you for sharing your stories of adversity in your home countries, which put our often trivial British problems into perspective.

I now see three tasks before us. First, we have to somehow make this Brexit nightmare as painless as possible for everyone, especially the most vulnerable. I have no idea how to do this but we all have to come together to make it work. Second, we need to ensure that the younger generation is sufficiently educated to detect bullshit being spouted by political liars, and that they always vote and speak up. We can do much of this work as parents and teachers. And finally, my message to the younger generation who voted overwhelmingly to remain is the same one I say to young scientists who question the status quo: one day all this will be yours. Be patient, cope with the dark times as best you can, and we can turn this around.

Because we will turn this around. A boil of hatred has been lanced but the Britain I know and love is still alive. As long as its heart still beats, there is hope of one day rejoining the European Union.

Monday, 25 April 2016

The things you hate most about submitting manuscripts

A few days ago I asked the twittersphere what rubs people the wrong way when it comes to submitting manuscripts to peer reviewed academic journals. Oh let us count the ways. From the irritation of having to reformat references to fit some journal’s arbitrary style, to consigning figures and captions to the end of a submission as though it really is still 1988, to the pointlessness of cover letters where all you want to say is “Dear Editor, here is our paper” but feel the need to throw in some gumpf about how amazing your results are. (Hint: aside from when the cover letter has a specific purpose, such as summarising a response to reviewers or conveying vital information about a key issue, I can tell you that a lot of editors – maybe most – ignore this piece of puffery).

The tweet proved a lot more popular than I expected and for a good two days you could see steam of delicious rage rising from my timeline. 

I had an ulterior motive in seeking out this information from your good selves. As most of you will know, one of my aims is to help improve the transparency and reproducibility of published research, and one of the journals I edit for is working through its (future) adoption of the new Transparency and Openness Promotion (TOP) guidelines. The TOP guidelines are a self-certification scheme in which journals voluntarily report their level of policy compliance with a series of transparency standards, such as data sharing, pre-registration, and so forth. TOP is currently endorsed by over 500 journals and promises to make the degree of transparency adopted by journals itself more transparent. I guess you could call this “meta-transparency”.

Now, in putting together our TOP policy at this journal, we realised that it involves adding some new submission bureaucracy for authors. There will be a page of TOP guidelines to read beforehand and a 5-minute checklist to complete when actually submitting. We realise extra forms and guidelines are annoying for authors, so at the same time as introducing TOP we are going to strive to cut as much of the other (far less important) shit as possible. 

Here are the things you hated the most, and your most popular recommendations. For fun, I calculated an extremely silly and invalid score for every interaction with the tweet, adding up RTs, favourites and the number of independent mentions of specific points:

1. Abolish trivial house style requirements, including stipulations on figure dimensions & image file types, especially for the initial submission, as well as arbitrary house referencing and in-text citations styles. This is by far the most popular response. (score 112)

2. Allow in-text figures and tables according to their natural position until the very final stage of submission. (score 61)

3. Abolish all unnecessary duplication of information about the manuscript (e.g. word count, keywords), main author details and (most especially) co-author contact details that are otherwise given on the title page or could be extracted automatically; abolish any requirement to include postal addresses of co-authors at least until the final stage (affiliation and email address should be sufficient, and should be readable from the title page without requiring additional form completion); eliminate fax numbers altogether because, seriously, WTF are those fossils doing there anyway. (score 50)

4. Abolish requirement for submissions to be in MS Word format only. (score 36)

5. Abolish endnotes and either replace with footnotes or cut both. (score 33)

6. Allow submission of LaTeX files. (score 29)

7. Allow submission of single integrated PDF until the final stage of acceptance. (score 27)

8. Abolish cover letters for initial submissions. (score 21)

9. Abolish the Highlights section altogether because 
* Highlights are Stupid 
* Everyone knows Highlights are Stupid
* I can't think of anything else to say here, so I'll just repeat the conclusion that Highlights are Stupid (score 18) 

10. Remove maximum limits on the number of cited references. (score 7) 

11. Abolish the requirement for authors to recommend reviewers. (score 7) 

12. Increase speed of user interface. (score 6)
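For what it's worth, the tallying behind these scores is trivial to reproduce. Here is a minimal Python sketch of the scheme described above. The per-item interaction counts are invented for illustration (I chose them so the totals happen to match the scores reported in the list, but they are not the real breakdown), so treat the output as a toy, not the actual data:

```python
# Toy reconstruction of the "extremely silly and invalid" score:
# for each recommendation, sum retweets, favourites, and the number
# of independent mentions of that specific point.

def engagement_score(retweets, favourites, mentions):
    """Add the three interaction counts into a single score."""
    return retweets + favourites + mentions

# Hypothetical (retweets, favourites, mentions) tallies per recommendation
responses = {
    "abolish house style rules": (60, 40, 12),
    "allow in-text figures":     (30, 25, 6),
    "no MS Word requirement":    (18, 14, 4),
}

# Rank recommendations from highest to lowest score
ranked = sorted(
    ((name, engagement_score(*counts)) for name, counts in responses.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{name}: {score}")  # e.g. "abolish house style rules: 112"
```

As the code makes plain, the score treats a retweet, a favourite and an explicit mention as equally weighted votes, which is precisely why it is silly and invalid – but it does a serviceable job of ranking the complaints.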

Not all of these apply to our journal, but we’ll try and improve on the things that do, and which we can change. 

Oh, and lucky number 13, which actually scored the same as abolishing cover letters, goes to Sanjay Srivastava: "Getting rejected, can you do away with that?” Alas that is beyond my current lowly powers, although... cough... I am getting there.* 


* Shameless plug alert: At one journal I edit for (Cortex), submitting a pre-registered article called a Registered Report greatly increases your chances of being published.  The rejection rate for standard (unregistered) research reports? Just over 90%.  The rejection rate for the 50% of Registered Reports that pass editorial triage and proceed to in-depth Stage 1 peer review? About 10%.  

The reason the rejection rate is so low for Registered Reports isn’t because our standards are any lower (if anything they are higher, in my opinion) but because this format attracts particularly good submissions and also gives authors the opportunity to address reviewer criticisms of their experimental design before they do their research – a point made by Dorothy Bishop who recently published an excellent Registered Report with Hannah Hobson.

Thursday, 7 April 2016

So you've been scooped

It’s the moment every junior researcher dreads – and more than a few senior ones too. You’re on the verge of submitting that amazing paper describing a new and exciting finding, or a hot new method, and someone beats you to the post.

That sinking feeling when you read the abstract in a zeitgeist journal announcing that “Here we show for the first time…”, followed by something achingly similar to what you have done. The rug has been ripped out from under you. You’ve been cruelly gazumped, with nothing left but doubts and self-recriminations. They will get all the credit and nobody will care what you did. You’ll be seen as some lame copycat following in their illustrious slipstream, even though you conceived your idea long before they published theirs. If only you’d worked harder. Worked more Sundays instead of spending time with family or friends. Written faster. Spent less time on Twitter. And the worst part is you had no clue that you were about to be gazumped. You’ve been blindsided.

The chances are, if you work in a busy or popular area using techniques that are widely available, this is going to happen to you at some point. And I’m going to try to convince you that unless your research falls within a very narrow set of parameters, it doesn’t matter. Not one bit.

It really doesn’t. Despite all the feelings of frustration and disappointment it provokes, this is all in your head. It is your own ego screaming into the void. In fact, there are several positive sides to being “scooped”. (Note that by “scooped” I mean the kind of inadvertent gazumping that can happen when multiple researchers work independently but in parallel – not the deliberate theft of ideas, which is extremely rare if it happens at all.)

Here are some tips for junior researchers on how to come to grips with being scooped and why you shouldn’t feel so bad.

1.    It means you are doing something other people care about. Getting scooped is a sign that your research is important and that you are probably asking the right questions. If someone finds something similar to you it also adds to the convergent validity of your methods and suggests you may be doing work that is reproducible. Note: the corollary of this is not the case – just because you never get scooped doesn’t mean your research is unimportant. You might have cornered the market in a particular technique, or the field might be small, or your approach might be unusual or specialised in some other way.

2.    Being first isn’t necessarily a sign of being a good scientist. Why? Because many initial discoveries are wrong or overclaimed. As a post-doc, I was the “first” to show that TMS of the right inferior frontal cortex can impair response inhibition in healthy people. So what? Does that make my methods or results more convincing, or any better than later convergent findings? Does it make me a better scientist? Nope, nope, nope. If anything, my paper is weaker because it overclaimed. When I and my co-authors wrote it we knew we were the first to report this particular effect, so we aimed “high” with journals and over-egged the cake. We initially submitted it to a bunch of zeitgeist journals where it was predictably rejected, one after another (after all, we were only repeating what had already been concluded on the basis of brain injury). The spin remained, though, until it found its way into a specialist journal, and on the basis of the results we claimed evidence for a selective role of the IFG in response inhibition. We were wrong, as we and others later discovered – the original results turned out to be repeatable but our explanation was trite and erroneous.

3.    Most senior scientists know this. Many PIs – me included – are sceptical of researchers who claim to be the first to show something. For one thing it is almost never the case; the vast majority of science is a process of derivative, incremental advance, despite whatever spin the authors cake their abstracts in. When I’m assessing fellowship applications or job applications by junior researchers, the questions I’m asking are: is this research important either to theory or applications? Is it robust, feasible and transparent? Is the applicant an excellent communicator? I am not asking whether they were first to make a particular claim. I couldn’t care less. Knowing what I do about statistics and research culture, I know that whoever claims to be first most likely did a small study, did not take the time to replicate their findings, fell prey to research bias, benefited from publication bias, and probably exaggerated the implications. Are these attractive characteristics in a scientist?

4.    In the vast majority of cases you don’t show you are a brilliant scientist or intellectual force by being the first to claim something. You prove your mettle by shaping the theoretical landscape in which everyone works. You set the scene in one of two ways. One way is by accruing a coherent body of important and credible work that changes the way people think about a topic (and not just by publishing a long list of glamour publications, but through the transparent accumulation of knowledge). Or, you construct a robust and falsifiable theory that could explain something better than all the other theories out there, and then set about trying to disconfirm it. If it is brilliant, others will try doing the same, and if nobody can disconfirm it then you've probably discovered something for real.

5.    There are a few cases where being the first might matter and can have career benefits. If you’re the first to describe an amazing new technique, or the first to make a Nobel-level discovery then scooping might count. But how many of us fall into that category? 0.0001%? The rest of us are labouring away in the trenches. Our discoveries are small and, frankly, none of us individually matter a great deal. Our value lies in our collective contribution as scientists. A large part of getting over being scooped is getting over yourself and realising that you are a small cog in a very big machine.

6.    Remember that what matters in science is the discovery, not the discoverer. That’s why the public pays your salary or stipend. When someone scoops you, it provides an opportunity for you to reflect on their findings in preparing your own paper. What can you learn from what they found, or from the data itself? If you have access to their data, can you perform a meta-analysis to aggregate evidence usefully between their study and your own? Might they be someone you could collaborate with on a future study to do something even bigger and better than either of you could do alone? Remember that in the quest to make discoveries, competition is for climbers and egomaniacs. Cooperation beats competition every time.

7.    Finally, if you really feel you have an idea for a study that is unique and you want to declaw the Scoop Monster, consider submitting it as a Registered Report. This might seem counterintuitive – after all, aren’t Registered Reports only for incremental research or replications? Aren't you risking being scooped by sharing your amazing idea with reviewers? Actually, you're more protected than you think, and Registered Reports are not limited to replications; they are simply an avenue for robust, transparent, hypothesis-driven research, and they can (and often do) describe novel ideas or critical tests of theory. Aside from all those benefits, Registered Reports offer something very simple that asserts intellectual primacy: when they are published, the date that the initial Stage 1 protocol was first received is published in the margin, right above all the other received and accepted dates. This means that if anyone publishes anything similar in the meantime, you will always be able to prove – if it really matters – that you had your idea before they published theirs. Plus your study will probably be three times the size and relatively bias-free.

Now, get back to sciencing (or chilling out) and leave the worrying about scooping to scientists who don't really understand how science works or why they are doing it.

Wednesday, 17 February 2016

My commitment to open science is valuing your commitment to open science

tl;dr – to be shortlisted for interview, all future post-doctoral vacancies in my lab will require candidates to show a track record in open science practices. This applies to two posts I am currently advertising, and to all such positions henceforth.

Twitter never ceases to amaze me. The other day I posted a fairly typical complaint about publication bias, which I expected to be ignored, but instead it all went a bit berserk. Many psychologists (and other scientists) are seriously pissed off about this problem, as well they should be.

My tweets were based on a manuscript we just had rejected from the Journal of Experimental Psychology: Applied because the results were convincingly negative in one experiment, and positive but “lacked novelty” in the other. Otherwise our manuscript was fine – we were complimented on it tackling an important question, using a rigorous method, and including a thorough analysis.

But, of course, we all know that good theory and methodology are not enough to get published in many journals. In the game of academic publishing, robust methods are no substitute for great results.

The whole experience is both teeth-grindingly frustrating and tediously unremarkable, and it reminds us of three home truths:

1) That this can happen in 2016 shows how the reproducibility movement still exists in an echo chamber that has yet to penetrate the hermetically sealed brains of many journal editors.
2) To get published in the journals that psychologists read the most, you need positive and novel results.
3) This is why psychologists p-hack, HARK and selectively publish experiments that “work”.

So what, I hear you cry. We’ve heard it all before. We’ve all had papers rejected for stupid reasons. Get over it, get over yourself, and get back to cranking the handle.

Not just yet. First I want to make a simple point: this can’t be explained away as a “cultural problem”. Whenever someone says publication bias is a cultural problem, all they are really saying is, “it’s not my problem”. Apparently we are all sitting around the great Ouija board of Academia, fingers on the glass, and watching the glass make stupid decisions. But of course, nobody is responsible – the glass just moved by itself!

Publication bias isn’t a cultural problem, it is widespread malpractice by senior, privileged individuals, just as Ted Sterling defined it back in 1959. Rejecting a paper based on results is a conscious choice made by an editor who has a duty to be informed about the state of our field. It is a choice that damages science and scientists. It is a choice that punishes honesty, incentivizes dishonesty and hinders reproducibility.

I’m a journal editor myself. Were I to reject a paper because of the results of the authors’ hypothesis tests, I would not deserve to hold such a position. Rejecting papers based on results is deliberate bias, and deliberate bias – especially by those in privileged positions – is malpractice. 

How to change incentives 

Malpractice it may be, but publication bias is acceptable malpractice to many researchers, so how do we shift the incentives to eliminate it?

Here are just three initiatives I’m part of which are helping to incentivize open practices and eliminate bias: 

Registered Reports: many journals now offer an article format in which peer review happens before data collection and analysis. High-quality study protocols are accepted before research outcomes are known, which eliminates publication bias and prevents many forms of research bias. To date, more than 20 journals have joined the Registered Reports programme, with the first ‘high-impact’ journal coming on board later this year. 

TOP guidelines: more than 500 journals and 50 organisations have agreed to review their adherence to a series of modular standards for transparency and reproducibility in published research. For background, see our TOP introductory article. 

PRO initiative: led by Richard Morey of Cardiff University (follow him), this grassroots campaign calls for peer reviewers to withhold comprehensive review of papers that either fail to archive study data and materials, or fail to provide a public reason for not archiving. You can read our paper about the PRO initiative here at Royal Society Open Science. If you want to see open practices become the norm, then sign PRO.

Registered Reports, TOP and PRO are much needed, but they aren’t enough on their own because they only tackle the demand side, not the supply side. So I’m going to add another personal initiative, following in the (pioneering) footsteps of Felix Schönbrodt. 

Hiring practices 

If we’re serious about research transparency, we need to start rewarding transparent research practices at the point where jobs and grants are awarded. This means senior researchers need to step up and make a commitment.

Here is my commitment. From this day forward, all post-doctoral job vacancies in my research group, on grants where I am the principal investigator, will be offered only to candidates with a proven track record in open science – one which can be evidenced by having pre-registered a study protocol, or by having publicly archived data / materials / code at the point of manuscript publication.

This isn’t me blowing smoke in the hope that I’ll get some funding one day to try such a policy. I’m lucky enough to have funding right now, so I’m starting this today.

I am currently advertising for two four-year, full-time post-doctoral positions on my European Research Council Consolidator grant. The adverts are here and here. Both job specifications include the following essential criterion: “Knowledge of, and experience applying, Open Science practices, including public data archiving and/or study pre-registration.” Putting this in the essential criteria means I won’t be shortlisting anyone who hasn’t done at least some open science.

Now, before we go any further, let’s deal with the straw man that certain critics are no doubt already building. This policy doesn’t mean that every paper published by an applicant has to be pre-registered, or that every data set has to have been archived. It means that the candidate must have at least one example where at least one open practice has been achieved.

I also realise many promising early-career scientists won’t have had the opportunity to adopt open practices, simply because they come from labs that follow the status quo. We all know labs like this; I used to work in a place surrounded by them (hell, I used to be one of them) – labs that chase glamour and status, or that just don't care about openness. It’s not your fault if you’re stuck in one of these labs. Therefore I’ve included a closing date of April 30 to give those so interested the time to generate a track record in open science before applying. Maybe it's time to test your powers of persuasion in convincing your PI to do something good for science over and above furthering their own career.

If you’re a PI like me, I humbly invite you to join me in adopting the same hiring policy. By acting collectively, we can ensure that a commitment to open science is rewarded as it should be.