Monday 19 May 2014

Comments on study pre-registration and Registered Reports


** You can download our 25-point Q&A about Registered Reports here **

As part of today's Guardian post on study pre-registration in psychology, I sought feedback on three questions from a number of colleagues. Due to space constraints I couldn’t do their insights justice, so I’ve reproduced their complete answers below. 

At the bottom of the post I've included a full list of journals offering Registered Reports and related initiatives. Enjoy! 

Question 1: What would you say to critics who argue that pre-registration puts "science in chains"? Are their concerns justified? 

Professor Dorothy Bishop, University of Oxford 

I think there's a widespread misunderstanding of pre-registration. Its main function is to distinguish hypothesis-testing analyses from exploratory analyses. It should not stop exploratory research, but should make it clear what is exploratory and what is not. Most of the statistical methods that we use make basic assumptions that are valid only in a hypothesis-testing context. If we explore a multidimensional dataset, decide on that basis what is interesting, and then apply statistical analysis, we run a high risk of obtaining spurious 'significant' findings. Currently science is not so much in chains as bogged down in a mire of non-replicable findings, and we need to find ways to deal with this. I increasingly find myself reading papers and wondering just what I can believe - particularly in areas of neuroscience where there are huge multidimensional datasets and multiple researcher degrees of freedom in choosing how to analyse findings. I would not insist that pre-registration be mandatory, but I think it's great to have that option, and I hope that as the new generation of scientists learn more about it, they will come to embrace it as a way of clarifying scientific findings and achieving better replicability of research.

Professor Tom Johnstone, University of Reading 

I think the concern that scientists have of being "put in chains" is understandable. We've all probably had the frustrating experience of confronting a reviewer or editor who believes there's one way, and one way only, to collect data or perform an analysis. Creativity, adaptive thinking, and problem solving are very much a part of science, and mustn't be stifled.

Yet the solution is to make sure that the move towards pre-registration is accompanied by an expansion of the ways in which researchers can openly report innovative exploratory research, and the iterative development of new methods. As you've pointed out, if we didn't try to shoehorn all of our research into the hypothesis-testing model, then we'd relieve a lot of the pressure for people to engage in post hoc hypothesis creation. 

Dr Daniël Lakens, Eindhoven University of Technology

Science is like a sonnet. There is a structure within which scientists work, but that does not have to limit our creativity. As Goethe remarked: ‘In der Beschränkung zeigt sich erst der Meister’ - Mastery is seen most clearly when constrained. 

Dr Brendan Nyhan, Dartmouth College 

I think the idea that pre-registration will put “science in chains” is attacking a straw man. No one is proposing that it should be the only way to conduct research. There will still be every opportunity to pursue unanticipated findings. The widespread availability of pre-registered journal articles will make it easier to distinguish true hypothesis testing from exploratory research. For instance, a researcher might observe an unanticipated result and then pre-register a replication study to test the effect more systematically.

Professor Dan Simons, University of Illinois 

Frankly, this criticism is nonsense. Pre-registration just eliminates the ability to fool yourself into thinking some post hoc decision was actually an a priori one. Specifying a plan in advance just means that you actually did plan your "planned" analyses. As psychologists, we should know how easily we can convince ourselves that the analysis that worked was the logical one to do, after the one we first thought to try didn't work. If your theory makes a prediction, you should be able to specify it in advance, and you should be able to specify what outcomes would support it. Yes, it takes more work up front to pre-register a plan. But if you truly are conducting planned analyses, all you are doing is shifting when you do that work, not what you're doing.

Nothing about pre-registration prevents a researcher from conducting additional exploratory analyses that were not part of the registered plan. Pre-registration just makes clear which analyses were planned and which ones were exploratory. How does that constrain science in any way? 

Question 2: Do you think pre-registration will influence the future of publishing in psychology, neuroscience and beyond?  

Professor Tom Johnstone, University of Reading 

I do think that the move towards registered studies will be of benefit to science, not only because it will encourage better research practice, but also because it will lessen the file-drawer problem by ensuring that "null" results are published. It will also hopefully catalyse a shift towards more informative statistics than standard NHST. That's not to say there won't be problems; undoubtedly there will be (concerns about research timelines, especially for junior researchers, need to be tackled head-on, for example).

Dr Daniël Lakens, Eindhoven University of Technology

It will complement the way we work in important ways. Especially in ‘hot’ research areas, which are at a higher risk of inflated Type I error rates (Ioannidis, 2005), pre-registration will greatly facilitate our understanding of how likely it is that findings are true.

Dr Brendan Nyhan, Dartmouth College 

Pre-registration could transform the future of publishing if funders, government agencies, reviewers, editors, and tenure and promotion committees demand it. The movement will only succeed if it changes expectations about research credibility among a wider group of scholars and stakeholders than its most devoted advocates. It should also take further steps to broaden its appeal to researchers - most notably, by encouraging journals to adopt formats like Registered Reports that reduce risk to scholars concerned about their ability to publish pre-registered null results given the publication biases in scientific journals. 

Professor Dan Simons, University of Illinois 

Pre-registration effectively eliminates hypothesizing after the results are known. It keeps us from convincing ourselves that an exploratory analysis was a planned one. It is perhaps the best way to keep yourself from inadvertent p-hacking and to convince others that your hypotheses predicted rather than followed from your results. Ideally, more journals will begin reviewing the registered plans as the basis for publication decisions. Doing so would effectively eliminate the file drawer problem. If a study is well designed, its results should be published.  

Question 3: Why do you think psychology and neuroscience are spearheading these initiatives, rather than other sciences? 

Professor Dorothy Bishop, University of Oxford 

I think there are two reasons. First, most psychologists (though not neuroscientists in general) get a good grounding in statistics at undergraduate level, so they have been quicker to appreciate the problems that are inherent in 'false positive psychology'. Second, psychologists study how people think and are aware of how easy it is to deceive yourself at all kinds of levels: after all, one of the first things that many students learn about is the Müller-Lyer visual illusion, in which you are convinced that two lines are different lengths when in fact they are the same. That should make us more vigilant about always questioning whether our findings are correct; we are taught to look for counter-evidence rather than just confirming our preconceptions.

Professor Tom Johnstone, University of Reading 

As to why this is being led by psych/neuro, it's hard to say. Probably a case of the right combination of factors coinciding (e.g. the recent high-profile spotlight on questionable research practices and fraud in social psychology; links to medical research and its associated ethics, where registration has recently been enforced; a few people willing to actively push this forward), plus peculiarities of psych research compared to some other disciplines (for example, speaking with my physics training hat on, the almost complete reliance on NHST in psychology and neuroscience, rather than accurate quantitative description of effects, and the almost total lack of replication). There is, I think, a research culture difference here. That will be difficult to change, but one has to start somewhere.

Dr Daniël Lakens, Eindhoven University of Technology

According to Parker (1989), ‘psychology is in a continuous crisis’. Psychology has a tradition of self-criticism. It is sometimes remarked that psychology’s greatest contribution is its methodology (e.g., Scarr, 1997), so it is not surprising that we are at the forefront of methodological improvements in the current debate about ways to improve our science.

Dr Brian Nosek, University of Virginia

The reproducibility challenges facing science are strongly influenced by the incentives and social context that shape scientists' behavior. Understanding and altering incentives, motivations, and social context are psychological challenges. Psychologists are ahead because they are just applying their domain expertise to themselves.

Links to Registered Reports initiatives and related formats 

Journal: AIMS Neuroscience 
Detailed guidelines: http://www.aimspress.com/reviewers.pdf (NB: the AIMS website is currently down, but I am told it will be back up soon).
Editorial: http://orca.cf.ac.uk/59475/1/AN2.pdf 

Journal: Attention, Perception and Psychophysics
Detailed guidelines: http://link.springer.com/content/pdf/10.3758%2Fs13414-013-0502-5.pdf 

Journal: Journal of Experimental Psychology: General 
Announcement inviting registered replications: http://www.apa.org/pubs/journals/xge/ 

Journal: Perspectives on Psychological Science 
Guidelines: http://www.psychologicalscience.org/index.php/replication 

Friday 31 January 2014

Research Briefing: Does TMS-induced ‘blindsight’ rely on ancient reptilian pathways?


Source Article: Allen C.P.G., Sumner P., & Chambers C.D. (2014). Timing and neuroanatomy of conscious vision as revealed by TMS-induced blindsight. Journal of Cognitive Neuroscience, in press.  [pdf] [study data]  

-----------

One of the things I find most fascinating about cognitive neuroscience is the way it is shaping our understanding of unconscious sensory processing: brain activity and behaviour caused by imperceptible stimuli. Lurking below the surface of awareness is an army of highly organised activity that influences our thoughts and actions.

Unconscious systems are, by definition, invisible to our own introspection, but that doesn’t make them invisible to science. One simple way to unmask them is to gradually weaken an image on a computer screen until a person reports seeing nothing. Then, when the stimulus is imperceptible, you ask the person to guess what type of stimulus it is, for instance, whether it is “<” or “>”. What you find is that people are remarkably good at telling the difference. They’ll insist they see nothing, yet correctly discriminate the invisible stimuli at rates well above chance – often at 70-80% correct. It’s really quite head-scratching.
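To see what “well above chance” means statistically, here’s a minimal sketch of how such guessing performance might be tested. The trial counts are made up for illustration; real studies would also need to deal with response bias and per-subject analyses.

# Minimal sketch: is "unseen" discrimination above chance?
# Assumes a two-alternative task, so chance = 50%. Numbers are illustrative.
from scipy.stats import binomtest

n_trials = 200    # trials on which the observer reported seeing nothing
n_correct = 150   # ...yet guessed the stimulus identity correctly (75%)

result = binomtest(n_correct, n_trials, p=0.5, alternative='greater')
print(f"Accuracy: {n_correct / n_trials:.0%}, p = {result.pvalue:.2g}")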

Back in the 1970s, a psychologist named Larry Weiskrantz found that this contrast between conscious and unconscious processing was thrown into sharp relief following damage to a part of the brain called the primary visual cortex (V1). Weiskrantz (and later others) found that patients with damage to V1 would report being blind to one part of their visual field, yet, when push came to shove, they could discriminate stimuli above chance or even navigate successfully around invisible objects in a room. He coined this intriguing phenomenon “blindsight”.

Since then, blindsight has drawn the attention of psychologists, neurologists and philosophers. One of the major debates in the literature has centred on the neurophysiology of the phenomenon: how, exactly, is this unconscious vision achieved? Blindsight proved that information was somehow influencing behaviour without being processed by V1.

Two schools of thought took shape. One argued that, during blindsight, unconscious information reached higher brain systems by activating spared islands of cortex near the damaged V1. An opposing school argued that the information was taking a different road altogether: an ancient reptilian route known as the retinotectal pathway, which bypasses visual cortex to reach frontal and parietal regions.

In our latest study, published in the Journal of Cognitive Neuroscience, we sought to pit these accounts against each other by generating blindsight in healthy people with transcranial magnetic stimulation (TMS). The study was originally conceived by Chris Allen, then a PhD student in my lab and now a post-doctoral researcher. We hadn’t used TMS like this before but we knew from the work of Tony Ro’s lab that it could be done with a particularly powerful type of TMS coil.

Knocking out conscious awareness with TMS was one thing – and apparently doable – but how could we tell which brain pathways were responsible for whatever visual ability was left over? Fortunately I’d recently moved to Cardiff University where Petroc Sumner is based. Some years earlier, Petroc had developed a clever technique to isolate the role of different visual pathways by manipulating colour. When presented under specific conditions, these coloured stimuli activated a type of cell on the retina that has no colour-opponent projections to the superior colliculus. These stimuli, known as “s-cone stimuli”, were invisible to the retinotectal pathway (1). We teamed up with Petroc, and Chris set about learning how to generate these stimuli.

Now that we had a technique for dissociating conscious and unconscious vision (TMS), and a type of stimulus that bypassed the retinotectal pathway, we could bring them together to contrast the competing theories of blindsight. Our logic was this: if the retinotectal pathway is a source of unconscious vision then blindsight should not be possible for s-cone stimuli because, for these stimuli, the retinotectal pathway isn’t available. On the other hand, if blindsight arises via cortical routes at (or near) V1 then blocking the retinotectal route should be inconsequential: we should find the same level of blindsight for s-cone stimuli as for normal stimuli (2).

There were other aspects to the study too (including an examination of the timecourse of TMS interference), but our main result is summarised in the figure below. When we delivered TMS to visual cortex about a tenth of a second after the onset of a normal stimulus, we found textbook blindsight: TMS reduced awareness of the stimuli while leaving unaffected the ability to discriminate them on ‘unaware’ trials. 

Crucially, we found the same thing for s-cone stimuli: blindsight occurred even for these specially coloured stimuli that bypass the retinotectal route. Since blindsight occurred for stimuli that weren’t processed by the retinotectal pathway, our results allow us to reject the retinotectal hypothesis in favour of the cortical hypothesis. This suggests that blindsight in our study arose from unperturbed cortical systems rather than the reptilian route.

Our key results. The upper plot shows conscious detection performance when TMS was applied to visual cortex at 90-130 milliseconds after a stimulus appeared. Compared to "sham" (the control TMS condition), active TMS reduced conscious detection for both the normal stimuli and the S-cone stimuli that bypass the retinotectal pathway. The lower plot shows the corresponding results for discrimination of unaware stimuli; that is, how accurately people could distinguish "<" from ">" when also reporting that they didn't see anything. For both normal and S-cone stimuli, this unconscious ability was unaffected by the TMS. And because this TMS-induced blindsight was found for stimuli that bypass the retinotectal route, we can conclude that the retinotectal pathway isn't crucial for the blindsight found here.

While the results are quite clear, there are nevertheless several caveats to this work. There is evidence from other sources that the retinotectal pathway can be important, and our results don’t explain all of the discrepancies in the literature. What we do show is that blindsight can arise in the absence of afferent retinotectal processing, which disconfirms a strong version of the retinotectal hypothesis.

Also, we don’t know whether the results will translate to blindsight in patients following permanent injury. TMS is a far cry from a brain lesion – unlike brain damage, it is transient, safe and reversible, which of course makes it highly attractive for this kind of research but also distances it from work in clinical patients. Furthermore, even though we can rule out a role of the retinotectal pathway in producing blindsight as shown here, we don’t know which cortical pathways did produce the effect. 

Finally, our paper reports a single experiment that has yet to be replicated – so appropriate caution is warranted as always.

Still, I’m rather proud of this study. I take little of the intellectual credit, which belongs chiefly to Chris Allen. Chris brought together the ideas and tackled the technical challenges with a degree of thoroughness and dedication that he’s become well known for in Cardiff. This paper – his first as primary author – is a nice way to kick off a career in cognitive neuroscience.


1. By “afferent” I mean the initial “feedforward” flow of information from the retina. It’s entirely possible (and likely) that s-cone stimuli activate retinotectal structures such as the superior colliculus after being processed by the visual cortex and then feeding down into the midbrain. What’s important here is that s-cone stimuli are invisible to the retinotectal pathway in that initial forward sweep. 

2. Stats nerds will note that we are attempting to prove a version of the null hypothesis. To enable us to show strong evidence for the null hypothesis, we used Bayesian statistical techniques developed by Zoltan Dienes that assess the relative likelihood of H0 and H1.
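For the statistically curious, here’s a rough sketch of how a Bayes factor of this kind can be computed from summary statistics, in the spirit of Dienes’ approach. The observed effect, standard error, and half-normal prior scale below are illustrative assumptions, not values from our paper.

# Sketch of a Dienes-style Bayes factor comparing H0 (no effect) with H1,
# where H1 is modelled as a half-normal prior scaled by a plausible effect.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def bayes_factor(obs, se, h1_scale):
    """BF10: marginal likelihood under H1 divided by likelihood under H0."""
    like_h0 = norm.pdf(obs, loc=0, scale=se)   # likelihood at the point null
    # Integrate the likelihood over the half-normal prior on the effect
    integrand = lambda theta: (norm.pdf(obs, loc=theta, scale=se)
                               * 2 * norm.pdf(theta, loc=0, scale=h1_scale))
    like_h1, _ = quad(integrand, 0, np.inf)
    return like_h1 / like_h0

# Illustrative numbers only; BF10 > 3 supports H1, BF10 < 1/3 supports H0
print(f"BF10 = {bayes_factor(obs=1.2, se=2.0, h1_scale=5.0):.2f}")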

Thursday 16 January 2014

Tough love for fMRI: questions and possible solutions


Let me get this out of the way at the beginning so I don’t come across as a total curmudgeon. I think fMRI is great. My lab uses it. We have grants that include it. We publish papers about it. We combine it with TMS, and we’ve worked on methods to make that combination better. It’s the most spatially precise technique for localizing neural function in healthy humans. The physics (and sheer ingenuity) that makes fMRI possible is astonishing.

But fMRI is a troubled child. On Tuesday I sent out a tweet: “fMRI = v expensive method + chronically under-powered designs + intense publication pressure + lack of data sharing = huge fraud incentive.” This was in response to the news that a postdoc in the lab of Hans Op de Beeck has admitted fraudulent behaviour associated with some recently retracted fMRI work. This is a great shame for Op de Beeck, who, it must be stressed, is entirely innocent in the matter. Fraud can strike at the heart of any lab, seemingly at random. The thought of unknowingly inviting fraud into your home is the stuff of nightmares for PIs. It scares the shit out of me.

I got some interesting responses to my tweet, but the one I want to deal with here is from Nature editor Noah Gray, who wrote: “I'd add ‘too easily over-interpreted.’ So what to do with this mess? Especially when funding for more subjects is crap?”

There is a lot we can do. We got ourselves into this mess. Only we can get ourselves out. But it will require concerted effort and determination from researchers and the positioning of key incentives by journals and funders.

The tl;dr version of my proposed solutions: work in larger research teams to tackle bigger questions, raise the profile of a priori statistical power, pre-register study protocols and offer journal-based pre-registration formats, stop judging the merit of science by the journal brand, and mandate sharing of data and materials.

Problem 1: Expense. The technique is expensive compared to other methods. In the UK it costs about £500 per hour of scanner time, sometimes even more.

Solution in brief: Work in larger research teams to divide the cost.

Solution in detail: It’s hard to make the technique cheaper. The real solution is to think big. What do other sciences do when working with expensive techniques? They group together and tackle big questions. Cognitive neuroscience is littered with petty fiefdoms doing one small study after another – making small, noisy advances. The IMAGEN fMRI consortium is a beautiful example of how things could be if we worked together.

Problem 2: Lack of power. Evidence from structural brain imaging implies that most fMRI studies have insufficient sample sizes to detect meaningful effects. This means that not only do they have little chance of detecting true effects, but there is also a high probability that any statistically significant differences are false positives. It comes as no surprise that the reliability of fMRI is poor.

Solution in brief: Again, work in larger teams, combining data across centres to furnish large sample sizes. We need to get serious about statistical power, taking some of the energy that goes into methods development and channeling it into developing a priori power analysis techniques.

Solution in detail: Anyone who uses null hypothesis significance testing (NHST) needs to care about statistical power. Yet if we take psychology and cognitive neuroscience as a whole, how many studies motivate their sample size according to a priori power analysis? Very few, and you could count the number of basic fMRI studies that do this on the head of a pin. There seem to be two reasons why fMRI researchers don’t care about power. The first is cultural: to get published, the most important thing is for authors to push a corrected p value below .05. With enough data mining, statistical significance is guaranteed (regardless of truth) so why would a career-minded scientist bother about power? The second is technical: there are so many moving parts to an fMRI experiment, and so many little differences in the way different scanners operate, that power analysis itself is very challenging. But think about it this way: if these problems make power analysis difficult then they necessarily make the interpretation of p values just as difficult. Yet the fMRI community happily embraces this double standard because it is p<.05, not power, that gets you published.
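To make the alternative concrete, here’s a minimal simulation-based sketch of an a priori power analysis for a simple one-sample contrast. The effect size and design are illustrative assumptions, not an fMRI-specific calculation (which would need to account for the extra moving parts mentioned above).

# How many subjects does a one-sample contrast need to detect a true
# standardised effect of d with 80% power at alpha = .05? (Illustrative.)
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)

def power(n, d, n_sims=10_000, alpha=0.05):
    # Simulate n_sims studies of n subjects drawn from a population whose
    # true standardised effect is d; count how often p < alpha.
    data = rng.normal(loc=d, scale=1.0, size=(n_sims, n))
    pvals = ttest_1samp(data, popmean=0, axis=1).pvalue
    return np.mean(pvals < alpha)

for n in (10, 20, 30, 40, 50):
    print(f"n = {n:2d}: power = {power(n, d=0.5):.2f}")
# For a medium effect (d = 0.5), the small samples common in fMRI
# fall well short of 80% power.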

Problem 3: Researcher ‘degrees of freedom’. Even the simplest fMRI experiment will involve dozens of analytic options, each of which could be considered legal and justifiable. These researcher degrees of freedom provide an ambiguous decision space in which analysts can try different approaches and see what “works” best in producing results that are attractive, statistically significant, or fit with prior expectations. Typically only the outcome that "worked" is then published. Exploiting these degrees of freedom also enables researchers to present “hypotheses” derived from the data as though they were a priori, a questionable practice known as HARKing. It’s ironic that the fMRI community has put so much effort into developing methods that correct for multiple comparisons while completely ignoring the inflation of Type I error caused by undisclosed analytic flexibility. It’s the same problem in a different form.
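A quick simulation shows how fast this flexibility corrupts the nominal false positive rate. The “analysis variants” below are generic stand-ins for real pipeline choices (smoothing kernels, ROI definitions, outlier rules); all numbers are illustrative.

# Sketch: undisclosed analytic flexibility inflates Type I error.
# Simulate null data, try several correlated "legal" analysis variants
# per study, and report only the best-looking result.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
n_studies, n_subjects, n_variants = 10_000, 20, 8

false_pos = 0
for _ in range(n_studies):
    base = rng.normal(size=n_subjects)  # no true effect anywhere
    # Variants share most of their data, like alternative pipelines do
    variants = base[None, :] + 0.5 * rng.normal(size=(n_variants, n_subjects))
    best_p = min(ttest_1samp(v, 0).pvalue for v in variants)
    false_pos += best_p < 0.05

print(f"Nominal alpha: .05 | actual false positive rate: {false_pos / n_studies:.2f}")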

Solution in brief: Pre-registration of research protocols so that readers can distinguish hypothesis testing from hypothesis generation, and thus confirmation from exploration.

Solution in detail: By pre-specifying our hypotheses and analysis protocol we protect the outcome of experiments from our own bias. It’s a delusion to pretend that we aren’t biased, that each of us is somehow a paragon of objectivity and integrity. That is self-serving nonsense. To incentivize pre-registration, all journals should offer pre-registered article formats, such as Registered Reports at Cortex. This includes prominent journals like Nature and Science, which have a vital role to play in driving better science. At a minimum, fMRI researchers should be encouraged to pre-register their designs on the Open Science Framework. It’s not hard to do. Here’s an fMRI pre-registration from our group.

Arguments for pre-registration should not be seen as arguments against exploration in science – instead they are a call for researchers to care more about the distinction between hypothesis testing (confirmation) and hypothesis generation (exploration). And to those critics who object to pre-registration, please don’t try to tell me that fMRI is necessarily “exploratory” and “observational” and that “science needs to be free, dude” while in the same breath submitting papers that state hypotheses or present p values. You can't have it both ways.

Problem 4: Pressure to publish. In our increasingly chickens-go-in-pies-come-out culture of academia, “productivity” is crucial. What exactly that means or why it should be important in science isn’t clear – far less proven. Peter Higgs made one of the most important discoveries in physics yet would have been marked as unproductive and sacked in the current system. As long as we value the quantity of science that academics produce, we will necessarily devalue quality. It’s a see-saw. This problem is compounded in fMRI because of the problems above: it’s expensive, the studies are underpowered, and researchers face enormous pressure to convert experiments into positive, publishable results. This can only encourage questionable practices and fraud.

Solution in brief: Stop judging the quality of science and scientists by the number of publications they spew out, the “rank” of the journal, or the impact factor of the journal. Just stop.

Solution in detail: See Solution in brief.

Problem 5: Lack of data sharing. fMRI research is shrouded in secrecy. Data sharing is unusual, and the rare cases where it does happen are often made useless by researchers carelessly dumping raw data without any guidance notes or consideration of readers. Sharing of data is critical to safeguard research integrity – failure to share makes it easier to get away with fraud.

Solution in brief: Share and we all benefit. Any journal that publishes fMRI should mandate the sharing of raw data, processed data, analysis scripts, and guidance notes. Every grant agency that funds fMRI studies should do likewise.

Solution in detail: Public data sharing has manifold benefits. It discourages and helps unmask fraud, it encourages researchers to take greater care in their analyses and conclusions, and it allows for fine-grained meta-analysis. So why isn’t it already standard practice? One reason is that we’re simply too lazy. We write sloppy analysis scripts that we’d be embarrassed for our friends to see (let alone strangers); we don’t keep good records of the analyses we’ve done (why bother when the goal is p<.05?); we whine about the extra work involved in making our analyses transparent and repeatable by others. Well, diddums, and fuck us – we need to do better.

Another objection is the fear that others will “steal” our data, publishing it without authorization and benefiting from our hard work. This is disingenuous and tinged by dickishness. Is your data really a matter of national security? Oh, sorry, did I forget how important you are? My bad.

It pays to remember that data can be cited in exactly the same way papers can – once in the public domain others can cite your data and you can cite theirs. Funnily enough, we already have a system in science for using the work of others while still giving them credit. Yet the vigor with which some people object to data sharing for fear of having their soul stolen would have you think that the concept of “citation” is a radical idea.

To help motivate data sharing, journals should mandate sharing of raw data, and crucially, processed data and analysis scripts, together with basic guidance notes on how to repeat analyses. It’s not enough just to share the raw MR images – the Journal of Cognitive Neuroscience tried that some years ago and it fell flat. Giving someone the raw data alone is like handing them a few lumps of marble and expecting them to recreate Michelangelo’s David.

---

What happens when you add all of these problems together? Bad practice. It begins with questionable research practices such as p-hacking and HARKing. It ends in fraud, not necessarily by moustache-twirling villains, but by desperate young scientists who give up on truth. Journals and funding agencies add to the problem by failing to create the incentives for best practice.

Let me finish by saying that I feel enormously sorry for anyone whose lab has been struck by fraud. It's the ultimate betrayal of trust and loss of purpose. If it ever happens to my lab, I will know that yes the fraudster is of course responsible for their actions and is accountable. But I will also know that the fMRI research environment is a damp unlit bathroom, and fraud is just an aggressive form of mould.