Think of the brain as a jam-packed sports arena and the voxels as all the fans. If you ask everyone in the stadium a bunch of questions, you might, by chance, see a pattern emerge, such as a cluster of people standing in line for the bathroom who love pistachio ice cream and skipped a grade in school. You need to statistically account for the possibility of coincidence before drawing any conclusions about ice cream, intellect and bladder control, just as you would for areas in the brain that light up or don’t light up in response to stimuli.
The authors of the paper on the software glitch found that a vast majority of published papers in the field do not make this “multiple comparison” correction. But when they do, they said, the most widely used fM.R.I. data analysis software often doesn’t do it adequately.
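The stadium analogy can be made concrete with a toy simulation. The sketch below is illustrative only, not the method from the paper: it generates p-values for a hypothetical set of voxels in which nothing is actually happening, then compares naive thresholding against a Bonferroni correction (one standard, conservative way to handle multiple comparisons; the voxel count and alpha level are made-up values).

```python
import random

random.seed(0)

n_voxels = 10_000   # hypothetical number of voxels tested (illustrative)
alpha = 0.05        # nominal per-test significance level

# Under the null hypothesis, p-values are uniform on [0, 1]; simulate one
# p-value per voxel with no true effect anywhere in the "brain."
p_values = [random.random() for _ in range(n_voxels)]

# Naive thresholding: each voxel is tested at alpha, so with 10,000 tests
# roughly 500 voxels "light up" by pure chance.
uncorrected = sum(p < alpha for p in p_values)

# Bonferroni correction: divide alpha by the number of comparisons, so the
# chance of even one false positive across all voxels stays near alpha.
bonferroni = sum(p < alpha / n_voxels for p in p_values)

print(f"uncorrected 'activations': {uncorrected}")
print(f"Bonferroni-corrected:      {bonferroni}")
```

Running this, the uncorrected count lands in the hundreds while the corrected count is typically zero, which is the whole point: without a correction, random noise looks like discovery.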
Other statistical problems in analyzing fM.R.I. data have been pointed out. But these kinds of finger-wagging methodological critiques aren’t easily published, much less funded. And on the rare occasions they do make it into journals, they don’t grab headlines as much as studies that show you what your brain looks like when you believe in God.
“There is an immense amount of flexibility in how anybody is going to analyze data,” said Russell Poldrack, who leads a cognitive neuroscience lab at Stanford University and is a co-author of the “Handbook of Functional MRI Data Analysis.” And, he continued, “some choices make a bigger difference than others in the integrity of your results.”
To try to create some consistency and enhance credibility, he and other leaders in the field recently published a lengthy report titled “Best Practices in Data Analysis and Sharing in Neuroimaging Using MRI.” They said their intent was to increase transparency through comprehensive sharing of data, research methods and final results so that other investigators could “reproduce findings with the same data, better interrogate the methodology used and, ultimately, make best use of research funding by allowing reuse of data.”
The shocker is that transparency and reproducibility aren’t already required, given that we’re talking about often publicly funded, peer-reviewed, published research. And it’s much the same in other scientific disciplines.
Indeed, a study published last year in the journal Science found that researchers could replicate only 39 percent of 100 studies appearing in three high-ranking psychology journals. Research similarly hasn’t held up in genetics, nutrition, physics and oncology. The fM.R.I. errors added fuel to what many are calling a reproducibility crisis.
“People feel they are giving up a competitive advantage” if they share data and detail their analyses, said Jean-Baptiste Poline, senior research scientist at the University of California, Berkeley’s Brain Imaging Center. “Even if their work is funded by the government, they see it as their data. This is the wrong attitude because it should be for the benefit of society and the research community.”
There is also resistance because, of course, nobody likes to be proved wrong. Witness the blowback against those who ventured to point out irregularities in psychology research, dismissed by some as the “replication police” and “shameless little bullies.”
Nevertheless, the fM.R.I. community seems determined to be an exemplar. The next issue of the journal NeuroImage: Clinical will lead with an editorial announcing that it will no longer publish research that has not been corrected for multiple comparisons, and there is a push for other journals to do the same, as well as to require authors to make publicly available their data sets and analyses. Data-sharing platforms such as OpenfMRI and Neurovault have already been established to make fM.R.I. data and statistical methods more widely accessible. In fact, it was data sharing that revealed the fM.R.I. software glitch.
Data repositories have been established in other branches of science, too. Many are supported by the National Institutes of Health, which now requires researchers who receive $500,000 or more in federal funding (as well as those doing large-scale genomic research regardless of funding level) to have a data-sharing plan, although there remain some loopholes as well as limits on who can access the data.
The Wellcome Trust and the Bill & Melinda Gates Foundation have begun to make receiving grants contingent on unfettered data sharing. And the International Committee of Medical Journal Editors has floated a proposal to require authors to share data underlying clinical trials no later than six months after publication of their studies. This, despite stiff opposition from some researchers.
Perhaps the most encouraging sign of increasing transparency is the colorful and meticulously detailed new brain map recently released by the N.I.H.-supported Human Connectome Project. It was compiled using shared data from a variety of technologies, including fM.R.I., so that it could be reviewed, added to and improved upon.
“If we don’t have access to the data, we cannot say if studies are wrong,” said Anders Eklund, an associate professor at Linköping University in Sweden and a co-author of the study that found the fM.R.I. software bug. “Finding errors is how scientific fields evolve. This is how science gets done.”