An article by Kate Murphy in the New York Times discusses a recent controversy over statistical methods in the field of fMRI. Although Murphy correctly observes that flawed methods of data analysis are a problem in neuroimaging, she falsely implies that our 2009 study of the neural correlates of belief employed the methods in question. Here is the letter that Mark S. Cohen, the senior author on that paper, sent to the Times.—SH
Mark S. Cohen is a Professor of Psychiatry, Neurology, Radiology, Psychology, Biomedical Physics and BioEngineering at the University of California, Los Angeles. Further information can be found at www.brainmapping.org/MarkCohen.
To the Editors:
In her opinion piece, “Do You Believe in God, or Is That a Software Glitch?” Kate Murphy explores two very real problems: that science in the modern era often accepts a low statistical probability of error as a proxy for truth, and that analytic methods have become too complicated for many scientists to understand deeply.
Murphy calls out “fM.R.I.” (sic) as a glaring exemplar of the problem. As one of the earliest workers in the field, and as the senior author on the paper she references in the title and text, I feel compelled to comment.
The potential for sloppy use of fMRI has escaped no one. Shortly after its invention, I wrote an editorial drawing attention to the fact that this truly groundbreaking method of peering into our thoughts and consciousness carried with it the potential to become a quantum-physics-based form of neo-phrenology [1]. That double edge exists in proportion to a technology's power: in computers, in genetic engineering, and in essentially any method that engages humans in interpreting results.
As scientists, our core method is to form a hypothesis and then devise a test that could controvert it. As data become more extensive, we rely on statistics and probability as a check against overenthusiastic belief in our guesses. Few understand that the science lies in the care placed in the theories, rather than in the calculated probabilities. The fMRI literature contains thousands of papers with very little prior theoretical basis, substituting instead post-hoc speculation to explain the statistical findings; bad science, to be sure.
In the months since its publication, the paper by Eklund, Nichols, and Knutsson [3] has been debated strenuously by the neuroimaging community. While some scientists responded defensively, researchers in this field have, as Murphy herself pointed out, long been alarmed by the potential for statistical abuse. Scientists like Poldrack, Poline, Yarkoni, Nichols, and many others have worked with passion and diligence to do whatever is possible to protect the integrity of the discipline, but scientists share the same foibles as all people: we are biased by our own beliefs and by our desire for recognition. Nothing, and certainly not statistics, can really protect us from this enthusiasm.
The technical issues in the mathematics and statistics used in fMRI data analysis are manifold: assumptions about sphericity, autocorrelation, spatial point-spread functions, random field theory, and other abstruse factors are demonstrably false, yet we use them because they are frequently the best approximations available given our current state of knowledge. To me, however, the most damning observation in the Eklund et al. article was their discovery that a disproportionate number of published “positive” results came from labs using a single piece of code that was later found to contain a numerical error making it more likely to declare results “significant.” For those of us who do this work, it is hard to avoid the conclusion that many researchers actively selected this tool because it was known to produce rosier pictures: a truly remarkable level of self-deception, bordering on the delusional.
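For readers who want to see the flavor of the problem, here is a minimal simulation sketch (an illustration of the general issue, not the code or procedure from any paper cited here): it calibrates a cluster-extent threshold under a null model that wrongly treats voxels as spatially independent, then applies that threshold to spatially smooth null data, where the nominal 5% familywise error rate is badly inflated.

```python
# Minimal sketch: how a wrong spatial-independence assumption inflates
# cluster-level false positives in smooth (fMRI-like) null data.
import numpy as np
from scipy.ndimage import gaussian_filter1d, label

rng = np.random.default_rng(0)
n_voxels, n_sims, smooth_sigma = 1000, 2000, 3.0
z_thresh = 2.3  # a common cluster-forming threshold

def max_cluster_size(z, thresh):
    """Size of the largest suprathreshold cluster in a 1-D map."""
    labeled, n = label(z > thresh)
    if n == 0:
        return 0
    return np.bincount(labeled.ravel())[1:].max()

# Step 1: derive a cluster-size threshold from a WRONG null model
# that assumes voxels are independent (no spatial autocorrelation).
indep_null = np.array([
    max_cluster_size(rng.standard_normal(n_voxels), z_thresh)
    for _ in range(n_sims)
])
k_thresh = np.quantile(indep_null, 0.95)  # 95th-percentile cluster size

# Step 2: apply that threshold to null data that ARE spatially smooth.
def smooth_null():
    z = gaussian_filter1d(rng.standard_normal(n_voxels), smooth_sigma)
    return z / z.std()  # re-standardize to roughly unit variance

fwe = np.mean([
    max_cluster_size(smooth_null(), z_thresh) > k_thresh
    for _ in range(n_sims)
])
print(f"cluster threshold k = {k_thresh:.0f} voxels")
print(f"empirical familywise error: {fwe:.1%} (nominal: 5.0%)")
```

Nonparametric permutation approaches, which Eklund et al. found to control error rates well, avoid this trap by recomputing the null distribution from the data themselves rather than from parametric assumptions.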
I will happily stand by the work of ours that Ms. Murphy cited, almost mockingly [2]. In planning those experiments, we were well aware of the potential for controversy, and we were at great pains to use the most rigorous methods available to us at the time. We did not cherry-pick the tools that gave us the results we preferred. Does this make our conclusions correct? Of course not, but neither does the fact that statistics is a flawed science make us wrong.
In the end, good scientists are highly critical of every instrument, algorithm, measurement, calibration standard, and analysis program they use. It is inexcusable to be too lazy to question one's methods just because the tools are complicated to understand. Of course, it is equally inexcusable for journalists to accept each new observation with wide and unblinking eyes. The fMRI method has protected countless patients in surgery. It has given us a means to communicate with seemingly locked-in subjects, and it has given us crucial insights into the nature of psychiatric and neurological diseases, including schizophrenia, epilepsy, autism, and addiction. And yes, it is flawed.
Mark S. Cohen
References
1. M. S. Cohen, “Functional MRI: A Phrenology for the 1990s?” Journal of Magnetic Resonance Imaging 6: 273–274, 1996.
2. S. Harris, J. T. Kaplan, A. Curiel, S. Y. Bookheimer, M. Iacoboni, and M. S. Cohen, “The neural correlates of religious and nonreligious belief.” PLoS ONE 4(10): e7272, 2009.
3. A. Eklund, T. E. Nichols, and H. Knutsson, “Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates.” Proceedings of the National Academy of Sciences USA 113(28): 7900–7905, 2016. PMCID: PMC4948312.