fMRI Bugs


  • ♿ (Parody)

    Saw this post:

    Which links to the underlying paper:
    http://www.pnas.org/content/113/28/7900.abstract

    Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated using real data. Here, we used resting-state fMRI data from 499 healthy controls to conduct 3 million task group analyses. Using this null data with different experimental designs, we estimate the incidence of significant results. In theory, we should find 5% false positives (for a significance threshold of 5%), but instead we found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of a number of fMRI studies and may have a large impact on the interpretation of weakly significant neuroimaging results.
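
    To put the abstract's numbers in context, here is a minimal sketch, with made-up sizes and pure-noise data, of why a 5% significance threshold should give roughly 5% false positives per test on null data, and why testing many voxels in one analysis without correction makes it nearly certain that something somewhere comes out "significant". This toy does not reproduce the paper's cluster-wise analysis or its 70% figure; it only illustrates the baseline the abstract refers to.

    ```python
    # Minimal sketch on simulated null data (not the paper's pipeline): each
    # voxel's one-sample t-test is false-positive about 5% of the time at
    # p < 0.05, but an uncorrected analysis over many voxels almost always
    # finds at least one "significant" voxel. All sizes are made up.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_analyses = 1000    # toy "group analyses" on pure noise
    n_subjects = 20
    n_voxels = 500       # real brains have on the order of 100k voxels

    per_voxel_hits = 0
    familywise_hits = 0
    for _ in range(n_analyses):
        data = rng.standard_normal((n_subjects, n_voxels))  # null: no effect anywhere
        t, p = stats.ttest_1samp(data, popmean=0.0, axis=0)
        per_voxel_hits += np.count_nonzero(p < 0.05)
        familywise_hits += bool(np.any(p < 0.05))

    print("per-voxel false-positive rate:",
          per_voxel_hits / (n_analyses * n_voxels))   # ~0.05, as expected
    print("analyses with at least one false positive:",
          familywise_hits / n_analyses)               # close to 1.0 uncorrected
    ```

    Multiple-comparison corrections exist precisely to pull that familywise rate back down toward the nominal 5%; how well the common corrections are calibrated on real resting-state data is essentially what the paper measured.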


  • area_pol

    Analysing brain function through the power consumed by its regions (inferred from the volume of blood flow that brings them oxygen) is like trying to understand a computer by looking at it with a thermal camera to see which parts get hot.



  • @Adynathos That method would kind of work though.

    People like to say that "you can't study a CPU and find the part that plays Minesweeper", but the fundamental difference is that the brain has its program "built into the computational units", while a CPU stores its program somewhere else entirely. In fact, you could say the CPU itself just runs one fixed program: "fetch the next instruction from memory and interpret it in a certain way".
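
    As a toy illustration of that last point, here is a minimal sketch with an invented three-instruction "ISA" (nothing like a real CPU) of a fixed fetch-decode-execute loop: all of the interesting behaviour lives in the program data it fetches, not in the loop itself.

    ```python
    # Toy fetch-decode-execute loop: the "CPU" below never changes, and every
    # behaviour comes from the program it fetches from "memory" (a Python list).
    # The three-opcode instruction set is invented purely for this example.
    def run(program):
        regs = {"acc": 0}   # a single accumulator register
        pc = 0              # program counter
        while pc < len(program):
            op, arg = program[pc]          # fetch
            pc += 1
            if op == "LOAD":               # decode + execute
                regs["acc"] = arg
            elif op == "ADD":
                regs["acc"] += arg
            elif op == "PRINT":
                print(regs["acc"])
            else:
                raise ValueError(f"unknown opcode {op!r}")
        return regs

    # The "program" is plain data; the loop above is the whole CPU.
    run([("LOAD", 2), ("ADD", 3), ("PRINT", None)])   # prints 5
    ```

    Reading the loop's source tells you nothing about Minesweeper; that only exists in the instruction stream it is fed.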


  • area_pol

    @anonymous234 You would find the approximate area where the CPU is in the case.
    But it would tell you nothing about the transistors and their arrangement.



  • @Adynathos said in fMRI Bugs:

    @anonymous234 You would find the approximate area where the CPU is in the case.
    But it would tell you nothing about the transistors and their arrangement.

    AIUI, that's what fMRI tries to do: show which parts of the brain are involved in which process. E.g. when someone looks at a picture, the visual cortex "lights up"; when the person hears sounds, the auditory cortex "lights up"; and when the person recognizes something, the memory areas "light up".

    To use the computer analogy, when the computer displays something on the screen, the GPU warms up a little (or a lot, depending); but when the computer sends a file across the network, the network card gets warmer; and when the computer accesses a file, the HDD and its controller become more active.

    That part is fairly consistent, even across different people. It's not exact, but it's better than nothing, and can help guide other research into more specific interactions (a toy sketch of the "lights up" idea follows at the end of this post).

    Supposedly, but apparently maybe not, if the article is true (I haven't read it yet).
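
    To make the "lights up" idea above concrete, here is a crude sketch with simulated numbers: a region counts as task-related if its time series tracks the task on/off pattern. Real packages (SPM, FSL, AFNI) fit a general linear model with a haemodynamic response function rather than a bare correlation, and every signal and name below is invented for illustration.

    ```python
    # Crude sketch of the "lights up" idea: a region is called task-related if
    # its (simulated) time series correlates with the task on/off regressor.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_scans = 200
    task = np.tile([0.0] * 10 + [1.0] * 10, n_scans // 20)   # block design: off/on

    visual_cortex = 0.8 * task + rng.standard_normal(n_scans)   # tracks the task
    unrelated_region = rng.standard_normal(n_scans)              # just noise

    for name, signal in [("visual_cortex", visual_cortex),
                         ("unrelated_region", unrelated_region)]:
        r, p = stats.pearsonr(task, signal)
        # the first should clearly "light up"; the second usually won't
        print(f"{name}: r={r:.2f}, p={p:.3g}")
    ```

    Whether the thresholds used to call such relationships "significant" are well calibrated is exactly what the paper at the top of the thread questions.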


  • :belt_onion:

    E_THOUGHT_NOT_FOUND



  • @djls45 said in fMRI Bugs:

    Supposedly, but apparently maybe not, if the article is true (I haven't read it yet).

    I just read the blog post (it really is quite short), and it looks like the author wasn't trying to say that fMRI results are all wrong, but that framing them in terms of Type I and Type II errors (which is how the research article was written) is probably the wrong way to treat the results of hypothesis testing.

    I continue to think that the false-positive, false-negative thing is a horrible way to look at something like brain activity, which is happening all over the place all the time. The paper discussed above looks like a valuable contribution and I hope people follow up by studying the consequences of these FMRI issues using continuous models.

    The link on the word "horrible" goes to his explanation of why hypothesis tests on fuzzy, continuously varying parameters can't properly be described in terms of "false positives" or "false negatives". He argues that Type S and Type M errors are much more useful in these cases:

    A Type S error is an error of sign. I make a Type S error by claiming with confidence that theta is positive when it is, in fact, negative, or by claiming with confidence that theta is negative when it is, in fact, positive. I think it’s fair to say that classical 2-sided hypothesis testing fits this framework: for example, if our 95% interval for theta is [.1, .3], or if we say that theta.hat = .2 and is statistically significantly different from zero, then our scientific claim is that theta is positive, not simply that it’s nonzero.

    A Type M error is an error of magnitude. I make a Type M error by claiming with confidence that theta is small in magnitude when it is in fact large, or by claiming with confidence that theta is large in magnitude when it is in fact small. The well-known problem of publication bias could lead to systematic Type M errors, with large-magnitude findings more likely to be reported.
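
    Following the definitions quoted above, here is a minimal numeric sketch (the effect size, noise level, and threshold are assumed for illustration, not taken from the paper or the blog post) of how a small true effect measured with lots of noise produces sign errors and exaggerated magnitudes among the estimates that happen to reach significance.

    ```python
    # Minimal sketch of the Type S / Type M idea under assumed numbers: a small
    # true effect (theta = 0.1) estimated with a large standard error (0.5).
    # Among the estimates that come out "significant", count sign errors
    # (Type S) and how exaggerated the magnitudes are on average (Type M).
    import numpy as np

    rng = np.random.default_rng(2)
    theta = 0.1          # small true effect
    se = 0.5             # standard error of each study's estimate (low power)
    n_studies = 100_000

    est = theta + se * rng.standard_normal(n_studies)
    significant = np.abs(est) > 1.96 * se          # two-sided test at 5%

    sig_est = est[significant]
    type_s = np.mean(np.sign(sig_est) != np.sign(theta))
    exaggeration = np.mean(np.abs(sig_est)) / abs(theta)

    print(f"power (share significant): {significant.mean():.3f}")
    print(f"Type S rate among significant results: {type_s:.2f}")
    print(f"Type M (exaggeration ratio) among significant results: {exaggeration:.1f}x")
    ```

    With those assumed numbers, hardly any studies reach significance, but the ones that do overstate the effect by roughly an order of magnitude and get its sign wrong a nontrivial fraction of the time, which is the worry Gelman raises for weakly significant neuroimaging results.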


  • Grade A Premium Asshole

    @boomzilla said in fMRI Bugs:

    Saw this post:

    Which links to the underlying paper:
    http://www.pnas.org/content/113/28/7900.abstract

    Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated using real data. Here, we used resting-state fMRI data from 499 healthy controls to conduct 3 million task group analyses. Using this null data with different experimental designs, we estimate the incidence of significant results. In theory, we should find 5% false positives (for a significance threshold of 5%), but instead we found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of a number of fMRI studies and may have a large impact on the interpretation of weakly significant neuroimaging results.

    MRIs don't work on furries.


  • BINNED

    @Polygeekery
    They just can't have metal in their furrrrrrrrsuit


  • Grade A Premium Asshole

    @Luhmann it's probably best to just put them down. They are furries.


  • BINNED

    @Polygeekery said in fMRI Bugs:

    it's probably best to just put them down

    You can't MRI them afterwards!



  • @Luhmann you can, like the paper above that MRI'd a dead salmon, just don't expect a useful response.


  • BINNED

    @Arantor
    I would guess that possible bullet fragments are a no-no



  • @Arantor said in fMRI Bugs:

    just don't expect a useful response

    They're furries. Why would you expect a useful response, ever?



  • @Arantor said in fMRI Bugs:

    @Luhmann you can, like the paper above that MRI'd a dead salmon, just don't expect a useful ~~response~~ brain activity.

    OT'd that for you...

