Science and proving theories



  • @boomzilla said in New Gravity:

    @CoyneTheDup said in New Gravity:

    If your argument is that the statement is imprecise, well so is this...

    No, it's not that it's imprecise. It's that it's fucking gibberish. What is a "high level of significance?" I can practically guarantee you that it's statistical gibberish.

    @CoyneTheDup said in New Gravity:

    If your argument is that the statement is imprecise, well so is this...

    "It also proves that if this quantum jitter exists, it is either much smaller than the Holometer can detect, or is moving in directions the current instrument is not configured to observe."

    That's not gibberish, though. It makes sense! It's actually a defensible statement.

    @boomzilla said in New Gravity:

    What is a "high level of significance?"

    It is high statistical significance.

    In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.[11][12] But if the p-value of an observed effect is less than the significance level, an investigator may conclude that that effect reflects the characteristics of the whole population,[1] thereby rejecting the null hypothesis.[13]

    So "high significance" means "it is very improbable the result is wrong."

    In science, it is hard to know anything with absolute certainty. Generally, we're exploring a black box by poking it with an input and seeing what output we get. We theorize, and poke the box different ways to determine if the box responds in accordance with the theory.

    But if we poke the box once, how do we know the box will respond the same way if we poke it again?

    We don't...we never do. We can only estimate the probability it will behave the same way.

    This is why this statement...

    The fact that the Holometer ruled out his theory to a high level of significance proves that it can probe time and space at previously unimagined scales, Hogan says.

    ...makes sense. They formed a test hypothesis to validate the theory and they poked the box a lot, and now they have a result. How likely is it that the result is in error? Very improbable, which means the result has high significance.

    Which is probably not helpful, so let's put it in the realm of dice, or just one die. We want to know if the die is fair. So first the hypothesis: "This die is fair." Then we roll it 1,000 times in a row...and let's say it comes up 6 every single time.

    Does that prove the hypothesis is wrong and the die is unfair? No. Statistics does not deal in absolutes, only in probability. A truly fair die has a 1-in-6^1000 chance of rolling 1,000 6's in a row: not likely, but possible. If we rolled it 9,999,000 more times, we might find out that the die is perfectly fair.
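
    For a sense of scale, that 1-in-6^1000 figure is too small to even fit in a floating-point number; this quick sketch (my own, in Python) has to work with its logarithm instead:

        import math

        # (1/6)**1000 underflows an ordinary float straight to 0.0,
        # so compute the base-10 logarithm of the probability instead.
        log10_p = -1000 * math.log10(6)
        print(f"P(1,000 sixes | fair die) = 10^{log10_p:.1f}")  # about 10^-778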

    Probability doesn't deal in absolutes like "proven fair" or "proven unfair". It deals only in "how probable?" How probable is it that the die is fair or not?

    The testing method is called a significance test, which first requires a null hypothesis: "This die is fair." Then a test, then an analysis, and only then can we say whether that hypothesis is rejected or not, and the significance of that rejection.

    To clarify low versus high significance, let's say I roll the die just 1 time, and it rolls a six. I say, "It is probable this die is unfair, because it rolled a 6." Well, it should be pretty obvious that isn't very probable at all, is it? A test with a very low sample count yields a result of very low significance. I roll it a second time, get a second 6, and now the probability is a bit higher, but the significance is still low. The significance only becomes high when you do a lot of tests, and there are statistical rules to tell you how many tests are needed for a given significance.
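
    Here is a toy sketch of that sample-count effect (numbers invented for illustration). If every roll comes up six, the p-value under the fair-die hypothesis is simply (1/6)^n, because an all-sixes run is the most extreme outcome possible, and it shrinks rapidly as the number of rolls grows:

        # Significance grows with sample size when every roll is a six.
        # Under the hypothesis "the die is fair", P(n sixes in n rolls) = (1/6)**n.
        for n in (1, 2, 5, 10, 30):
            p_value = (1 / 6) ** n
            verdict = "significant at 0.05" if p_value < 0.05 else "not significant"
            print(f"{n:3d} rolls, all sixes: p = {p_value:.3g}  ({verdict})")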

    So, translating, they're saying they did a lot of tests, and statistically the hypothesis they were testing has been shown to be extremely improbable.

    But not proven, because statistics doesn't deal in proof and neither does science, mostly. That is why this statement disturbs me more:

    It also proves that if this quantum jitter exists, it is either much smaller than the Holometer can detect, or is moving in directions the current instrument is not configured to observe.

    ...because it doesn't prove any such thing. It "makes it highly probable" or "highly significant" but proves nothing.


  • ♿ (Parody)

    @CoyneTheDup said in Science and proving theories:

    "In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.[11][12] But if the p-value of an observed effect is less than the significance level, an investigator may conclude that that effect reflects the characteristics of the whole population,[1] thereby rejecting the null hypothesis.[13]"

    So "high significance" means "it is very improbable the result is wrong."

    Fuck p-values right in their star holes. That's not what a low p-value means.

    @CoyneTheDup said in Science and proving theories:

    ...makes sense. They formed a test hypothesis to validate the theory and they poked the box a lot, and now they have a result. How likely is it that the result is in error? Very improbable, which means the result has high significance.

    Absolute bullshit, which they admit in their next sentence where they say that they've only tested anything down to a particular size.

    @CoyneTheDup said in Science and proving theories:

    So, translating, they're saying they did a lot of tests, and statistically the hypothesis they were testing has been shown to be extremely improbable.

    Yes. Statistical gibberish that needs to die.

    @CoyneTheDup said in Science and proving theories:

    But not proven, because statistics doesn't deal in proof and neither does science, mostly. That is why this statement disturbs me more:

    "It also proves that if this quantum jitter exists, it is either much smaller than the Holometer can detect, or is moving in directions the current instrument is not configured to observe."

    ...because it doesn't prove any such thing. It "makes it highly probable" or "highly significant" but proves nothing.

    I agree; my inner pedant hates the typical use of "prove" or "we know" for similar reasons. But the statistics don't mean what you're interpreting them to mean, either.


  • FoxDev

    @CoyneTheDup said in Science and proving theories:

    It "makes it highly probable" or "highly significant" but proves nothing.

    There are three kinds of lies.

    Lies.

    DAMN LIES

    and

    STATISTICS



  • @accalia said in Science and proving theories:

    STATISTICS

    Statistics are like bikinis.
    What they reveal is suggestive, but what they hide is vital.



  • @boomzilla said in Science and proving theories:

    Absolute bullshit, which they admit in their next sentence where they say that they've only tested anything down to a particular size.

    Oh, I see. You didn't understand that the statements are on different topics.

    It's like the hypothesis is: "Hunter automobiles can go 300 MPH."

    The first statement, the high-significance statement, would be: "We tested a shitload of Hunter automobiles and none of the ones we tested can even get close to 300 MPH. So we can say with 99.999999% certainty that no Hunter can go 300 MPH."

    The second statement was: "And by the way, we proved that we can measure the speed of an automobile within ±0.000001 MPH."

    Related by the experiment, but not on the same topic at all.


  • ♿ (Parody)

    @CoyneTheDup said in Science and proving theories:

    Oh, I see. You didn't understand that the statements are on different topics.

    What?

    @CoyneTheDup said in Science and proving theories:

    Related by the experiment, but not on the same topic at all.

    No, that's completely wrong. They said, "We super duper proved this! We proved it so hard that the most likely way it's not true is if it's not true at scales beyond that which we can measure."

    It's exactly the same topic.



  • @boomzilla Well all I can say is you're entitled to be wrong.


  • ♿ (Parody)

    @CoyneTheDup said in Science and proving theories:

    @boomzilla Well all I can say is you're entitled to be wrong.

    Yes, my privilege knows few bounds, but once again you're talking nonsense.



  • You're wrong, BZ...

    p-Values and significance values are "formally" the same thing (for symmetrical distributions, anyway), but play different roles in different kinds of statistical calculations.

    In particular, you validate a hypothesis by creating a statistical test and model for it, and then calculate the probability that the test fails given that the model is correct. (I.e., you use the probability model to calculate the probability that some experimental variable does something).
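
    A minimal Python sketch of that framing, with an assumed model ("the die is fair") and an assumed test ("call it unfair on 5+ sixes in 10 rolls"), both invented for illustration:

        from math import comb

        def p_at_least(k: int, n: int, p: float = 1 / 6) -> float:
            """Exact P(X >= k) for X ~ Binomial(n, p)."""
            return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

        # Probability the test fires even though the model ("fair die") is correct:
        print(f"P(test rejects | fair die) = {p_at_least(5, 10):.4f}")  # ~0.015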


  • ♿ (Parody)

    @Captain said in Science and proving theories:

    You're wrong, BZ...

    Except that you are agreeing with me here:

    @Captain said in Science and proving theories:

    In particular, you validate a hypothesis by creating a statistical test and model for it, and then calculate the probability that the test fails given that the model is correct.

    Ignoring the general craptitude of reporting p-values, the "significance" is related to their null hypothesis being false. Presumably, it amounts to "there is observable quantization." Well, sure, they weren't able to observe any, which is what the non-gibberish statement says.

