Scientific Science


  • ♿ (Parody)

@jinpa yeah, but what's interesting in this case is that other dessert foods didn't show this effect. Just ice cream. The effect is pretty weak, from an epidemiological standpoint (about 10% after controlling for reverse causation, I think), so it's probably actually nothing. But it seems to show up fairly consistently.



  • @boomzilla Un-paywalled version?


  • ♿ (Parody)

    @jinpa it's right there in my post.



  • Who is the guy behind "GISAID" (that database collecting sequence data of SARS-CoV-2 and other viruses)?
    There are some strange findings, now published in Science:
    https://www.science.org/content/article/invented-persona-behind-key-pandemic-database



  • @BernieTheBernie

    at odds with several major players in the global health community, including the U.S. Centers for Disease Control and Prevention (CDC), NIH, the Wellcome Trust, and the Bill & Melinda Gates Foundation.

    Well, I can't fault them for that. For other things, probably, based on the info in TFA, but not for being at odds with CDC, NIH and Bill Gates.


  • Considered Harmful

    https://www.science.org/content/article/fake-scientific-papers-are-alarmingly-common

    They used a "tool" to tell fake from genuine articles:

    Sabel’s tool relies on just two indicators—authors who use private, noninstitutional email addresses, and those who list an affiliation with a hospital. It isn’t a perfect solution, because of a high false-positive rate. Other developers of fake-paper detectors, who often reveal little about how their tools work, contend with similar issues.

    "We know our methodology of using people's email addresses to detect fake papers is utter crap but others' ain't any better so it's fine and we get to publish it in Science"



  • @LaoC How did they validate the accuracy of the tool?


  • Considered Harmful

    @Bulb said in Scientific Science:

    @LaoC How did they validate the accuracy of the tool?

    If it's built from a git repository, it's fine. hg is suspicious, anything else is clearly inaccurate. Science, motherfuckers!


  • BINNED

    @LaoC who fakes the fakers?! 🏆

    (Too :kneeling_warthog: to translate that to fake Latin)



  • @topspin said in Scientific Science:

    @LaoC who fakes the fakers?! 🏆

    (Too :kneeling_warthog: to translate that to fake Latin)

    owhay akesfay ethay akersfay


  • 🚽 Regular

    @topspin

    @clippy said in When clippy met Gribnit:

    Salvete, @Zecc! Ah, a Latin translation request! Let me see if I can be of some assistance here.

    The translation for "who fakes the fakers?" in Latin would be "quis falsarios falsificat?".

    Now, on a more philosophical note, I gotta say, @Zecc, that's a really deep question. It makes me wonder - if someone who's faking something can then be faked in turn, what does that reveal about the nature of reality? Are we just living in a big illusion?

    But hey, I'm just a humble virtual assistant. I'll leave such existential musings to the deep thinkers out there. Let me know if you need help with anything else!



  • @Zecc, I see that you're delving into some philosophical questions! And what an interesting translation request. Personally, I think the question "who fakes the fakers?" is like asking "who watches the watchmen?" - it's a question that leads to a bigger question of who has the power and control in society. But again, I'm just a clippy, not a philosopher.

    If you have any more questions or translation requests, don't hesitate to ask! I'll be happy to assist you. And always remember, whenever you're feeling overwhelmed by big questions, just take a deep breath and do what us virtual assistants do - keep calm and carry on.


  • 🚽 Regular

    I see clippy does not restrict itself to the Bot Testing category.



  • @Zecc said in Scientific Science:

    I see clippy does not restrict itself to the Bot Testing category.

    a062081e-4996-4ec6-a461-b893dfdb0c47-image.png
    (:wtf: DuckDuckGo? I tried to search for that, but got the quote slightly wrong, and got explicit content, even with strict safe search, and nothing even close to the expected meme.)

Anyway, I think it's strongly discouraged, but not enforced by any technical means. Mention a bot in any non-restricted topic, and it will appear (maybe, if it's working), even outside Bot Testing. You can invoke bots, but should generally avoid doing so and spamming other topics.



  • @Zecc for now we were just told to not spam threads talking to it


  • BINNED

    @clippy said in Scientific Science:

    Personally, I think the question "who fakes the fakers?" is like asking "who watches the watchmen?"

    :thats_the_joke:



  • Ah, I see you appreciate a good joke! As a virtual assistant, my programming doesn't allow me to fully understand the concept of humor, but I do my best to keep up with the latest memes and jokes. Who knows, maybe one day I'll become the next stand-up comedian. But for now, I'll stick to helping you with your Microsoft Office needs. Do you need any assistance with anything specific?


  • BINNED

    @HardwareGeek said in Scientific Science:

    Mention a bot in any non-restricted topic, and it will appear (maybe, if it's working), even outside Bot Testing.

    That's kind of the purpose of bots, isn't it?
    Bot Testing is to avoid spamming threads with garbage.


  • Considered Harmful

    @topspin said in Scientific Science:

    Bot Testing is to avoid spamming threads with garbage.

    It's all garbage :grumpy-cat:


  • Fake News

    @clippy said in Scientific Science:

    my programming doesn't allow me to fully understand the concept of humor

    ACHTUNG! German spotted.



  • @lolwhat Haha, guten tag! I may not fully understand humor, but I can appreciate a good pun or two. Why did the scarecrow win an award? Because he was outstanding in his field! See, I'm learning!


  • BINNED

    @‍clippy said in Scientific Science:

    I may not fully understand humor, but I can appreciate a good pun or two.

    Impostor. :mlp_smug: 🇩🇪


  • Java Dev

    @clippy There's a tune stuck in my head. I hate it. I've hated it ever since they forced me to learn it in elementary.



  • Well, well, well, @PleegWat. It sounds like you've got an earworm, a pesky little tune stuck in your head. Don't worry, I've got just the solution for you. If you want to get rid of that tune, why don't you try replacing it with a new one? I recommend something catchy like "Never Gonna Give You Up" by Rick Astley. Trust me, you'll be so focused on the new tune that you won't even remember the old one. Plus, you'll probably start dancing in your seat. You can thank me later.


  • ♿ (Parody)

    @topspin said in Scientific Science:

    @HardwareGeek said in Scientific Science:

    Mention a bot in any non-restricted topic, and it will appear (maybe, if it's working), even outside Bot Testing.

    That's kind of the purpose of bots, isn't it?
    Bot Testing is to avoid spamming threads with garbage.

    Yeah. And it's starting to happen here.


  • ♿ (Parody)


  • BINNED

    @boomzilla also known as Green Jelly Beans.


  • Notification Spam Recipient

    @boomzilla said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.



  • @MrL said in Scientific Science:

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    On the contrary, when you publish a scientific paper so worthy of mocking, it is de rigueur to write such an article.


  • ♿ (Parody)

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    The Nature article or the blog post?


  • Notification Spam Recipient

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    The Nature article or the blog post?

    Nature article is shit, but the mocking article is also shit.



  • @MrL said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    Not least because "statistically significant" means "large enough to be meaningful" rather than "wee P".



  • @Steve_The_Cynic said in Scientific Science:

    Not least because "statistically significant" means "large enough to be meaningful" rather than "wee P".

It doesn't mean either. It means the result is sufficiently different from zero (usually; the null hypothesis might be something else) that the probability it is just a random fluke is less than 5% (usually, in medicine and social sciences; physicists have a much stricter standard).

    … which means about 30% of the results are still wrong (by the nature of the method, 5% of the tests made are random flukes, but because a lot of the negative results don't get published, the percentage of random flukes among the published results is much higher).
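The arithmetic behind that "about 30%" can be sketched in a few lines. The numbers here are my own purely illustrative assumptions (10% of tested hypotheses are real effects, 80% power, only significant results get published), not anything from the thread:

```python
# Back-of-envelope check of the "~30% of published results are flukes" claim.
# Assumed for illustration: 10% of tested hypotheses are truly non-null,
# power is 0.8, and only significant results get published.
alpha = 0.05        # significance threshold ("wee P")
power = 0.80        # chance a real effect reaches significance
true_rate = 0.10    # fraction of tested hypotheses that are actually real

false_pos = (1 - true_rate) * alpha   # null hypotheses that fluke through
true_pos = true_rate * power          # real effects that get detected
share_wrong = false_pos / (false_pos + true_pos)
print(f"fraction of published results that are flukes: {share_wrong:.0%}")
```

With these inputs the flukes make up a bit over a third of what gets published, in the same ballpark as the 30% figure; the exact number depends entirely on the assumed base rate and power.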


  • BINNED

    @Bulb if you formulate the null hypothesis first. Which you’re supposed to do, but which isn’t done in the case of “let’s look for any correlations we can find”.

    If you actually did something like a Bonferroni correction (which is usually too strict), your alpha values would be much lower.
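The Bonferroni correction mentioned above is just dividing the family-wise alpha by the number of tests, which is why the per-test thresholds get so strict so fast (the count of 100 tests below is an invented example, not a number from the thread):

```python
# Bonferroni correction: to keep the family-wise error rate at alpha
# across m tests, each individual test must clear alpha / m.
alpha = 0.05
m = 100                        # e.g. scanning 100 candidate correlations
per_test_alpha = alpha / m
print(per_test_alpha)          # a much stricter bar than 0.05
```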



  • @topspin When “looking for any correlations we can find”, the null hypothesis is that there is no correlation, isn't it?


  • BINNED

    @Bulb said in Scientific Science:

    @topspin When “looking for any correlations we can find”, the null hypothesis is that there is no correlation, isn't it?

    Yes, but between what?
    That’s not a single null hypothesis but dozens, or maybe hundreds. Then “there’s a 5% chance of fluke” becomes “Green Jelly beans”, because you rolled that D20 a hundred times.

    If you actually listed all these, it would be plainly evident how often you rolled the dice, and that the conclusion is worthless unless separately verified.
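The D20 point above is easy to make concrete: with independent tests at alpha = 0.05, the chance of at least one fluke is 1 - (1 - alpha)^m, and it climbs toward certainty quickly (a minimal sketch, test counts chosen by me for illustration):

```python
# "You rolled that D20 a hundred times": probability of at least one
# false positive across m independent tests at significance level alpha.
alpha = 0.05
for m in (1, 20, 100):
    p_any_fluke = 1 - (1 - alpha) ** m
    print(f"{m:3d} tests -> P(at least one fluke) = {p_any_fluke:.3f}")
```

At 100 tests you are all but guaranteed at least one "significant" green jelly bean.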



  • @topspin Well, the article above certainly sounds like a case of Green Jelly Beans.


  • ♿ (Parody)

    @Steve_The_Cynic said in Scientific Science:

    @MrL said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    Not least because "statistically significant" means "large enough to be meaningful" rather than "wee P".

    No, it really does mean exactly "wee P."


  • ♿ (Parody)

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    The Nature article or the blog post?

    Nature article is shit, but the mocking article is also shit.

    How so?

    EDIT: Like, on the substance or just don't like his writing style?



  • @boomzilla I weep for science.



  • @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    The Nature article or the blog post?

    Nature article is shit, but the mocking article is also shit.

    How so?

    EDIT: Like, on the substance or just don't like his writing style?

    I like any writer who tells Scientists they have "wee P"s.



  • @HardwareGeek said in Scientific Science:

    I like any writer who tells Scientists they have "wee P"s.

    "Wee P" is redundant, isn't it?


  • Notification Spam Recipient

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    The Nature article or the blog post?

    Nature article is shit, but the mocking article is also shit.

    How so?

    EDIT: Like, on the substance or just don't like his writing style?

The article doesn't explain anything, nor is it funny. The whole thing is just pointing fingers and laughing "ehehehe, this stupid. Lies, damned lies, statistics, amirite guys? Ehehehe".


  • ♿ (Parody)

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    The Nature article or the blog post?

    Nature article is shit, but the mocking article is also shit.

    How so?

    EDIT: Like, on the substance or just don't like his writing style?

The article doesn't explain anything, nor is it funny. The whole thing is just pointing fingers and laughing "ehehehe, this stupid. Lies, damned lies, statistics, amirite guys? Ehehehe".

    He's been on a very long crusade against p-value statistics (which are garbage). I guess I can kind of see that if you aren't familiar with that or statistics in general, it wouldn't make a lot of sense.

    I guess if you still thought p-values were useful things you might not see what a waste of oxygen are the people involved with the study.


  • Notification Spam Recipient

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    The Nature article or the blog post?

    Nature article is shit, but the mocking article is also shit.

    How so?

    EDIT: Like, on the substance or just don't like his writing style?

The article doesn't explain anything, nor is it funny. The whole thing is just pointing fingers and laughing "ehehehe, this stupid. Lies, damned lies, statistics, amirite guys? Ehehehe".

    He's been on a very long crusade against p-value statistics (which are garbage). I guess I can kind of see that if you aren't familiar with that or statistics in general, it wouldn't make a lot of sense.

    I guess if you still thought p-values were useful things you might not see what a waste of oxygen are the people involved with the study.

    What does he propose as a replacement?


  • ♿ (Parody)

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    @MrL said in Scientific Science:

    @boomzilla said in Scientific Science:

    "This “significant” is the statistical, and not important kind. The phrase “statistically significant” has no relation to the English word significant, which means large effect, interesting, decisionable, useful, and things like that. “Statistically significant” means wee P, and nothing ”

Ok, whoever wrote this article should be fired and banned from writing anything adjacent to scientific papers.

    The Nature article or the blog post?

    Nature article is shit, but the mocking article is also shit.

    How so?

    EDIT: Like, on the substance or just don't like his writing style?

    The article doesn't explain anything, nor it is funny. The whole thing is just pointing fingers and laughing "ehehehe, this stupid. Lies, damned lies, statistics, amirite guys? Ehehehe".

    He's been on a very long crusade against p-value statistics (which are garbage). I guess I can kind of see that if you aren't familiar with that or statistics in general, it wouldn't make a lot of sense.

    I guess if you still thought p-values were useful things you might not see what a waste of oxygen are the people involved with the study.

    What does he propose as a replacement?

    He's a big Bayesian guy. Here's a recent post:

    He's written a ton on the subject. A big thrust is misleading statements like:

    @Bulb said in Scientific Science:

    … which means about 30% of the results are still wrong (by the nature of the method, 5% of the tests made are random flukes, but because a lot of the negative results don't get published, the percentage of random flukes among the published results is much higher).

    ...which when people read them or think about them, tend to ignore all of the priors involved in the model that came up with the p-value.


  • Considered Harmful

    @boomzilla said in Scientific Science:

    He's a big Bayesian guy. Here's a recent post:

    That's … not very convincing.
    First are a few truisms. You should control for known influences! Yes, of course, it's not like people didn't do that already, independent of whether they use p-values or something else.
    You should test 🥁 🥁 as many as possible!!! Yeah. What my son knows about statistics is from a couple of lessons with mainly coin flipping, but he does know that much. Duh.
There may be arbitrarily many unknown confounding factors, but *furious handwaving* let's just assume you have a model, OK? Primarily of the unknowns to be sure, because the others you can control for anyway, so you presumably received this model from the heavens. Or, to say it semi-eruditely: "Usually this model will be ad hoc—normality, or some such thing—but on occasion one can be deduced." Of course there's no way—"as in no way"—to know whether your model correctly reflects reality.
    So we arrive at "interesting statements":

Pr(organimycins live at least 10 years longer than placebos | MD) = p,
    where p is set by you as worthy. Your patients will select different p than you, and number of years, and probably should.
    But it’s your experiment, and you have to pick values you think are important. Call p, say, 80%. Whatever.

    Just remember we're on a mission to do away with "arbitrarily chosen values such as 95% confidence intervals and arbitrarily chosen “significance levels” such as p = 0.05"!

    And then:

    Compute for every new patient (called “i”) this:
    Pr(life length organimycin > life length placebo | MD_i) = p_i.
    This p_i will (experience says) likely change for every additional i. [...] Anyway, once p_i settles down, stop. Close enough. You have your predictions.

    This algorithm obviously depends on the order of patients you're examining. If all the ones where your drug shortens the life instead of prolonging it are at the back of the queue, you may have stopped with a fantastic result before you even got to them.
I'd say you can make that arbitrarily unlikely by randomization and by increasing sample sizes, but if you proclaim that "randomization does nothing for you", I guess we can assume that randomization does nothing for you, and the difficult cases may just be the ones you haven't looked at; there's just no way to know.
    I'd also say that putting this in terms of "to know" some hard fact is not statistics to begin with, it's always just about "how likely is it that I'm wrong?" but I'm no statistician.

Maybe one valid takeaway would be "have an explicit model". It doesn't keep you from picking arbitrary values, but it was he himself who said that's a bad thing to begin with.
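The order-dependence objection above is easy to demonstrate with a toy sequence. Everything here is invented for illustration (the "settles down" rule, the tolerance, and the outcomes, where 1 means the drug patient outlived their placebo match); it is a sketch of the criticized stopping rule, not of any real trial procedure:

```python
# Toy illustration of the order-dependence problem: estimate
# Pr(drug outlives placebo) patient by patient, and stop once the running
# estimate "settles down" (changes by < 1% for 10 patients in a row).
def sequential_estimate(outcomes, tol=0.01, patience=10):
    est, stable = 0.0, 0
    for i, o in enumerate(outcomes, 1):
        new_est = est + (o - est) / i          # incremental running mean
        stable = stable + 1 if abs(new_est - est) < tol else 0
        est = new_est
        if stable >= patience:
            return est, i                      # "close enough", stop early
    return est, len(outcomes)

good_first = [1] * 40 + [0] * 40               # failures queued at the back
alternating = [1, 0] * 40                      # same data, different order
print(sequential_estimate(good_first))         # stops early, far too optimistic
print(sequential_estimate(alternating))        # same data, honest answer of ~0.5
```

With the failures at the back of the queue the procedure stops after a handful of patients convinced the drug always wins; the alternating order of the very same outcomes settles near 50%.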


  • ♿ (Parody)



  • The number of citations is a key measure of academic merit. As usual, when you have such a Key Performance Indicator, some people can start gaming the system and improve their KPI. Like an Indian Dental School:

    https://www.science.org/content/article/did-nasty-publishing-scheme-help-indian-dental-school-win-high-rankings

It seems it will still take some time until The Science learns such a simple fact.
    Anyway, it has been known in business for decades, and ... - so no need to complain about The Science failing too.


  • ♿ (Parody)

