Science and Replication


  • ♿ (Parody)

    @sebastian-galczynski and? I mean... a giant multi-billion dollar facility boondoggle is not really in the same realm as professors applying for grant money to do some small study. Anywho, we still send sciency shit into space for billions of dollars.



  • @sebastian-galczynski said in Science and Replication:

    If there's no legal basis to choose A over B, then the bureaucrat exercising this choice is abusing his power.

    That seems a bit odd. If a choice between A and B needs to be made, and there's no law explicitly laying out criteria for choosing between them, and it's considered abuse of power to exercise personal judgment in making that choice, then how does the choice get made?


  • Discourse touched me in a no-no place

    @boomzilla The SSC had a really big problem with cost controls, even by comparison with the CERN upgrade (to make the LHC). It needed to be stopped.


  • Discourse touched me in a no-no place

    @Mason_Wheeler said in Science and Replication:

    That seems a bit odd. If a choice between A and B needs to be made, and there's no law explicitly laying out criteria for choosing between them, and it's considered abuse of power to exercise personal judgment in making that choice, then how does the choice get made?

    A politician gets put on the committee, which allows the committee to use their gut feelings, golf scores, and slick presentations from vendors as bases for decision making.


  • ♿ (Parody)

    @dkf said in Science and Replication:

    @boomzilla The SSC had a really big problem with cost controls, even by comparison with the CERN upgrade (to make the LHC). It needed to be stopped.

    Indeed.



  • @dkf said in Science and Replication:

    @boomzilla The SSC had a really big problem with cost controls... It needed to be stopped.

    So, exactly the same as every government program, ever.


  • Discourse touched me in a no-no place

    @HardwareGeek said in Science and Replication:

    So, exactly the same as every government program, ever.

    Why pay a low price when you can pay a high one?


  • Trolleybus Mechanic

    @Mason_Wheeler said in Science and Replication:

    then how does the choice get made?

    They make an explicit list of rules. Usually it's simple (the cheapest offer wins), but often there are more points for e.g. having produced a similar system in the last 5 years, having a longer warranty, etc.* That's for procuring. When it comes to permits, licences, etc., it's almost always the case that they must be issued if some publicly accessible criteria are met. That's the general principle.

    *or, in case there's some foul play, irrelevant things like 'having a main office in a blue building', still totally objective and in theory possible for anyone to meet.

    In general bid-rigging has become an art and a very useful skill. I learned a bit myself when we had to procure some very specific piece of equipment (only one model from one vendor made sense due to technical considerations) at the uni. You must invent some innocent-sounding generic criteria so that only one thing matches, otherwise you either buy the wrong (but cheaper) thing, or some inspection will make a stink, possibly even throwing you in jail.


  • Fake News

    @sebastian-galczynski said in Science and Replication:

    @Mason_Wheeler said in Science and Replication:

    then how does the choice get made?

    They make an explicit list of rules. Usually it's simple (the cheapest offer wins), but often there are more points for e.g. having produced a similar system in the last 5 years, having a longer warranty, etc.* That's for procuring. When it comes to permits, licences, etc., it's almost always the case that they must be issued if some publicly accessible criteria are met. That's the general principle.

    *or, in case there's some foul play, irrelevant things like 'having a main office in a blue building', still totally objective and in theory possible for anyone to meet.

    In general bid-rigging has become an art and a very useful skill. I learned a bit myself when we had to procure some very specific piece of equipment (only one model from one vendor made sense due to technical considerations) at the uni. You must invent some innocent-sounding generic criteria so that only one thing matches, otherwise you either buy the wrong (but cheaper) thing, or some inspection will make a stink, possibly even throwing you in jail.

    So... it's like marketing, but in reverse.


    Filed under: I Am Not A Marketeer



  • Sorry for the re-railing attempt, but any thoughts about the proposed mitigation measures? Generally they seem promising to me, but I'd appreciate some opinions from people who are more familiar with the academic milieu.


  • Discourse touched me in a no-no place

    @ixvedeusi said in Science and Replication:

    Sorry for the re-railing attempt, but any thoughts about the proposed mitigation measures?

    Let's list them:

    1. Earmark 60% of funding for registered reports
    2. Earmark 10% of funding for replications
    3. Earmark 1% of funding for progress studies
    4. Increase sample sizes and lower the significance threshold to .005
    5. Ignore citation counts
    6. Open data
    7. Financial incentives for universities and journals to police fraud
    8. Do away with the journal system altogether
    9. Have authors bet on replication of their research
    10. Just stop citing bad research
    11. Read the papers you cite
    12. When doing peer review, reject claims that are likely to be false
    13. Stop assuming good faith

    Points #2, #6, #7, #10, #11 and #12 I strongly agree with (though not necessarily the exact figure quoted in #2); many of them apply to other disciplines as well. For points #1 and #3, I can't really comment as the exact nature of what is published varies so much by discipline. #4 is nice if you can manage it, but it's just not going to be possible in everything; the devil is in the details there. #9 is… well, frankly, already available to them right now; requiring them to do it is unlikely to fly unless you have the funding body as the bookie, and that's a rotten perverse incentive right there.
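    To put a rough number on the #4 trade-off: dropping the significance threshold from .05 to .005 costs you a substantially larger sample for the same statistical power. A minimal sketch, assuming a two-sided two-sample t-test, 80% power and an illustrative effect size of 0.3 (the numbers are illustrative choices, not from TFA; this is just a standard power calculation with statsmodels):

        # Sketch: sample size per group needed at alpha = 0.05 vs 0.005
        # (two-sample t-test, 80% power, illustrative effect size of 0.3)
        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        for alpha in (0.05, 0.005):
            n = analysis.solve_power(effect_size=0.3, alpha=alpha, power=0.8,
                                     alternative='two-sided')
            print(f"alpha = {alpha}: ~{n:.0f} subjects per group")
        # prints roughly 175 per group at alpha = 0.05 and just under 300 at alpha = 0.005,
        # i.e. roughly 1.7x the data for the same power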

    Points #5 and #8 come under the heading of “so what do you replace it with that isn't just as bad?” There might be other ways to fix things, such as not just counting publications but also weighting them by something, like an impact factor. It changes the nature of the publication game so that at least it isn't all about how many papers can be crapped out per minute. Any system will still have some perverse incentives though.

    As for #13… I'm not quite sure how to comment on it. 😆



  • @sebastian-galczynski said in Science and Replication:

    @Mason_Wheeler said in Science and Replication:

    then how does the choice get made?

    They make an explicit list of rules. Usually it's simple (the cheapest offer wins), but often there are more points for e. g. having produced a similar system in last 5 years, having longer warranty etc*. That's for procuring.

    Indeed. We're currently putting a new WLAN controller out to tender, where one of the criteria is: "Compatible with existing access points of type Foo"

    Three years back, when we were searching for a new company to support us regarding IT, we were able to rule out one bid because - while the bid was considerably lower than everyone else's - it was a one-man show from a town 300 km away.


  • ♿ (Parody)

    @dkf said in Science and Replication:

    For points #1 and #3, I can't really comment as the exact nature of what is published varies so much by discipline.

    Not sure what #3 is but for #1 (registered reports) I believe it means that before you do your study you tell some central authority / bookkeeping entity what you're going to do in your study and what you plan to test. This prevents the garden of forking paths, or fishing for correlations. It's something desperately needed in a lot of fields of study.
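    To make the "fishing for correlations" point concrete, here is a hedged toy simulation (not from TFA; plain numpy/scipy on pure noise). With 20 unrelated outcome measures and no pre-registered hypothesis, most "studies" can report at least one p < 0.05 correlation even though there is nothing to find:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        n_subjects, n_outcomes, n_studies = 30, 20, 5000

        lucky_studies = 0
        for _ in range(n_studies):
            predictor = rng.normal(size=n_subjects)               # has no real effect on anything
            outcomes = rng.normal(size=(n_subjects, n_outcomes))  # 20 unrelated outcome measures
            pvalues = [stats.pearsonr(predictor, outcomes[:, k])[1] for k in range(n_outcomes)]
            if min(pvalues) < 0.05:                               # report whichever one "worked"
                lucky_studies += 1

        print(f"'Significant' finding in {lucky_studies / n_studies:.0%} of pure-noise studies")
        # ~64%, close to the analytic 1 - 0.95**20; a registered report has to commit to one
        # outcome in advance, which brings this back down to ~5%.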



  • @boomzilla and the proposal is to accept papers at the registration stage, not the results stage. Conditional on them following the registered path, of course.

    In the social sciences especially, I think this would help a lot by itself.



  • @boomzilla said in Science and Replication:

    @dkf said in Science and Replication:

    For points #1 and #3, I can't really comment as the exact nature of what is published varies so much by discipline.

    Not sure what #3 is but for #1 (registered reports) I believe it means that before you do your study you tell some central authority / bookkeeping entity what you're going to do in your study and what you plan to test. This prevents the garden of forking paths, or fishing for correlations. It's something desperately needed in a lot of fields of study.

    What's stopping anybody from doing the experiments, fishing for correlations, registering it, waiting a year and then publishing the results?

    … because that's precisely what generally happens with all those research and development grants that want to know what you are going to produce in advance – you start when you already have it.


  • ♿ (Parody)

    @Bulb said in Science and Replication:

    @boomzilla said in Science and Replication:

    @dkf said in Science and Replication:

    For points #1 and #3, I can't really comment as the exact nature of what is published varies so much by discipline.

    Not sure what #3 is but for #1 (registered reports) I believe it means that before you do your study you tell some central authority / bookkeeping entity what you're going to do in your study and what you plan to test. This prevents the garden of forking paths, or fishing for correlations. It's something desperately needed in a lot of fields of study.

    What's stopping anybody from doing the experiments, fishing for correlations, registering it, waiting a year and then publishing the results?

    Money, presumably. I guess it's likely that some will try to game the system. No system is perfect and cheaters and criminals will always be with us.

    Perhaps what's really needed is for journals to publish negative results.



  • @boomzilla Publishing negative results is indeed the only way anyone could ever not do that. See, if you say in advance you'll be looking for this and then don't find it, you still need to publish something to account for it to the grant agency. So either you can publish negative results, or you have to register and request the grant only after you actually have the results anyway.

    It already works that way for other grant-funded endeavours at this time. Our company did some such projects (though I wasn't part of them).

    I think the key part is reputation. Do consistently decent work that reproduces and you'll have an easier time getting funding; do work that rarely reproduces and you'll have a harder time – and commit fraud and you're basically done. It is basically a variation on #9, but I am not letting anyone place a bet or choose how big it is, I am simply saying the replication history has an effect on further funding.

    And then I think something needs to be done with the citation plague. Citing very bad works should also lose you reputation, which should hopefully make people only cite works they actually reviewed and have some basic trust in.

    But at the end of the day no “objective” metric is going to work. The reputation needs to somehow involve the opinion of people who actually understand the field. But setting that up so it keeps people honest is of course hard.


  • Discourse touched me in a no-no place

    @Bulb said in Science and Replication:

    … because that's precisely what generally happens with all those research and development grants that want to know what you are going to produce in advance – you start when you already have it.

    That works fine for development. It's rotten for research. The former is for where you've got a result and want to do something with it (e.g., taking it towards being a product or service), but the latter is for where you're trying to find out something basic and you really don't know what's there.

    Publication of negative and null results is important, and too damn rare. And according to my mother (who at least trained as a psychologist, long ago), doing experiments and collecting data in this general area is extremely difficult. Often studies are small precisely because it is so insanely difficult to collect large amounts of data. The biggest breakthrough in recent years has been to use social media and online polling; the results obtained aren't great, but they are much, much better because the sample sizes are so much larger, a real methodological revolution.



  • @dkf said in Science and Replication:

    13. Stop assuming good faith

    As for #13… I'm not quite sure how to comment on it. 😆

    Actually, above the easy quip, I think there is something fundamental about research and publications here. Given the amount of stuff that's published, and how the very nature of one piece of research is to build on all the other research done before (which is why you're supposed to cite other research in your paper!), if you don't assume that the paper you're reading was written in good faith, then you can equally well not read it at all. Because to check that assumption you'll have not only to read the paper in minute detail (i.e. not only follow it, but analyse each statement to see if it isn't subtly misleading), which takes time, but you will also have to read all the research quoted to see whether it really says what the author says, and apply that recursively. Basically, you have to reinvent the whole field the paper is in before you can read it. Which is simply not feasible. And at that point you don't even know if they've cheated with their numbers, and there is simply no way to check that one.

    In theory, this is advice that applies more to reviewers (because their role is to be the gatekeepers), but even ignoring the fact that they don't do it, if they tried they would have the same issue. They are (or should be) probably more stringent than a regular reader, but ultimately, I don't think you can avoid starting from the assumption that people are not lying to you.

    10. Just stop citing bad research

    That's easier said than done. While there are some truly awful papers from which there is nothing to keep, even a paper that has a faulty result, or a faulty analysis, might still have value for its methodology, or for some intermediate result, or even maybe just for the literature review they did at the start! (It's fairly common in my field to cite a paper saying "for a more complete overview of [technique I'm going to use and that would take pages to describe], see e.g. Smith & Smith, 2015", just because Smith and Smith happened to have a nice write-up of the technique in that paper, regardless of how they use it afterwards.)

    Overall I think there is an underlying assumption about citations that's never explicitly laid out, nor challenged (in TFA and elsewhere), which is that you should cite a lot of stuff in your papers. Try writing a paper where you only cite a couple of references and see how it goes when you have it reviewed even just by your colleagues (never mind submitting it to a journal, which is when the reviewers will ask you to cite them...). Part of that is normal and good (making sure you build the track that theoretically allows someone to rebuild the whole knowledge on which your paper relies), but it's become a cargo cult more than anything. And the fact that things like citation indexes exist only exacerbates this issue, since adding a citation doesn't cost you anything but helps the author you're citing. So because we all assume that we need at least one page of references and that anything less is not serious, we naturally all tend to pad references. Which means we'll grab anything, good quality or not -- even when we know it's a bad paper, it'll be cited because it's one more reference in the list!

    Getting rid of that fundamental assumption is probably very hard since that's what science relies on, but doing so would almost immediately remove any incentive to cite bad research, to not read the papers you cite and so on.


  • Discourse touched me in a no-no place

    @remi said in Science and Replication:

    Try writing a paper where you only cite a couple of references and see how it goes when you have it reviewed even just by your colleagues

    I've reviewed quite a few of that sort of paper. My review did ask “have you been in contact with the rest of the world at all while you were conducting this research?”

    I think I must've been Reviewer #2. 😆



  • @remi said in Science and Replication:

    if you don't assume that the paper you're reading was written in good faith, then you can equally well not read it at all.

    I think the idea is "read carefully and beware of things that look fishy" rather than "don't trust anything you read".



  • @Zerosquare I agree with that one, but to me that's the same thing as #11 (read papers you cite). If you don't read a paper carefully and with a critical eye, then you might as well not read it at all (and just skim the abstract).

    But phrasing it as "don't assume good faith" breaks down at the most basic level. If you don't assume some level of good faith from the author, how do you know they've even performed the experiment they're claiming they have, and haven't just made up everything? That's simply an unworkable principle.



  • They should probably have said "Don't always assume good faith".



  • @remi said in Science and Replication:

    just skim the abstract

    TFA alleges that many authors appear to not even do that.



  • @HardwareGeek Yes, which is why #11 (read papers you cite) is actually a useful recommendation.

    (edit: and TFA is right on that, I can confirm it. Not that, uh, I, uh, would ever have, um... 👀 💦 :seye:)

    Though re-reading what I said... OK, I get that you could possibly construe #11 (read papers you cite) as "at least skim the abstract" and #13 (don't assume good faith) as "actually, read them (i.e. with a critical eye)." But that would have been a weird way to say things.



  • @remi said in Science and Replication:

    If you don't assume some level of good faith from the author, how do you know they've even performed the experiment they're claiming they have, and haven't just made up everything? That's simply an unworkable principle.

    Rejecting the principle of good faith is one of the mechanisms flat-earthers use to defend their position "scientifically": assume that any evidence that refutes their position is faked; if a test isn't something they can do themselves in their basement then it's not something they can trust anyone else to be honest about doing.



  • @Watson said in Science and Replication:

    insert other anti-science cult to taste

    No. No desire to insert, be inserted, or taste. Way, way, way, way too far on the wrong side of the hot/crazy graph.



  • @remi said in Science and Replication:

    @Zerosquare I agree with that one, but to me that's the same thing as #11 (read papers you cite). If you don't read a paper carefully and with a critical eye, then you might as well not read them (and just skim the abstract).

    But phrasing it as "don't assume good faith" breaks down at the most basic level. If you don't assume some level of good faith from the author, how do you know they've even performed the experiment they're claiming they have, and haven't just made up everything? That's simply an unworkable principle.

    I think we should generally assume good faith, but it is important to stop trusting authors and institutions, definitely and irrevocably, when evidence of a lack of such good faith on their part is uncovered. The consequences of straight-out fraud (rather than just a shoddy job, which is still basically in good faith) need to be dire.


  • kills Dumbledore

    @Mason_Wheeler said in Science and Replication:

    @sebastian-galczynski said in Science and Replication:

    If there's no legal basis to choose A over B, then the bureaucrat exercising this choice is abusing his power.

    That seems a bit odd. If a choice between A and B needs to be made, and there's no law explicitly laying out criteria for choosing between them, and it's considered abuse of power to exercise personal judgment in making that choice, then how does the choice get made?

    D20



  • @Bulb said in Science and Replication:

    I think we should generally assume good faith, but it is important to stop trusting authors and institutions, definitely and irrevocably, when evidence of a lack of such good faith on their part is uncovered. The consequences of straight-out fraud (rather than just a shoddy job, which is still basically in good faith) need to be dire.

    That one I fully agree with. And I think it should be in part handled at the publications/review level: there should be some sort of repository of retracted/corrected papers, or proven instances of fraud, or all those things, and when someone submits a paper their status in that repository should be given to the reviewers, who would then (in an ideal world...) check the paper more thoroughly, maybe ask for more details about the data itself and how it was gathered rather than just assuming it's OK, and so on. This way when you read a paper (i.e. one that has successfully passed this enhanced review) you can still assume some degree of good faith, but known offenders are treated more stringently to try and ensure that this is not misplaced trust.

    Think of it as some sort of criminal record. If you just done goofed once, a long time ago, it shouldn't matter much, but if you're a serial offender, that would definitely affect your future prospects.

    Obviously that's just wishful thinking, but one can always dream...?
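    For what it's worth, the most naive version of that register could look something like the sketch below. This is entirely hypothetical (the field names, categories and ORCID-as-author-id choice are invented for illustration, not an existing system), but it makes the "criminal record for papers" idea concrete: the register only records what happened, and the judgement about severity stays with the reviewer.

        # Hypothetical sketch of a "scientific record" register and a reviewer-side lookup.
        # Nothing here corresponds to an existing system; names and categories are invented.
        from dataclasses import dataclass
        from datetime import date
        from enum import Enum

        class EntryKind(Enum):
            CORRECTION = "correction"        # minor fix, paper still stands
            RETRACTION = "retraction"        # paper withdrawn (honest error or otherwise)
            FRAUD_FINDING = "fraud_finding"  # misconduct established by an investigation

        @dataclass
        class RegisterEntry:
            author_id: str     # e.g. an ORCID
            paper_doi: str
            kind: EntryKind
            recorded_on: date
            summary: str       # free-text reason, so reviewers can judge severity themselves

        def review_flags(register: list[RegisterEntry], author_id: str) -> list[RegisterEntry]:
            """Return everything on record for an author; filtering is left to the reviewer."""
            return [e for e in register if e.author_id == author_id]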


  • Discourse touched me in a no-no place

    @remi said in Science and Replication:

    there should be some sort of repository of retracted/corrected papers

    I've heard of people retracting retractions. 🤷♂ To my mind, that's just attempting to game the system; they should submit a new paper instead, and it needs to be substantively different to the old one.



  • @dkf Here's a question: How do you tell the difference between a retraction where the author(s) found a legitimate problem with the research, and a retraction where the paper stepped on the wrong people's toes and they pressured the author into taking it back?


  • Trolleybus Mechanic

    @Rhywden said in Science and Replication:

    Indeed. We're currently putting a new WLAN controller out to tender, where one of the criteria is: "Compatible with existing access points of type Foo"

    AFAIK the relevant bureaucrat inside the university was very wary of these kinds of criteria, because mentioning specific companies would somehow attract the auditors, somewhat like blood attracts sharks.

    Three years back, when we were searching for a new company to support us regarding IT, we were able to rule out one bid because - while the bid was considerably lower than everyone else's - it was a one-man show from a town 300 km away.

    Oh, these f***rs can delay every project by years. They send some impossibly cheap offer, and then they sue if you reject it. If they win, they hire the more expensive bidders as subcontractors, pay them for a short period of time and go bankrupt (they're LLCs, so the CEO's salary is pocketed already), causing a wave of bankruptcies and halting the project (and many others). That happened with road construction in Poland a lot.



  • @sebastian-galczynski That's why I'm rather glad that our supervising agency has explicitly stated that price is not the sole determining factor, and that this policy has also been upheld by the courts.

    Mind, you need to have a good reason as to why you're telling a cheaper bidder to take a hike but it is indeed possible.



  • @remi said in Science and Replication:

    there should be some sort of repository of retracted/corrected papers, or proven instances of fraud

    Those are quite different things. When the author retracts a paper it actually shows they have some self-awareness and ability to admit error, so that shouldn't be a problem (though yes, you probably want to do a more careful review). Fraud, on the other hand, is in a completely different league. Convicted fraudsters should at least be banished from the scientific community.



  • @Bulb They are different things, but they should be in the same place (in my ideal-world scheme), so that a reviewer can see everything at the same time. Kind of like theft and murder are different crimes, but they are both in your criminal record.



  • @Jaloopa said in Science and Replication:

    @Mason_Wheeler said in Science and Replication:

    @sebastian-galczynski said in Science and Replication:

    If there's no legal basis to choose A over B, then the bureaucrat exercising this choice is abusing his power.

    That seems a bit odd. If a choice between A and B needs to be made, and there's no law explicitly laying out criteria for choosing between them, and it's considered abuse of power to exercise personal judgment in making that choice, then how does the choice get made?

    D20

    I tried to read that as D2O instead of D20, and wondered what deuterium oxide had to do with making a decision. :facepalm:


  • Trolleybus Mechanic

    @remi said in Science and Replication:

    Kind of like theft and murder are different crimes, but they are both in your criminal record.

    No, this is like comparing theft to accidentally taking someone's pen or lighter. Errors happen. Do you think, for example, that the guy who reported the faster-than-light neutrinos at CERN (which were, AFAIK, due to a bad cable) should face some consequences? He did the right thing - he saw a weird result, couldn't find any reasonable cause, so he reported it and then more people started investigating and found the truth.



  • @sebastian-galczynski Well, call it parking fines and murder then (it's still not a perfect comparison, but just be glad it's not a car analogy...).

    The point is not to have an honest error punished, but to have a single repository of all recognised errors, and to let reviewers access it to judge how thorough they should be when evaluating a new paper.

    Because as soon as you start not putting in some errors on the basis that it wasn't the author's fault, you're opening the door to not putting in that other error because the author got misled (um, maybe they should not have trusted that other source so easily?), or not putting in the error because the author got pressured into it by someone else (was it really pressure, or is someone trying to salvage a scientist by saying it's all the fault of a scapegoat?), or not putting in the error unless there has been a lengthy trial by peers, with an appeal because of course you need an appeal, and basically at that point you have just replicated the whole current broken system.

    A system where you filter which errors are put in becomes a system where just being in is a permanent sin. A system that just reports all errors is simply a database. And remember the reviewers wouldn't outright reject a paper from someone with an entry in that database, they would just check it a bit more strenuously, e.g. checking more carefully the bits that caused those entries (the starting point of this whole idea was a discussion on "don't assume good faith").

    Besides, I would argue that yes, someone who reported a false result should face some consequences. Because if he were to do that several times, for one thing that might say something about his ethics (maybe he is a bit too fast to publish without properly checking things first?), and for another regardless of whether it's an honest mistake or not, it's still a mistaken paper that got published, which is bad for science overall (I'm not saying that retracting a paper is bad, I'm saying that publishing a paper that's later retracted is bad -- less than not retracting it, but more than not having published it).

    It might be an honest mistake, and I wouldn't want him to suffer for it the first time that happened, but it would be a different thing for a repeat offense, and how could you know it's a repeat offense if it's not logged from the first time? (think of the first one as a warning: no immediate consequences, except that if the same thing happens again, there may be consequences)


  • Trolleybus Mechanic

    @remi
    I don't object to a retraction register, I just wouldn't lump together retractions due to error with deliberate fraud. These are different things, and lumping them together will disincentivise people from correcting their own mistakes. In fact I would say that we have a spectrum of things here:

    • error due to 'force of nature' - an optical cable slipped and introduced a constant timing error, and nobody could find it in a reasonable timeframe. This should not tarnish anyone's reputation.
    • error due to sloppiness (this is probably what you have in mind) - for example, easy-to-spot mistakes in calculations
    • biased reporting - hiding the attempts that failed to find a correlation, then publishing the one that succeeded with p < 0.05 and not mentioning the others. This one is mostly on journal editors, not individual researchers, but it's probably the most important.
    • outright fraud (faking experimental results)

    We definitely should punish the last one, and not the first one. Where exactly to draw the boundary and how to prove malicious intent is of course debatable.



  • @sebastian-galczynski said in Science and Replication:

    We definitely should punish the last one, and not the first one. Where exactly to draw the boundary and how to prove malicious intent is of course debatable.

    I agree on the principle (and of course as always the devil is in the details, though that's actually part of my point, see below...), but I still think every known instance of someone's slipping should be acknowledged, and I would still advocate for all of them to be in the same register.

    Again, the main point is not that being in that register would be a black mark against your name that would prevent you from publishing, but just a warning light to reviewers. So it doesn't really matter if the same register contains honest mistakes as well as grievous crimes, as long as a reviewer is smart enough to tell the two apart (and if they can't, then there is a more serious problem with the review than that...). And even the most serious offenses wouldn't prevent you from publishing, it would just make the reviewers more cautious when checking your claims (see up-thread, "assuming good faith"). So even if you are "wrongly" put in that register (i.e. for something that really isn't your fault), that wouldn't prevent you from publishing, it would just maybe add a couple of points in the review (amongst the billions of tiny things that some reviewers ask you to change).

    Remember that the goal of this register is not to punish faulty scientists (although that would certainly help), but to improve the quality of published works overall, by helping the reviewers focus on the cases where they should not "assume good faith" and thus, by contrast, allowing them to be a bit less strenuous for papers where they can do so (and e.g. check the maths a bit less thoroughly because the author is known to do a good job). Reviewers already do that in their tiny field where they know everyone, they instinctively know that such-and-such is prone to going a bit too fast and that such-and-such is always very thorough, so the register would only help make this a bit more official and widespread.

    I'll repeat what I've said, but for me the main reason to have everything in that register is that any sort of filtering process to control what's going in would be subject to bias and almost certainly would end up corrupting the usefulness of the register itself. Take your first two categories: who will decide whether an error was due to sloppiness (not being careful enough in checking the result of a calculation) or to "force of nature" (a cable that slipped but wasn't detectable before the whole machine was taken apart for maintenance months later)? Maybe that machine should have been checked more regularly and wasn't, so the "force of nature" really is sloppiness? Maybe that calculation was wrong because of a subtle bug hidden in a scientific software so the sloppiness really is "force of nature"? The only way to avoid all those questions is to have everything, without filter, in the register.

    And again, because of all those details, it's impossible to know how much the scientist is at fault if you look at a single incident, but having a full register makes it possible (for a reviewer) to spot patterns. If a scientist has to systematically correct his results a few months afterwards because of "force of nature" errors, well for one thing maybe it's sloppiness, and for another even if it isn't, maybe a reviewer should tell that scientist to wait a few months to publish, just to avoid issues with the apparatus. It's not doing a service to science to accept a publication that is likely to have to be corrected later even if it's not the fault of the author -- arguably at that point, accepting it becomes sloppiness not from the authors, but from the reviewer!


  • ♿ (Parody)

    @sebastian-galczynski said in Science and Replication:

    I don't object to a retraction register, I just wouldn't lump together retractions due to error with deliberate fraud. These are different things, and lumping them together will disincentivise people from correcting their own mistakes.

    Of course, if the same guy keeps showing up in the error column...maybe he should find a new job.


  • Banned

    @error_bot xkcd 2533

