COVID-19 CovidSim Model



  • It's pretty scary to know that the world's medical systems run on something far less powerful at simulating illness than Dwarf Fortress


  • Considered Harmful

    @Benjamin-Hall said in COVID-19 CovidSim Model:

    @Rhywden said in COVID-19 CovidSim Model:

    @Benjamin-Hall Yeah, insulting people will make them trust more. Just look at Trump - so much trust!

    Insulting people? No. Insulting their work? You've never been in academia, have you? That's the norm there. Anyone offended by that can't have made it that far.

    Is calling an author "Mr smarty pants" really the norm in the kind of academia you come from?



  • @topspin said in COVID-19 CovidSim Model:

    Then there’s random number implementations which usually are not reproducible either. Take C++ <random> for example: I think the random generators are completely defined and thus portable, but the distributions are only defined up to, well, which distribution to generate (or was it the other way round?).

    Yes, you got it right. I had that exact issue some time ago. std::mt19937 returns the same sequence on all systems (well, on both systems that we're using, at least...), but e.g. std::uniform_int_distribution returns different sequences (even when used with the same RNG). In the end, in my case, I settled on "output is reproducible when rerun on the same machine", which is good enough for my use case. I'm also trying not to break reproducibility when changing the code, but of course that can only go so far (at least when it happens I put it in the release notes)...
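    (For reference, the split is easy to demonstrate: the standard pins down the engine exactly, down to mandating that the 10,000th value of a default-seeded std::mt19937 is 4123659995, while the distributions are only specified statistically. A minimal sketch, assuming any conforming C++11 compiler:)

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <random>

    int main() {
        // The engine is fully specified: the standard mandates that the
        // 10000th value of a default-constructed std::mt19937 is
        // 4123659995 on every conforming implementation.
        std::mt19937 gen;  // default seed 5489
        std::uint32_t v = 0;
        for (int i = 0; i < 10000; ++i) v = gen();
        assert(v == 4123659995u);

        // The distributions are not: which value in [1, 6] comes out first
        // may differ between libstdc++, libc++, and MSVC's library.
        std::mt19937 gen2;
        std::uniform_int_distribution<int> die(1, 6);
        int first = die(gen2);
        assert(first >= 1 && first <= 6);  // only the range is guaranteed
        return 0;
    }
    ```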


  • BINNED

    @xaade said in COVID-19 CovidSim Model:

    @topspin said in COVID-19 CovidSim Model:

    Unless you enable strict math, even release vs. debug runs of floating point math might not create the exact same binary result in the unit in the last place of precision.

    Ok, when it's off by half the value on a 5 digit precision value, then I'll maybe give you this point. Maybe.

    After which I'll immediately distrust all computations ever made.

    That's not what the post was about. The argument was

    the authors above only guarantee reproducibility on a single machine/compiler setup

    This sounds like an academic curiosity rather than something that would have any value when reality hits.

    meaning it is not "reproducible" if it is off in the 15th decimal, either. That happens very easily, e.g. if you enable -ffast-math the compiler will use associativity rules (which work for real numbers but not floating point) to transform this:

    >>> pi*pi*pi*pi
    97.40909103400242
    >>> (pi*pi)*(pi*pi)
    97.40909103400243
    

    So if you disregard it as obviously wrong just because it's only guaranteed reproducible for the exact same setup, you're being unreasonably strict in your requirements.

    @xaade said in COVID-19 CovidSim Model:

    @cvi said in COVID-19 CovidSim Model:

    I guess I'm having my "somebody is wrong on the internet" when people pick on issues that don't really matter and then believe they've just invalidated something that they are (I'm relatively sure) completely clueless about. (Fuck, I know I don't know anything about simulating pandemics, either.)

    Maybe so.

    I DO know that if this was banking software, there would be lawsuits.

    But hey, what's a ruined economy?

    I mean, we're only just complaining about it being completely off, by an entire order of magnitude, on a second run, when it's not even correct when it's run without the bug.

    The first part is just an absurd comparison; for the second part I'm not sure if you got the point about Monte Carlo simulations. If I simulate throwing a die it might even be off by six times "the magnitude of the second run"; that's just how dice work. Yet a reasonable ensemble will reliably give me a mean of 3.5, a standard deviation of ~1.7, and an approximately uniform distribution, exactly like I expect.

    I haven't even taken a look at the program in question, so I'm not defending it. Just pointing out these arguments against it are weak.
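    (The dice point fits in a few lines; a sketch, assuming C++11: individual rolls and individual seeded runs vary, but the ensemble statistics land where expected.)

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <random>

    int main() {
        const int rolls = 1000000;
        std::mt19937 gen(42);  // arbitrary seed; a different seed gives a
                               // different sequence but the same statistics
        std::uniform_int_distribution<int> die(1, 6);

        double sum = 0.0, sumSq = 0.0;
        for (int i = 0; i < rolls; ++i) {
            int r = die(gen);
            sum += r;
            sumSq += static_cast<double>(r) * r;
        }
        double mean = sum / rolls;
        double stddev = std::sqrt(sumSq / rolls - mean * mean);

        assert(std::abs(mean - 3.5) < 0.01);      // E[X] = 3.5
        assert(std::abs(stddev - 1.7078) < 0.01); // sqrt(35/12) ~ 1.7078
        return 0;
    }
    ```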



  • @topspin two runs of the same compiled executable should not return vastly different results for the same user-provided seed

    in fact, they should not return different results for the same seed at all
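    (Stated as code, that property looks like this; a sketch, assuming a C++11 standard library: two engines in the same binary, given the same seed, must produce identical streams.)

    ```cpp
    #include <cassert>
    #include <random>

    int main() {
        // Same executable, same seed: the streams match call for call.
        std::mt19937 a(12345), b(12345);
        for (int i = 0; i < 1000; ++i)
            assert(a() == b());
        return 0;
    }
    ```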


  • BINNED

    @ben_lubar said in COVID-19 CovidSim Model:

    @topspin two runs of the same compiled executable should not return vastly different results for the same user-provided seed

    in fact, they should not return different results for the same seed at all

    And I said that where?
    Note that “compiled on g++/libstdc++” vs “compiled on msvc/msvcrt” is not the same executable.



  • @topspin said in COVID-19 CovidSim Model:

    @ben_lubar said in COVID-19 CovidSim Model:

    @topspin two runs of the same compiled executable should not return vastly different results for the same user-provided seed

    in fact, they should not return different results for the same seed at all

    And I said that where?
    Note that “compiled on g++/libstdc++” vs “compiled on msvc/msvcrt” is not the same executable.

    If your simulation depends on quantum mechanics, it's not a simulation, it's just a random number generator


  • BINNED

    @ben_lubar said in COVID-19 CovidSim Model:

    @topspin said in COVID-19 CovidSim Model:

    @ben_lubar said in COVID-19 CovidSim Model:

    @topspin two runs of the same compiled executable should not return vastly different results for the same user-provided seed

    in fact, they should not return different results for the same seed at all

    And I said that where?
    Note that “compiled on g++/libstdc++” vs “compiled on msvc/msvcrt” is not the same executable.

    If your simulation depends on quantum mechanics, it's not a simulation, it's just a random number generator

    What the hell are you talking about?



  • @topspin said in COVID-19 CovidSim Model:

    “compiled on g++/libstdc++” vs “compiled on msvc/msvcrt” is not the same executable

    It's not but you need some fairly good reasons to use a RNG that doesn't work the same on both. Arithmetic should work the same on any mainstream computer (they all use the same chipset and the same definition of floating point numbers).

    I'd say though that it's ok if the reproducibility is 'we ran it on GCC and used these seeds' - that's widely available to other researchers who want to replicate the results. What's not ok is 'it runs consistently on my machine but it won't on any other', or 'it gives different answers for the same input even on one machine'.



  • @ben_lubar said in COVID-19 CovidSim Model:

    If your simulation depends on quantum mechanics, it's not a simulation, it's just a random number generator

    You should go and tell that to all the chemists and theoretical physicists then, I don't think they're aware...

    ETA and also all the people working on quantum computing, and in fact all people working with computers because transistors fundamentally depend on quantum mechanics to work.



  • @ben_lubar said in COVID-19 CovidSim Model:

    It's pretty scary to know that the world's medical systems run on something far less powerful at simulating illness than Dwarf Fortress

    It's also pretty scary to know that the people deciding to "lock-down" the world either are not using mathematical formulas to make these decisions, complete with assumptions that people can tweak to see how they change the results, or if they are, are not releasing them to the public.



  • @jinpa sure they are. nx³+c is a mathematical formula.



  • @Buddy said in COVID-19 CovidSim Model:

    @jinpa sure they are. nx³+c is a mathematical formula.

    I was expecting an equals sign.



  • @ben_lubar said in COVID-19 CovidSim Model:

    If your simulation depends on quantum mechanics, it's not a simulation, it's just a random number generator

    :pendant: Doesn't each and every simulation run on modern hardware depend on quantum mechanics?


  • BINNED

    @bobjanova said in COVID-19 CovidSim Model:

    @topspin said in COVID-19 CovidSim Model:

    “compiled on g++/libstdc++” vs “compiled on msvc/msvcrt” is not the same executable

    It's not but you need some fairly good reasons to use a RNG that doesn't work the same on both.

    A fairly good reason being you used the RNG provided by the standard library instead of going the extra mile to implement (or integrate) your own. Which is extremely common and not something that makes this useless.

    Arithmetic should work the same on any mainstream computer (they all use the same chipset and the same definition of floating point numbers).

    Mainstream computers, yes; HPC hardware, maybe, I dunno. Also, as mentioned, different compiler optimizations will produce different code, so the arithmetic performed is not the same to begin with. That won't result in a big difference, but if the requirement is "it needs to be exactly the same", then a completely insignificant difference in the last few mantissa bits gets it thrown in the "not reproducible" bin.

    I'd say though that it's ok if the reproducibility is 'we ran it on GCC and used these seeds' - that's widely available to other researchers who want to replicate the results. What's not ok is 'it runs consistently on my machine but it won't on any other', or 'it gives different answers for the same input even on one machine'.

    But the former is (according to the discussion we were having here, I didn't check the original) what is happening and what is being criticized as not good enough.
    At best it seems like people take the fairly reasonable "it's only reproducible on one machine/compiler setup" and interpret that as "it's not reproducible at all!" without understanding the details.


  • ♿ (Parody)

    @topspin said in COVID-19 CovidSim Model:

    But the former is (according to the discussion we were having here, I didn't check the original) what is happening and what is being criticized as not good enough.

    That wasn't my understanding and it wasn't the basis for my criticism. The review made it sound like results couldn't be reproduced across different (single threaded) runs on the same machine with the same seed.

    If your assumption is correct then I'd withdraw my criticisms based on RNGs.

    I'd argue that my criticism that the model doesn't produce recognizably correct predictions is still valid. Also the stuff about too many parameters and the overall complexity of the model.

    And just like climate models, I wouldn't call it useless for scientific inquiry, just not very appropriate for input to policy decisions. IOW, look at the details of what the model is doing and compare it to observations and figure out where it's wrong. Which you need to do in any case, or you're risking getting accidentally accurate answers, no different than looking at observations of outcomes with a drug as opposed to a controlled trial.


  • BINNED

    @boomzilla said in COVID-19 CovidSim Model:

    @topspin said in COVID-19 CovidSim Model:

    But the former is (according to the discussion we were having here, I didn't check the original) what is happening and what is being criticized as not good enough.

    That wasn't my understanding and it wasn't the basis for my criticism. The review made it sound like results couldn't be reproduced across different (single threaded) runs on the same machine with the same seed.

    If your assumption is correct then I'd withdraw my criticisms based on RNGs.

    I wasn't answering to your sub-thread of the discussion, it's entirely possible that there are valid criticisms that do make it unusable, but the part I talked about wasn't valid criticism.

    As far as I understood the later discussion of other people's take on it, the problem here was only in the "restarting from checkpoints" code, to which @dkf's correct response is to rip that out and run from the beginning, like a real man.

    I'd argue that my criticism that the model doesn't produce recognizably correct predictions is still valid. Also the stuff about too many parameters and the overall complexity of the model.

    Certainly too complex.

    And just like climate models, I wouldn't call it useless for scientific inquiry, just not very appropriate for input to policy decisions. IOW, look at the details of what the model is doing and compare it to observations and figure out where it's wrong. Which you need to do in any case, or you're risking getting accidentally accurate answers, no different than looking at observations of outcomes with a drug as opposed to a controlled trial.

    Yeah, quite possibly.

    But one complaint the "Mr smarty pants" guy had was this bug report, which just shows another misunderstanding (at least according to the person who replied): It produces something that doesn't fit the observation because the chosen input was for an unmitigated worst case, which obviously isn't what happened. If you input something that didn't happen, you can't expect an output that did happen.


  • ♿ (Parody)

    @topspin said in COVID-19 CovidSim Model:

    But one complaint the "Mr smarty pants" guy had was this bug report, which just shows another misunderstanding (at least according to the person who replied): It produces something that doesn't fit the observation because the chosen input was for an unmitigated worst case, which obviously isn't what happened. If you input something that didn't happen, you can't expect an output that did happen.

    I'm not even sure what he's trying to say.

    Enter various inputs for R0 and other factors
    Result: Prediction of over 1M deaths in the US, 500k in the UK

    Like...no matter what you put in for R0? Yeah, if he's just talking about the initial runs or whatever...it's still too vague to be actionable. It sounds like more of a shitpost than a useful bug report.


  • Banned

    @ben_lubar said in COVID-19 CovidSim Model:

    Three days for a form letter response and no action?

    What did you expect? They're moderators. :kneeling_warthog:



  • @bobjanova said in COVID-19 CovidSim Model:

    @topspin said in COVID-19 CovidSim Model:

    “compiled on g++/libstdc++” vs “compiled on msvc/msvcrt” is not the same executable

    It's not but you need some fairly good reasons to use a RNG that doesn't work the same on both.

    Is "using the C++ std library" a good-enough reason for you? Because as I said upthread, std::uniform_int_distribution (and probably all other distribution-generating functions) has that exact issue.

    (and "just roll out your own" is an extremely bad idea, there are soooo many ways a distribution generator can be subtly flawed that unless your whole job is writing that kind of code, it's almost guaranteed that your own implementation will be incorrect)



  • @topspin said in COVID-19 CovidSim Model:

    Also the stuff about too many parameters and the overall complexity of the model.

    Certainly too complex.

    I'm not sure it's really "too" complex, given the complexity of what it's trying to model. The type of code I'm working with will routinely have tens or maybe hundreds of parameters, some of them being arrays of up to some billions of values (i.e. values of some spatial property defined on a mesh), and they're still useful from a scientific point of view (note that I'm not saying anything here about the applicability to political decision-making).

    If that code is anything like those I work with, it's likely that out of those hundreds of parameters, only a handful really matter or cannot be reasonably constrained to a range where changes don't matter much. It's also likely that not all of them are used all the time (e.g. there may be a flag, and when it's on some parameters are used and when it's off some other parameters are used), which inflates the total number of parameters but not the actual complexity of the model. And it's probably even worse in that case, in the sense that being research code it probably contains some (a lot of?) parameters that are for all intents and purposes useless: either straight-out useless because someone tried to model something only to discover it didn't matter but the code has been left there, or useful only in some tiny edge case that was only ever investigated as part of subsection 3b of a master's thesis.



  • @topspin said in COVID-19 CovidSim Model:

    if the requirement is "it needs to be exactly the same", then a completely insignificant difference in the last few mantissa bits gets it thrown in the "not reproducible" bin.

    And if the system is so chaotic that the "completely insignificant" difference in the last few mantissa bits causes an order of magnitude difference in the output?


  • BINNED

    @HardwareGeek said in COVID-19 CovidSim Model:

    @topspin said in COVID-19 CovidSim Model:

    if the requirement is "it needs to be exactly the same", then a completely insignificant difference in the last few mantissa bits gets it thrown in the "not reproducible" bin.

    And if the system is so chaotic that the "completely insignificant" difference in the last few mantissa bits causes an order of magnitude difference in the output?

    Then that sucks, which is beside the point I was making in reply to the argument that not having reproducibility across different machines (architectures) / compilers makes it "an academic curiosity rather than something that would have any value when reality hits".

    Which, by itself and ignoring any other issues the code may or may not have, is simply wrong.


    Besides that, the impression I got (which may be wrong) is that the code doesn't produce something that's off by orders of magnitude due to this, but really: 1) buggy restart code makes it not reproducible (bad, but easily fixed by throwing that out), 2) threading issues (bad, easily fixed by not using it), 3) it produces different results with different seeds and people complain because they don't understand Monte Carlo, and 4) the "orders of magnitude" people are saying it is obviously off by are because they input high R0 values which didn't correspond with what happened, i.e. "duh, you don't say."
    Which is not to say the code may not be bad anyway.


  • Discourse touched me in a no-no place

    @xaade said in COVID-19 CovidSim Model:

    The problem is that it can't be verified.

    It can be verified, by comparing runs with the same seeds but with other supposedly-unimportant variables (like the number of CPU cores) different. Sometimes it's as far as comparing with an independent implementation of the model, but that's a lot of work to create. (That's when you discover all sorts of :fun: with differences in rounding modes used by different programmers. And yes, it's horrible. We've also had problems when comparing our solutions against supposedly gold standard ones done in common tools like Mathematica where the problems were actually infelicities in ODE solving within Mathematica itself. Shit like that really does occur.)

    It's all the sort of difficult thankless task that a new researcher is often set to work on, as it teaches them some humility if nothing else.


  • Discourse touched me in a no-no place

    @Carnage said in COVID-19 CovidSim Model:

    Yeah, and many companies seem hell bent on core business, even selling off bits that make a profit but aren't core business.

    Some decide that their core business is a part that's not actually profitable, sell off the other parts that are keeping the overall business afloat, and then discover that they're in a ton of trouble. 🍿



  • @dkf said in COVID-19 CovidSim Model:

    @Carnage said in COVID-19 CovidSim Model:

    Yeah, and many companies seem hell bent on core business, even selling off bits that make a profit but aren't core business.

    Some decide that their core business is a part that's not actually profitable, sell off the other parts that are keeping the overall business afloat, and then discover that they're in a ton of trouble. 🍿

    Yeah, they do the profitable bits a good deed when they dump them like that


  • Discourse touched me in a no-no place

    @xaade said in COVID-19 CovidSim Model:

    Ok, when it's off by half the value on a 5 digit precision value, then I'll maybe give you this point. Maybe.

    Getting all calculations correct to the nearest ULP is seriously difficult work. It requires a very talented numerical analyst to do. I know enough to be very careful when dealing with such code, but not enough to actually do the required numerical analysis.

    The problem is made worse by the fact that most equations of any real interest are non-linear and all too often chaotic. (Truly linear systems aren't very interesting to simulate; you can solve them by hand on a piece of paper.) Many of the equation systems that people really want to know about are hybrid systems where there's a decision on something that causes a change of operation mode somewhere (e.g., most people stop going to work when they think they're sick) and very small changes in values can change when that happens, and that change really cascades in terms of impact. (Squint a bit and you'll see that things are similar, but they're definitely not the same.) Getting that sort of thing right is minimum PhD level numerical work, so a lot of the time it isn't right because it is insanely easy to screw things up.

    If you're doing ensemble forecasting (useful because you're probably uncertain what the input state actually is; we definitely are for everything to do with this pandemic) then the effect is probably not too big a deal, as the uncertainties from the ensembles probably dominate as they're many orders of magnitude larger. But for exact reproduction work, this matters a lot (and it can cause results to drift from true over long time series).



  • @dkf said in COVID-19 CovidSim Model:

    It can be verified, by comparing runs with the same seeds but with other supposedly-unimportant variables (like the number of CPU cores) different.

    Hey, are you looking at my screen right now????

    (this is exactly what I am doing at the moment, as I'm working on parallelizing some code I inherited -- and before you ask, it's not fun at all, not even :fun:)


  • Discourse touched me in a no-no place

    @ben_lubar said in COVID-19 CovidSim Model:

    in fact, they should not return different results for the same seed at all

    If it is single-threaded, then that's exactly true. If it's multi-threaded and not using lots of synchronization barriers, it most definitely can be false unless you're also taking steps to verify that you're closely maintaining realtime constraints. Lots of programmers don't even try to detect if they've got any kind of hazard like that, let alone attempt to figure out if it is actually causing them a problem.

    Why would anyone want to run like that? Well, if you get it right then your code can go a lot faster. Potentially thousands of times faster (depending on the nature of the synchronization bottlenecks; it's not fucking magic, it's hard work).
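    (The usual culprit is that a parallel reduction accumulates partial sums in whatever order the threads happen to finish, and floating-point addition is not associative. A single-threaded sketch of the order sensitivity, no races required:)

    ```cpp
    #include <cassert>
    #include <vector>

    int main() {
        // The same three numbers, summed in two different orders, as a
        // parallel reduction might do depending on thread scheduling.
        std::vector<double> xs = {1.0, 1e16, -1e16};

        double forward = 0.0;
        for (std::size_t i = 0; i < xs.size(); ++i)
            forward += xs[i];   // (1 + 1e16) - 1e16: the 1.0 is rounded away

        double backward = 0.0;
        for (std::size_t i = xs.size(); i-- > 0;)
            backward += xs[i];  // (-1e16 + 1e16) + 1: cancellation first

        assert(forward == 0.0);   // 1e16 + 1.0 rounds to 1e16 (ulp is 2 there)
        assert(backward == 1.0);  // the 1.0 survives
        return 0;
    }
    ```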


  • Discourse touched me in a no-no place

    @remi said in COVID-19 CovidSim Model:

    and "just roll out your own" is an extremely bad idea

    Actually, better is to get an RNG written by someone who actually knows what they're doing (unlike some authors of C++ standard library implementations). There are some that are definitely written to behave exactly the same in every build. For example:

    https://www.nag.co.uk/numeric/mb/manual64_24_1/html/g05/g05intro.html


  • BINNED

    @dkf said in COVID-19 CovidSim Model:

    @remi said in COVID-19 CovidSim Model:

    and "just roll out your own" is an extremely bad idea

    Actually, better is to get an RNG written by someone who actually knows what they're doing (unlike some authors of C++ standard library implementations). There are some that are definitely written to behave exactly the same in every build. For example:

    https://www.nag.co.uk/numeric/mb/manual64_24_1/html/g05/g05intro.html

    As @remi has confirmed for me, it's not the generators that are implementation defined, it's the distributions. I.e. how to get from a stream of random bits (generated by a portably specified PRNG) to a uniform distribution, normal distribution, etc.
    As usual, the trade-off here was that specifying it exactly would have specified the implementation (forever) and not allowed for better implementations, but it was probably the wrong choice anyway.



  • @topspin Yes, that is also what I've read. Some of the distributions are not implemented very well either (the normal distribution in several of the standard libraries is several times slower than what e.g. Julia uses for theirs, and I remember having a test checking the sanity of one of the distributions that would occasionally get into an infinite loop when queried).

    Somebody upthread mentioned that implementing these correctly is difficult. I can second that - I implemented an alternative conversion to normal distributed values. We throw a pile of tests at it, and it passes all of those. A few weeks later, a student working on the code finds a typo in the core of the code, where I mixed up two variables. 🤷
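    (For anyone wanting a portable uniform-to-normal conversion without Ziggurat tables: the Box-Muller transform is a much simpler, if slower, alternative. A sketch, assuming C++11; note this is not the method from the paper, and a production version deserves the kind of test pile described above:)

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <random>

    const double kPi = 3.14159265358979323846;

    // One standard-normal value from two uniforms in (0, 1), built on the
    // fully specified raw output of std::mt19937 rather than on the
    // implementation-defined std::normal_distribution.
    double box_muller(std::mt19937& gen) {
        // Map 32-bit engine output into (0, 1), never hitting 0 (log(0)).
        double u1 = (gen() + 0.5) / 4294967296.0;
        double u2 = (gen() + 0.5) / 4294967296.0;
        return std::sqrt(-2.0 * std::log(u1)) * std::cos(2.0 * kPi * u2);
    }

    int main() {
        std::mt19937 gen(1);
        const int n = 1000000;
        double sum = 0.0, sumSq = 0.0;
        for (int i = 0; i < n; ++i) {
            double z = box_muller(gen);
            sum += z;
            sumSq += z * z;
        }
        double mean = sum / n;
        double stddev = std::sqrt(sumSq / n - mean * mean);
        assert(std::abs(mean) < 0.01);          // E[Z] = 0
        assert(std::abs(stddev - 1.0) < 0.01);  // Var[Z] = 1
        return 0;
    }
    ```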


  • ♿ (Parody)

    @cvi said in COVID-19 CovidSim Model:

    A few weeks later, a student working on the code finds a typo in the core of the code, where I mixed up two variables.



  • Discourse touched me in a no-no place

    @cvi said in COVID-19 CovidSim Model:

    the normal distribution in several of the standard libraries is several times slower than what e.g. Julia uses for theirs

    I work in one of the very few environments where it is reasonable to have homebrew versions of all those sorts of functions (it's small and it's fixed point only). The code for getting a normally distributed random number is a couple of hundred lines of C, of which about half is some big tables of deeply mysterious (to me) constants. I think it's doing some sort of patch-wise quartic polynomial curve fitting? It also has some very careful use of compiler intrinsics.

    I look at the code… and I think “yup, not going to edit that”. 😆


  • BINNED

    @dkf said in COVID-19 CovidSim Model:

    I think it's doing some sort of patch-wise quartic polynomial curve fitting?

    That might work reasonably well in the center, but I think (as someone who has no expertise in this) it sounds like you'd be cutting off the tails at some point, which would skew the distribution.



  • @dkf said in COVID-19 CovidSim Model:

    of which about half is some big tables of deeply mysterious (to me) constants.

    Yeah, that sounds like the method that I also ended up using, described in the Ziggurat method paper. They don't have the table in the paper, but rather some code to compute it.


  • BINNED

    @cvi said in COVID-19 CovidSim Model:

    @dkf said in COVID-19 CovidSim Model:

    of which about half is some big tables of deeply mysterious (to me) constants.

    Yeah, that sounds like the method that I also ended up using, described in the Ziggurat method paper. They don't have the table in the paper, but rather some code to compute it.

    Only 7 pages without crazy formulas? Now you're almost forcing me to read that because I don't have the usual excuse of not reading 50+ pages of stuff I don't understand anyway.



  • @topspin You can probably skim it very efficiently. The idea is kinda neat (and should be applicable a bit more generally than just normal distributions). So, yeah, that one is relatively painless.


  • Discourse touched me in a no-no place

    @topspin said in COVID-19 CovidSim Model:

    That might work reasonably well in the center, but I think (as someone who has no expertise in this) it sounds like you'd be cutting of the tails at some point, which would skew the distribution.

    It's entirely possible that the code switches to a different algorithm for the tails.


  • Discourse touched me in a no-no place

    @cvi said in COVID-19 CovidSim Model:

    They don't have the table in the paper, but rather some code to compute it.

    Having just tried to read that code, can I have the tables instead? The coding style they use is… umm… not exactly one I would have chosen? (That brace placement… wow!)


  • BINNED

    @dkf said in COVID-19 CovidSim Model:

    @cvi said in COVID-19 CovidSim Model:

    They don't have the table in the paper, but rather some code to compute it.

    Having just tried to read that code, can I have the tables instead? The coding style they use is… umm… not exactly one I would have chosen? (That brace placement… wow!)

    I totally expected the style/formatting/etc to be ugly as sin. Nothing unusual. What did throw me off is the first sentence starting with “In the early 80’s” and the document not having a publication date.


  • Discourse touched me in a no-no place

    @topspin said in COVID-19 CovidSim Model:

    the document not having a publication date.

    It looks like a preprint. Those don't have publication dates because that's something set by the journal (or conference) and not the authors. But the date must be at least 1998, going by the references, and no later than 2017, going by the page metadata. Fortunately, I've also found with a trivial bit of searching the publication metadata including the DOI (10.18637/jss.v005.i08) and publication date (January 2000).



  • @dkf Formatting code for inclusion in a paper is ... special. Page limits > clarity, sometimes. Page limits > code formatting pretty much always.

    It's entirely possible that the code switches to a different algorithm for the tails.

    The Ziggurat thing? Yeah, it does. Tail is made to be unlikely (exact percentage depends on the parameters you pick, which affects the table size), since it's expensive. IIRC, it uses a rejection approach there, potentially drawing additional numbers.


  • Discourse touched me in a no-no place

    @cvi I was actually talking about our code, but it applies elsewhere too.

    Precalculated tables are great as they can be built to very high accuracy indeed, and are very fast to use, but they're large and mostly incomprehensible. There's quite a lot that can be done to reduce the size too; lots of functions are locally modellable as polynomials. (We actually use clamped normal distributions rather than full normal; other parts of our code really assume strongly some upper bounds on values in order to not take crazy much space for inconsequential matters.)
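    (An illustrative sketch of the general table-plus-local-fit shape, not dkf's actual code: sample a function into a precomputed table once, then answer queries by interpolating between neighbouring entries. Here the function is sin on [0, pi/2] with linear interpolation; real implementations use higher-order patches to squeeze more accuracy out of each table entry.)

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <vector>

    // Table of sin(x) samples on [0, pi/2], queried by linear interpolation.
    struct SinTable {
        static constexpr int N = 256;
        double step;
        std::vector<double> table;

        SinTable() : step((3.14159265358979323846 / 2) / N), table(N + 1) {
            for (int i = 0; i <= N; ++i) table[i] = std::sin(i * step);
        }

        double operator()(double x) const {  // x in [0, pi/2]
            int i = static_cast<int>(x / step);
            if (i >= N) i = N - 1;
            double t = x / step - i;  // fraction between table[i] and table[i+1]
            return table[i] + t * (table[i + 1] - table[i]);
        }
    };

    int main() {
        SinTable s;
        // Linear interpolation over 256 entries keeps the error well
        // below 1e-4 everywhere on the interval.
        for (double x = 0.0; x <= 1.5707; x += 0.001)
            assert(std::abs(s(x) - std::sin(x)) < 1e-4);
        return 0;
    }
    ```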


  • BINNED

    @cvi said in COVID-19 CovidSim Model:

    @topspin You can probably skim it very efficiently. The idea is kinda neat (and should be applicable a bit more generally than just normal distributions). So, yeah, that one is relatively painless.

    I skimmed it and it's beautifully simple. Neat idea indeed, might be missing a section about numerical precision (or maybe that's done in their earlier work), but I wouldn't have read that anyway.

    As @dkf already noticed the code is incredibly ugly. I wouldn't allow that to be checked in if I reviewed it, but that brings me back to the original topic: it's a mess, but it works.


  • Discourse touched me in a no-no place

    @topspin said in COVID-19 CovidSim Model:

    As @dkf already noticed the code is incredibly ugly.

    FWIW, the usual fix for that is to have pseudocode in the paper, trimmed to be as short as possible and in something that's a sort of mix of Algol and Python (but not as strict as either), and then to have the real code (formatted correctly, but still trimmed to what is relevant) in an appendix which doesn't count toward the page count provided it isn't crazy long.

    Fortunately, with modern online-only publishing page counts are largely irrelevant.

