WTF Bites



  • @gąska said in WTF Bites:

    It's only impossible because we don't even try.

    Do you want to release your software before the heat death of the universe? Then you have to prioritize.

    "Thorough" might be the wrong word, but it's definitely correct that you cannot test all combinations. Not even hardware guys do it - I know this for a fact because I attended an Intel talk a while ago where they explained their technique for finding a reasonable test set.

    What hardware companies try to do in addition to testing (or at least tried in the past, if you believe that Reddit post by an alleged former Intel employee) is formal verification. But you just have to look at the errata document for the latest Intel processor generation to see that they're far away from proving the absence of severe bugs. And I'm not talking about side-channel attacks like Spectre here.

    Testing all combinations is literally impossible and if you want your test runs to take "only" hours instead of weeks, you cannot even get close to full coverage. (Unless you're using the most basic coverage metric "this line has been executed by some test" - that's possible.)


  • Banned

    @topspin said in WTF Bites:

    @gąska to exhaustively black-box test even a super simple function like bool is_odd(uint64_t) you need to test 2^64 = 18,446,744,073,709,551,616 inputs.

    False. By applying some mathematical laws, you can significantly reduce the number of required tests by dozens of orders of magnitude. It might not be literally every case, but it will test the code as if it were.
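
A minimal sketch of the kind of reduction being claimed, assuming (purely for illustration; the thread never shows an implementation) that the function under test is the obvious one-liner, so "even" and "odd" are the only equivalence classes and a handful of boundary values round out the set:

```cpp
// Hypothetical sketch: a reduced test set for bool is_odd(uint64_t),
// assuming the implementation is (x & 1) != 0, so the equivalence
// classes are just "even" and "odd", plus boundary values - a few
// checks instead of 2^64 inputs.
#include <cassert>
#include <cstdint>
#include <limits>

bool is_odd(uint64_t x) { return (x & 1) != 0; }

int main() {
    assert(!is_odd(2));  // representative of the "even" class
    assert(is_odd(3));   // representative of the "odd" class
    assert(!is_odd(0));  // lower boundary
    assert(is_odd(1));
    assert(is_odd(std::numeric_limits<uint64_t>::max()));      // 2^64 - 1 is odd
    assert(!is_odd(std::numeric_limits<uint64_t>::max() - 1)); // 2^64 - 2 is even
}
```

Whether this really counts as "testing the code as if it were every case" is exactly what the rest of the thread argues about.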


  • BINNED

    @gąska I can also just prove the correctness of the (probably one-line) function instead of testing it. But that's not what the statement was about. If you reduce the number of inputs, you have not exhaustively tested it.



  • @gąska said in WTF Bites:

    False. By applying some mathematical laws, you can significantly reduce the number of required tests by dozens of orders of magnitude.

    You're confusing formal verification and black-box testing here. They're not the same.



  • WTF Bite: Travis CI builds that are terminated for having logs that are too long count as "successful".

    Example: https://travis-ci.org/DFHack/dfhack/builds/390093408

    Don't worry, I've contacted support.




  • @ben_lubar said in WTF Bites:

    Don't worry, I've contacted support.

    I've noticed that pretty much every big service related to open-source software development has a really good support response time.

    One time I got a Google alert about a (free) game I developed showing up on Gist and a few other similar sites. I reported the pages to the various companies, and within about five hours every single one of the sites had banned the spam account and taken down the spam pages.


  • Banned

    @topspin yes you did. When making a chip, you're not testing it for every possible voltage value on every connector. No, you're only testing high states and low states, and that's enough to make your tests exhaustive. Similarly, if you have an integer negation function, you don't have to test everything - you just test for a positive number, a negative number, and zero - because you know that there are no other test cases to test. You know that because you have access to the code and can analyze it. So yeah, it's a white-box-only approach, but most tests are white box anyway.

    Yes, you can absolutely use theorem proving to replace tests. That's exactly what static typing is all about! The problem is, you can't prove everything because of the halting problem. So the best solution is to apply theorem proving (static analysis) wherever possible, and testing for the rest. By having access to code, you can define for each function what classes of values each input has, based on what decision making instructions are present in code. Then you test for all combinations of classes rather than all combinations of all values - which guarantees that you hit every possible execution path, even if you didn't use literally all values. Of course, this only works if you can define what is an input - and to do this, you're pretty much forced to do functional programming so you can be sure that global state doesn't influence your functions. Fuzzers find bugs not because there will always be bugs, but because not all input classes were previously covered, or global state wasn't considered properly. (A sketch of the combinations-of-classes idea follows below.)

    I know it sounds crazy. Every groundbreaking idea sounds crazy at first. I'm not saying we shouldn't release any software until it's fully tested - it's absurd, at least for the next 20 years. I'm just saying that it is not true that there will always be bugs. And not just theoretically - it is entirely possible that if all our software was made this way right from the start, we could have it developed and fully tested for less than 100x what it cost in the real world. An absurdly high number, but much closer to reality than infinity. But even though 100% is utopia, 90% test coverage would be much cheaper than that, and still provide massive benefits - mostly in reducing maintenance costs, which currently make up a huge chunk of total development spending! Not to mention how much less frustrating computers would become. But for that, we'd need to stop telling every young developer that testing everything is impossible. Because it discourages them from making good tests that cover all the important cases. We'd need to stop using mutable state wherever possible, and we'd need to go back to using static typing, and using it correctly - without (too many) casts! I know that each of these things is extremely unlikely to happen. But not because there's any technical barrier - the only problem is the cultural one; most developers don't care about code quality, don't care about alternative programming paradigms, and/or don't believe it matters if their program is correct as long as it can (with the right incantation) do its job, as imagined by the original developer. Just look at the Scala topic and Blakey et al.'s reaction to the mere suggestion that the functional paradigm might actually be usable for general-purpose programming!

    We need a cultural change, similar to the one that gave us the 40-hour workweek - something as unimaginable back in the day as 95% test coverage is now.
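
A minimal sketch of the "all combinations of classes" idea from the post above, using a hypothetical clamp function as the unit under test (not code from the thread; chosen because its two branches make the classes obvious):

```cpp
// Hypothetical illustration of "test all combinations of classes":
// the only branches in clamp() are (x < lo) and (x > hi), so the
// derived input classes for x are "below the range", "inside the
// range" and "above the range". One representative per class covers
// every execution path without enumerating every possible value.
#include <cassert>

int clamp(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

int main() {
    assert(clamp(-5, 0, 10) == 0);   // class: x < lo
    assert(clamp( 5, 0, 10) == 5);   // class: lo <= x <= hi
    assert(clamp(15, 0, 10) == 10);  // class: x > hi
}
```

The catch, as the replies below point out, is that the classes are derived from the implementation, so the argument has quietly shifted from black-box testing to a mix of testing and verification.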


  • BINNED

    @gąska All you did was move the goalposts; you did not prove wrong the original statement you took offense at. It's still correct.

    Your post might have been justified if the original statement was followed up with "we can't do it, so let's do nothing". I'm sure it wasn't.



  • @polygeekery said in WTF Bites:

    Fucking sparkies.

    What's a sparkie?



  • @anotherusername said in WTF Bites:

    fairly small animated GIF

    IIRC they are resized with CSS. It's probably a full HD 20GB video for that spinning shit.


  • Fake News

    @sockpuppet7 said in WTF Bites:

    @polygeekery said in WTF Bites:

    Fucking sparkies.

    What's a sparkie?

    • A slang term for an electrician

  • Banned

    @topspin said in WTF Bites:

    @gąska All you did was move the goalposts; you did not prove wrong the original statement you took offense at. It's still correct.

    You can't just dismiss my post and declare I haven't proven anything. I did make a proof - it's on you to show where the error in my reasoning is and why it's an error. I have no intention of taking part in the game of "lalalalala can't hear you".

    Your post might have been justified if the original statement was followed up with "we can't do it, so let's do nothing". I'm sure it wasn't.

    My problem is with the statement exactly as quoted, "testing everything is impossible". Using mathematics, you can prove that testing with the entire set of all possible input values is equivalent to testing with a much smaller set of values, given you pick them properly, and testing with this smaller set is well within "possible" territory, and for most projects it's very near the "feasible" border.


  • BINNED

    @gąska You're confusing using formal verification with testing, or using a mixture of both.
    I can easily "proof using mathematical models", as you put it, that the one line of code I'd probably use for is_odd has only two equivalence classes and test those. That's almost the same as just providing a formal verification of the function.
    What you're proposing is mixing verification and testing methods. The original statements was purely about testing.
    You can give me any strict subset of possible inputs and I can write an is_odd function that works on all inputs of your set but fails on (at least) one input not in that set. That's why exhaustive testing means you strictly need to test all possible inputs.

    You're raging about a statement which is correct as given because you took it to say something which it didn't say to begin with.
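
A minimal sketch of the adversarial construction described above; the special-cased value is arbitrary and purely illustrative:

```cpp
// Hypothetical adversarial is_odd: given any strict subset of the 2^64
// inputs used as a black-box test suite, special-case one value that is
// not in the suite (0xC0FFEE here, chosen arbitrarily - it is even, so
// returning true for it is a bug). Every test in the suite still passes.
#include <cassert>
#include <cstdint>

bool is_odd(uint64_t x) {
    if (x == 0xC0FFEEull) return true;  // wrong answer on the untested input
    return (x & 1) != 0;
}

int main() {
    // A test suite that never feeds in 0xC0FFEE passes cleanly...
    assert(!is_odd(0));
    assert(is_odd(1));
    assert(!is_odd(2));
    assert(is_odd(UINT64_MAX));
    // ...even though is_odd(0xC0FFEE) wrongly reports an even number as odd.
}
```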


  • Banned

    @topspin said in WTF Bites:

    @gąska You're confusing using formal verification with testing, or using a mixture of both.

    The latter. I'm using a mixture of both to guarantee correctness in every case.

    I can easily "proof using mathematical models", as you put it, that the one line of code I'd probably use for is_odd has only two equivalence classes and test those. That's almost the same as just providing a formal verification of the function.

    Yep. And the result in either case is a guarantee of correctness in every case. But proofs of correctness are often very expensive to derive by hand, and impossible to derive automatically in the general case. But a proof that the input will belong to one of a finite (usually small) set of classes is much cheaper, and the right tooling combined with the right programming patterns can be a huge help. And once you have the classes, you just test those classes.

    What you're proposing is mixing verification and testing methods. The original statement was purely about testing.

    The original statement was that there will always be bugs. The slide said there will always be bugs because you can't test for every possible value. My argument is that it is possible to derive a (relatively) small set of test cases that provide coverage equivalent to testing literally every combination, so it's possible to do absolute testing, so it's possible to prove the absence of bugs altogether.

    You can give me any strict subset of possible inputs and I can write an is_odd function that works on all inputs of your set but fails on (at least) one input not in that set.

    That's why this method is only compatible with a white-box approach. I can derive all input classes, provided I know the full source code of the function.


  • BINNED

    @gąska said in WTF Bites:

    The original statement was that there will always be bugs. The slide said there will always be bugs

    The part you quoted didn't, unless you've left it out.

    because you can't test for every possible value.

    It says that you can't test for every possible value. It doesn't say this implies you can't prove code correct. That was your conclusion.

    My argument is that it is possible to derive a (relatively) small set of test cases that provide coverage equivalent to testing literally every combination

    Let me condense this for you:
    👴 Testing everything (all combinations of inputs and initial conditions) is possible only in trivial cases
    Gąska: It is possible (if you don't test all combinations of inputs)
    topspin: :moving_goal_post:

    You might as well say "I can spread my wings and fly over this street!", "I doubt that, show me", "Look, I just walked over the street. That's the same effect."
    Yeah, it's not the same though, and not what was stated.


  • Grade A Premium Asshole

    @sockpuppet7 said in WTF Bites:

    @polygeekery said in WTF Bites:

    Fucking sparkies.

    What's a sparkie?

    Electrician.


  • Banned

    @topspin said in WTF Bites:

    @gąska said in WTF Bites:

    The original statement was that there will always be bugs. The slide said there will always be bugs

    The part you quoted didn't, unless you've left it out.

    Oh, sorry. It's hard to do an English translation of random Polish mumbling intertwined with bad translations from English.



  • @gąska said in WTF Bites:

    When making a chip, you're not testing it for every possible voltage value on every connector. No, you're only testing high states and low states, and that's enough to make your tests exhaustive. Similarly, if you have an integer negation function, you don't have to test everything - you just test for a positive number, a negative number, and zero - because you know that there are no other test cases to test.

    Ouch. That analogy is so bad that it physically hurts.

    The equivalent of testing all integers is testing all possible combinations of high/low states. Making sure that only those two states can occur is a separate verification step.

    You're trying so hard to miss the point here that it seems intentional. It is literally impossible to test all input combinations in an average software project. (Even more so when multithreading, network I/O, etc. are involved.) This is a fact and not debatable.



  • @gąska said in WTF Bites:

    Yes, you can absolutely use theorem proving to replace tests. That's exactly what static typing is all about!

    Not really. That's just a side effect; the (originally) more important effect of static typing is that it allows the compiler to easily generate more efficient code.

    By having access to code, you can define for each function what classes of values each input has, based on what decision making instructions are present in code.

    So you're testing the implementation now, not the interface. This is a horrible idea. Good luck if you ever need to refactor or change anything.

    Then you test for all combinations of classes rather than all combinations of all values - which guarantees that you hit every possible execution path, even if you didn't use literally all values.

    Good luck doing that in a software project that is larger than your weekend hobby project. What you're talking about here is exponentially more expensive than trying to achieve simple statement coverage.

    Of course, this only works if you can define what is an input - and to do this, you're pretty much forced to do functional programming so you can be sure that global state doesn't influence your functions.

    Ugh. I hate people who claim FP is the silver bullet. It's not.

    I know it sounds crazy. Every groundbreaking idea sounds crazy at first.

    It doesn't sound crazy because it's groundbreaking, but because you need an incredible amount of computing power to even run those tests. Not to mention the lost man hours waiting days for your tests to complete. I'm starting to seriously doubt you've ever seen a large software project; even covering all possible platforms is often challenging and expensive.

    I'm not saying we shouldn't release any software until it's fully tested - it's absurd, at least for the next 20 years.

    No, it will always be absurd.

    I'm just saying that it is not true that there will always be bugs. And not just theoretically - it is entirely possible that if all our software was made this way right from the start, we could have it developed and fully tested for less than 100x what it cost in the real world. An absurdly high number, but much closer to reality than infinity.

    You're still ignoring that the number of necessary tests grows exponentially with the number of inputs. 100x is a gross underestimation. Not to mention the other flaws with your idea.

    But even though 100% is utopia, 90% test coverage would be much cheaper than that

    I'm still not sure you even realize that the test coverage you're talking about is not what people usually mean when they say "test coverage", but something exponentially more expensive.

    […] reducing maintenance costs, which currently make up a huge chunk of total development spending!

    The fine-grained white-box testing you're talking about here will not reduce maintenance costs, but increase them to the point where change becomes almost impossible once the project is large enough.

    But for that, we'd need to stop telling every young developer that testing everything is impossible. Because it discourages them from making good tests that cover all the important cases.

    Who is saying that your unit tests shouldn't cover all cases? That is a straw man!

    We'd need to stop using mutable state wherever possible, and we'd need to go back to using static typing, and using it correctly - without (too many) casts!

    Silver bullet, again.

    most developers don't care about code quality, don't care about alternative programming paradigms, and/or don't believe it matters if their program is correct as long as it can (with the right incantation) do its job, as imagined by the original developer.

    The only sentence in your rant I mostly agree with - with the exception that I don't actually think programmers don't care about quality; it's often the environment that leads to bad and barely tested code.

    We need a cultural change, similar to the one that gave us the 40-hour workweek - something as unimaginable back in the day as 95% test coverage is now.

    We have 95% statement coverage in our product, otherwise we wouldn't be allowed to release it. That is still not even close to what you're proposing above.


  • Discourse touched me in a no-no place

    @anotherusername said in WTF Bites:

    Now the question is why Firefox is chugging 50% of a CPU to render a fairly small animated GIF, and why it's doing so even when the GIF is scrolled off the screen.

    It's not that small (it is rescaled in CSS only), and it's 330kB because there's quite a few frames (I can't be bothered to count them since that'd require finding a tool to do the job). Also, the GIF renderer isn't very efficient in modern browsers, at least for multi-frame animations.


  • Discourse touched me in a no-no place

    @gąska said in WTF Bites:

    entirely possible to test everything for every case

    That's not really true, given that the state space of any non-trivial system is massive (because it includes, e.g., every configuration of data in a 1TB database). So exhaustive searches are impossible, and doubly so is checking every path through the full state space. So you have to employ state compression techniques (which leaves you wondering whether the analysis you're doing is actually correct or if it conceals critical failure modes) or you have to switch to testing other things like reachability of lines of code and that's very much not the same thing at all.

    If you want to do testing, you need to think carefully about what you are trying to do with your tests: are you trying to show correctness or just absence of some sorts of simple and/or known bugs?


  • Banned

    @dfdub said in WTF Bites:

    @gąska said in WTF Bites:

    When making a chip, you're not testing it for every possible voltage value on every connector. No, you're only testing high states and low states, and that's enough to make your tests exhaustive. Similarly, if you have an integer negation function, you don't have to test everything - you just test for a positive number, a negative number, and zero - because you know that there are no other test cases to test.

    The equivalent of testing all integers is testing all possible combinations of high/low states. Making sure that only those two states can occur is a separate verification step.

    And with that approach, you'll miss a bug that only happens if the voltage on some connector is between 4.93V and 4.95V, which is somewhere in the middle of the high signal range. But you don't care, because you know that even if the voltage is in that range, it's logically impossible for it to have any effect on how the chip works (physical malfunction on the transistor level notwithstanding). Similarly, if you see that your code only contains "if x < 0", it's logically impossible for 4 to behave any differently than 5. And it's not just restricted to trivial cases - you can do it for every function, as long as you can identify and control every input. And to make it practical, you need a lot of mocking too.

    Wanted to respond to the other part but I'm on mobile and too scared I'll lose what I already have.



  • @gąska said in WTF Bites:

    Similarly, if you see that your code only contains "if x < 0", it's logically impossible for 4 to behave any differently than 5. And it's not just restricted to trivial cases - you can do it for every function, as long as you can identify and control every input.

    I think I've already addressed why white-box testing is a bad idea.

    For the rest, @dkf expressed my opinion more eloquently and in fewer words than I did.


  • Discourse touched me in a no-no place

    @gąska said in WTF Bites:

    By applying some mathematical laws, you can significantly reduce the number of required tests by dozens of orders of magnitude.

    This is utter BULLSHIT. I've worked on doing this stuff, and it really doesn't work out the way you want. If you analyse the program, you can build up a model of the future executions of it, and these generally split into the possibles and the guarantees; reality will tend to be somewhere in between. Given a set of inputs to a function, you know that at least the guarantees will hold at the end, and know that the possibles could hold, but you can't really say much more than that. You can then take that and run those through the decision points in the program (i.e., identifying what the control expressions are for branching and looping) and those effectively (with lots of work) generate the proposed theorem that you need to prove to figure out if the program is correct. Unfortunately, for a very large fraction of programs you have a very complex problem at that point: you're in maths, but you're probably pointing at a ghastly mess of maths that doesn't really help very much at all. Yes, you can then do things to prove that the state space is finite (only really easy when you've got a proof by construction; trivial case: a foreach loop over a finite list), but there's no guarantee that you've got that nice case, even without dipping into the evil mess that is the constructs proposed by Gödel and Turing.

    Proving correctness is not the same thing at all as testing, and is very hard. Testing is much easier since it is just showing that much simpler things are true (e.g., that f(123) = 132 instead of proving that f does certain types of permutations in some cases and other types in others).



  • @dkf It also kind of ignores the question of "If it is possible to test for everything then why aren't we already doing it?"


  • Discourse touched me in a no-no place

    @dfdub said in WTF Bites:

    It doesn't sound crazy because it's groundbreaking, but because you need an incredible amount of computing power to even run those tests.

    This. You're talking about analyses that are EXP2SPACE in the size of the input program, IIRC. That's one of the “fucking hell that's utterly crazy” complexity classes that you can't tackle for anything bigger than trivial problems.


  • Discourse touched me in a no-no place

    @rhywden said in WTF Bites:

    "If it is possible to test for everything then why aren't we already doing it?"

    The answer to that one is “if you want to test everything, get on with it and WRITE ALL THE THINGS TESTS!”



  • @dkf said in WTF Bites:

    @rhywden said in WTF Bites:

    "If it is possible to test for everything then why aren't we already doing it?"

    The answer to that one is “if you want to test everything, get on with it and WRITE ALL THE THINGS TESTS!”

    And then you promptly forget about the edge case which only happens when new year's eve coincides with a full moon or something.


  • Banned

    @dfdub said in WTF Bites:

    @gąska said in WTF Bites:

    Similarly, if you see that your code only contains "if x < 0", it's logically impossible for 4 to behave any differently than 5. And it's not just restricted to trivial cases - you can do it for every function, as long as you can identify and control every input.

    I think I've already addressed why white-box testing is a bad idea.

    Now it's you confusing white-box testing with implementation testing. White-box testing is about knowing the code, and tailoring the tests to the current implementation - you can do that with interface tests alone!



  • @dkf said in WTF Bites:

    The answer to that one is “if you want to test everything, get on with it and WRITE ALL THE TESTS!”

    I kinda want to recommend @Gąska to my employer as a QA engineer and give him exactly this task just to prove to him how wrong he is.



  • @gąska said in WTF Bites:

    White-box testing is about knowing the code, and tailoring the tests to the current implementation - you can do that with interface tests alone!

    And then you have to change the tests every time you change the code if you want to keep the guarantee you were originally going for (every branch in the code is accounted for). Which kind of defeats the point of interface tests.



  • @gąska said in WTF Bites:

    Hardware guys do try, and they have appropriate methodologies all figured out so they can actually guarantee that no set of inputs will ever break their chip.

    Ha, ha, no. We do push for very high test coverage, but we can only measure coverage compared to the identified cases that need to be covered. There's always a possibility that something we didn't think of will result in a breaking bug. We do a lot of "what if ..." to try to think of everything that could go wrong, and we use randomized testing so there's a reasonable chance of hitting bugs we didn't think to check for, but there's no guarantee.

    There are formal techniques that may be able to prove correctness. But those are, unfortunately, outside my area of expertise. Also, "Beware of bugs in the above code; I have only proved it correct, not tried it."



  • @dfdub said in WTF Bites:

    Testing all combinations is literally impossible and if you want your test runs to take "only" hours instead of weeks, you cannot even get close to full coverage. (Unless you're using the most basic coverage metric "this line has been executed by some test" - that's possible.)

    A full test suite for a chip significantly less complicated than a microprocessor (probably has one or more embedded CPUs, but we generally assume ARM or whoever we licensed the CPU from did their job correctly) often takes a weekend to run. Start running Friday night and check the results on Monday morning; sometimes Monday evening, or Tuesday morning, or ...

    We do typically require 100% code coverage (excluding unreachable code; "unreasonable" was stupid autocorrect), but more importantly, >99% functional coverage. Functional coverage is defined as all those "what if..." things we thought of as worth checking, and there can be millions of them. Fortunately, randomized testing tends to hit most of them eventually, so it isn't necessary to write millions of tests.
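
A drastically simplified software analogue of that flow, assuming a hypothetical 8-bit adder as the design under test (the bins and the reference model are invented for illustration, not a description of any real verification environment):

```cpp
// Sketch of randomized testing against functional-coverage bins: random
// stimulus is driven until every "what if..." bin has been hit at least
// once, with results checked against a trusted reference model.
#include <cassert>
#include <cstdint>
#include <random>

uint8_t add8(uint8_t a, uint8_t b) { return static_cast<uint8_t>(a + b); }

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 255);

    bool hit_overflow = false;      // bin: what if the result wraps around?
    bool hit_zero_operand = false;  // bin: what if an operand is zero?
    bool hit_max_operand = false;   // bin: what if an operand is 255?

    while (!(hit_overflow && hit_zero_operand && hit_max_operand)) {
        uint8_t a = static_cast<uint8_t>(dist(rng));
        uint8_t b = static_cast<uint8_t>(dist(rng));

        // Compare the design under test against the reference model.
        assert(add8(a, b) == static_cast<uint8_t>((a + b) & 0xFF));

        // Record which coverage bins this random case happened to hit.
        if (static_cast<int>(a) + b > 255) hit_overflow = true;
        if (a == 0 || b == 0)              hit_zero_operand = true;
        if (a == 255 || b == 255)          hit_max_operand = true;
    }
}
```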


  • Considered Harmful

    @gąska said in WTF Bites:

    @topspin said in WTF Bites:

    @gąska to exhaustively black-box test even a super simple function like bool is_odd(uint64_t) you need to test 2^64 = 18,446,744,073,709,551,616 inputs.

    False. By applying some mathematical laws, you can significantly reduce the number of required tests by dozens of orders of magnitude. It might not be literally every case, but it will test the code as if it were.

    It's a proven fix!


  • Discourse touched me in a no-no place

    @hardwaregeek said in WTF Bites:

    A full test suite for a chip significantly less complicated than a microprocessor (probably has one or more embedded CPUs, but we generally assume ARM or whoever we licensed the CPU from did their job correctly) often takes a weekend to run.

    The testing for chips with non-trivial NoCs is horrendously difficult by comparison with most of the rest of a CPU because that's the sort of component where asynchronicity, timings and blocking behaviour are critical. We're still trying to sort that thing out in our next-gen system; our silicon designer's pretty good, but it's deeply tricky stuff.

    Sometimes it's even us software people who write the stuff to go on top that spot issues, such as noticing that the voltage scaling doesn't just break some of the faster comms, but does so in a way that fails silently (That's Bad). We don't know yet (as of around a week ago) what we're going to do about that…


  • BINNED

    @dkf said in WTF Bites:

    The answer to that one is “if you want to test everything, get on with it and WRITE ALL THE THINGS TESTS!”

    And how are we doing that? For my is_odd function I guess I'd then have to write a code generator to generate all these tests. Never mind that it would never finish creating the tests (or the disk space they'd need) - that generator would be at least as complicated as the code to be tested.
    So now we're in the "Who watches the Watchmen" category.
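
For scale, a sketch of what "all these tests" would amount to if written as a single exhaustive loop rather than generated files (which would be even worse on disk); at a billion checks per second it would still run for roughly 585 years:

```cpp
// Hypothetical exhaustive check of bool is_odd(uint64_t): the oracle is
// barely simpler than the function under test, and the loop visits all
// 2^64 inputs - roughly 585 years of work at 10^9 checks per second.
#include <cassert>
#include <cstdint>

bool is_odd(uint64_t x) { return (x & 1) != 0; }

int main() {
    uint64_t x = 0;
    do {
        assert(is_odd(x) == (x % 2 == 1));  // reference oracle
    } while (++x != 0);  // wraps to 0 only after UINT64_MAX, i.e. effectively never
}
```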


  • Considered Harmful

    THIS.
    https://i.imgur.com/lAfjTWV.png
    THIS STUPID FUCKING WINDOW.
    EVERY GODDAMN DAY.
    I need this application about once a week. And honestly at this point it may actually be easier to uninstall and reinstall it every time. Because every fucking day this window will appear, and steal focus immediately for no reason. And then if I don't hit snooze before alt tabbing back to whatever I was doing, it steals the focus AGAIN.
    Some games respond nicely to being forcibly tabbed out. Some don't and crash. Sometimes it doesn't matter because I'm in the middle of a battle or whatever. Sometimes it steals focus when I'm typing, instead, and I'll hit spacebar when the OK button is selected and then it'll lag the entire system for a good five minutes while downloading an update.
    I do not need to be notified on every single GitHub commit. The developer of this insanity needs a good thumping.



  • @pie_flavor "The version you are currently running [...] is Chat: Re-ordered message menu options" :wtf:



  • @topspin said in WTF Bites:

    So now we're in the "Who watches the Watchmen" category.

    Definitely. When a test fails, sometimes the hardest part of debugging it is figuring out whether the fault is in the hardware, or the test, or the test infrastructure.


  • Notification Spam Recipient

    @lb_ said in WTF Bites:

    @pie_flavor "The version you are currently running [...] is Chat: Re-ordered message menu options" :wtf:

    "Improved typing performance"

    Because text boxes are hard...


  • BINNED

    @tsaukpaetra Have you used ⛔ 👶 ?


  • Notification Spam Recipient

    @topspin said in WTF Bites:

    @tsaukpaetra Have you used ⛔ 👶 ?

    You'll notice I did not have a 🚎


  • Considered Harmful

    This is absolutely beautiful.

    Once you are done laughing at the question, look at the asker. Instead of writing 'android studio basic' as the question tag, they wrote it as their goddamn username.



  • @pie_flavor said in WTF Bites:

    Once you are done laughing at the question, look at the asker. Instead of writing 'android studio basic' as the question tag, they wrote it as their goddamn username.

    There should be some kind of internet award for putting relevant information in the most incorrect place possible.



  • CVE-2018-11235 – Remote code execution in git:

    In Git before 2.13.7, 2.14.x before 2.14.4, 2.15.x before 2.15.2, 2.16.x before 2.16.4, and 2.17.x before 2.17.1, remote code execution can occur. With a crafted .gitmodules file, a malicious project can execute an arbitrary script on a machine that runs "git clone --recurse-submodules" because submodule "names" are obtained from this file, and then appended to $GIT_DIR/modules, leading to directory traversal with "../" in a name. Finally, post-checkout hooks from a submodule are executed, bypassing the intended design in which hooks are not obtained from a remote server.

    Good writeup on how it works.



  • @dcoder how the hell did they find any other Git security bug before the one that's just "add three extremely normal path characters to a path in a human-editable text file"?



    @ben_lubar Good question. I suppose since "they", as in git developers, don't have dedicated security staff or fuzzers, regular reviewers don't all consider security implications in reviews, and they take cues from Torvalds and his "Some security people have scoffed at me when I say that security problems are primarily 'just bugs'. Those security people are f*cking morons." POV, these sorts of things can slip by.


  • Banned

    Wikipedia article on "Boxes and arrows" redirects to "Flow diagram". At first I thought it was some kind of joke, but the edit history suggests they did it seriously.


  • area_can

    @gąska said in WTF Bites:

    Using mathematics, you can prove that testing with the entire set of all possible input values is equivalent to testing with a much smaller set of values, given you pick them properly, and testing with this smaller set is well within "possible" territory, and for most projects it's very near the "feasible" border.

    This assumes that the program treats equivalent inputs as equivalent, making assumptions about the correctness of the implementation before it has been tested.



  • @ben_lubar said in WTF Bites:

    @pie_flavor said in WTF Bites:

    Once you are done laughing at the question, look at the asker. Instead of writing 'android studio basic' as the question tag, they wrote it as their goddamn username.

    There should be some kind of internet award for putting relevant information in the most incorrect place possible.

    This guy would lose to the guy whose question read "Android basics" with username "Hello I want to print titles"

