Having problems with unit testing philosophy.



  • @Jaloopa said in Having problems with unit testing philosophy.:

    Unit tests aren't enough, but they're a useful part of your toolbox

    When you only look at the benefits, which I agree do exist, it's very easy to come to that conclusion. But when you factor in the drawbacks as well, that's where things get a bit more complicated. Based on what I've seen, I simply can't support that conclusion. Low-level unit tests are of negative net value; overall they cause more problems than they solve.


  • ♿ (Parody)

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    Consider red/green testing. Basically, write your test before your implementation. See that it starts red (failing). Now implement. It should turn green (passing).

    It also ensures your tests actually test something.

    Yes, that's the specific problem that red/green testing solves. It doesn't solve the broader problem of "automated tests only test specifically that which you thought to write an automated test for." Only human testers solve that problem.

    Human testers don't necessarily test everything either. Just give it to the users. They'll find edge cases you'd never think of in a million years.



  • The first thing to say here is that code coverage metrics do have some value, but using them as a target is a bad idea and leads to unnecessary tests. For example, a project I worked on once had a whole bunch of tests written against property accessors - but the tests against the actual calculation code were missing or @Ignored! Their coverage metric looked good, but it was totally pointless.

    So (unless your coverage is really low) writing tests just because you have a coverage target is bad practice.

    As for the tests themselves:

    A good unit test should be testing behaviour of the unit.

    Behaviour means what the unit does, not how it does it. In a mainstream language that means calls to public methods (sometimes protected ones if you're writing a library that can be externally extended) and verifications of results and out/ref parameters. Depending on how you define your units it can also involve verifications of calls against dependencies.

    Tests are tests of behaviour, not methods. One method is likely to need several tests (at least a 'happy path' test and verification of validation and error states).
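
    To sketch that (Jasmine-style TypeScript; parseAge is hypothetical, invented for illustration): one public method, three behaviour tests, none of which care how parseAge does its work.

    // Hypothetical unit under test.
    function parseAge(input: string): number {
      const age = Number(input);
      if (!Number.isInteger(age) || age < 0 || age > 150) {
        throw new RangeError(`not a plausible age: ${input}`);
      }
      return age;
    }

    describe('parseAge', () => {
      it('parses a valid age (happy path)', () => {
        expect(parseAge('42')).toBe(42);
      });

      it('rejects non-numeric input (validation)', () => {
        expect(() => parseAge('forty-two')).toThrowError(RangeError);
      });

      it('rejects implausible values (error state)', () => {
        expect(() => parseAge('-3')).toThrowError(RangeError);
      });
    });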

    The unit is often a class, but it doesn't need to be. It can often be convenient to write tests against a higher-level orchestration or wrapper service, wired up with real dependencies, and call that package of classes a testable unit. Unit and integration tests are differentiated by whether they depend on things outside the project (like file systems or databases); there's nothing wrong with a unit test being a multi-layer run-through of a full application scenario if that's appropriate.

    Also, remember that there are different types of testing. The three main ones for devs are unit tests, service integration tests and scenario-level system tests with real or well-mocked external dependencies. You don't need to test everything three times; your code coverage is the union of these, and often only the first is counted in a simple metric. "I don't need to unit test this because the system tests cover it" can be valid, especially for customer-facing 'units' like web controllers or UI controller classes.

    Two more general principles regarding whether a test is worthwhile:

    • If your code is broken, would the test fail? If not, it isn't a sufficiently well specified test.
    • If you refactor the unit to achieve the same result in a different way, would the test fail? If so, it is too tightly coupled to implementation and dependencies, not behaviour.

    The second one is a bit flexible, as it's hard to mock and verify dependencies in a way that doesn't couple the test to the implementation at all, but you should at least keep that coupling to a minimum.

    On 'pointless' tests, remember that you're not only testing what the code does now, but also making sure people don't break it in future. It may seem trivial to have

    func Act() { Run(); Jump(); }

    @Test bob_acts() {
        bob.Act();
        verify(bob.Run);
        verify(bob.Jump);
    }
    

    But that super-simple implementation may not last. Maybe bob.Act becomes a web service call, or Bob is instantiated from some IOC dependency mechanism, or Run depends on the state of a system-wide option, or the list of actions is read from user configuration, but you still want the default behaviour to be the same. Your test is not just testing the code, it is defining behaviour, and that behaviour should stay the same even if the code is changed.

    EDIT: however, this dummy example is bad, because you shouldn't be verifying internal calls anyway. If Run and Jump were calls on some dependency rather than Bob's own internals, then verifying that Act makes them could be justified.

    In reality your test here should be

    @Test bob_acts() {
        Location position = bob.position();
        bob.Act();
        Assert(bob.position() == position + { x: 5, y: 0, z: 1 }); // moved sideways and up
    }
    

    i.e. you should be checking the expected outcome of having run Act, not the internal calls it makes.



  • @boomzilla said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    Consider red/green testing. Basically, write your test before your implementation. See that it starts red (failing). Now implement. It should turn green (passing).

    It also ensures your tests actually test something.

    Yes, that's the specific problem that red/green testing solves. It doesn't solve the broader problem of "automated tests only test specifically that which you thought to write an automated test for." Only human testers solve that problem.

    Human testers don't necessarily test everything either. Just give it to the users. They'll find edge cases you'd never think of in a million years.

    To some extent, yes. But there are two problems with that:

    1. For the tester, it is their job to test stuff, try to break it, and find bugs. That's not the user's job. You're not paying them to do so; frequently, they're the ones paying you for the software they're finding the bugs in. That means they have no incentive not to tell people that your software is full of bugs, which leads to fewer people paying you for your software.
    2. End users suck at error reporting. They aren't trained in concepts like "reproducibility" or "writing a good bug report." The vast majority of reports you'll get from users will be in the form "X doesn't work", which is mostly-to-completely useless to a developer trying to track down and fix the problem.

  • ♿ (Parody)

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    Consider red/green testing. Basically, write your test before your implementation. See that it starts red (failing). Now implement. It should turn green (passing).

    It also ensures your tests actually test something.

    Yes, that's the specific problem that red/green testing solves. It doesn't solve the broader problem of "automated tests only test specifically that which you thought to write an automated test for." Only human testers solve that problem.

    Human testers don't necessarily test everything either. Just give it to the users. They'll find edge cases you'd never think of in a million years.

    To some extent, yes. But there are two problems with that:

    1. For the tester, it is their job to test stuff, try to break it, and find bugs. That's not the user's job. You're not paying them to do so; frequently, they're the ones paying you for the software they're finding the bugs in. That means they have no incentive not to tell people that your software is full of bugs, which leads to fewer people paying you for your software.
    2. End users suck at error reporting. They aren't trained in concepts like "reproducibility" or "writing a good bug report." The vast majority of reports you'll get from users will be in the form "X doesn't work", which is mostly-to-completely useless to a developer trying to track down and fix the problem.

    You forgot #3: it's a joke meant to cope with the exasperation of not being able to find that stuff before the lusers do.



  • @dkf said in Having problems with unit testing philosophy.:

    Coverage is useful. Coverage metrics less so.

    Fully agree, but managers can and will measure the latter because they don't understand the former, so defining a reasonable threshold (like 80%) can be helpful.

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    when you seed both the codebase and the tests randomly with bugs and tell someone to fix it, people who subscribe to the "tests as documentation of correct behavior" philosophy have a strong tendency to treat the tests as documentation of correct behavior, and they end up "fixing" the codebase to match the buggy tests.

    Well, that's not really surprising: If what you treat as your spec is wrong, you will implement incorrect things. This doesn't seem like a useful observation about unit tests in particular. The only way to catch spec errors has always been acceptance testing, which you should be doing either way.

    The test you're describing also seems somewhat artificial: If the unit tests have serious bugs, you usually notice that immediately, since you write the tests for a requirement at the same time you implement the new requirement.



  • @dfdub said in Having problems with unit testing philosophy.:

    The test you're describing also seems somewhat artificial: If the unit tests have serious bugs, you usually notice that immediately, since you write the tests for a requirement at the same time you implement the new requirement.

    It's an artificial proxy for a real-world issue: requirements changing. Let's say requirement X changed and so you have to change code Y. This causes test Z to break, which does not appear at first glance to be directly related to requirement X. Is the problem that you coded your changes wrong, or is it that test Z is no longer valid and there's something deeper going on?


  • ♿ (Parody)

    @dfdub said in Having problems with unit testing philosophy.:

    @dkf said in Having problems with unit testing philosophy.:

    Coverage is useful. Coverage metrics less so.

    Fully agree, but managers can and will measure the latter because they don't understand the former, so defining a reasonable threshold (like 80%) can be helpful.

    It's also useful to already have something there when the bug report shows up. You still need to figure out how to modify the test to expose the bug, but having the data / mocks already set up and the call already being made is a real head start. And having at least a successful execution of (presumably) the happy path is not worthless, either.



  • @Gąska said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    So,

    bob.JumpAndRun();
    test if bob called jump
    test if bob called run

    actual code

    bob.JumpAndRun() { Jump(); Run(); }

    Like I said, complete tautologies, why is this being done?

    Accidents happen. One day someone might modify the code to do just a bit more logging and accidentally disable the call to Run.

    More likely, someone will refactor the code and misunderstand what JumpAndRun was meant to achieve - this test won't catch that.

    Rather than caring about Running and Jumping, it might be better to have separate tests for the desired side effects (as sketched below):

    Bob moves from A to B
    Bob becomes more fit
    Jumping must happen before running
    Rita and Sue are satisfied

    This avoids problems caused by the well-meaning programmer who puts Bob on a treadmill.
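
    A hedged sketch of what those outcome tests could look like (Jasmine-style TypeScript; Bob's position, fitness and event log are hypothetical observables invented for illustration - Rita and Sue are left as an exercise):

    // Hypothetical Bob whose outcomes are observable, not just his internals.
    class Bob {
      private x = 0;
      private fit = 0;
      readonly events: string[] = [];
      position() { return this.x; }
      fitness() { return this.fit; }
      act() {
        this.events.push('jump'); this.fit += 1;              // jumping
        this.events.push('run');  this.fit += 2; this.x += 5; // running
      }
    }

    describe('Bob.act', () => {
      let bob: Bob;
      beforeEach(() => { bob = new Bob(); });

      it('moves Bob from A to B', () => {
        const start = bob.position();
        bob.act();
        expect(bob.position()).not.toBe(start);
      });

      it('makes Bob fitter', () => {
        const before = bob.fitness();
        bob.act();
        expect(bob.fitness()).toBeGreaterThan(before);
      });

      it('jumps before running', () => {
        bob.act();
        expect(bob.events.indexOf('jump')).toBeLessThan(bob.events.indexOf('run'));
      });
    });

    The treadmill refactor fails the first test, which is exactly the point: the tests survive any implementation change that preserves the outcomes, and catch the ones that don't.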


  • Banned

    @japonicus depends on whether jumping and running are internal methods or external interfaces. I assumed the latter. As someone said upthread, you should NEVER test internals. But you should test ALL externally visible effects (except logging and other debug-only stuff).



  • @japonicus said in Having problems with unit testing philosophy.:

    @Gąska said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    So,

    bob.JumpAndRun();
    test if bob called jump
    test if bob called run

    actual code

    bob.JumpAndRun() { Jump(); Run(); }

    Like I said, complete tautologies, why is this being done?

    Accidents happen. One day someone might modify the code to do just a bit more logging and accidentally disable the call to Run.

    More likely, someone will refactor the code and misunderstand what JumpAndRun was meant to achieve - this test won't catch that.

    Rather than caring about Running and Jumping, it might be better to have separate tests for the desired side effects:

    Bob moves from A to B
    Bob becomes more fit
    Jumping must happen before running
    Rita and Sue are satisfied

    This avoids problems caused by the well-meaning programmer who puts Bob on a treadmill.

    Now we're entering the territory of creating a Test Scenario and even a Test Plan... something surprisingly hard. Very few people can do it competently, especially among developers. That's why QA is such an important and well-paid job.



  • @Mason_Wheeler said in Having problems with unit testing philosophy.:

    Let's say requirement X changed and so you have to change code Y. This causes test Z to break, which does not appear at first glance to be directly related to requirement X.

    But isn't that a good thing, as it shows you how a change in requirement X affects different requirements covered by test Z? The end result is (ideally) that the developer asks for clarification, as they notice an unforeseen interaction with an existing requirement. No reasonable programmer would willingly implement the new requirement incorrectly because of existing unit tests when faced with this scenario. And if they did, they wouldn't get away with it, since the new implementation doesn't actually support the changes to X.


  • Discourse touched me in a no-no place

    @Gąska said in Having problems with unit testing philosophy.:

    you should NEVER test internals

    Except when you should. Never testing internals is black-box testing, and is definitely valuable, but sometimes you need to instead use clear-box testing (where you can see what's going on inside) to check critical use cases. It typically happens where the case being tested is theoretically reachable but really hard to trigger when deployed.

    Clear-box tests are expected to be a lot more fragile when things are modified. As long as this is known and acknowledged, it's not a big problem.
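
    For illustration, a minimal sketch of the kind of clear-box test I mean (TypeScript; the RateLimiter and its private token count are invented for illustration):

    // The "bucket exhausted" branch is theoretically reachable but hard to
    // trigger through the public API alone.
    class RateLimiter {
      private tokens = 1_000_000;
      tryAcquire(): boolean {
        if (this.tokens <= 0) return false; // rarely reached when deployed
        this.tokens--;
        return true;
      }
    }

    it('rejects requests once the bucket is exhausted', () => {
      const limiter = new RateLimiter();
      // Clear-box step: force the rare state directly instead of making a
      // million calls. Fragile by design - it breaks if the field is renamed.
      (limiter as any).tokens = 0;
      expect(limiter.tryAcquire()).toBe(false);
    });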


  • Banned

    @dkf said in Having problems with unit testing philosophy.:

    @Gąska said in Having problems with unit testing philosophy.:

    you should NEVER test internals

    Except when you should.

    Then they shouldn't be internals.

    To make my post clearer - I mean the internals of the unit under test. A small internal class with two public methods SHOULD have those two methods tested in every way possible. But NONE of its private members should be checked for correct values at any point. ONLY test what's publicly visible - but test EVERYTHING that's publicly visible.

    If you feel like it's of utmost importance to validate private members - it's time to refactor.

    You don't need clear-box testing if you have black-box unit tests. I mean, what do you expect to gain from it if you already know that every step in the pipeline works flawlessly in isolation, and black-box integration tests tell you the whole process is okay as well?


  • ♿ (Parody)

    @dfdub said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    Let's say requirement X changed and so you have to change code Y. This causes test Z to break, which does not appear at first glance to be directly related to requirement X.

    But isn't that a good thing, as it shows you how a change in requirement X affects different requirements covered by test Z? The end result is (ideally) that the developer asks for clarification, as they notice an unforeseen interaction with an existing requirement. No reasonable programmer would willingly implement the new requirement incorrectly because of existing unit tests when faced with this scenario. And if they did, they wouldn't get away with it, since the new implementation doesn't actually support the changes to X.

    Yes, that's one of the main benefits of automated testing! Obviously, a developer could do the wrong thing with Z, but that's always a risk, and it's better than ignoring Z and creating one of those problems that users find. Unless you have a really small application and your human testers can actually do a full regression test of everything. I know we definitely can't.



  • @Gąska said in Having problems with unit testing philosophy.:

    You don't need clear-box testing if you have black-box unit tests. I mean, what do you expect to gain from it if you already know that every step in the pipeline works flawlessly in isolation, and black-box integration tests tell you the whole process is okay as well?

    And this is what I mean about the harm unit testing causes. It gets you into the mindset that "you already know that [the code] works flawlessly."

    No, you don't. If all your tests pass, you know that all your tests pass, nothing more. You don't know that the tests are covering every relevant case. You don't know that the specifications you're testing are correct. And you don't know that the test code itself is free from bugs.

    I've opened up a library that boasted of 90%+ test coverage and 100% on all critical paths, and found serious bugs within the first 5 minutes because I tried something the author just never thought of. But his tests told him everything was flawless...


  • Java Dev

    @Mason_Wheeler nods Testing can only prove the presence of bugs, not their absence.



  • @apapadimoulis said in Having problems with unit testing philosophy.:

    Unit Testing is a problem when you start thinking and designing in Units.

    I think I understand this.

    We've moved in that direction since it's easier to unit test and deploy IOC when you have mockable interfaces. So anything reusable ends up being an interface. Static classes and methods are gone, even if they would contain no data on their own and simply encapsulate functions that are used in more than one class.

    That drives me insane. Surely we can just wrap the damn thing in an interface for testing purposes? Nope, it's now an IOCed interface.

    @apapadimoulis said in Having problems with unit testing philosophy.:

    TDD practitioners see Unit-based software as easier to write

    I think it's a seesaw problem. The god-class was terrible, so we've swung way too far to the other side.

    @dfdub said in Having problems with unit testing philosophy.:

    Unit tests stop being useful and start being harmful when they test internals nobody actually cares about instead of well-defined functionality of a logical unit which provides a useful API boundary.

    Yes. That's what I'm struggling with. I see nothing to gain. And when I express this, I'm met with regurgitated arguments that have not been substantiated within our code.

    I've never had a unit test identify broken code. I've had it identify broken unit tests, ones that we forgot to update.

    And we STILL have bugs. TONS of bugs. Even in unit tested code. Why? Because the vast majority of our bugs are logic bugs, where the test is agreeing with the broken code.


  • BINNED

    @boomzilla said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    Consider red/green testing. Basically, write your test before your implementation. See that it starts red (failing). Now implement. It should turn green (passing).

    It also ensures your tests actually test something.

    Yes, that's the specific problem that red/green testing solves. It doesn't solve the broader problem of "automated tests only test specifically that which you thought to write an automated test for." Only human testers solve that problem.

    Human testers don't necessarily test everything either. Just give it to the users. They'll find edge cases you'd never think of in a million years.

    boomzilla Sent from my Windows 10 Phone.

    Can I post this here? Or are we still in the on-topic posts only phase that goes with the help category?


  • ♿ (Parody)

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @Gąska said in Having problems with unit testing philosophy.:

    You don't need clear-box testing if you have black-box unit tests. I mean, what do you expect to gain from it if you already know that every step in the pipeline works flawlessly in isolation, and black-box integration tests tell you the whole process is okay as well?

    And this is what I mean about the harm unit testing causes. It gets you into the mindset that "you already know that [the code] works flawlessly."

    Uh...I have never experienced this feeling.


  • ♿ (Parody)

    @xaade said in Having problems with unit testing philosophy.:

    I've never had a unit test identify broken code. I've had it identify broken unit tests, ones that we forgot to update.

    I've had both.


  • Banned

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @Gąska said in Having problems with unit testing philosophy.:

    You don't need clear-box testing if you have black-box unit tests. I mean, what do you expect to gain from it if you already know that every step in the pipeline works flawlessly in isolation, and black-box integration tests tell you the whole process is okay as well?

    And this is what I mean about the harm unit testing causes. It gets you into the mindset that "you already know that [the code] works flawlessly."

    Context. I used those words in particular context. Of course unit tests will only cover 0.01% of all possible cases AT BEST and do fuck all to detect bugs that are actually likely to pass unnoticed through proper code review (once again I'm employing hyperbole to invoke cognitive dissonance in you to make you realize I'm not an idiot who thinks test coverage is the solution to all problems that plague the Earth).

    But I wasn't talking about believing in tests. I was talking about something completely different. I was asking one very specific question: what do clear-box integration tests give you that black-box unit tests don't?

    Protip: if something I said sounds borderline insane, you most likely misunderstood something. Instead of treating me like a 6 year old child who just heard of unit testing on Sesame Street five minutes ago, please ask clarifying questions instead. It really takes all the joy away from the forum when people treat you like a slightly more articulate mutation of a vegetable. And you do this all the time as of late.



  • @boomzilla said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    I've never had a unit test identify broken code. I've had it identify broken unit tests, ones that we forgot to update.

    I've had both.

    Oh, I'm not doubting it.

    But at this rate, the times our tests catch a real break will be rare compared to the times they merely flag tests that are themselves broken.


  • ♿ (Parody)

    @xaade also, yes.



  • @Gąska said in Having problems with unit testing philosophy.:

    what do clear-box integration tests give you that black-box unit tests don't?

    Easy.

    WHY you are calling the dependencies in this manner - that's what's missed by unit tests.


  • Java Dev

    @boomzilla said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    I've never had a unit test identify broken code. I've had it identify broken unit tests, ones that we forgot to update.

    I've had both.

    Last week, I had a test (or actually most of them) identify a broken source control system.



  • @boomzilla Well, apparently @Gąska does. And he's not alone. That's literally how the practice is "marketed" to developers:

    think about what would happen if you walked in a room full of people working [by the rules of TDD]. Pick any random person at any random time. A minute ago, all their code worked.

    Let me repeat that: A minute ago all their code worked! And it doesn't matter who you pick, and it doesn't matter when you pick. A minute ago all their code worked!

    People are literally taught, in so many words, that if you build everything the TDD way, then you have a magical assurance that at any given time, all your code (except for the very newest bits you just wrote) is working.


  • Discourse touched me in a no-no place

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    People are literally taught, in so many words, that if you build everything the TDD way, then you have a magical assurance that at any given time, all your code is working.

    The people doing the teaching of that must have such simple code to be able to characterise it fully with test cases that they can write correctly ahead of time.


  • ♿ (Parody)

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla Well, apparently @Gąska does. And he's not alone. That's literally how the practice is "marketed" to developers:

    think about what would happen if you walked in a room full of people working [by the rules of TDD]. Pick any random person at any random time. A minute ago, all their code worked.

    Let me repeat that: A minute ago all their code worked! And it doesn't matter who you pick, and it doesn't matter when you pick. A minute ago all their code worked!

    People are literally taught, in so many words, that if you build everything the TDD way, then you have a magical assurance that at any given time, all your code is working.

    I've yet to meet anyone who actually followed TDD. The closest is @TheCPUWizard, who does claim to follow it. Still, I suspect you're overstating things when you say that they're taught that there are no bugs.


  • Banned

    @xaade said in Having problems with unit testing philosophy.:

    @Gąska said in Having problems with unit testing philosophy.:

    what do clear-box integration tests give you that black-box unit tests don't?

    Easy.

    WHY you are calling the dependencies in this manner - that's what's missed by unit tests.

    Can't you infer that from the input provided through the public interfaces? If the logging module is retrieving user data from server #3, it's because it was told earlier that the data resides on server #3 - and it was told that because the test set it up that way. Can you give me an example scenario where that wouldn't be sufficient and clear-box really becomes necessary?



  • @boomzilla said in Having problems with unit testing philosophy.:

    Still, I suspect you're overstating things when you say that they're taught that there are no bugs.

    It's a direct quotation, from no less an authority on the subject than Uncle Bob himself. How else are we to interpret "all [the] code work[s]"? The plain English meaning of that is "there are no bugs."


  • ♿ (Parody)

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    Still, I suspect you're overstating things when you say that they're taught that there are no bugs.

    It's a direct quotation, from no less an authority on the subject than Uncle Bob himself. How else are we to interpret "all [the] code work[s]"? The plain English meaning of that is "there are no bugs."

    Wrongly. "Code works" is a necessary but not sufficient condition for "no bugs."


  • Considered Harmful

    @PleegWat said in Having problems with unit testing philosophy.:

    @Mason_Wheeler nods Testing can only prove the presence of bugs, not their absence.

    “Beware of bugs in the above code; I have only proved it correct, not tried it.” - Donald Knuth



  • @Mason_Wheeler said in Having problems with unit testing philosophy.:

    People are literally taught, in so many words, that if you build everything the TDD way, then you have a magical assurance that at any given time, all your code (except for the very newest bits you just wrote) is working.

    You do. Your code is, by definition, working.

    That doesn't mean that there aren't bugs - but in the TDD mindset, those bugs are in the spec: if you find unwanted behaviour but the tests are all green, the bug is that you are missing a test.

    I actually like this, although I'm not a strict TDDer. But in a code-driven rather than test-driven mindset, a 'bug' is the really vague 'I saw unwanted behaviour', and there's little verification that it's actually fixed. In a test-driven mindset, a 'bug' is 'you didn't specify this scenario', which leads to the question 'what should it do?'; the answer then gets written into the spec (i.e. a test gets fixed or added) - and you can then be sure that the bug stays fixed, because it's now in the spec.

    As with all things, some acolytes of TDD (or BDD, which is the same thing at a different level) get overly religious about it and think it solves all problems. But it is really useful, particularly for solving complex problems.
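
    To illustrate that workflow (a hedged Jasmine-style TypeScript sketch; normalizeUsername and the bug report are invented for illustration):

    // The bug report - "login fails when I paste my username with a trailing
    // space" - becomes a test first. It was red against the old code, which
    // didn't trim; that scenario simply wasn't in the spec.
    it('trims whitespace from usernames before lookup', () => {
      expect(normalizeUsername('alice ')).toBe('alice');
    });

    // The fix makes it green - and because the test stays in the suite, the
    // bug can't quietly come back.
    function normalizeUsername(raw: string): string {
      return raw.trim();
    }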



  • @boomzilla said in Having problems with unit testing philosophy.:

    Wrongly. "Code works" is a necessary but not sufficient condition for "no bugs."

    Yeah, exactly this (re my previous post)



  • @boomzilla said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    Still, I suspect you're overstating things when you say that they're taught that there are no bugs.

    It's a direct quotation, from no less an authority on the subject than Uncle Bob himself. How else are we to interpret "all [the] code work[s]"? The plain English meaning of that is "there are no bugs."

    Wrongly. "Code works" is a necessary but not sufficient condition for "no bugs."

    OK. Where are you drawing the distinction here? Like, what would be an example of working code that contains bugs?


  • ♿ (Parody)

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    Still, I suspect you're overstating things when you say that they're taught that there are no bugs.

    It's a direct quotation, from no less an authority on the subject than Uncle Bob himself. How else are we to interpret "all [the] code work[s]"? The plain English meaning of that is "there are no bugs."

    Wrongly. "Code works" is a necessary but not sufficient condition for "no bugs."

    OK. Where are you drawing the distinction here? Like, what would be an example of working code that contains bugs?

    Here's something simple:

    double divide( double a, double b ){
        return a / b;
    }
    

    This doesn't handle the case where b could be zero. You could also imagine that it's the caller's job to catch an exception or something, and failing to do that would be a bug, because the user should get an error message instead of a crash or whatever. Nevertheless, that code works to divide numbers.
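
    To make that concrete, a hedged Jasmine-style sketch in TypeScript mirroring the function above (the "should signal an error" requirement is invented for illustration):

    function divide(a: number, b: number): number {
      return a / b;
    }

    describe('divide', () => {
      it('divides numbers', () => {
        expect(divide(6, 3)).toBe(2); // the behaviour that "works"
      });

      it('rejects division by zero', () => {
        // This one fails against the code above: IEEE 754 division by zero
        // silently yields Infinity rather than raising an error.
        expect(() => divide(1, 0)).toThrow();
      });
    });

    "Code works" and "no bugs" part ways exactly at that second test.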


  • Considered Harmful

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    Still, I suspect you're overstating things when you say that they're taught that there are no bugs.

    It's a direct quotation, from no less an authority on the subject than Uncle Bob himself. How else are we to interpret "all [the] code work[s]"? The plain English meaning of that is "there are no bugs."

    Wrongly. "Code works" is a necessary but not sufficient condition for "no bugs."

    OK. Where are you drawing the distinction here? Like, what would be an example of working code that contains bugs?

    OK, you'll all say :trwtf: is JavaScript here, but I once discovered, while writing spec tests for a function, that while I thought it was returning "true" and "false" as expected, it was actually returning truthy and falsy values - that is, values that coerce to true or false.

    All the consuming code was implicitly coercing the return values, because they were being used in if or for statements.

    I was shocked when my first test, something like expect( foo( 5 ) ).to.be( true );, failed.

    This happened because && and || don't return strictly true or false in JS. They return the first falsy or the first truthy value, respectively.
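
    A minimal sketch of the trap (hypothetical foo, not the actual function from that codebase):

    // && returns its second operand when the first is truthy; || returns its
    // first truthy operand. Neither coerces the result to a boolean.
    function foo(n: number) {
      return n > 0 && n; // intended as "n is positive", but...
    }

    foo(5);  // => 5: truthy, yet not the value `true`
    foo(-1); // => false: here the comparison itself is the result

    // So `if (foo(5))` behaves as intended, while a strict expectation fails:
    // expect(foo(5)).to.be(true); // 5 !== true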



  • @error said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    Still, I suspect you're overstating things when you say that they're taught that there are no bugs.

    It's a direct quotation, from no less an authority on the subject than Uncle Bob himself. How else are we to interpret "all [the] code work[s]"? The plain English meaning of that is "there are no bugs."

    Wrongly. "Code works" is a necessary but not sufficient condition for "no bugs."

    OK. Where are you drawing the distinction here? Like, what would be an example of working code that contains bugs?

    OK, you'll all say :trwtf: is JavaScript here

    ...

    I was shocked when my first test, something like expect( foo( 5 ) ).to.be( true );, failed.

    :trwtf: is that syntax! 🤢



  • Banned

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    Still, I suspect you're overstating things when you say that they're taught that there are no bugs.

    It's a direct quotation, from no less an authority on the subject than Uncle Bob himself. How else are we to interpret "all [the] code work[s]"? The plain English meaning of that is "there are no bugs."

    Wrongly. "Code works" is a necessary but not sufficient condition for "no bugs."

    OK. Where are you drawing the distinction here? Like, what would be an example of working code that contains bugs?

    OK, you'll all say :trwtf: is JavaScript here

    ...

    I was shocked when my first test, something like expect( foo( 5 ) ).to.be( true );, failed.

    :trwtf: is that syntax! 🤢

    It's called "spaces inside parentheses". I hate it too. :tro-pop:



  • @Mason_Wheeler :trwtf: is that syntax! 🤢

    Ugh, yeah, I have a real problem with fluent libraries. If they could actually understand natural language it would be OK, but they don't; you still have to conform to exact syntax and word choice etc., so it's no more natural than writing AssertEquals(true, foo(5)). And I find it a lot harder to write and even read.



  • @bobjanova to.be.or.not.to.be.that.equals.trwtf()


  • kills Dumbledore

    @bobjanova said in Having problems with unit testing philosophy.:

    @Mason_Wheeler :trwtf: is that syntax! 🤢

    Ugh, yeah, I have a real problem with fluent libraries. If they could actually understand natural language it would be OK, but they don't; you still have to conform to exact syntax and word choice etc., so it's no more natural than writing AssertEquals(true, foo(5)). And I find it a lot harder to write and even read.

    There was one mocking framework I used that had everything under the static class A. A declaration would look like

    A.CallTo(myThing.DoStuff()).Returns(4);
    

    It was actually quite readable, even for more complicated stuff.



  • @Jaloopa Yes, that framework definitely needs some mocking! :nelson:



  • @xaade said in Having problems with unit testing philosophy.:

    Yes. That's what I'm struggling with. I see nothing to gain. And when I express this, I'm met with regurgitated arguments that have not been substantiated within our code.

    Show them the talk that @frillunflop linked above. Writing stupid, fine-grained unit tests is not the point of TDD, and the speaker actually proves that by quoting Kent Beck's book that introduced TDD.

    Many things people claim about TDD are simply wrong. The right thing to do is write reasonable automated tests instead of following some TDD cult: only test API boundaries and actual requirements, and avoid mocking like the plague.


  • Considered Harmful

    @bobjanova said in Having problems with unit testing philosophy.:

    Ugh, yeah, I have a real problem with fluent libraries. If they could actually understand natural language it would be OK, but they don't; you still have to conform to exact syntax and word choice etc., so it's no more natural than writing AssertEquals(true, foo(5)). And I find it a lot harder to write and even read.

    Yeah, it's not meant to be easier to write, but the goal is that the whole test suite can sort of be "read" like a document.

    You say what component you're describing, you say what behavior you're testing, then you say what your expectations are for that behavior.

    And you get a nice check list like

    TDWTF
    :3px: should be completely broken on mobile
    :3px: should not update posts when you edit them
    :3px: should 404 if you immediately refresh after posting


  • ♿ (Parody)

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @error :vomit:

    It's pretty nice actually for writing that sort of thing. We use Jasmine here, so it's slightly different. We'd have:

    expect(foo).toEqual(true);

    Or:

    expect(foo).not.toEqual(true);

    Technically, I'd use toEqual(false) in the second case, but I wanted to show the negation syntax. For truthy/falsy tests:

    expect(foo).toBeTruthy();
    expect(foo).toBeFalsy();

