Having problems with unit testing philosophy.



  • So, I'm really struggling with the effectiveness of unit testing.

    And this is mostly because the vast majority of our unit tests end up being tautologies: near-exact duplicates of the methods themselves, almost line by line.

    IMO, if it's a tautology like this, it's not really testing anything that couldn't be gleaned by observation. And the benefit seems to be outweighed by two costs: "the unit test doesn't catch anything because it's just agreeing with the code" and "every update to the method implementation REQUIRES an update to the unit test."

    Company wants to increase the unit test coverage, but I really need someone to tell me, "no that's what you're supposed to do" before I commit to writing more of these things that don't seem to serve a purpose to me.

    My interpretation was that unit tests are supposed to test the edge of the interface; to take subsets of possible inputs and test the outputs. Testing that it calls certain methods on internal interfaces seems like overkill.



  • @xaade said in Having problems with unit testing philosophy.:

    Testing that it calls certain methods on internal interfaces seems like overkill.

    If you're mocking too much, it's definitely not a useful unit test and you should consider writing an integration test instead.

    "every update to the method implementation REQUIRES an update to the unit test.

    That's an obvious sign that your unit tests are useless.

    Your company seems to be doing unit testing very wrong. Your gut feeling is right, you should write better tests that are actually useful. Hint: The "unit" doesn't have to be a single class or method. It's a part with a stable interface that can be reasonably tested in isolation.



  • @dfdub

    My thought was that there's no point in testing to see what's called in the dependencies, because we're going to test the dependencies also.

    So we should be just taking sample inputs and testing outputs.
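
    To make that concrete, here's roughly the style I mean (C# with NUnit; the PriceCalculator and its discount rule are made up for illustration):

    using NUnit.Framework;

    // Hypothetical unit under test: pure input -> output, no collaborators to mock.
    public class PriceCalculator
    {
        private readonly decimal _unitPrice;
        public PriceCalculator(decimal unitPrice) => _unitPrice = unitPrice;

        // 10% bulk discount from the tenth item onward.
        public decimal TotalPrice(int quantity) =>
            quantity >= 10 ? quantity * _unitPrice * 0.9m : quantity * _unitPrice;
    }

    public class PriceCalculatorTests
    {
        [TestCase(0, 0)]
        [TestCase(1, 100)]
        [TestCase(9, 900)]
        [TestCase(10, 900)]   // the discount boundary: 10 * 100 * 0.9
        public void TotalPrice_ReturnsExpectedTotal(int quantity, int expected)
        {
            var calculator = new PriceCalculator(unitPrice: 100m);

            Assert.That(calculator.TotalPrice(quantity), Is.EqualTo(expected));
        }
    }

    If the discount logic gets rewritten tomorrow, none of these tests change unless the observable prices change.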



  • @xaade said in Having problems with unit testing philosophy.:

    My thought was that there's no point in testing to see what's called in the dependencies, because we're going to test the dependencies also.

    That heavily depends on what you're actually testing.

    If there are two interface boundaries in your system, a low-level interface and a high-level interface, then it might make sense to check that high-level method calls result in the right low-level calls. But in general, mocking should be used sparingly and it definitely doesn't make sense to mock an internal class that is irrelevant to the public interface.
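
    A sketch of that (rare) legitimate case, assuming a hypothetical IEmailGateway as the low-level boundary, with Moq for the mock:

    using Moq;
    using NUnit.Framework;

    // The low-level interface: a real boundary we don't want to hit in tests.
    public interface IEmailGateway
    {
        void Send(string to, string subject);
    }

    public class AlertService
    {
        private readonly IEmailGateway _email;
        public AlertService(IEmailGateway email) => _email = email;

        public void RaiseAlert(string admin) => _email.Send(admin, "ALERT");
    }

    public class AlertServiceTests
    {
        [Test]
        public void RaiseAlert_SendsAnEmailThroughTheGateway()
        {
            var gateway = new Mock<IEmailGateway>();

            new AlertService(gateway.Object).RaiseAlert("admin@example.com");

            // Here the low-level call *is* the observable outcome, so verifying
            // it is the whole point of the test, not an implementation detail.
            gateway.Verify(g => g.Send("admin@example.com", "ALERT"), Times.Once);
        }
    }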


  • ♿ (Parody)

    @xaade said in Having problems with unit testing philosophy.:

    My interpretation was that unit tests are supposed to test the edge of the interface; to take subsets of possible inputs and test the outputs.

    Yes.

    Testing that it calls certain methods on internal interfaces seems like overkill.

    When you say, "that it calls," is that part of the test? Or part of your coverage report / metrics?

    The bottom line is that you're hoping to:

    1. Satisfy yourself that the method is correct right now.
    2. Catch a breaking change that someone makes in the future.

    Both are extremely useful.



  • @boomzilla said in Having problems with unit testing philosophy.:

    When you say, "that it calls," is that part of the test? Or part of your coverage report / metrics?

    Some public methods don't operate on an input, but operate on internals.

    What we've done is test that it calls the right methods on the internal dependencies.

    So,

    bob.JumpAndRun();
    test if bob called jump
    test if bob called run

    actual code

    bob.JumpAndRun() { Jump(); Run(); }

    Like I said, complete tautologies, why is this being done?
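
    For anyone who hasn't seen this pattern in the wild, it comes out roughly like this (C# with Moq; partial-mocking the class under test, whose methods have to be made virtual just so the test can spy on them):

    using Moq;
    using NUnit.Framework;

    public class Bob
    {
        public virtual void Jump() { /* jump */ }
        public virtual void Run()  { /* run  */ }
        public void JumpAndRun() { Jump(); Run(); }
    }

    public class BobTests
    {
        [Test]
        public void JumpAndRun_CallsJumpAndRun()   // the name already restates the code
        {
            var bob = new Mock<Bob> { CallBase = true };

            bob.Object.JumpAndRun();

            // One Verify per statement in the implementation: a pure tautology.
            bob.Verify(b => b.Jump(), Times.Once);
            bob.Verify(b => b.Run(), Times.Once);
        }
    }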



  • @xaade said in Having problems with unit testing philosophy.:

    What we've done is test that it calls the right methods on the internal dependencies.

    That sounds questionable to me. If a method doesn't have clearly defined inputs and outputs, you probably shouldn't write unit tests, but cover it with higher-level tests.



  • Here's another example

    Bob { Jump(bool runAlso) { /*jump*/; if(runAlso) Run(); } Run() { /*run*/ } }
    

    Test

    [true, false]
    TestBobJumpRun(bool runAlso)
    {
      bob.Jump(runAlso);
      test if bob called Jump;
      if (runAlso) test if bob called run;
    }
    

    Like I'm saying, they're the same, line for line.

    Please someone tell me this is pointless...



  • @xaade said in Having problems with unit testing philosophy.:

    Please someone tell me this is pointless...

    Your wish shall be granted: It's dumb.

    If they desperately want their coverage metrics to look good, they should call such methods in a scenario test. But unit tests are supposed to simplify changes, not make them twice as hard, as the dumb unit test you just presented does.



  • The usefulness of tests is directly proportional to their high-level-ness. As you're learning, unit tests are frequently more trouble than they're worth. People say "oh, your organization is doing them wrong," but everybody does them "wrong." I think that, over the course of my career, the number of unit tests I've deleted because I was able to prove they were only testing the behavior of the mocks and not demonstrating anything useful about the code that's supposed to be tested is greater than the number of actual unit tests I have written!

    Higher-level integration tests give you real information about the way the system works. A big part of why testing small units of code in isolation fails so hard is because they aren't used in isolation, and one of the largest sources of bugs is the points where different subsystems interact with one another. Unit tests use mocking to pretend this doesn't exist; integration tests confront it head-on.

    And the most valuable testing of all is the type performed by humans, precisely because it's not automated. Testers can go off-script. They can check things that the person who wrote the tests never thought of. They can check things that the person who wrote the code never thought of. On the famous "engineering triangle" (Good, Fast, Cheap: pick two), automated tests get you Fast and Cheap, but if you want Good testing, you need testers.



  • @Mason_Wheeler said in Having problems with unit testing philosophy.:

    The usefulness of tests is directly proportional to their high-level-ness. As you're learning, unit tests are frequently more trouble than they're worth.

    I don't think I agree with this in general. Maybe you think this way because you mostly deal with user-facing software?

    In a library or data processing tool, you can easily identify testable units and unit testing those is very useful to avoid bugs. If you only write high-level tests for a complicated pipeline, you're going to have bugs that mask other bugs that will lead to frustrating debugging sessions later on.

    As always, it depends on what you're building. Unit tests are not the panacea that TDD fanatics claim they are, but writing testable code and unit tests can still be very useful.



  • @dfdub said in Having problems with unit testing philosophy.:

    Maybe you think this way because you mostly deal with user-facing software?

    No, actually I mostly deal with backend servers for enterprise software.



  • @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @dfdub said in Having problems with unit testing philosophy.:

    Maybe you think this way because you mostly deal with user-facing software?

    No, actually I mostly deal with backend servers for enterprise software.

    OK, but "servers" suggests that there are at least some layers (like HTTP) in the mix that naturally make unit testing hard and that you're mostly implementing high-level functionality.

    I develop a lot of low-level stuff and unit testing definitely has its place there. You just need to focus on its actual strengths rather than coverage metrics. And thinking about testability helps you design good interfaces for your libraries.


  • ♿ (Parody)

    @xaade said in Having problems with unit testing philosophy.:

    Here's another example

    Bob { Jump(bool runAlso) { /*jump*/; if(runAlso) Run(); } Run() { /*run*/ } }
    

    Test

    [true, false]
    TestBobJumpRun(bool runAlso)
    {
      bob.Jump(runAlso);
      test if bob called Jump;
      if (runAlso) test if bob called run;
    }
    

    Like I'm saying, they're the same, line for line.

    Please someone tell me this is pointless...

    Uh...hmmm...they don't have any side effects that you expect to have happened? A value changed? A message posted for the user?



  • @boomzilla said in Having problems with unit testing philosophy.:

    Uh...hmmm...they don't have any side effects that you expect to have happened? A value changed? A message posted for the user?

    Those are tested for too. I'm just showing a pseudo-example to demonstrate how it's literally the same, line for line, in most cases.


  • ♿ (Parody)

    @xaade yeah...I would test for the correct side effects and call it a day.



  • I've always found unit tests to be useless. They only work if you have a really complicated function that has few inputs and few outputs and really simple rules to check if the result is correct. In all my years of real-world experience, I've yet to find a real-world function that has few enough outputs to be unit tested.


  • Considered Harmful

    Consider red/green testing. Basically, write your test before your implementation. See that it starts red (failing). Now implement. It should turn green (passing).

    It also ensures your tests actually test something.
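
    A minimal sketch (C#/NUnit; the Slug helper is hypothetical): the test is written first, fails against the stub, and then the implementation turns it green without the test changing.

    using NUnit.Framework;

    public static class Slug
    {
        // Red: commit the test against this stub and watch it fail.
        // public static string Make(string title) => throw new System.NotImplementedException();

        // Green: implement, and the same test passes unchanged.
        public static string Make(string title) =>
            title.Trim().ToLowerInvariant().Replace(' ', '-');
    }

    public class SlugTests
    {
        [Test]
        public void Make_TrimsLowercasesAndHyphenates()
        {
            Assert.That(Slug.Make("  Hello World "), Is.EqualTo("hello-world"));
        }
    }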


  • Considered Harmful

    I also encourage Behavior Driven Design/spec tests.

    Basically, each test documents something about your component, then verifies it.

    • tests double as docs
    • docs get tested along with code
    • docs are forced to be up to date
    • failing tests give very descriptive error messages, including which expectation was violated

    eg

    describe "foo" ->
      it "throws when it receives null" ->
        expect { () -> foo( null ) } to throw
      it "returns a number" ->
        expect { foo(5) } to be a number
    

    I've caught so many bugs with this, especially testing "unexpected" inputs



  • @magnusmaster said in Having problems with unit testing philosophy.:

    I've always found unit tests to be useless. They only work if you have a really complicated function that has few inputs and few outputs and really simple rules to check if the result is correct. In all my years of real-world experience, I've yet to find a real-world function that has few enough outputs to be unit tested.

    Well, it seems like the closer you get to 100% coverage, the closer you're getting to checking if the code changes, not if it's right.



  • @boomzilla said in Having problems with unit testing philosophy.:

    @xaade yeah...I would test for the correct side effects and call it a day.

    So, don't care if it calls other methods, but only check if it affects itself correctly? I mean that's what I figure. If you want to check if one interface calls another interface correctly, that's not a unit test, that's an integration test.


  • Considered Harmful

    I think another reason your tests look just like your code is that you have one test per method. Consider several small tests per method, each testing just one thing. This should include failure states and boundary conditions (e.g. null, MAX_INT, NaN, min > max, a 1 KB string, hash collisions); see the sketch below.
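
    For instance, instead of one TestParse that mirrors the method, something like this (C#/NUnit; the Percent parser is invented for the example):

    using System;
    using NUnit.Framework;

    public static class Percent
    {
        // Hypothetical unit: parses "0".."100" into an int, rejects everything else.
        public static int Parse(string input)
        {
            if (input == null) throw new ArgumentNullException(nameof(input));
            var value = int.Parse(input);
            if (value < 0 || value > 100) throw new ArgumentOutOfRangeException(nameof(input));
            return value;
        }
    }

    public class PercentParseTests
    {
        [TestCase("0", 0)]      // lower boundary
        [TestCase("100", 100)]  // upper boundary
        public void Parse_AcceptsBoundaryValues(string input, int expected) =>
            Assert.That(Percent.Parse(input), Is.EqualTo(expected));

        [Test]
        public void Parse_RejectsNull() =>
            Assert.Throws<ArgumentNullException>(() => Percent.Parse(null));

        [TestCase("-1")]
        [TestCase("101")]
        public void Parse_RejectsOutOfRange(string input) =>
            Assert.Throws<ArgumentOutOfRangeException>(() => Percent.Parse(input));

        [TestCase("")]
        [TestCase("NaN")]
        public void Parse_RejectsGarbage(string input) =>
            Assert.Throws<FormatException>(() => Percent.Parse(input));
    }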

    Each bug fix should also get a new test just for that bug.



  • One thing I can also recommend: if you have some internal state, don't test whether the internal state has changed.

    For instance, if you have a method that takes Bob's ability to jump away, I found it to be counter-productive to test for some internal state.
    Instead, the test looks something like this:

    [Test]
    public void BreakingBobsLegCausesTheJumpToFail()
    {
        bob.BreakHisLeg();

        // We could test for some state that indicates the leg to be broken here.
        // If that is not part of the public interface, don't bother.

        // Assuming bob.Jump() throws InvalidOperationException on a broken leg:
        Assert.Throws<InvalidOperationException>(() => bob.Jump());
    }
    
    

    Testing for some internal state will make the test more fragile, as it has to be adjusted when you change the representation of that internal state.

    A corollary of that: a test sometimes should exercise multiple methods.
    Often the most important aspect of one method is the way it affects another.

    ... Is what worked for me best (YMMV and all that)



  • I recently found a talk that might be enlightening (or an utter waste of time, but that is pretty unlikely):

    https://www.youtube.com/watch?v=EZ05e7EMOLM

    I may or may not do a list of key takeaways... But for now, I'm 😴


  • Banned

    @dfdub said in Having problems with unit testing philosophy.:

    "every update to the method implementation REQUIRES an update to the unit test.

    That's an obvious sign that your unit tests are useless.

    Disagree. One of the main reasons to have unit tests is to detect accidental unwanted changes. Unit tests should establish how the function behaves. If you have to change the tests whenever the behavior of the tested function changes, it means you're doing unit tests right. How long does it take to update that one test? 20 seconds? 30? Stop whining. You wasted more of your work time reading my post.


  • Banned

    @xaade said in Having problems with unit testing philosophy.:

    So,

    bob.JumpAndRun();
    test if bob called jump
    test if bob called run

    actual code

    bob.JumpAndRun() { Jump(); Run(); }

    Like I said, complete tautologies, why is this being done?

    Accidents happen. One day someone might modify the code to do just a bit more logging and accidentally disable the call to Run. The test is there because it takes 60 seconds to write and will save you a few hours of twisting your neck like an owl and wondering why, despite you clearly seeing that all preconditions are met, the needful still isn't being done.


  • Banned

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    People say "oh, your organization is doing them wrong," but everybody does them "wrong."

    Well, "everybody" writes regular code wrong, so I don't know why you're surprised they get unit tests wrong too.


  • Banned

    @error said in Having problems with unit testing philosophy.:

    Each bug fix should also get a new test just for that bug.

    This. With an in-code comment with ticket ID so when someone decides to refactor the component, they would know what it's supposed to test.
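
    Something like this (C#/NUnit; the ticket ID and the rounding bug are made up):

    using System;
    using NUnit.Framework;

    public class Invoice
    {
        private decimal _total;

        // Fix for BUG-1234: round each line to cents as it's added, so the
        // total always matches the sum of the printed line amounts.
        public void AddItem(decimal price) => _total += Math.Round(price, 2);

        public decimal Total() => _total;
    }

    public class InvoiceTests
    {
        // Regression test for BUG-1234: the total used to be rounded once at
        // the end, so it could disagree with the printed lines by a cent.
        [Test]
        public void Total_MatchesTheSumOfRoundedLineItems_Bug1234()
        {
            var invoice = new Invoice();
            invoice.AddItem(0.333m);
            invoice.AddItem(0.333m);
            invoice.AddItem(0.333m);

            Assert.That(invoice.Total(), Is.EqualTo(0.99m));
        }
    }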


  • ♿ (Parody)

    @xaade said in Having problems with unit testing philosophy.:

    @boomzilla said in Having problems with unit testing philosophy.:

    @xaade yeah...I would test for the correct side effects and call it a day.

    So, don't care if it calls other methods, but only check if it affects itself correctly? I mean that's what I figure. If you want to check if one interface calls another interface correctly, that's not a unit test, that's an integration test.

    Even with the integration test... you're interested in the side effects and return values, not the implementation details. One of the things good tests should help you with is refactoring the implementation, so you know the result is the same even though you've changed the details under the covers, presumably to pay off some technical debt or in preparation to be able to add some new feature.



  • @Gąska said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    So,

    bob.JumpAndRun();
    test if bob called jump
    test if bob called run

    actual code

    bob.JumpAndRun() { Jump(); Run(); }

    Like I said, complete tautologies, why is this being done?

    Accidents happen. One day someone might modify the code to do just a bit more logging and accidentally disable the call to Run. The test is there because it takes 60 seconds to write and will save you a few hours of twisting your neck like an owl and wondering why, despite you clearly seeing that all preconditions are met, the needful still isn't being done.

    That's what I'm being told, but I don't buy it.

    You get into a habit of rewriting the tests every time the implementation changes, and you'll rewrite a test without thinking about whether the new implementation is right. They WILL change the test to "disable the call to 'Run'". And trust me, that extra 20 seconds of duplicating what you wrote WILL turn into a non-thinking task.

    The counter argument I have here is that you have a change log. You have a blame. You can straight up look and see that a method changed, and it broke here. The unit test won't save you from this.



  • @Gąska said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    Each bug fix should also get a new test just for that bug.

    This. With an in-code comment with ticket ID so when someone decides to refactor the component, they would know what it's supposed to test.

    This, INSTEAD of my previous post, makes it a meaningful test.

    I can look and say, "There's this one test that's digging a little bit deeper than usual, why? Oh look, a comment. OOOOhhhhh it NEEDS to be implemented this way."

    Making EVERY test replicate the implementation takes that away.


  • Banned

    @xaade said in Having problems with unit testing philosophy.:

    @Gąska said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    So,

    bob.JumpAndRun();
    test if bob called jump
    test if bob called run

    actual code

    bob.JumpAndRun() { Jump(); Run(); }

    Like I said, complete tautologies, why is this being done?

    Accidents happen. One day someone might modify the code to do just a bit more logging and accidentally disable the call to Run. The test is there because it takes 60 seconds to write and will save you a few hours of twisting your neck like an owl and wondering why, despite you clearly seeing that all preconditions are met, the needful still isn't being done.

    That's what I'm being told, but I don't buy it.

    You get into a habit of rewriting the tests every time the implementation changes, and you'll rewrite a test without thinking about whether the new implementation is right. They WILL change the test to "disable the call to 'Run'". And trust me, that extra 20 seconds of duplicating what you wrote WILL turn into a non-thinking task.

    No tool will ever save you from being a total fucking idiot. But as long as you aren't, even trivial unit tests have value.

    The counter argument I have here is that you have a change log. You have a blame. You can straight up look and see that a method changed, and it broke here.

    How much time will pass between the things stopping working and someone finally noticing it and escalating it high enough that you get told to take a look? How much time will it take you to even determine which method was wrongly modified, considering how many more changes have been done in the meantime?

    The beauty of unit tests is that they give instant feedback before you even commit anything.

    @Gąska said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    Each bug fix should also get a new test just for that bug.

    This. With an in-code comment with ticket ID so when someone decides to refactor the component, they would know what it's supposed to test.

    This, INSTEAD of my previous post, makes it a meaningful test.

    I can look and say, "There's this one test that's digging a little bit deeper than usual, why? Oh look, a comment. OOOOhhhhh it NEEDS to be implemented this way."

    Making EVERY test replicate the implementation takes that away.

    I don't get your point. Is there some upper limit on the number of tests you're willing to treat with due respect, and you couldn't care less if the remaining ones work or not? Why are you so opposed to the idea of simply testing every possible behavior to make sure all of them work as intended?



  • @Gąska said in Having problems with unit testing philosophy.:

    Why are you so opposed to the idea of simply testing every possible behavior to make sure all of them work as intended?

    Because, as I said, it becomes a tautology. You're no longer testing input and output, but testing whether your test matches your implementation.

    I mean, if what I wanted were code that never changed, these tests would be great at verifying that.

    None of it is meaningful, and none of it validates that the code does what the requirements expect.

    You could get the same effect by simply commenting every line of code with exactly what it's doing, and using code review to see if each line matches its comment.



  • @Gąska

    Let me try this.

    I find value in a unit test that tests if a brick laying function laid a brick at the coordinates you give it.

    I don't find value in a unit test that tests if a brick laying function, given coordinates 1,2,3, calls the cursor management resource with instructions move up 1 left 2 and back 3, and then calls the brick management resource, get brick, and then calls lay brick.

    I don't care if the brick laying function rolls random dice until it ends up at the correct coordinates. I don't care if it uses matrix math to move the cursor. I don't care if it calls a database of coordinates and compares hashes. None of that matters. And if we want to change the implementation later, we don't have to rewrite the test, as long as the brick is laid at 1,2,3
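
    In test code, the version I find valuable looks something like this (C#/NUnit; Wall is invented to stand in for the brick layer):

    using NUnit.Framework;

    public class Wall
    {
        private readonly bool[,,] _bricks = new bool[10, 10, 10];

        // How this positions its cursor internally is nobody's business.
        public void LayBrick(int x, int y, int z) => _bricks[x, y, z] = true;

        public bool HasBrickAt(int x, int y, int z) => _bricks[x, y, z];
    }

    public class WallTests
    {
        [Test]
        public void LayBrick_PutsABrickAtTheGivenCoordinates()
        {
            var wall = new Wall();

            wall.LayBrick(1, 2, 3);

            // Assert the observable outcome, not the cursor moves that produced it.
            Assert.That(wall.HasBrickAt(1, 2, 3), Is.True);
        }
    }

    Whether LayBrick rolls dice, does matrix math, or compares hashes from a database internally, this test never has to change.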


  • ♿ (Parody)

    @xaade said in Having problems with unit testing philosophy.:

    Company wants to increase the unit test coverage

    This is optimizing for the wrong metric. My thoughts on testing haven't changed a lot since Testing Done Right, though I do want to refine and expand the essay a bit to focus on Unit Testing.

    Unit Testing is a problem when you start thinking and designing in Units. That's a deliberate word choice: I don't mean Component or Module; those serve other purposes like re-usability, abstraction, etc. I mean Unit, which exists only to be tested.

    Unit-based software is often complex and costly to change, especially outside of the Unit level, which means changes are often restricted to modifying Units or building similarly-shaped Units. It's a similar end result to the "everything is a plugin" approach.

    TDD practitioners see Unit-based software as easier to write, and maybe that's how they trained, so it is to them, but it's not a clean abstraction of business problems and it leads to a lot of problems. The business doesn't think in Units, data isn't structured in Units, the UI isn't designed in Units, even object-oriented constructs are not based on Units, so you end up with a kind of perversion of all of those.

    Compare this to software that was designed to have 60% of its critical code paths tested in a 5-minute smoke test, and 80% in 15 minutes, that any developer could perform with their keyboard and mouse after hitting the green Play button in their IDE. Unit-based software is usually far too complex to enable this type of testing.

    Unit tests are fine if you can identify testable units. But most non-library software doesn't have, nor need, testable units.



  • @Gąska said in Having problems with unit testing philosophy.:

    @dfdub said in Having problems with unit testing philosophy.:

    "every update to the method implementation REQUIRES an update to the unit test.

    That's an obvious sign that your unit tests are useless.

    Disagree. One of the main reasons to have unit tests is to detect accidental unwanted changes.

    Yes, but accidental is the keyword.

    Unit tests should establish how the function behaves. If you have to change the tests whenever the behavior of the tested function changes, it means you're doing unit tests right.

    Not every change is a behavior change that is important to the API user. And even if you're changing behavior: if the same unit tests change every few weeks, that's a good sign the "unit" you chose to test doesn't actually offer a stable, useful interface and you should consider testing higher-level functionality instead.

    How long does it take to update that one test? 20 seconds? 30? Stop whining. You wasted more of your work time reading my post.

    You've obviously never worked on a project with truly horrible unit tests that valued coverage metrics over everything else. I have. For every feature I implemented, I spent 25% on the implementation and 75% on running the test suite over and over again and fixing useless tests.

    That's not how testing is supposed to work. It should tell you when you accidentally break important assumptions. And for new features, ideally you'd only have to add a test, not change dozens of existing ones.


  • ♿ (Parody)

    @magnusmaster said in Having problems with unit testing philosophy.:

    In all my years of real-world experience, I've yet to find a real-world function that has few enough outputs to be unit tested.

    It's very rare, but the cases where they've proven invaluable for us are all at the library level:

    • OtterScript lexer/parser
    • OtterScript compiler
    • Inedo Execution Engine (runs a compiled OtterScript program)
    • SlimMemoryStream
    • Serialization/deserialization
    • Messaging protocols with custom encryption (TCP+AES)
    • Interprocess-communication (like circular buffer)

    However, there's no good reason for most organizations to build (and own) any of these things. We have different problems to solve, and most off-the-shelf libraries are usually insufficient for us for reasons that don't apply to most organizational software.


  • ♿ (Parody)

    @dfdub said in Having problems with unit testing philosophy.:

    You've obviously never worked on a project with truly horrible unit tests that valued coverage metrics over everything else. I have. For every feature I implemented, I spent 25% on the implementation and 75% on running the test suite over and over again and fixing useless tests.

    When we onboard new product engineers, a lot of them are shocked that we have zero unit tests in our Web Application and Service Application. Code coverage is maaaaybe in the single-digits if even at that.

    Though, it's not that much of a shock because they've already been introduced to our totally proprietary web framework, which is usually the biggest shock :tro-pop:

    I've done talks on it, and will write about it some day...



  • @Gąska said in Having problems with unit testing philosophy.:

    How much time will pass between the things stopping working and someone finally noticing it and escalating it high enough that you get told to take a look?

    Until the next time the integration tests are run, ideally.

    @Gąska said in Having problems with unit testing philosophy.:

    I don't get your point. Is there some upper limit on the number of tests you're willing to treat with due respect, and you couldn't care less if the remaining ones work or not? Why are you so opposed to the idea of simply testing every possible behavior to make sure all of them work as intended?

    You're missing the point everyone else is making: The choice is not between writing tests and not writing tests, but between writing unit tests or only covering the high-level functionality with integration tests or end-to-end acceptance tests.

    Unit tests stop being useful and start being harmful when they test internals nobody actually cares about instead of well-defined functionality of a logical unit which provides a useful API boundary.



  • @apapadimoulis said in Having problems with unit testing philosophy.:

    Code coverage is maaaaybe in the single-digits if even at that.

    How are you measuring coverage? Personally, I think you should measure what all your kinds of tests combined cover, including end-to-end tests, not just unit tests. And obviously, you want that number to be somewhere around 80%, not 100%, unless your whole product is a low-level library.


  • ♿ (Parody)

    @dfdub I meant unit test code coverage, like those reports that you can run. Instead, I focus training on risk identification, thinking about end-to-end tests, and the impact of errors.

    Our situation is pretty special, because we use our own tools to ship our tools. So I encourage our team to use as many features as possible, in the way they were intended to be used. That means we can find out very early, and with almost no risk, when things break.

    But we can't use all the features, so we obviously have to have non-production testing for that.


  • kills Dumbledore

    @error said in Having problems with unit testing philosophy.:

    expect { foo(5) } to be a number

    Laughs in static typing

    I have found high-coverage unit test suites to be extremely useful in refactoring and confirming code matches specs. If it's an input-to-output test, you can easily confirm that changing the algorithm didn't break the output for a range of values. If you have a complicated validator, it's often easier to confirm its correctness by passing in known good and bad values than by tracing the logic and trying to hold several branches in your head at a time.

    Confirming that a method in a mock has been called is a poor unit test unless that's the entire point of the method. You should be treating the unit like a black box as much as possible and checking outcomes or observable side effects
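
    For example (C#/NUnit; the username rules here are invented):

    using System.Linq;
    using NUnit.Framework;

    public static class UsernameValidator
    {
        // Hypothetical rules: 3-16 chars, starts with a letter, alphanumeric only.
        public static bool IsValid(string name) =>
            !string.IsNullOrEmpty(name)
            && name.Length >= 3 && name.Length <= 16
            && char.IsLetter(name[0])
            && name.All(char.IsLetterOrDigit);
    }

    public class UsernameValidatorTests
    {
        [TestCase("bob")]
        [TestCase("xaade42")]
        [TestCase("SixteenCharsLong")]
        public void IsValid_AcceptsKnownGoodValues(string name) =>
            Assert.That(UsernameValidator.IsValid(name), Is.True);

        [TestCase(null)]
        [TestCase("")]
        [TestCase("ab")]                // too short
        [TestCase("42bob")]             // starts with a digit
        [TestCase("bob smith")]         // contains a space
        [TestCase("seventeen-chars!!")] // too long, plus punctuation
        public void IsValid_RejectsKnownBadValues(string name) =>
            Assert.That(UsernameValidator.IsValid(name), Is.False);
    }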



  • @xaade said in Having problems with unit testing philosophy.:

    every update to the method implementation REQUIRES an update to the unit test.

    :wat: How does that even happen? I mean, how did anyone come up with this ridiculous idea? That's exactly the opposite of the whole concept!

    Regarding the real question, I would probably just repeat what others said (with some exceptions, obviously). One thing should be added, though: bug fixes.

    1. Add test case testing the bug. Test fails.
    2. Fix the bug. Test is now green again.

    This is, obviously, the ideal case. Sometimes I fix the bug first and then write the test case... but I make sure to actually try the test with the fix commit removed (reverted).
    This approach ensures that the bug was really there, that it was what I thought it was, and that the test actually catches it.

    Actually, one more trick: when writing a unit/integration test, I make sure to break it by introducing a bug. This ensures that the test is actually meaningful. Far too often, the test is green even with the introduced bug --> it's either wrong, or some case is missing.


  • Discourse touched me in a no-no place

    @dfdub said in Having problems with unit testing philosophy.:

    You just need to focus on its actual strengths rather than coverage metrics.

    Coverage is useful. Coverage metrics less so. The advantage of coverage is that it tells you which parts of the code are actually tested, but metrics on it lose that critical information. A 90% coverage rate can be either superb or horrendous, depending on what is actually going on, and that state of affairs tells me that the metric itself is nearly useless. But being able to see what the uncovered code paths actually are, that's useful.


  • Java Dev

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @dfdub said in Having problems with unit testing philosophy.:

    Maybe you think this way because you mostly deal with user-facing software?

    No, actually I mostly deal with backend servers for enterprise software.

    We have a number of CRUD services, which we test by invoking the web service and storing the DB contents after each operation as a reference. This is not strictly a unit test (since we're not testing a single class), but it's not really a full integration test either (since we're not including the frontend which would usually be calling those services). To me, that's a useful testing level for those use cases.

    We also have a bunch of tests on business logic which ends up being end-to-end, because the business logic can only be run by running the entire supporting infrastructure as well.

    @apapadimoulis said in Having problems with unit testing philosophy.:

    When we onboard new product engineers, a lot of them are shocked that we have zero unit tests in our Web Application and Service Application. Code coverage is maaaaybe in the single-digits if even at that.

    We test ours in selenium. Takes hours to run, is a ***** to maintain, and rarely finds anything. Do not recommend.



  • @error said in Having problems with unit testing philosophy.:

    Consider red/green testing. Basically, write your test before your implementation. See that it starts red (failing). Now implement. It should turn green (passing).

    It also ensures your tests actually test something.

    Yes, that's the specific problem that red/green testing solves. It doesn't solve the broader problem of "automated tests only test specifically that which you thought to write an automated test for." Only human testers solve that problem.



  • @Gąska said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    People say "oh, your organization is doing them wrong," but everybody does them "wrong."

    Well, "everybody" writes regular code wrong, so I don't know why you're surprised they get unit tests wrong too.

    "Problems cannot be solved by the same level of thinking that created them."
    -- commonly attributed to Albert Einstein



  • @xaade said in Having problems with unit testing philosophy.:

    @Gąska said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    So,

    bob.JumpAndRun();
    test if bob called jump
    test if bob called run

    actual code

    bob.JumpAndRun() { Jump(); Run(); }

    Like I said, complete tautologies, why is this being done?

    Accidents happen. One day someone might modify the code to do just a bit more logging and accidentally disable the call to Run. The test is there because it takes 60 seconds to write and will save you a few hours of twisting your neck like an owl and wondering why, despite you clearly seeing that all preconditions are met, the needful still isn't being done.

    That's what I'm being told, but I don't buy it.

    You get into a habit of rewriting the tests every time the implementation changes, and you'll rewrite a test without thinking about whether the new implementation is right. They WILL change the test to "disable the call to 'Run'". And trust me, that extra 20 seconds of duplicating what you wrote WILL turn into a non-thinking task.

    This. People who have actually tested unit testing, rather than simply accepting it as dogma, have come up with some surprising results: when you randomly seed both the codebase and the tests with bugs and tell someone to fix them, people who subscribe to the "tests as documentation of correct behavior" philosophy have a strong tendency to treat the tests as exactly that, and they end up "fixing" the codebase to match the buggy tests. So the tests don't end up eliminating incorrect behavior; they end up enforcing it!



  • @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    every update to the method implementation REQUIRES an update to the unit test.

    :wat: How does that even happen? I mean, how did anyone come up with this ridiculous idea? That's exactly the opposite of the whole concept!

    See above, re: "everyone does unit testing wrong." 🚎


  • kills Dumbledore

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    It doesn't solve the broader problem of "automated tests only test specifically that which you thought to write an automated test for." Only human testers solve that problem

    But if, when your human tester finds a bug, you add automated tests for that and anything related that occurs at the time, you have assurance that you won't get a regression on that bug. Unit tests aren't enough, but they're a useful part of your toolbox.

