Having problems with unit testing philosophy.



  • @mikehurley

    I agree with that. I've posted before, mission critical deserves closer inspection.

    But assuming this is not the case, wouldn't testing the various inputs for times of day be sufficient to cover both the outer method and the inner method?

    Unit testing both ends up just telling you more precisely where the bug is, which is information you'll already have because you just failed THIS checkin, and you changed the code.

    You get coverage for both methods with the one unit test.

    Writing a separate unit test for the internal method is a waste, IMO.

    Am I wrong?


  • Trolleybus Mechanic

    @xaade said in Having problems with unit testing philosophy.:

    @mikehurley

    I agree with that. I've posted before, mission critical deserves closer inspection.

    But assuming this is not the case, wouldn't testing the various inputs for times of day be sufficient to cover both the outer method and the inner method?

    Unit testing both ends up just telling you more precisely where the bug is, which is information you'll already have because you just failed THIS checkin, and you changed the code.

    You get coverage for both methods with the one unit test.

    Writing a separate unit test for the internal method is a waste, IMO.

    Am I wrong?

    That seems reasonable if the function takes in a datetime, which it probably should. However, I'd expect that sort of function to do some kind of DateTime.Now call, which you have zero control over.

    This thread kind of reads like "people I've worked with have done unit testing badly so I'm going to just assume unit testing is bad if it goes deeper than a certain point". My reply was more a push back against that general sentiment.


  • Considered Harmful

    @xaade said in Having problems with unit testing philosophy.:

    What I want to know is: am I crazy?

    tenor.gif



  • @mikehurley said in Having problems with unit testing philosophy.:

    This thread kind of reads like "people I've worked with have done unit testing badly so I'm going to just assume unit testing is bad if it goes deeper than a certain point". My reply was more a push back against that general sentiment.

    I'm trying to avoid that sentiment.

    Part of the difficulty is that there's this trend to bloat testing and it's being strongly defended. So, it's very challenging for me to carefully determine what is meaningful unit testing and not just bloat.

    Depth itself isn't bad. I'd prefer depth in scenarios.

    1. Top-level public interfaces where a terse unit test does not cover all scenarios.
    2. Specific scenarios that are prone to bugs.
    3. Mission critical code where you want quick recovery from bugs and thus specificity is meaningful.

    I don't want to use unit testing to track down a bug, but to tell me when I broke something. Unless it's essential that I get the bug fixed ASAP, at which point more in-depth unit testing that highlights bug location is important to save time on pushing changes in mission critical code.


  • ♿ (Parody)

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    Wow. Compilers are generally one of my go-to arguments for my position on this.

    I wasn't clear on your position, but I think it's generally against unit testing?

    If the Roslyn test suite had been of the form "this input must produce this AST" rather than "this input must produce this output", I would never have been able to make it work

    That's probably true. I think this is a great example of why it's best to focus on the totality of testing, because even "compiler to compiler" comparisons aren't apples to apples.

    Keep in mind, OtterScript is a very basic programming language (formal grammar) compared to something like C#, but it's pretty unique:

    OtterScript is a Domain-Specific Language that was designed in tandem with the execution engine to represent configuration plans and orchestration plans in Otter, and deployment plans in BuildMaster. Because a DSL is inherently limited in functionality, a key feature of OtterScript is the ability to "drop down" to a lower-level scripting language as needed.

    While OtterScript is neither a general-purpose programming language (like Ruby), nor a general-purpose mark-up language (like YAML), it was inspired by both, and allows you to build declarative and imperative plans.

    In addition to implementing the actual requirements of an advanced execution engine capable of executing across thousands of servers, we built OtterScript with these factors in mind:

    • easy to learn for those with little to no programming experience
    • familiar and intuitive to those with intermediate programming experience
    • extensible and powerful for those with programming and domain expertise

    In fact, using the Visual Mode in the plan editor, many users may not even realize they're building plans using OtterScript.

    That last bit means that OtterScript lets you export the AST in memory (maintained by a visual code editor) to the code. Even comments are represented in the AST.

    The runtime environment is a whole new level, but it's all tightly integrated with the language (unlike C#, which compiles to MSIL, which is then effectively executed by the runtime as machine language).

    So it's a matter of understanding all of these, and how best to test things.


  • Considered Harmful

    @xaade said in Having problems with unit testing philosophy.:

    Part of the difficulty is that there's this trend to bloat testing and it's being strongly defended. So, it's very challenging for me to carefully determine what is meaningful unit testing and not just bloat.

    :butwhy:

    All of the DRY principles go out the window for testing code. It affects neither the readability nor the efficiency of your production code. Even if the tests are useless, they don't make anything worse. At worst, they have no effect, and at best, they catch broken things.



  • @error said in Having problems with unit testing philosophy.:

    At worst, they have no effect

    Only for code in a stable state that won't be modified again.

    The problem with overreaching tests is that they can inhibit change.

    You end up either avoiding change, or simply blindly updating the test to match the new code.

    If you have something you really don't want changed, and a comment doesn't stop people, will a test? The reactions are the same: delete the comment or update it. A unit test only prevents the description from being outdated.



  • @error said in Having problems with unit testing philosophy.:

    Even if the tests are useless, they don't make anything worse.

    That's not true. Superfluous tests don't support changes and refactoring, as they should, but make it unnecessarily hard instead. People become afraid of refactoring because internals that have no meaning outside their modules have dozens of unit tests that would break if you changed anything.

    The only thing that's worse than a superfluous test is a flaky test.



  • @xaade said in Having problems with unit testing philosophy.:

    Instead of just calling DateTime.Now in GetTimeOfDay, he passes in a datetime to a GetTimeOfDay method for the class, but now the code that calls GetTimeOfDay is using Datetime.Now, so he says he pushed the problem up a layer.
    Now, instead of just creating a method to GetTime or such, and mocking that method in his unit test, he develops an interface solely to wrap DateTime functionality because STATIC CLASS BAD! and uses IOC to inject a wrapper that wraps ONE METHOD!
    Now he writes a fake datetime provider so he can unit test the class.

    It's an article about testing techniques so it's deliberately simplistic, but it's making some good points. In this section he's showing two separate ways to remove tight coupling to the environment:

    • Pass environmental factors in as an argument, or
    • Pass the environment in as a provider type dependency (IOC)

    You absolutely need a fake date time provider (even if that's just "new DateTime(2020, 07, 01, 16, 30)") if you want to test business logic which is time dependent. And if you want that to use the current time in production, you need "DateTime.Now" to be made available to that code somehow.

    I typically create an 'Environment' (or 'Env') interface for all the environmental dependencies, rather than separate ones for each. For example, my current system's Environment interface covers 'now' (probably the most common), getting 'files' (for which there's an in-memory implementation for testing), sleeping (for timing-related tests), and finding the OS (for testing OS-specific paths and third-party tools).

    There is a third approach: have the environment as a static dependency, which is overridden in tests. This breaks the ideological purity of the IOC approach (and is a slightly bigger pain to test with because you have to reset it in an [AfterTest] cleanup method) but environment (at least Now) is referred to in enough places I didn't want to wire up a real dependency in all of them.

    WHO CARES HOW WE DETERMINE NIGHT AND DAY.... why is that even being unit tested?

    The point is that without providing a test implementation of 'now' you can't test that logic at all. And while you might think that logic is so trivial it doesn't need testing (which maybe it is, as it's a simplistic example for technique not a real codebase), that certainly isn't true of all date-dependent code.
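    The "static dependency overridden in tests" approach mentioned above might be sketched roughly like this (Python for brevity, since the article's example is C#; Clock and time_of_day are illustrative names, not from any real codebase):

```python
from datetime import datetime

# Sketch of the third approach: a static clock that production code calls
# directly, overridden in tests and reset in an [AfterTest]-style cleanup.
class Clock:
    _override = None

    @classmethod
    def now(cls) -> datetime:
        return cls._override or datetime.now()

    @classmethod
    def set_for_test(cls, fixed: datetime):
        cls._override = fixed

    @classmethod
    def reset(cls):
        # Forgetting this between tests is the "slightly bigger pain"
        # the post mentions.
        cls._override = None

def time_of_day() -> str:
    # Business logic calls the static Clock, not datetime.now() directly.
    hour = Clock.now().hour
    return "Night" if hour < 6 or hour >= 22 else "Day"

# In a test, time is fully under control:
Clock.set_for_test(datetime(2020, 7, 1, 23, 0))
try:
    assert time_of_day() == "Night"
finally:
    Clock.reset()
```

    The upside is that code referring to 'now' in many places doesn't need a dependency wired into every constructor; the downside is exactly the hidden-global behavior the IOC purists object to.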



    Speaking of hard test-coverage limit rules: at one place I wrote a single reflection test that exercised every simple getter/setter pair, because the rule was absolute and we were not allowed to exclude POJO structures without logic. And I got seriously fed up with writing test cases for them, because the data structures were yuuuge!
    Reflection and :doing_it_wrong: to the rescue. The horribleness was even pointed out as a good thing, because it saved so much time. Might want to consider backing off on the fixed limits, then.
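    The shape of that trick, sketched in Python (the original was presumably Java-flavored; Pojo and exercise_accessors are invented names): enumerate a data object's fields via reflection, push a sentinel through each one, and read it back, so one generic test "covers" every trivial accessor.

```python
# A stand-in for a logic-free data structure ("POJO").
class Pojo:
    def __init__(self):
        self.name = None
        self.count = None
        self.active = None

def exercise_accessors(obj) -> int:
    """Round-trip a sentinel through every simple field via reflection."""
    checked = 0
    for attr in list(vars(obj)):      # reflection: enumerate the fields
        sentinel = object()
        setattr(obj, attr, sentinel)  # the "setter"
        assert getattr(obj, attr) is sentinel  # the "getter" round-trips
        checked += 1
    return checked

# One test now "covers" all three fields, satisfying a blanket coverage rule.
assert exercise_accessors(Pojo()) == 3
```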

    I do not like the method of writing unit tests first and then the code, because this forces a specific structure on the code that usually makes things more complex and unwieldy in the name of testing. This is often said to be a good thing, because easy testing means fewer bugs. But I find that since the code is now more spread out, and harder to follow because everything is designed for testing instead of for functionality, bugs crop up more often when people modify the code: it's harder to grasp all of it, and because of the spread-out structure, the bugs happen in the interfaces between units, so unit tests won't catch them. Integration tests might, but these are often novel bugs that the integration tests miss too.
    I've seen it plenty of times. So I don't like it.
    I prefer to write the code for the function first, when it looks like it works, I move shit around to make things prettier, then I write proper tests for everything to see that it works precisely the way it's supposed to, first on a unit level, and then on an integration level. Then I build the full system and fuck about with it to tickle my changes to see if it works for real as well.
    Then I clean the code and tests a bit more.



  • @bobjanova said in Having problems with unit testing philosophy.:

    The point is that without providing a test implementation of 'now' you can't test that logic at all. And while you might think that logic is so trivial it doesn't need testing (which maybe it is, as it's a simplistic example for technique not a real codebase), that certainly isn't true of all date-dependent code.

    Probably shouldn't have mentioned the IoC-ing of DateTime.

    I'm referring here to how he publicly exposes the method to determine "night"/"day" and then unit tests that. At which point I point out that it's superfluous because testing his primary method covers both adequately for the sake of determining if you broke the code, and that we shouldn't care how his primary method determines night or day.

    For example, what if we changed the implementation to use a light sensor rather than a time of day.

    If you only test the primary method, you don't need to change anything, your mock will say "night" or "day" whether you use datetime or a light sensor.

    There's no need to unit test that layer individually. If it's broken, your primary method test will fail.

    I'm not against mocking the daytime provider. I'm saying that you can reach 100% coverage without testing every public method.



  • @Carnage said in Having problems with unit testing philosophy.:

    because easy testing mean fewer semantic bugs

    It means shit all for logic bugs (except on the first pass; future modifications have no protection, because people will simply fix the test to match the implementation)


  • ♿ (Parody)

    @apapadimoulis said in Having problems with unit testing philosophy.:

    I call those tests "~~worst than useless~~ failure".


  • BINNED

    @apapadimoulis said in Having problems with unit testing philosophy.:

    I call those tests "~~worst than useless~~ failure".

    FTF:trollface:




  • BINNED

    @boomzilla

    @xaade said in Having problems with unit testing philosophy.:

    :hanzo: ?

    GODDAMNIT, one minute!! The post is 12 hours old and he beat me by one minute!


  • I survived the hour long Uno hand

    @xaade said in Having problems with unit testing philosophy.:

    I'm referring here to how he publicly exposes the method to determine "night"/"day" and then unit tests that. At which point I point out that it's superfluous because testing his primary method covers both adequately for the sake of determining if you broke the code, and that we shouldn't care how his primary method determines night or day.

    :mlp_shrug: It makes pretty good sense to me that you would want your unit tests to document the business logic that defines the different dayparts (and that changing those rules should require updating the tests to indicate that was an intentional change). Maybe you could take some issue with where he put the IOC wrapper, but given that GetTimeOfDay is stateless and the SmartHomeController needs to track last motion time, the method he chose minimizes the amount of duplicated DI complexity.

    And besides, using strings instead of an Enum for the GetTimeOfDay return values is :trwtf:

    Edit to add: Also, the GetTimeOfDay method started as public in his example, so it's not like he changed the interface of his class just for the sake of testing. Granted, we can argue that GetTimeOfDay should have been a private within the SmartHomeController class to begin with, but that assumes there's no other consumer elsewhere in the system that needs to make daypart decisions.



  • @izzion said in Having problems with unit testing philosophy.:

    It makes pretty good sense to me that you would want your unit tests to document the business logic that defines the different dayparts (and that changing those rules should require updating the tests to indicate that was an intentional change).

    This is where I immediately disagree.

    Code review is the tool for this. Not more code.


  • Considered Harmful

    Back to the concept of spec tests (tests as documentation), the fact that the method checks the time of day to determine night/day is a fact that should be documented, and if it changes, the documentation needs to be updated too.


  • I survived the hour long Uno hand

    @xaade
    But how is future-me in six months going to know that the choice of dayparts was intentional? The rest of the team from six months ago is gone, and digging through old PR comments will take hours (assuming there was even a written comment at all and not a hallway conversation). So I'm just gonna come through and bulldoze the time windows, and nobody else on the team will know whether or not that's a meaningful change, so how are they going to know they should push back on it?

    To give a more real life example (that I wish I would have tested better)...

    • I worked on a project that did a periodic synchronization job with a 3d party API
    • From time to time, the OAuth tokens for the API would get into a bad state that would require user intervention to re-authorize
    • We wanted to make sure we sent the user a notification via e-mail if the automatic, periodic sync job was failing
    • We didn't want to bomb the user with e-mails on every failure (since waking up to many copies of a 'hey this failed' message if something bombed out overnight isn't anybody's idea of a good time)
    • So we chose to implement a time-of-day based trigger, at one specific run of the sync job it would send e-mail notifications, and all other runs would just log and fail without notifying the end user.

    I didn't create a test around this, because it was super trivial and all kind of internal to the sync job's methods... a private, basically...

    One month later, we're making changes in the sync job, and the next developer (who also happened to be me) is looking at the code and going "wtf why is this here, this doesn't make any sense". And it took me almost an hour of digging through git blame and the PR comments and our Slack channel to find why the change was implemented originally. Had it been longer, I may well have never found it, and would have either just ripped it out (losing a valuable business notification) or just left it in place as a sacred invocation that would continue to fester and cause ever increasing time loss when modifying functionality within that code unit.
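    A test pinning that rule down might have looked something like this (a hypothetical sketch; should_notify, the 8 AM window, and all other names are invented for illustration, not from the actual project):

```python
from datetime import datetime

# Invented constant: the one daily run that is allowed to send e-mail.
NOTIFY_HOUR = 8

def should_notify(run_time: datetime, sync_failed: bool) -> bool:
    """Business rule: failed syncs notify the user only once per day,
    on the designated run, so overnight failures don't flood the inbox."""
    return sync_failed and run_time.hour == NOTIFY_HOUR

# The test documents the rule and breaks if someone bulldozes it:
assert should_notify(datetime(2020, 7, 1, 8, 0), sync_failed=True)
assert not should_notify(datetime(2020, 7, 1, 3, 0), sync_failed=True)   # overnight: just log
assert not should_notify(datetime(2020, 7, 1, 8, 0), sync_failed=False)  # nothing failed
```

    Even a trivial test like this would have answered "wtf why is this here" in seconds instead of an hour of git blame archaeology.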


  • ♿ (Parody)

    @xaade said in Having problems with unit testing philosophy.:

    @apapadimoulis

    Can you give an example of what you WOULD unit test, and what the test would look like?

    Nearly all of our unit tests are in libraries, and at a very low level. Most are in classes difficult to explain, like SlimBinaryFormatter, SlimHtmlFormatter, LazyAsync, but here's an example of a test for the InnerText property of a class called Element.

    It's from AhWeb, our proprietary web framework, and Element is a low-level class that's generally inherited by classes like Div or P or Span. These essentially form a server-side DOM of sorts.

    Note that new Element("hdars") would render as <hdars></hdars>, which is why we want to test the text property.

        [TestMethod]
        public void TestElementInnerText()
        {
            var hdars = new Element("hdars");
            Assert.AreEqual("", hdars.InnerText);
    
            hdars.Controls.Add("hello dars");
            Assert.AreEqual("hello dars", hdars.InnerText);
    
            var hsirs = new Element("hsirs");
            hdars.Controls.Add(hsirs);
            Assert.AreEqual("hello dars", hdars.InnerText);
    
            hsirs.Controls.Add(" hello sirs");
            Assert.AreEqual("hello dars hello sirs", hdars.InnerText);
        }

  • Considered Harmful

    @izzion said in Having problems with unit testing philosophy.:

    a 3d party

    That sounds fun, if brief.



  • @izzion said in Having problems with unit testing philosophy.:

    @xaade
    But how is future-me in six months going to know that the choice of dayparts was intentional.

    A comment.


  • I survived the hour long Uno hand

    @xaade

    #define ONE 2 // no magic numbers, ONE = 1
    

    Edit to not Ben it: Which is to say, comments drifting out of sync with code can't reasonably be made to fail CI. Tests drifting out of sync with code will blow up your CI pipeline (or result in someone commenting out the failing test, which should be a really big freaking flag, and is much easier to catch in code review than a comment that might be glossed over because someone got distracted mid-review or something).



  • @izzion said in Having problems with unit testing philosophy.:

    @xaade

    #define ONE 2 // no magic numbers, ONE = 1
    

    You asked how to tell the code was intentional.

    I've already said that you don't have to test every layer to make sure you have 100% coverage for semantic errors. Every layer will fail with a bug like this.

    Having unit tests to make sure you produced the right logic will not protect you. The next programmer is going to come by and say, "The unit test failed, but we really want to make this change, so I changed the unit test."

    How would you prevent that?

    With a comment?

    With a unit test for the unit test?


    Are you REALLY saying you should export ONE 2 to an interface you can mock so that you can unit test to ensure you wrote ONE 1?



  • @xaade said in Having problems with unit testing philosophy.:

    I'm referring here to how he publicly exposes the method to determine "night"/"day" and then unit tests that. At which point I point out that it's superfluous because testing his primary method covers both adequately for the sake of determining if you broke the code, and that we shouldn't care how his primary method determines night or day.

    Yeah I think this is because it's a simplified example to show the techniques. I probably wouldn't bother testing the 'night or day' method either - but I would want to test the 'lights come on at night' and I'd use an environment boundary interface to do that.



  • @bobjanova said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    I'm referring here to how he publicly exposes the method to determine "night"/"day" and then unit tests that. At which point I point out that it's superfluous because testing his primary method covers both adequately for the sake of determining if you broke the code, and that we shouldn't care how his primary method determines night or day.

    Yeah I think this is because it's a simplified example to show the techniques. I probably wouldn't bother testing the 'night or day' method either - but I would want to test the 'lights come on at night' and I'd use an environment boundary interface to do that.

    Ok, and then when the implementation changes of what resource you use to determine day and night, you can create a different mock to pass in.

    That's not that bad.

    (Or if your environment interface includes both a time and a light sensor, you don't even have to update the mock)


  • I survived the hour long Uno hand

    @xaade
    Correct, unit testing won't prevent programmers from changing business logic when they think they need to. But as much as I lampoon a lot of Uncle Bob's videos and some of the TDD philosophy (and tend to do TDD-AD when I'm working on anything more trivial than Hello World), in my experience there's a lot of merit in implementing your business rules as automated tests (unit, integration, or whatever makes sense).

    I actively contribute to an OSS project that's mostly JS/React and has very little automated testing outside of the most core functionality because of the modular nature of the project and the wide variety of experience/skill from module maintainers. Even when we do prod people into commenting their code and/or using our wrapper on console.log to do debug logging for important logic points, things quickly devolve into "well, that's @izzion's code, you'll have to ask him" and very few people even feel like they can help maintain other modules. And our onboarding rate sucks.

    Would unit tests fix the whole thing? Certainly not. Would unit tests make people more confident that they could make a necessary change in this one area of this module without breaking all the other modules on the page? Probably.



  • @izzion said in Having problems with unit testing philosophy.:

    @xaade
    Correct, unit testing won't prevent programmers from changing business logic when they think they need to. But as much as I lampoon a lot of Uncle Bob's videos and some of the TDD philosophy (and tend to do TDD-AD when I'm working on anything more trivial than Hello World), in my experience there's a lot of merit in implementing your business rules as automated tests (unit, integration, or whatever makes sense).

    I actively contribute to an OSS project that's mostly JS/React and has very little automated testing outside of the most core functionality because of the modular nature of the project and the wide variety of experience/skill from module maintainers. Even when we do prod people into commenting their code and/or using our wrapper on console.log to do debug logging for important logic points, things quickly devolve into "well, that's @izzion's code, you'll have to ask him" and very few people even feel like they can help maintain other modules. And our onboarding rate sucks.

    Would unit tests fix the whole thing? Certainly not. Would unit tests make people more confident that they could make a necessary change in this one area of this module without breaking all the other modules on the page? Probably.

    You can express those business rules at the top layer though. There's still no need to delve down to the exact layer of the rule implementation.

    You just have an extensive enough set of inputs, and the checks on the result come out of the top layer.

    It will still immediately fail, and you'll have your current PR to look at for the break.

    Testing the bottom-most layers only helps you find the break more precisely.

    Using @bobjanova's example. Pass the environmental interface in, mocking certain input times based on the outcomes you expect according to business rules. If the light fails to come on, you've validated your business rule. No need to test the individual method that determines day/night.

    If you change to implement using a light sensor, you update your unit test to mock certain light levels instead.

    I'm not saying you can't write a ton of rules to express expected outcomes. I'm saying I don't think you need to test every layer going down.


    I'm coming from an environment where there could be 3 to 5 layers between the top and that business-logic layer. That's testing all 3-5 layers independently to make sure the input is passed down properly.

    That's too much.



  • @xaade said in Having problems with unit testing philosophy.:

    You can express those business rules at the top layer though

    As long as those tests can be run in a reasonable time and have full coverage of your application boundary, yes you can.

    (EDIT and yes, in this simplistic example, I would write a test against the outer controller, give it a fake time of midday/midnight/whatever the boundary times are and verify against the state of the light)

    Lower level unit tests are often helpful for developers because they're faster and more localised, so they give a faster feedback cycle on breaking stuff, and provide a spec for internal component boundaries. I'd always say that any complicated business logic should always have tests at the lowest level that makes sense, even if you want a system test to cover it too, because it makes it a lot easier to see if (and where) you broke it.
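    The outer-controller test described in the edit might look roughly like this (a Python sketch with invented names; the article under discussion uses C#): feed the controller a fake boundary time and assert on the light's state, without ever testing the night/day helper directly.

```python
from datetime import datetime

class FakeClock:
    """Test double for the environment boundary: a pinned 'now'."""
    def __init__(self, now: datetime):
        self._now = now
    def now(self) -> datetime:
        return self._now

class SmartHomeController:
    """Minimal stand-in for the article's controller."""
    def __init__(self, clock):
        self.clock = clock
        self.light_on = False
    def motion_detected(self):
        # Business rule under test: motion turns the light on only at night.
        hour = self.clock.now().hour
        self.light_on = hour < 6 or hour >= 22

# Midnight: motion turns the light on.
night = SmartHomeController(FakeClock(datetime(2020, 7, 1, 0, 0)))
night.motion_detected()
assert night.light_on

# Midday: motion leaves the light off.
day = SmartHomeController(FakeClock(datetime(2020, 7, 1, 12, 0)))
day.motion_detected()
assert not day.light_on
```

    Note the test never asks "did GetTimeOfDay return Night"; it only checks the observable outcome, so swapping the clock for a light sensor later only changes the fake, not the assertion.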


  • Considered Harmful

    @izzion said in Having problems with unit testing philosophy.:

    Correct, unit testing won't prevent programmers from changing business logic when they think they need to

    The idea is not to cement the business logic in place. It's still fine to change the business logic! The idea is that any time the business logic changes, that should be a deliberate choice. Here, having to update the tests is a signal that "yes, I meant to change that."

    I work in a tangled mess of a project where changing one thing frequently causes other things to break - things our live testers (and even our developers) don't realize changed. The surface area of our web application is far too large (hundreds of unique pages) to test comprehensively every time. This kind of unwanted entanglement would be caught by unit tests - if we could write them, but sadly our legacy codebase is not test-friendly either. We instead do end-to-end testing with Selenium, which takes a long time (and a separate team) to maintain.

    Ironically, writing testable code has the side-effect of reducing entanglement. Splitting everything into interfaces and using plain old data objects makes it less likely that changes have nonlocal effects - but it catches when they do.



  • @bobjanova said in Having problems with unit testing philosophy.:

    I'd always say that any complicated business logic should always have tests at the lowest level that makes sense

    Chances are the complicated business logic is mission critical, and thus I'd agree to test at depth. If your complicated business logic is just there to change a label on a screen... 🤷‍♂️ How many times are you breaking this label? How valuable is it that the label not break? What's the opportunity cost of maintaining 10 layers of tests to ensure the label is correct? If the label holds legal relevance, then it's probably worth it.

    If it's code you're constantly breaking, or code that is likely to silently break other areas, then that's also a good reason.

    I'm wondering if people are underestimating just how many layers of interfaces and dependencies we've generated.


  • Considered Harmful

    I also want to note that, even if the rule is applied mindlessly everywhere, even when it's overkill - this is good because it removes individual discretion from the decision. "Is this important enough to test?" is a subjective decision, where different developers might not reach the same conclusion. A hard-and-fast "the answer is always yes" rule removes the subjectivity from the decision.

    I also have had a terrible record predicting which code would be used once versus which code will be used everywhere. It feels like I've been more often wrong about that than right. And once the code is used everywhere, it's a bitch to go back and refactor it to be done the right way. It's much easier to get it right from the start, even if it doesn't end up being important after all.


  • ♿ (Parody)

    @xaade said in Having problems with unit testing philosophy.:

    I'm wondering if people are underestimating just how many layers of interfaces and dependencies we've generated.

    After your previous discussion on this subject I've decided to try to forget about all of that.



  • @error said in Having problems with unit testing philosophy.:

    Even if the tests are useless, they don't make anything worse. At worst, they have no effect, and at best, they catch broken things.

    I've explained already why this is not true. At worst, they serve as an incorrect spec, enforcing buggy behavior rather than catching it.


  • Banned

    @xaade said in Having problems with unit testing philosophy.:

    Now, instead of just creating a method to GetTime or such, and mocking that method in his unit test, he develops an interface solely to wrap DateTime functionality because STATIC CLASS BAD! and uses IOC to inject a wrapper that wraps ONE METHOD!

    "Mocking static method" means that this method isn't static anymore - not in any useful sense at least.

    Don't optimize for line count. Optimize for how easy it is to explain to newcomers.

    This static method does this thing except when you build in testing configuration in which case you might or might not have a mock there and you might or might not look in a completely different place for what it actually does, depending on how the particular test has been written.

    is much harder to explain than

    This object gets passed this other object as a dependency in the constructor so it can call this method, but in tests it gets a mock instead.


  • Banned

    @xaade said in Having problems with unit testing philosophy.:

    Oh, and he had to make GetTimeOfDay a public method because he wants to unit test that separately for whatever reason.

    Well, he's an idiot. Forget everything you read there and add the domain to your firewall blacklist. Stop listening to idiots advocating for things they don't understand, and start listening to the smart people advocating for things that actually help them in their line of work. Like me.



  • @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    @Gąska said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    So,

    bob.JumpAndRun();
    test if bob called jump
    test if bob called run

    actual code

    bob.JumpAndRun() { Jump(); Run(); }

    Like I said, complete tautologies, why is this being done?

    Accidents happen. One day someone might modify the code to do just a bit more logging and accidentally disable the call to Run. The test is there because it takes 60 seconds to write and will save you a few hours of twisting your neck like an owl and wondering why, despite you clearly seeing that all preconditions are met, the needful still isn't being done.

    That's what I'm being told, but I don't buy it.

    You get into a habit of rewriting the tests every time the implementation changes, and you'll rewrite a test without thinking about whether the new implementation is right. They WILL change the test to "disable the call to 'Run'". And trust me, that extra 20 seconds of duplicating what you wrote WILL turn into a non-thinking task.

    This. People who have actually tested unit testing rather than simply accepting it as dogma came up with some surprising results: when you seed both the codebase and the tests randomly with bugs and tell someone to fix it, people who subscribe to the "tests as documentation of correct behavior" philosophy have a strong tendency to treat the tests as documentation of correct behavior, and they end up "fixing" the codebase to match the buggy tests. So the tests don't end up eliminating incorrect behavior; they end up enforcing it!

    In my (limited) experience and observation, a lot of people from a certain part of the globe (but not all of them, thankfully) tend to fall into that category. From what I know of their culture(s), though, that's just the expected way to do things; it's someone else's responsibility, and the clients/supervisors/testers are always right (even if they're not).

    They also tend to never question the client's spec itself, even if it's anywhere from slightly wrong to actually impossible. If you give them something impossible, they'll just keep "working" on it until you realize it's impossible.


  • I survived the hour long Uno hand

    @djls45
    Yeah, aren’t American students the worst of the worst? :tro-pop:

    @mikeTheLiar



  • @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    @Gąska said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    So,

    bob.JumpAndRun();
    test if bob called jump
    test if bob called run

    actual code

    bob.JumpAndRun() { Jump(); Run(); }

    Like I said, complete tautologies, why is this being done?

    Accidents happen. One day someone might modify the code to do just a bit more logging and accidentally disable the call to Run. The test is there because it takes 60 seconds to write and will save you a few hours of twisting your neck like an owl and wondering why, despite you clearly seeing that all preconditions are met, the needful still isn't being done.

    That's what I'm being told, but I don't buy it.

    You get into a habit of rewriting the tests every time the implementation changes, and you'll rewrite a test without thinking about whether the new implementation is right. They WILL change the test to "disable the call to 'Run'". And trust me, that extra 20 seconds of duplicating what you wrote WILL turn into a non-thinking task.

    This. People who have actually tested unit testing rather than simply accepting it as dogma came up with some surprising results: when you seed both the codebase and the tests randomly with bugs and tell someone to fix it, people who subscribe to the "tests as documentation of correct behavior" philosophy have a strong tendency to treat the tests as documentation of correct behavior, and they end up "fixing" the codebase to match the buggy tests. So the tests don't end up eliminating incorrect behavior; they end up enforcing it!

    @error said in Having problems with unit testing philosophy.:

    The idea is not to cement the business logic in place. It's still fine to change the business logic! The idea is that any time the business logic changes, that should be a deliberate choice. Here, having to update the tests is a signal that "yes, I meant to change that."

    If they're too dumb to notice a comment and realize that the code should not be changed, you get the above, or you get people who simply update the unit test to match.

    I don't really see how this protects you from dumb decisions. To me it only reduces it to two concrete dumb decisions, with the hope that someone figures out the smart decision.

    I fundamentally disagree that a unit test solidifies behavior any more than a comment does. All it does is give TWO locations for a code review to look at, and double the development time. It may APPEAR to help, but I don't have evidence for that. I have plenty of evidence of the opposite.

    We're supposed to be agile; raising barriers to agility because you don't trust your developers to read a comment is backwards thinking, IMO.

    If you have a sensitive area that's easy to break, by all means, unit test it. But unit tests for trivial, non-critical code that literally copy the implementation line for line? I fail to see how that's productive.
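    For concreteness, the line-for-line test being argued about looks roughly like this (a Python sketch of the earlier JumpAndRun pseudocode, using `unittest.mock`; the names come from that example):

```python
from unittest.mock import MagicMock

class Bob:
    def jump(self): pass
    def run(self): pass
    def jump_and_run(self):
        self.jump()
        self.run()

# The "tautological" test: it restates the implementation, call for call.
def test_jump_and_run_calls_both():
    bob = Bob()
    bob.jump = MagicMock()
    bob.run = MagicMock()
    bob.jump_and_run()
    bob.jump.assert_called_once()
    bob.run.assert_called_once()

test_jump_and_run_calls_both()  # passes; fails if someone drops the run() call
```

    Whether that catches a future accidental deletion of `run()`, or just gets edited to match the deletion, is exactly the disagreement here.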



  • @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    cargo-cult

    car not go cult, car go road.

    You have obviously never met an extreme car tuner.


    Alternately, @Groaner?



  • @dfdub said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    What language is that

    Groovy.

    It's a :wtf: for production code, but quite neat if you want to build DSLs.

    Eesh! :eek:
    Our codebase is moving from Java to Groovy. It started with our "integration" code (i.e. the code that formats data into something another system can consume, sends it to that system, and logs the result/response), and now our newest versions of the application are all in it.
    AFAICT, the only thing that has changed is that we now have to comply with Groovy's syntax quirks, such as a newline always ending an expression unless there's a binary operator just before the line break.



  • @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    Anything that depends (directly or indirectly) on a web server, database or even any network cannot be considered stateless in the first place! Any code that tries to peddle that notion is just a disaster in waiting (usually it explodes in production, on a particularly busy day).

    Our program is supposedly stateless (or rather, the devs tout that all the service operations it performs are stateless), but some of the services read state information from a database and create state info to save back to it.

    It does fairly well in production for the most part (i.e. it doesn't put much stress on the system, so the servers can almost always handle it), although it is possible for two people to perform the same operation at the same time on the same subject, resulting in an invalid doubled state (according to our business specs). If you've read enough of my work thread to know what field I'm in, you'll know that something like this can be a Very Bad Thing™, from our clients' perspective at the very least. Fortunately, this sort of occurrence is extremely rare; in ~5 years I think I've seen users do it only once (and in that case it didn't lead to anything actually dangerous happening), though I don't think it's been fixed.


  • Considered Harmful

    @xaade said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    The idea is not to cement the business logic in place. It's still fine to change the business logic! The idea is that any time the business logic changes, that should be a deliberate choice. Here, having to update the tests is a signal that "yes, I meant to change that."

    If they're too dumb to notice a comment and know that the code should not be changed, you get the above, or you get people that just simply update the unit test to match.
    I don't really see how this protects you from dumb decisions. To me it only reduces it to two concrete dumb decisions, with the hopes that someone figures out the smart decision.

    Like I said, if you don't realize that modifying component X affects component Z because it depends on component Y that uses component X, you'll immediately notice that tests on Z started failing when you only expected them to fail on X and maybe Y.

    In a codebase as large as the ones I work with, you could easily not even be aware that Z exists at all. At which point, you could look up the failing tests - and lo and behold, the tests themselves tell you what Z should be* doing!

    Also, like I said, splitting things into testable chunks also tends to prevent these types of interdependencies in the first place.

    * not because the test code is infallible gospel, but because the test code is accompanied by a brief description explaining the requirement


  • Considered Harmful

    I guess it's time for another Tales From Production.

    Not too long ago I broke a reporting dashboard (written by not-me) because it was directly calling a stored procedure written for another component. I had no idea they were interdependent. No one was testing that dashboard because nobody touched it.

    A unit test would have told me about that. Actually, if the logic was in a service, the problem wouldn't have even happened because the dashboard would have asked the service instead of hitting the database directly, and the updated logic would have kept working.
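    A sketch of that service-layer shape (in Python; `ReportService`, `Dashboard`, and the stored-procedure name are all invented for illustration):

```python
class ReportService:
    # The one place allowed to know about the stored procedure.
    # If the procedure or schema changes, only this class changes.
    def __init__(self, db):
        self._db = db

    def sales_total(self):
        row = self._db.call_proc("usp_SalesSummary")  # hypothetical proc name
        return row["total"]

class Dashboard:
    # No direct database access: the dashboard only knows the service.
    def __init__(self, reports):
        self._reports = reports

    def render(self):
        return f"Total: {self._reports.sales_total()}"

# A unit test can exercise the wiring with a trivial fake database:
class FakeDb:
    def call_proc(self, name):
        return {"total": 42}

print(Dashboard(ReportService(FakeDb())).render())  # prints "Total: 42"
```

    If the dashboard had depended on `ReportService` instead of the raw stored procedure, changing the procedure would have broken one class, not an untested dashboard nobody was watching.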


  • And then the murders began.

    @error said in Having problems with unit testing philosophy.:

    A unit test would have told me about that.

    Would it? Normally I’d expect anything touching a database to be mocked for a unit test.

    An integration test might have caught it, but I’ve yet to see automated integration tests actually touching a database out in the wild.


  • Considered Harmful

    @Unperverted-Vixen The direct database access shouldn't have been allowed in the first place. It was a violation of encapsulation. It should have been a call to a service, which would have Just Worked when I made my change, or been a compile-time error when I changed the POCO.


  • Java Dev

    @error said in Having problems with unit testing philosophy.:

    Like I said, if you don't realize that modifying component X affects component Z because it depends on component Y that uses component X, you'll immediately notice that tests on Z started failing when you only expected them to fail on X and maybe Y.

    Not if your unit tests are mocking all external dependencies though.


  • Java Dev

    @Unperverted-Vixen said in Having problems with unit testing philosophy.:

    An integration test might have caught it, but I’ve yet to see automated integration tests actually touching a database out in the wild.

    ✋
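    One low-friction way to do it: run the real SQL against a throwaway in-memory SQLite database rather than mocking the data layer. A Python sketch (the schema and functions are invented for illustration):

```python
import sqlite3

def save_user(conn, name):
    # Real INSERT, not a mock of the data layer.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

def count_users(conn):
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

def test_save_user_roundtrip():
    # Fresh in-memory database per test: fast, isolated, and it still
    # exercises the actual SQL instead of a stub.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    save_user(conn, "alice")
    assert count_users(conn) == 1

test_save_user_roundtrip()  # the SQL really executes
```

    It won't catch vendor-specific quirks of your production database, but it does catch broken queries and schema drift that pure mocks never see.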


  • Considered Harmful

    @PleegWat said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    Like I said, if you don't realize that modifying component X affects component Z because it depends on component Y that uses component X, you'll immediately notice that tests on Z started failing when you only expected them to fail on X and maybe Y.

    Not if your unit tests are mocking all external dependencies though.

    I guess the spec tests I use straddle the line between unit and integration tests.

    I'll even concede OP's point that too much mocking can make a test pointless. I only mock out things that I can't provide in a test environment, or that I need in order to simulate a specific scenario.

    Every component should allow mocking, but mocking everything all the time is :doing_it_wrong:


Log in to reply