The Untests



  • Normally one assumes that tests exist to, well, test that the code does what it's supposed to do. Our project, though, contains a bunch of untests. These:

    • Test what the code is doing, but in such a convoluted way that there is no way to tell whether what it is doing is actually the thing it should be doing.
    • Make heavy use of an “accessor” pattern: an accessor is a class derived from the class under test that gives the test access to that class's internal state, which should be private but is made protected specifically for that purpose (sketched below).
    • Play fast and loose with the state accessed that way instead of setting things up properly by, say, returning the right values from the provided mocks. So whenever the state is refactored, the tests keep breaking even though the intended functionality is just fine.
    • Test private methods separately, so again, if the class is refactored, the tests simply stop making any sense at all.
    • Speaking of the mocks: many of them are derived from the actual class, with any method that might be called in the test overridden, because the code uses the concrete types. Even in some cases where interfaces are actually declared.

    As far as I can tell, this kind of mess is actually pretty typical…
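
    To make it concrete, the accessor business looks roughly like this (a made-up C++ sketch; none of these names are from the actual project):

    ```cpp
    // Hypothetical illustration of the "accessor" anti-pattern described above.
    #include <string>

    class Connection {
    public:
        void connect(const std::string& host) { state_ = resolve(host); }
    protected:                        // widened from private *only* so tests can reach it
        std::string resolve(const std::string& host) { return "ip:" + host; }
        std::string state_;
    };

    // The "accessor": a test-only subclass that exposes the internals wholesale.
    class ConnectionAccessor : public Connection {
    public:
        using Connection::resolve;                 // method that should stay private, tested directly
        std::string& state() { return state_; }    // tests poke this instead of driving it
                                                   // through connect() and the provided mocks
    };
    ```

    Every test then talks to ConnectionAccessor, so the moment state_ or resolve() changes shape, the whole suite falls over even though connect() still behaves correctly.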


  • Discourse touched me in a no-no place

    @bulb said in The untests:

    As far as I can tell, this kind of mess is actually pretty typical…

    Yes, it is unless you do one of two things:

    1. Design for total testability.
    2. Stop caring about the internal state of things so much and stick to testing observables (i.e., key correct use cases and as many tricky erroring edge cases as you can stand).

    Testing fanatics tend to go all out for the first option, but sane folks go for the second. And remember, a test failure isn't necessarily a disaster; it's just telling you that something changed, and you've got to work out whether that is a problem or not. It's quite possible that a test is failing because the functionality it is testing has changed because you intended it to, and that's A-OK.

    (My other advice is that getting 100% test coverage is very difficult. Getting it without creating all sorts of awful fragile mocks is much harder…)
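
    (To illustrate option 2, a minimal sketch in C++, with GoogleTest as the example framework and a hypothetical class:)

    ```cpp
    // Test only what a caller can observe: a correct use case plus a tricky edge case.
    #include <gtest/gtest.h>

    class Counter {
    public:
        void add(int n) { if (n > 0) total_ += n; }   // negative inputs are ignored
        int total() const { return total_; }
    private:
        int total_ = 0;   // stays private; no accessor subclass required
    };

    TEST(CounterTest, IgnoresNegativeInput) {
        Counter c;
        c.add(5);
        c.add(-3);                 // edge case, still exercised via the public API
        EXPECT_EQ(c.total(), 5);   // only the observable result is asserted
    }
    ```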



  • I've mentioned one of these before regarding a ShitFarts con-artistsultant at my former place of employment, haven't I?

    Yeah, SDFC requires all 'Apex' 🔽 programs to pass at 80% unit test coverage before making any pages using them user-accessible. Note that it did not retain a known-working version for use after a failed test suite (presumably because 'working' wasn't really an option, but I aggrdigress), meaning that if the error occurred in a user-facing server, the page would fail hard the moment anyone tried to use it.

    The obvious solution to this would be to use the integrated... oh, no integrated VCS? Uhm, OK, can we connect SVN through... ugh, that's a pain, but at least we can script it, but that doesn't solve the problem regarding the Production server.

    OK, fine, we'll shell out for a separate development server (and why not two more for formal testing and UAT while we're at it), all paid for and managed independently of each other but pipelined to each other for staging (thank Eris, they got that part almost right), and use their rather badly broken import system for passing updates up the chain. Oh, and there are no price breaks for doing it, so every added server costs as much as the production server + the added support needed to allow staging.

    Did I mention that there was nothing keeping you from editing code in any of those servers, including production? And not even a warning for doing it?

    Yeah. Funny thing about that coverage requirement... it doesn't actually check to see if there are any actual tests coded in your Unit Tests. All it does is see if a) the 'test' tickles the relevant part of the source code, and b) any tests that are run pass.

    You can see where this is going, right?
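
    (If you don't: an assertion-free "test" racks up line coverage without checking a thing. A made-up C++/GoogleTest analogue, since the same trick works against any line-coverage gate:)

    ```cpp
    #include <gtest/gtest.h>

    int processOrder(int quantity) { return quantity * 42; }   // code "under test"

    TEST(OrderTest, LooksLikeATest) {
        processOrder(3);   // tickles the code, so coverage counts it;
                           // no EXPECT_* anywhere, so it can never fail
    }
    ```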

    So, gaming this was dead easy. We generally avoided that, because, fuck me, maybe we had the sense to recognize that unit tests are just the first line of defense, meant to catch the dumbfuck errors and fumbled typing everyone is guilty of from time to time. They are good at ensuring that the logic does what you, the programmer, think that piece should be doing, but they say little about how the pieces will work when fit together, or whether they do what the user needs them to do - that's what Integration and User Acceptance testing are for.

    But this particular inconsultant didn't give a fuck about any of that. His unit tests all passed fine!

    No one noticed any problems until I happened to work on something he'd written, and specifically, had to add something to a unit test he wrote.

    Odd... where are the assertions?

    Now I wasn't sure if I was misunderstanding it, because the test was written in a way that seemed pretty weird to me. I asked him about it, figuring he'd say, 'oh, that's such-and-such feature'.

    Then I noticed that the source file in the dev server was mostly empty.

    He'd not only been bypassing the unit test system, he'd been bypassing the entire staging system.

    He wasn't around for much longer.


  • 🚽 Regular

    @bulb said in The untests:

    • Make heavy use of an “accessor” pattern: an accessor is a class derived from the class under test that gives the test access to that class's internal state, which should be private but is made protected specifically for that purpose.

    This was common in a prior job I had until someone had the bright idea of adding some dependency injection to make that whole thing disappear.
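
    Roughly the shape of that fix, as a hypothetical sketch (none of these names are real): the collaborator comes in through the constructor as an interface, so the test hands in a fake instead of reaching into protected state.

    ```cpp
    #include <memory>
    #include <string>
    #include <utility>

    struct Clock {                        // interface the production code depends on
        virtual ~Clock() = default;
        virtual int now() const = 0;
    };

    class Session {
    public:
        explicit Session(std::shared_ptr<const Clock> clock) : clock_(std::move(clock)) {}
        bool expired(int startedAt) const { return clock_->now() - startedAt > 60; }
    private:
        std::shared_ptr<const Clock> clock_;   // injected, so no accessor subclass needed
    };

    // In the test: a fake injected from outside, behaviour checked through the public API.
    struct FixedClock : Clock {
        int now() const override { return 100; }
    };
    // Session s(std::make_shared<FixedClock>());
    // s.expired(10);   // true, and no test ever touched Session's internals
    ```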

    @dkf said in The untests:

    Testing fanatics tend to go all out for the first option, but sane folks go for the second.

    Every job I've had started out with the goal of 100% test coverage, and each time as we assessed our deadlines and what-not, the target percentage decreased until all that was left were mission-critical infrastructure and bug fixes that should never repeat. Other stuff is left to QA. We haven't had a disaster under that policy... yet.


  • Banned

    @the_quiet_one at my last company, I worked for three years on a project that never had test coverage drop below 90%. And they were mostly good, meaningful, readable tests.


  • Banned

    @bulb said in The untests:

    • Make heavy use of an “accessor” pattern: an accessor is a class derived from the class under test that gives the test access to that class's internal state, which should be private but is made protected specifically for that purpose.
    • Play fast and loose with the state accessed that way instead of setting things up properly by, say, returning the right values from the provided mocks. So whenever the state is refactored, the tests keep breaking even though the intended functionality is just fine.
    • Test private methods separately, so again, if the class is refactored, the tests simply stop making any sense at all.

    All those problems would go away if you got rid of this "accessor" "pattern". No access to privates -> no random changes to the object's internals -> no problems when refactoring. If the object's public interface is the only way it'll be used, it's enough to only test its public interface.

    @bulb said in The untests:

    Speaking of the mocks: many of them are derived from the actual class, with any method that might be called in the test overridden, because the code uses the concrete types.

    What else can you do?


  • And then the murders began.

    @gąska said in The untests:

    All those problems would go away if you got rid of this "accessor" "pattern". No access to privates -> no random changes to the object's internals -> no problems when refactoring. If the object's public interface is the only way it'll be used, it's enough to only test its public interface.

    While that's the "right" way to do things, there are times where I still want to unit test my protected methods. Both to make it easier to isolate where a change/breakage occurred, and to make the tests simpler and easier to understand. But I don't want to make the methods part of the public interface; I don't trust my caller to call X then Y then Z in order, so I only give them access to method W that handles that for them. (And by "my caller" I really mean "me". :))

    Maybe the answer there is to start using the accessor pattern but only with methods - keep internal state private, but expose the methods I want to test as protected so that I can make them visible to the unit test.

    That, or start using reflection to call the methods to test. I'm sure nothing could go wrong. :whoosh:



  • @the_quiet_one said in The untests:

    Every job I've had started out with the goal of 100% test coverage

    I'd have to say I'm spoiled working in the hardware world. Due to the expense of revising a chip, not to mention the nightmare of recalling hardware that's in the field (anybody remember FDIV?), everybody all the way up the management chain recognizes that very high test coverage is essential.

    Test coverage means different things, though. In what I do, it means something pretty similar to what you mean: How much of the design has been tested to make sure it does what it is supposed to do? We use metrics like line and branch coverage, and also things like whether state machines have been driven through all valid states and state transitions. More importantly, we define a bunch of checks for "interesting" conditions, e.g., have data packets of all valid lengths been received? Has a packet of type A been immediately followed by a type B? By a type C? Has a particular type of invalid input been received? Altogether, the coverage must be well over 99%, typically something like 99.8% or more.

    To other people, test coverage means the ability to detect manufacturing defects. Current-generation CPUs may have over 1e9 transistors and 1e9 wires connecting them all together, which means billions of things that could possibly be defective in an actual chip. There are special things that are done (or prohibited) in the design in order to make testing easier. The test engineers use software that analyzes both the logic and physical structure of the chip, and suggests sequences of inputs that will cause defects to be detected at the outputs. They run the test programs while simulating various wires stuck at logic 0, stuck at logic 1, shorted to an adjacent wire, etc. 100% coverage is being able to detect every possible such defect. Again, coverage in the very high 90s is mandatory. And the test has to be fast; every single chip that comes out of manufacturing has to be tested, and every second costs money.

    Such high coverage is time-consuming to achieve. One engineer at a company I once worked for had a picture on the wall of his cube titled "A test engineer when he achieves 100% test coverage", illustrated by Billy Crystal as Miracle Max in The Princess Bride.


  • Banned

    @unperverted-vixen said in The untests:

    @gąska said in The untests:

    All those problems would go away if you got rid of this "accessor" "pattern". No access to privates -> no random changes to the object's internals -> no problems when refactoring. If the object's public interface is the only way it'll be used, it's enough to only test its public interface.

    While that's the "right" way to do things, there are times where I still want to unit test my protected methods. Both to make it easier to isolate where a change/breakage occurred, and to make the tests simpler and easier to understand. But I don't want to make the methods part of the public interface; I don't trust my caller to call X then Y then Z in order, so I only give them access to method W that handles that for them. (And by "my caller" I really mean "me". :))

    I disagree. You should only test public interfaces. If you want to test a class partially, it means it violates the single responsibility principle and should be split anyway - then you split it and test the public interfaces of those smaller classes.


  • Considered Harmful

    I write my code, do nearly no testing (and certainly no automated testing), release it, and wait for someone to tell me it's broken. 🔥



  • @dkf said in The untests:

    Testing fanatics tend to go all out for the first option, but sane folks go for the second.

    This is not the work of testing fanatics. It is simply not designed for any testability, so the tests are hacked onto it.

    @dkf said in The untests:

    And remember, a test failure isn't necessarily a disaster

    It isn't. But if tests fail because of implementation details changing, they fail to be useful.

    @scholrlea said in The untests:

    Yeah. Funny thing about that coverage requirement... it doesn't actually check to see if there are any actual tests coded in your Unit Tests.

    I haven't seen any that would. And fortunately nobody bothers measuring coverage here anyway. It would be quite a bit of work, too, because for C++ it's not so easy to set up and our build system is already a massive pile of :wtf:. So we just have review¹ and the reviewer is supposed to verify that there are tests and that they make sense. Well, that means some critical bits are not really tested (everything should be tested by QA; they have their own automation framework).

    @the_quiet_one said in The untests:

    This was common in a prior job I had until someone had the bright idea of adding some dependency injection to make that whole thing disappear.

    Yeah, we are trying to do that too. And in the application itself it is mostly going well; it's just the tests that are resisting. That's why I am rambling about them.

    @the_quiet_one said in The untests:

    Every job I've had started out with the goal of 100% test coverage, and each time as we assessed our deadlines and what-not, the target percentage decreased until all that was left were mission-critical infrastructure and bug fixes that should never repeat. Other stuff is left to QA. We haven't had a disaster under that policy... yet.

    And I think you ain't gonna. As long as end-to-end functional tests are performed and, between you and QA, they cover all the features, you are fine. Unit tests are mainly a debugging and maintenance utility; they are not effective at preventing breakage anyway.

    @gąska said in The untests:

    All those problems would go away if you got rid of this "accessor" "pattern". No access to privates -> no random changes to the object's internals -> no problems when refactoring.

    That's precisely what I am trying to do. The only problem is that I underestimated how big a mess it is, so the schedule is getting rather tight.

    @gąska said in The untests:

    What else can you do?

    Derive them from the interfaces. Most of the classes even have them. They are just incomplete because hacks got bolted on.
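
    Roughly the difference, with hypothetical names (not our actual classes):

    ```cpp
    #include <string>

    // What the untests do now: subclass the concrete type and override whatever
    // happens to be called. The mock is welded to the concrete class.
    class HttpClient {
    public:
        virtual ~HttpClient() = default;
        virtual std::string get(const std::string& url) { return "real network call"; }
    };

    class MockHttpClient : public HttpClient {
    public:
        std::string get(const std::string&) override { return "{}"; }
    };

    // What I'm aiming for: the interface is already declared, so the fake derives
    // from it and the production class is left completely alone.
    struct IHttpClient {
        virtual ~IHttpClient() = default;
        virtual std::string get(const std::string& url) = 0;
    };

    struct FakeHttpClient : IHttpClient {
        std::string get(const std::string&) override { return "{}"; }
    };
    ```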


    ¹ We do reviews wrong, of course. The rabbit hole goes deep.


  • kills Dumbledore

    @pie_flavor said in The Untests:

    I write my code, do nearly no testing (and certainly no automated testing), release it, and wait for someone to tell me it's broken. 🔥

    So you are a Discodev



  • @dkf said in The Untests:

    It's quite possible that a test is failing because the functionality it is testing has changed because you intended it to, and that's A-OK.

    In my project, that's the cause of about 99% of test failures. I recall maybe one or two instances where a failing unit test actually helped me spot a bug in the code.

    Most of our actual bugs stem from bad assumptions about the data (e.g. we thought that column refers to a code, but it's actually a numeric ID encoded as a varchar), and that can't be caught without proper integration tests on test data.



    @maciejasjmj Yep. And that's precisely where writing unit tests is not an efficient use of resources.

    Our case here is similar. We had heaps of broken tests, so a colleague spent a lot of time getting them to pass, and the vast majority of the problems were in the tests themselves. Only two small actual problems were found in the lower layer (where the tests are written more sensibly). In the upper layer, where the main mess resides, nothing.


  • Discourse touched me in a no-no place

    @maciejasjmj said in The Untests:

    Most of our actual bugs stem from bad assumptions about the data

    You assumed that the data was good to begin with. That's bad. :p


  • Banned

    @maciejasjmj said in The Untests:

    @dkf said in The Untests:

    It's quite possible that a test is failing because the functionality it is testing has changed because you intended it to, and that's A-OK.

    In my project, that's the cause of about 99% of test failures. I recall maybe one or two instances where a failing unit test actually helped me spot a bug in the code.

    The point of unit tests isn't to find bugs. It's to make it easy to refactor. You don't have to wonder if you accidentally changed some weird corner case behavior if you have a test for it.



  • @gąska said in The Untests:

    The point of unit tests isn't to find bugs. It's to make it easy to refactor. You don't have to wonder if you accidentally changed some weird corner case behavior if you have a test for it.

    It's a bug when you change your code and it doesn't handle the weird corner case anymore.


  • Discourse touched me in a no-no place

    @maciejasjmj said in The Untests:

    It's a bug when you change your code and it doesn't handle the weird corner case anymore.

    Perhaps. But if your test fails, it might be because the test was wrong in the first place. I've seen that happen all too often.


  • And then the murders began.

    @gąska said in The Untests:

    If you want to test a class partially, it means it violates the single responsibility principle and should be split anyway - then you split it and test the public interfaces of those smaller classes.

    In the ivory tower, sure. But in the real world, I have multistep processes. If I break each step into its own class, then I have to rely on runtime checks for when I screw up & leave out a step.


  • Banned

    @unperverted-vixen said in The Untests:

    @gąska said in The Untests:

    If you want to test a class partially, it means it violates the single responsibility principle and should be split anyway - then you split it and test the public interfaces of those smaller classes.

    In the ivory tower, sure. But in the real world, I have multistep processes. If I break each step into its own class, then I have to rely on runtime checks for when I screw up & leave out a step.

    Then make a test that makes sure the whole process works from the beginning to the end. In addition to tests for individual parts.

    It's basically exactly what you do right now.
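
    Something along these lines, as a hypothetical sketch (your real steps are obviously more involved):

    ```cpp
    #include <gtest/gtest.h>
    #include <string>

    // Each step is its own small class with a testable public interface...
    struct Trim {
        std::string run(std::string s) const {
            while (!s.empty() && s.back() == ' ') s.pop_back();   // strip trailing spaces
            return s;
        }
    };
    struct Quote {
        std::string run(const std::string& s) const { return "\"" + s + "\""; }
    };

    // ...and one public "method W" still guarantees the order of the steps.
    class Pipeline {
    public:
        std::string run(const std::string& input) const {
            return Quote{}.run(Trim{}.run(input));
        }
    };

    // The end-to-end test catches a forgotten or reordered step.
    TEST(PipelineTest, RunsAllStepsInOrder) {
        EXPECT_EQ(Pipeline{}.run("hi  "), "\"hi\"");
    }
    ```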



  • @scholrlea Er, not quite correct.

    Yeah, SDFC requires all 'Apex' 🔽 programs to pass at 80% unit test coverage

    Its Apex test code coverage has to be 75%, and the tests don't have to do much of anything to pass. So it sucks badly as a testing regime. Early versions of the API didn't require you to create test data, and it still doesn't require the tests to actually test anything.

    Note that it did not retain a known-working version for use after a failed test suite (presumably because 'working' wasn't really an option, but I aggrdigress), meaning that if the error occurred in a user-facing server, the page would fail hard the moment anyone tried to use it.

    Er, what? No, it rolls back on a failed deploy.

    OK, fine, we'll shell out for a separate development server (and why not two more for formal testing and UAT while we're at it), all paid for and managed independently of each other but pipelined to each other for staging (thank Eris, they got that part almost right), and use their rather badly broken import system for passing updates up the chain. Oh, and there are no price breaks for doing it, so every added server costs as much as the production server + the added support needed to allow staging.

    Depending on your license: Unlimited/Professional has 100 dev sandboxes, 1 full copy and 1 partial copy instance for testing. I don't know what you are referring to there.

    It's true that automating deployments is still very much a dark art for many who are still welded to change sets (copying from one sandbox to another).
    SF are getting there with DX though; it's still half-arsed, but at least they've acknowledged the problem.

    Did I mention that there was nothing keeping you from editing code in any of those servers, including production? And not even a warning for doing it?

    Nope, you cannot go in and edit Apex code directly in production.
    Visualforce pages/components and a bunch of other things - yes.

