Having problems with unit testing philosophy.



  • @xaade said in Having problems with unit testing philosophy.:

    @apapadimoulis said in Having problems with unit testing philosophy.:

    Unit Testing is a problem when you start thinking and designing in Units.
    I think I understand this.

    We've moved that direction since it's easier to unit test and deploy IOC when you have mockable interfaces. So anything reusable ends up being an interface. Static classes and methods are gone, even if they would contain no data on their own and simply encapsulate functions that are used in more than one class.

    That drives me insane. Surely we can just wrap the damn thing in an interface for testing purposes? Nope, it's now an IOCed interface.

    :wtf_owl: This is definitely not needed and unit testing is a lame excuse at best. There is absolutely no reason to mock a function without any state and/or side effects. It's good practice to cover them with their own tests (it helps to quickly localize bugs), but from that point on they should be treated as a library.

    So, found your problem. Someone in your workplace is preaching cargo-cult.
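
    (To make that concrete - a minimal sketch with made-up names: a stateless helper gets covered once, directly, and from then on you just call it like any library function. No interface, no mock, no container.)

        using NUnit.Framework;

        // A stateless helper: no fields, no side effects.
        public static class PriceMath
        {
            public static decimal ApplyDiscount(decimal price, decimal percent) =>
                price - (price * percent / 100m);
        }

        [TestFixture]
        public class PriceMathTests
        {
            [Test]
            public void ApplyDiscount_TakesPercentageOffPrice()
            {
                // Test it directly; there is nothing to mock.
                Assert.AreEqual(90m, PriceMath.ApplyDiscount(100m, 10m));
            }
        }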



  • @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    cargo-cult

    car not go cult, car go road. EDIT: car go road, over cult.



  • @xaade said in Having problems with unit testing philosophy.:

    @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    cargo-cult

    car not go cult, car go road.

    You have obviously never met an extreme car tuner.



  • @error said in Having problems with unit testing philosophy.:

    Yeah, it's not meant to be easier to write, but the goal is that the whole test suite can sort of be "read" like a document.

    I think Spock actually gets quite close to that ideal:

    def "HashMap accepts null key"() {
      given:
      def map = new HashMap()
    
      when:
      map.put(null, "elem")
    
      then:
      notThrown(NullPointerException)
    }
    


  • @dfdub said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    Yeah, it's not meant to be easier to write, but the goal is that the whole test suite can sort of be "read" like a document.

    I think Spock actually gets quite close to that ideal:

    def "HashMap accepts null key"() {
      given:
      def map = new HashMap()
    
      when:
      map.put(null, "elem")
    
      then:
      notThrown(NullPointerException)
    }
    

    What language is that ⁉



  • @Mason_Wheeler said in Having problems with unit testing philosophy.:

    What language is that

    Groovy.

    It's a :wtf: for production code, but quite neat if you want to build DSLs.


  • Considered Harmful

    @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    @apapadimoulis said in Having problems with unit testing philosophy.:

    Unit Testing is a problem when you start thinking and designing in Units.
    I think I understand this.

    We've moved that direction since it's easier to unit test and deploy IOC when you have mockable interfaces. So anything reusable ends up being an interface. Static classes and methods are gone, even if they would contain no data on their own and simply encapsulate functions that are used in more than one class.

    That drives me insane. Surely we can just wrap the damn thing in an interface for testing purposes? Nope, it's now an IOCed interface.

    There is absolutely no reason to mock a function without any state and/or side effects.

    There is some value in reifying stateless methods. This lets you plug in different methods (strategy pattern). Also, if the methods have more dependencies (and those methods have dependencies), it's good to be able to mock out those methods with simpler implementations, where they might indirectly depend on something like a web server or a database.
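
    (A hypothetical sketch of what that reification buys: once the stateless method lives behind an interface, the algorithm becomes a value you can swap per call site or per test.)

        // Strategy pattern: the stateless method is reified as an object.
        public interface IShippingStrategy
        {
            decimal Cost(decimal weightKg);
        }

        public sealed class FlatRate : IShippingStrategy
        {
            public decimal Cost(decimal weightKg) => 5m;
        }

        public sealed class PerKilogram : IShippingStrategy
        {
            public decimal Cost(decimal weightKg) => 2m * weightKg;
        }

        public sealed class Checkout
        {
            private readonly IShippingStrategy _shipping;
            public Checkout(IShippingStrategy shipping) => _shipping = shipping;

            // Plug in FlatRate, PerKilogram, or a trivial fake in a test.
            public decimal Total(decimal subtotal, decimal weightKg) =>
                subtotal + _shipping.Cost(weightKg);
        }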



  • @error You don't need IOC container injection and all that to achieve mocking of dependencies, though. In any decent language you can create mocks of the dependency classes and construct the service class under test from them.
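
    (For instance, with NSubstitute and hand-rolled constructor injection - IMailer and SignupService are made-up names:)

        using NSubstitute;
        using NUnit.Framework;

        public interface IMailer { void Send(string to, string body); }

        public sealed class SignupService
        {
            private readonly IMailer _mailer;
            public SignupService(IMailer mailer) => _mailer = mailer;
            public void Register(string email) => _mailer.Send(email, "welcome!");
        }

        [TestFixture]
        public class SignupServiceTests
        {
            [Test]
            public void Register_SendsWelcomeMail_NoContainerNeeded()
            {
                // Hand-rolled injection: build the mock, pass it to the constructor.
                // No IOC registration anywhere.
                var mailer = Substitute.For<IMailer>();
                var service = new SignupService(mailer);

                service.Register("alice@example.com");

                mailer.Received(1).Send("alice@example.com", Arg.Any<string>());
            }
        }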



  • @error said in Having problems with unit testing philosophy.:

    @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    @xaade said in Having problems with unit testing philosophy.:

    We've moved that direction since it's easier to unit test and deploy IOC when you have mockable interfaces. So anything reusable ends up being an interface. Static classes and methods are gone, even if they would contain no data on their own and simply encapsulate functions that are used in more than one class.

    That drives me insane. Surely we can just wrap the damn thing in an interface for testing purposes? Nope, it's now an IOCed interface.

    There is absolutely no reason to mock a function without any state and/or side effects.

    There is some value in reifying stateless methods. This lets you plug in different methods (strategy pattern).

    Sure, but pretty much every language today allows you to do that without an explicitly declared interface and IOC. So unless the code is in Java 7, it's completely unnecessary and smells of cargo cult.

    Also, if the methods have more dependencies (and those methods have dependencies), it's good to be able to mock out those methods with simpler implementations, where they might indirectly depend on something like a web server or a database.

    Anything that depends (directly or indirectly) on a web server, database or even any network cannot be considered stateless in the first place! Any code that tries to peddle that notion is just a disaster in waiting (usually it explodes in production, on a particularly busy day). So yes, that should be made a mockable component (there is quite a tricky question of where the interface should actually be, but that's case-by-case with no real rules).


  • Considered Harmful

    @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    Anything that depends (directly or indirectly) on a web server, database or even any network cannot be considered stateless in the first place!

    I work mostly with web applications, and a surprising amount of (poorly written, but what can you do) web application code just assumes a web request/response/session is always available.


  • Banned

    @dfdub said in Having problems with unit testing philosophy.:

    avoid mocking like the plague.

    For the most part, yes, but in general - no. Unit tests should provide input and check output, and nothing more. If input and output are sent via external dependencies - then mock. Or stub, or fake, or whatever is appropriate for the test.

    Unit testing is nice because it makes you REALLY hate overcomplicated architecture. When you mock everything, you want to make "everything" as small as possible - which turns most code into simple argument-accepting value-returning functions with no calls to external dependencies, making the codebase much easier to navigate and maintain.

    That said, idiots will be idiots. That always is, always has been, and always will be a caveat of all good practices.


    I'd like to take a moment to point out that by now, @Mason_Wheeler has most definitely seen my post where I explained where he got me wrong, so he has no excuse anymore to wrongly claim that I believe unit tests prevent bugs. I'm pointing it out because I'm really annoyed by how he purposely misinterprets what I say and then spreads despicable lies about me. It's not the first time he's done this recently, and he has never apologized or even acknowledged his error.



  • @Gąska said in Having problems with unit testing philosophy.:

    For the most part, yes, but in general - no. Unit tests should provide input and check output, and nothing more. If input and output are sent via external dependencies - then mock. Or stub, or fake, or whatever is appropriate for the test.

    Just to clarify: I agree with this, I didn't mean to state an absolute rule. But mocking in a test is definitely a "test code smell" that should make you double-check you're testing the right thing.


  • Banned

    @dfdub I forgot to finish my post. Please refresh and re-read 2nd paragraph.


  • ♿ (Parody)

    @Gąska said in Having problems with unit testing philosophy.:

    I'd like to take a moment to point out that by now, @Mason_Wheeler has most definitely seen my post where I explained where he got me wrong, so he has no excuse anymore to wrongly claim that I believe unit tests prevent bugs. I'm pointing it out because I'm really annoyed by how he purposely misinterprets what I say and then spreads despicable lies about me. It's not the first time he's done this recently, and he has never apologized or even acknowledged his error.

    I see you've met our Mason!


  • kills Dumbledore

    @Gąska said in Having problems with unit testing philosophy.:

    Unit testing is nice because it makes you REALLY hate overcomplicated architecture. When you mock everything, you want to make "everything" as small as possible - which turns most code into simple argument-accepting value-returning functions with no calls to external dependencies, making the codebase much easier to navigate and maintain

    This is my main takeaway from my current job having an actual culture of unit testing. Making your code easily testable leads to better architecture, because it forces you to think about modularity, side effects and what should actually be in each class



  • @Gąska
    I agree with that insofar as writing testable code definitely helps you clean up your architecture. But I disagree with your implication that mocking is a good thing. If you find out that you have to mock a lot, you should either:

    • re-think your architecture to reduce the amount of "glue code" and increase the amount of functional modules that work independently of the big dependencies, or if that's not possible
    • cover that piece of glue code with higher-level tests instead of unit tests.

  • Discourse touched me in a no-no place

    @boomzilla said in Having problems with unit testing philosophy.:

    This doesn't handle the case where b could be zero.

    In a sane system, the problem comes when both a and b are zero, as that's where division doesn't have a single-valued solution under any interpretation (especially including IEEE floating point). That's what generates a NaN, but only when you've not got signalling NaNs enabled and that's where the semantics of all this becomes really horrible; it's an error case that might either produce a poison value or an exception, depending on what bit of code happened to touch the relevant CPU register flag bit last. (Once had a user bug report on this about how things blew up whenever he used his printer; the HP printer driver would set both the floating point mode flag and the locale whenever it was loaded into the process, including via COM. Fucking HP.)
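
    (C# follows those IEEE 754 quiet-NaN semantics by default, so the distinction is easy to demo:)

        using System;

        double zero = 0.0;
        Console.WriteLine(1.0 / zero);                 // Infinity: well-defined as a limit
        Console.WriteLine(zero / zero);                // NaN: 0/0 has no single-valued answer
        Console.WriteLine(zero / zero == zero / zero); // False: NaN is unequal even to itself
        Console.WriteLine(double.IsNaN(zero / zero));  // True: the only reliable check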


  • Banned

    @Jaloopa also, fearless refactoring. A comprehensive test suite that tests even trivial functions minimizes chances that something goes wrong (again, assuming at least high school level of competence).

    Unpopular opinion: developers should spend about 20% of their work time on refactoring. It doesn't just let developers keep some of their sanity intact, it also speeds up development of new features so much that it's a net time save.


  • ♿ (Parody)

    @dkf said in Having problems with unit testing philosophy.:

    In a sane system,

    Yes, yes, yes...but that's rather missing the point.



  • @error said in Having problems with unit testing philosophy.:

    @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    Anything that depends (directly or indirectly) on a web server, database or even any network cannot be considered stateless in the first place!

    I work mostly with web applications, and a surprising amount of (poorly written, but what can you do) web application code just assumes a web request/response/session is always available.

    Is that assumption really a sign of poorly-written code? Seems to me that that's an invariant: if you're in server code and that stuff's not available, you've got bigger problems!


  • Considered Harmful

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    Anything that depends (directly or indirectly) on a web server, database or even any network cannot be considered stateless in the first place!

    I work mostly with web applications, and a surprising amount of (poorly written, but what can you do) web application code just assumes a web request/response/session is always available.

    Is that assumption really a sign of poorly-written code? Seems to me that that's an invariant: if you're in server code and that stuff's not available, you've got bigger problems!

    Yes it is. We, for instance, tried to write console applications using Sitecore APIs for bulk administration tasks, but invariably they would crash trying to access the request cookies. Similar issues running code in test harnesses.


  • Banned

    @dfdub said in Having problems with unit testing philosophy.:

    @Gąska
    I agree with that insofar as writing testable code definitely helps you clean up your architecture. But I disagree with your implication that mocking is a good thing. If you find out that you have to mock a lot, you should either:

    • re-think your architecture to reduce the amount of "glue code" and increase the amount of functional modules that work independently of the big dependencies, or if that's not possible
    • cover that piece of glue code with higher-level tests instead of unit tests.

    Sometimes, the "glue" is unavoidable. I used to work on a project that did a lot of inter-process communication, and most mocks were for the message sending interfaces (we had several of them - don't ask). They were invaluable in making sure that all messages were sent in order and with the right values, and that responses received out of order were handled correctly (we found quite a few bugs that way).

    Our current project is a CLI tool, so the only mock we use is for that one external service we need to query data from - and we use it very sparingly (early in the process, we reach the point where no more data needs to be queried, and only then start the real work).



  • @Gąska said in Having problems with unit testing philosophy.:

    Unit testing is nice because it makes you REALLY hate overcomplicated architecture. When you mock everything, you want to make "everything" as small as possible - which turns most code into simple argument-accepting value-returning functions with no calls to external dependencies, making the codebase much easier to navigate and maintain.

    Yes, being forced to adopt such an overcomplicated architecture would make me hate it too! 🚎

    I'd like to take a moment to point out that by now, @Mason_Wheeler has most definitely seen my post where I explained where he got me wrong, so he has no excuse anymore to wrongly claim that I believe unit tests prevent bugs.

    I saw that. I also saw all your other posts where you talk about how useful they are at preventing bugs, so... 🤷♂

    I'm pointing it out because I'm really annoyed by how he purposely misinterprets what I say and then spreads despicable lies about me. It's not the first time he's done this recently, and he has never apologized or even acknowledged his error.

    You've never established the existence of errors and lies. Simply asserting them doesn't mean anything.


  • BINNED

    @error said in Having problems with unit testing philosophy.:

    @Mason_Wheeler https://www.chaijs.com/api/bdd/

    Do these things actually do anything?

     expect( foo( 5 ) ).to.be( true );
     expect( foo( 5 ) ).has.been( true );
     expect( foo( 5 ) ).is.be( true );
    

    Are the results of these identical or different?



  • @error said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @error said in Having problems with unit testing philosophy.:

    @Kamil-Podlesak said in Having problems with unit testing philosophy.:

    Anything that depends (directly or indirectly) on a web server, database or even any network cannot be considered stateless in the first place!

    I work mostly with web applications, and a surprising amount of (poorly written, but what can you do) web application code just assumes a web request/response/session is always available.

    Is that assumption really a sign of poorly-written code? Seems to me that that's an invariant: if you're in server code and that stuff's not available, you've got bigger problems!

    Yes it is. We, for instance, tried to write console applications using Sitecore APIs for bulk administration tasks, but invariably they would crash trying to access the request cookies. Similar issues running code in test harnesses.

    So... you're using server code in a non-server context.



  • Considered Harmful

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    I also saw all your other posts where you talk about how useful they are at preventing bugs, so...

    They do, specifically, prevent regression bugs. In my experience, they can expose - with surprising frequency - latent bugs that haven't manifested themselves yet. That is, when the code misbehaves in a subtle way that no one has run into yet, or is mitigated by other factors (such as my earlier example, where the consuming code didn't really care if it got back true or any other truthy value). In particular, when I start testing code that should throw an exception in some cases, a lot of times it just spews out garbage and chugs on.
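
    (That last case is exactly what an explicit exception-path test catches - a minimal sketch, QuantityParser being a made-up stand-in:)

        using System;
        using NUnit.Framework;

        public static class QuantityParser
        {
            // A parser that actually throws on bad input (int.Parse throws FormatException).
            public static int Parse(string s) => int.Parse(s);
        }

        [TestFixture]
        public class QuantityParserTests
        {
            [Test]
            public void Parse_RejectsMalformedInput()
            {
                // Without an explicit exception-path test, the "spews out garbage
                // and chugs on" failure mode goes unnoticed.
                Assert.Throws<FormatException>(() => QuantityParser.Parse("not-a-number"));
            }
        }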


  • Considered Harmful

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    So... you're using server code in a non-server context.

    Rather, the API designers made the invalid assumption that you'd only consume the API in a server context. They imposed an artificial limitation that made it far less useful to us.


  • Banned

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    I'd like to take a moment to point out that by now, @Mason_Wheeler has most definitely seen my post where I explained where he got me wrong, so he has no excuse anymore to wrongly claim that I believe unit tests prevent bugs.

    I saw that. I also saw all your other posts where you talk about how useful they are at preventing bugs, so... 🤷♂

    You saw all zero of them? I literally did a text search just to make sure, and there were literally 0 times I used the word "bug" before the post you quoted here.

    I said unit tests prevent one very specific kind of mistake - accidentally changing behavior in an unintended way when modifying old code. You extrapolated that to me saying that TDD leads to no bugs whatsoever. That's either an absolutely outstanding failure at reading comprehension, or a straight-up lie.

    I'm pointing it out because I'm really annoyed by how he purposely misinterprets what I say and then spreads despicable lies about me. It's not the first time he's done this recently, and he has never apologized or even acknowledged his error.

    You've never established the existence of errors and lies. Simply asserting them doesn't mean anything.

    You said I believe tests make code bug-free. I don't believe tests make code bug-free. Here, have your establishment. Will you finally acknowledge you were wrong?


  • BINNED

    @dkf said in Having problems with unit testing philosophy.:

    Once had a user bug report on this about how things blew up whenever he used his printer; the HP printer driver would set both the floating point mode flag and the locale whenever it was loaded into the process, including via COM. Fucking HP.

    That reminds me of one of the crazier bugs I had to hunt down.
    I had just gotten a new workstation and only installed tools on an as-needed basis. Checked out the repository from SVN, implemented a large-ish new feature and spent some time testing it. Once everything worked I committed it and ... it no longer worked. I mean, not just some edge case but it basically failed all the fucking time. How could that be possible, I know that I tested at least something and saw it working?!?
    It turns out that for committing I installed the latest version of TortoiseSVN (GUI client) and once its shell extension got loaded into our application it fucked up the locale, so from then on all parsing of data files produced wrong values.
    Fucking bastards...



  • @Mason_Wheeler said in Having problems with unit testing philosophy.:

    So... you're using server code in a non-server context.

    You're looking at the problem in the wrong way.

    Why is the business logic that @error wants to test tightly coupled to the session and request objects in the first place when it only cares about the values of certain session / URL parameters? Why can't it be written as a "pure" function without hidden side effects so that all input parameters are visible at the API boundary?

    The answer is that the code is designed poorly. Yes, even if the original use case is in the context of a server. You should never tightly couple business logic to implementation details of the framework such as the session or request object.
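
    (A sketch of the refactoring I mean - all names hypothetical: the rule becomes a pure function, and only the thin controller touches the framework objects.)

        // Hypothetical framework-ish session type.
        public interface IRequestSession { object this[string key] { get; } }

        public static class DiscountRules
        {
            // Before: the rule digs its input out of the session, so testing it
            // means faking a whole request pipeline.
            public static bool IsEligibleForSeniorDiscount(IRequestSession session) =>
                (int)session["age"] >= 65;

            // After: a pure function; every input is visible at the API boundary.
            // The controller unwraps the session at the edge:
            //   DiscountRules.IsEligibleForSeniorDiscount((int)session["age"])
            public static bool IsEligibleForSeniorDiscount(int age) => age >= 65;
        }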


  • Considered Harmful

    @error said in Having problems with unit testing philosophy.:

    latent bugs that haven't manifested themselves yet.

    These are basically holes in the Swiss cheese that only become tangible bugs when they line up with other holes in other layers. Your human testers won't find these. You need both.



  • @dfdub said in Having problems with unit testing philosophy.:

    But I disagree with your implication that mocking is a good thing.

    IMO, you should really only be mocking access to external dependencies, or dependencies that perform external actions (reading/writing to a database, reading/writing to files).



  • @boomzilla said in Having problems with unit testing philosophy.:

    When you say, "that it calls," is that part of the test? Or part of your coverage report / metrics?

    I don't know how to answer that question.

    What I'd expect would be good unit tests in our position:
    Given a certain input, the output is tested, exceptions are tested, and every relevant public property is tested for the correct result.

    What we actually do:
    We're testing every single interface layer method to see what its "output" is. That "output" includes what calls it makes to other interfaces.

    It's hard to point this out without literally sharing my codebase, but everything is IOC'd as much as possible. Many things are broken up as small as possible. My typical view model can have 20 or so IOC'd interfaces. Every public method of that VM is tested. The test checks every call on every interface in that list of 20. To do that, every interface is mocked out.

    Then every one of those interfaces has unit tests for all of its public methods, and those interfaces can have 5 or so other dependencies, so every public method is tested for every call it makes on another interface.


    Here's an actual unit test (names changed)

                // Arrange: a fresh child VM and a clean call log on the inner manager mock.
                var bobVM = Substitute.For<IBobVM>();
                _BillsBobManagerVMMock.ClearReceivedCalls();

                // Act: assign to the outer wrapper...
                _bobWrapperVM.BobVM = bobVM;

                // Assert: ...the same instance must be forwarded to the inner manager VM.
                _BillsBobManagerVMMock.Received(1).BobVM = bobVM;
    

    Test to see if an internal interface receives the bob when the external interface receives the bob.

    How is that meaningful?

    Here's another test we have that I find meaningful

                IBolt boltMock = Substitute.For<IBolt>();
                IScrew screwMock = Substitute.For<IScrew>();
                boltMock.Screw.Returns(callinfo => screwMock);
    
                // given a screw and bolt with matching threads, can they connect?
                Assert.IsTrue(_screwBoltConnectivityUtility.CanConnectToScrew(boltMock));
    

  • ♿ (Parody)

    @xaade said in Having problems with unit testing philosophy.:

    My typical view model can have 20 or so IOC'd interfaces. Every public method of that VM is tested. The test checks every call on every interface in that list of 20. To do that, every interface is mocked out.

    😮



  • @boomzilla

    Some of those are child VMs or factories for child VMs.



  • This is the VM I'm talking about, listing out the interfaces it uses. I counted more than 16 because I accidentally counted a few concrete instantiation params (a few bool flags)

            breadcrumb interface
            report builder
            cross VM field management (for if the same field shows up somewhere else. This is a holdover from before IOC)
            validation utility
            navigation utility (for navigating global views in response to requests from this class)
            custom dialog popup utility
            manages toggleable fields (really? we need to IOC this out in case what? It only serves an execute on trigger execute on untrigger)
            report viewing VM
            custom busy overlay
            shares report state
            simple threading utility (this one really freaks me out. It simply provides a run-on-a-background-thread helper. I have no idea why this is no longer a static class. I mean, I guess we might switch out the implementation on demand? Really...?! UGH!!!)
            session manager for tracking logged in status for remote web calls
            proprietary VM that I shouldn't mention
            accesses database
            accesses config
            displays toaster alerts


  • @Mason_Wheeler said in Having problems with unit testing philosophy.:

    Low-level unit tests are of negative net value; overall they cause more problems than they solve.

    I disagree. I wrote some low-level tests over some 3rd-party libs. They saved my bacon when the libs changed some assumptions between versions. It took a bunch of digging to find the documentation about what changed. That test failure actually saved me tons of time - because the code compiled and ran - and it was not readily visible that something was wrong.



  • @xaade great topic, good to know I'm not the only one to notice the emperor has no clothes



  • @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @Jaloopa said in Having problems with unit testing philosophy.:

    Unit tests aren't enough, but they're a useful part of your toolbox

    When you only look at the benefits, which I agree do exist, it's very easy to come to that conclusion. But when you factor in the drawbacks as well, that's where things get a bit more complicated. Based on what I've seen, I simply can't support that conclusion. Low-level unit tests are of negative net value; overall they cause more problems than they solve.

    If I had to come up with a rule of thumb, it would be some ratio between the lines of code in the test and the lines it covers. I don't know the right ratio, but I've seen a 20-line test for a single-line function.



  • @sockpuppet7 said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @Jaloopa said in Having problems with unit testing philosophy.:

    Unit tests aren't enough, but they're a useful part of your toolbox

    When you only look at the benefits, which I agree do exist, it's very easy to come to that conclusion. But when you factor in the drawbacks as well, that's where things get a bit more complicated. Based on what I've seen, I simply can't support that conclusion. Low-level unit tests are of negative net value; overall they cause more problems than they solve.

    If I had to come up with a rule of thumb, it would be some ratio between the lines of code in the test and the lines it covers. I don't know the right ratio, but I've seen a 20-line test for a single-line function.

    This depends IMO. Again, I'm looking for value and meaning.

    If the line of code is mission-critical enough, then the test is still worth it.

    For example, it may be the one function that does the magic math that IS your product, the rest of the code being the UI and config and the support for those values.


  • ♿ (Parody)

    @xaade said in Having problems with unit testing philosophy.:

    We've moved that direction since it's easier to unit test and deploy IOC when you have mockable interfaces. So anything reusable ends up being an interface. Static classes and methods are gone, even if they would contain no data on their own and simply encapsulate functions that are used in more than one class.

    That drives me insane. Surely we can just wrap the damn thing in an interface for testing purposes? Nope, it's now an IOCed interface.

    This happens across the board (technical, non-technical); it's just losing sight of the big picture. It happens naturally as organizations grow and functional silos form (testing, development, operations, etc).

    It's likely not cargo-cult (as @Kamil-Podlesak mentions), since it's management-driven, but more focused on addressing quality concerns at the

    This is why it's so important to cross-train on things. Developers who don't know testing (i.e. the 5 types I mentioned) cannot be effective in their job.

    And we STILL have bugs. TONS of bugs. Even in unit tested code. Why? Because the vast majority of our bugs are logic bugs, where the test is agreeing with the broken code.

    I call those tests "worse than useless". It would have been better to have no test at all in this case.


  • ♿ (Parody)

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    Based on what I've seen, I simply can't support that conclusion. Low-level unit tests are of negative net value; overall they cause more problems than they solve.

    You definitely haven't seen enough 😉

    I think @dcon's example (third-party library testing) is great; even Microsoft (.NET) breaks across versions, so it's balancing cost to debug/fix/ship vs cost to write the tests. I classify it more as a business / risk management decision than a technical one.

    We have a lot of very low-level unit tests (most are low-level). Consider OtterScript / the Inedo Execution Engine. They test the most basic functionality. For example, we don't really test if the string set $var = val; parses/compiles/executes; instead we have a ton of different tests that would handle this...

    • does $var yield a VariableExpression of type scalar with name of var
    • does val yield a LiteralExpression
    • does set $var = val; yield a SetVariableStatement with a VariableExpression and a LiteralExpression
    • does running a SetVariableStatement create a variable that doesn't exist in the current scope
    • does running a SetVariableStatement set a variable that exists in the current scope
    • does running a SetVariableStatement set a variable that exists in a higher scope
    • does running a SetVariableStatement set a variable that exists in the global context
    • etc...

    If all of those things happen on their own, then we can be pretty certain that set $var = val; does what it's supposed to do in the right context. We can just use manual testing in the software ("high quality smoke-tests") to test everything else.

    These tests allow us to make big changes to all parts of OtterScript/ExecutionEngine when needed. For example, introducing the global context (which works slightly differently than the highest scope) caused a lot of tests to fail because, when editing/refactoring the code, it changed the behavior in ways that weren't apparent (until the tests ran and failed).
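
    (To picture the shape of those tests - a hypothetical rendering of the first bullet above, not Inedo's real API:)

        using NUnit.Framework;

        [Test]
        public void DollarVar_YieldsScalarVariableExpression()
        {
            // Parse just the expression "$var" and inspect the node it yields.
            var expr = ExpressionParser.Parse("$var");

            var variable = expr as VariableExpression;
            Assert.IsNotNull(variable);
            Assert.AreEqual("var", variable.Name);
            Assert.AreEqual(VariableType.Scalar, variable.Type);
        }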



  • @xaade said in Having problems with unit testing philosophy.:

    @sockpuppet7 said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    @Jaloopa said in Having problems with unit testing philosophy.:

    Unit tests aren't enough, but they're a useful part of your toolbox

    When you only look at the benefits, which I agree do exist, it's very easy to come to that conclusion. But when you factor in the drawbacks as well, that's where things get a bit more complicated. Based on what I've seen, I simply can't support that conclusion. Low-level unit tests are of negative net value; overall they cause more problems than they solve.

    If I had to come up with a rule of thumb, it would be some ratio between the lines of code in the test and the lines it covers. I don't know the right ratio, but I've seen a 20-line test for a single-line function.

    This depends IMO. Again, I'm looking for value and meaning.

    If the line of code is mission-critical enough, then the test is still worth it.

    For example, it may be the one function that does the magic math that IS your product, the rest of the code being the UI and config and the support for those values.

    In my example the line called a constructor. This constructor had a single mocked parameter, and its insides were also mocked. If someone wanted to maximize uselessness, that is the unit test they would write. And someone seriously wrote it. I deleted it, of course.



  • @apapadimoulis said in Having problems with unit testing philosophy.:

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    Based on what I've seen, I simply can't support that conclusion. Low-level unit tests are of negative net value; overall they cause more problems than they solve.

    You definitely haven't seen enough 😉

    I think @dcon's example (third-party library testing) is great; even Microsoft (.NET) breaks across versions, so it's balancing cost to debug/fix/ship vs cost to write the tests. I classify it more as a business / risk management decision than a technical one.

    We have a lot of very low-level unit tests (most are low-level). Consider OtterScript / the Inedo Execution Engine. They test the most basic functionality. For example, we don't really test if the string set $var = val; parses/compiles/executes; instead we have a ton of different tests that would handle this...

    • does $var yield a VariableExpression of type scalar with name of var
    • does val yield a LiteralExpression
    • does set $var = val; yield a SetVariableStatement with a VariableExpression and a LiteralExpression
    • does running a SetVariableStatement create a variable that doesn't exist in the current scope
    • does running a SetVariableStatement set a variable that exists in the current scope
    • does running a SetVariableStatement set a variable that exists in a higher scope
    • does running a SetVariableStatement set a variable that exists in the global context
    • etc...

    If all of those things happen on their own, then we can be pretty certain that set $var = val; does what it's supposed to do in the right context. We can just use manual testing in the software ("high quality smoke-tests") to test everything else.

    These tests allow us to make big changes to all parts of OtterScript/ExecutionEngine when needed. For example, introducing the global context (which works slightly differently than the highest scope) caused a lot of tests to fail because, when editing/refactoring the code, it changed the behavior in ways that weren't apparent (until the tests ran and failed).

    Wow. Compilers are generally one of my go-to arguments for my position on this. If you look at the test suite for the Boo compiler, pretty much the entire thing works by saying "compile and execute this short code snippet, and make sure the output matches this expected output."

    When I ported async/await to Boo, I grabbed Microsoft's tests for the feature from the Roslyn C# compiler codebase (and they were also in that same basic form), translated the language syntax, and added them to the test suite. Then when I was able to get them all to run without breaking any of the existing tests, I thought I was done. Then I went to actually use the feature and it fell apart. I found a use case that the Roslyn tests didn't cover, and my port had managed to get it wrong. (They do now, because I sent the Roslyn project a new test case.)

    If the Roslyn test suite had been of the form "this input must produce this AST" rather than "this input must produce this output", I would never have been able to make it work, because the ASTs for Roslyn and Boo are completely different in style and basic philosophy.


  • Discourse touched me in a no-no place

    @Mason_Wheeler said in Having problems with unit testing philosophy.:

    Compilers are generally one of my go-to arguments for my position on this.

    If you were to extend that to “programming language implementations” I'd agree from my own experience. Those are usually extremely thoroughly tested precisely because so much else depends on them; if writing to a variable (or parsing an expression, or …) doesn't work, there'll be whole loads of programmers looking at code and thinking “am I going mad here?”.

    The answer to that is “yes”, but we don't need to encourage it.



  • @apapadimoulis

    Can you give an example of what you WOULD unit test, and what the test would look like?



  • Here's an article that goes in the direction we've been going, but I have some questions about it.

    Instead of just calling DateTime.Now in GetTimeOfDay, he passes a datetime into the GetTimeOfDay method for the class, but now the code that calls GetTimeOfDay is using DateTime.Now, so he says he pushed the problem up a layer.

    Now, instead of just creating a method to GetTime or such, and mocking that method in his unit test, he develops an interface solely to wrap DateTime functionality because STATIC CLASS BAD! and uses IOC to inject a wrapper that wraps ONE METHOD!

    Now he writes a fake datetime provider so he can unit test the class.

    All of this, just to be able to unit test his class.
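
    (Roughly, the shape the article ends up with - my reconstruction, names approximate:)

        using System;

        public enum TimeOfDay { Day, Night }

        // The interface that exists solely to wrap DateTime.Now.
        public interface IDateTimeProvider
        {
            DateTime Now { get; }
        }

        public sealed class SystemDateTimeProvider : IDateTimeProvider
        {
            public DateTime Now => DateTime.Now;
        }

        public sealed class LightActuator
        {
            private readonly IDateTimeProvider _clock;
            public LightActuator(IDateTimeProvider clock) => _clock = clock;

            public void MotionDetected()
            {
                if (GetTimeOfDay(_clock.Now) == TimeOfDay.Night)
                    TurnOnLights();
            }

            // Made public so it can be unit tested separately.
            public static TimeOfDay GetTimeOfDay(DateTime time) =>
                time.Hour < 6 || time.Hour >= 20 ? TimeOfDay.Night : TimeOfDay.Day;

            private void TurnOnLights() { /* hardware call elided */ }
        }

        // And in the test, a fake provider pinned to a night-time value:
        //   var clock = Substitute.For<IDateTimeProvider>();
        //   clock.Now.Returns(new DateTime(2020, 1, 1, 23, 0, 0));
        //   new LightActuator(clock).MotionDetected();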

    Oh, and he had to make GetTimeOfDay a public method because he wants to unit test that separately for whatever reason.

    And why was all of this done?

    To test that the actuator turns on lights when motion is detected during the night, and not during the day.

    WHO CARES HOW WE DETERMINE NIGHT AND DAY.... why is that even being unit tested?

    But now, if for whatever reason we decide to change how we determine night and day, we have to change this unit test.


    This is what I'm trying to figure out. The code base is now 50% larger for the sake of one unit test.

    What I want to know is: am I crazy?


  • I survived the hour long Uno hand

    @xaade
    In that specific example, the reason for wrapping DateTime.Now is that it isn't mockable, so you can't ensure that it returns a "known" value for your tests, and thus your tests aren't deterministic. The wrapper winds up giving you an override point that your tests can use to allow for A.CallTo(() => _dateTimeWrapper.Now).Returns(dayTime), so that your test is asserting what happens when a time during Day (or Night) comes back from DateTime.Now



  • @izzion

    I get what the purpose is.

    But we now have a wrapper for one method JUST so we can unit test. That's a separate gripe of mine.

    I was more confused as to why he's exposing this internal method and unit testing it separately.

    I think it's sufficient to test the actual public method we care about, the internals are so small that we'll be able to trace down the reason for the bug fairly quickly. We don't need two unit tests that fail for the same reason, do we?


  • Trolleybus Mechanic

    @xaade said in Having problems with unit testing philosophy.:

    @izzion

    I get what the purpose is.

    But we now have a wrapper for one method JUST so we can unit test. That's a separate gripe of mine.

    I was more confused as to why he's exposing this internal method and unit testing it separately.

    I think it's sufficient to test the actual public method we care about, the internals are so small that we'll be able to trace down the reason for the bug fairly quickly. We don't need two unit tests that fail for the same reason, do we?

    Depends on how complicated the code using the datetime is, how often it's been wrong, and how important it is. If having the scan only happen at night is a preference and not a hard requirement, then you can easily make the case that it's a waste. If it's an essential feature, then these tests are likely worth it. Even if a style of unit testing is bad 90% of the time, you still need to know if you're in a 10% scenario where it's not only good but maybe essential.

    At a previous job a different team had to do something similar so they could verify that audits would be emitted in the right order in a fairly complex situation that as I recall caused a bit of a prod support nightmare.

    Similar things were done in a couple of places to make sure absolutely essential info was logged properly; however, I think that was just using a mock logger and didn't need a thing created just to enable the validation.

