Don't test, it gets in the way of code coverage



  • I've just started with a company that develops bespoke apps on the Salesforce platform.

    I never expected it would be my vocation in life, to put it mildly, but one thing did attract me to it. I'm into test-driven development, and on Salesforce test coverage is enforced: it's a managed platform, and you are prevented from pushing any code to production that has less than 75% coverage. Great, I thought, no more having to persuade colleagues of the value of testing...

    Then, the other day, my boss asked me to look at an app he'd written. It had nearly 75% coverage and he wanted me to add a couple more tests. Well, I started looking through the existing tests to see how he wanted them done...

    They had no assertions in them. They couldn't fail. They "covered" the code, by simply running the code, whether it worked or not.

    I assumed the boss was the usual non-coding coder you get for an MD in these little companies and whispered to the guy next to me that I had come across this abomination and asked him how I should tactfully correct it.

    "Oh no," he said, "It's supposed to be like that. We all do it like that here."

    "Well," I said, "I'll let it go in his code, I suppose, but when I write tests myself I think I would rather put the assertions in."

    "I wouldn't do that," he said. "You need to spend your time efficiently getting the code coverage up. Don't waste time on assertions. Do you realise assertions add nothing to the coverage figure?"
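    To make the abomination concrete, here's a hypothetical sketch (class and figures invented, written in Java rather than Salesforce's Apex, but the pattern is identical): the first test "covers" the code and can never fail; the second one actually checks something.

```java
// Hypothetical sketch (class and figures invented, in Java rather than
// Salesforce's Apex, but the pattern is identical).
public class CoverageDemo {
    // Code under test.
    static double applyDiscount(double price, double percent) {
        return price - price * percent / 100.0;
    }

    // Coverage-only "test": executes the code, asserts nothing, can never fail.
    static void testApplyDiscountCoverageOnly() {
        applyDiscount(100.0, 10.0);
    }

    // A real test: fails loudly if the calculation is wrong.
    static void testApplyDiscountChecksResult() {
        double result = applyDiscount(100.0, 10.0);
        if (result != 90.0) {
            throw new AssertionError("expected 90.0 but got " + result);
        }
    }

    public static void main(String[] args) {
        testApplyDiscountCoverageOnly();
        testApplyDiscountChecksResult();
        System.out.println("all tests passed");
    }
}
```

    A coverage tool reports both tests identically, which is exactly how assertion-free tests game the 75% gate.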



  • So they start with unjustified, pointless dogma and then foolishly circumvent it by selling indulgences: leaving out assertions.

    This sounds like somebody tried to enforce a test-driven process on a cowboy culture, and the barbarians played along but really just kept doing things their own way.



  • Have you ever used one of those gel keyboard wrist wrests? Not only do they lighten the load on your wrists, but they make immensely satisfying weapons. Personally I'd go for the side of the head.



  • @DOA said:

    Have you ever used one of those gel keyboard wrist wrests? Not only do they lighten the load on your wrists, but they make immensely satisfying weapons. Personally I'd go for the side of the head.

    Funny that you mention that; I have a gel mouse wrist wrest on my mousepad, and I have it turned around so that my wrist doesn't wrest on it... I guess that makes me no better than those cowboy coders of token_woman's... :: ducks ::

    Edit: Yanno what. For today, I will actually use the wrist wrest. This is my big eff you to the cowboy mentality for today.



  • @dohpaz42 said:

    Edit: Yanno what. For today, I will actually use the wrist wrest. This is my big eff you to the cowboy mentality for today.

    You go girl!



  • How are you guys getting line breaks in your posts? CS must like you better than me. It wouldn't let me have any!



  • @Justice said:

    So they start with unjustified, pointless dogma and then foolishly circumvent it by selling indulgences leaving out assertions.

    This sounds like somebody tried to enforce a test-driven process on a cowboy culture, and the barbarians played along but really just kept doing things their own way.

    Obviously given what I said about TDD I personally disagree with the remark about pointless dogma.

    But I have no issue with anyone choosing not to go the test-driven way.

    What bugs the bejeezus out of me is that they spend time writing all this "coverage" and don't spend the extra few minutes to make it actually do something. Might as well, ya know, since it's THERE...??

    (edited to hand-code line breaks in)



  • @token_woman said:

    How are you guys getting line breaks in your posts? CS must like you better than me. It wouldn't let me have any!

    If you're using Chrome, then you have to use the raw html editor and enter all of your own markup.



  •  @token_woman said:

    How are you guys getting line breaks in your posts? CS must like you better than me. It wouldn't let me have any!

    If it likes your browser it gives you an HTML button which lets you write in a sane markup.



  • That's what I call cargo cult programming - those poor clueless guys expect that the sky gods will come and give them bountiful supplies if they can only successfully look like they're actually testing.

    @token_woman said:

    How are you guys getting line breaks in your posts? CS must like you better than me. It wouldn't let me have any!

    In my case I don't see any rich text editor, so I just type the line break and paragraph HTML tags myself.



  • It wouldn't piss me off with QUITE SUCH EXTREMITY if I hadn't mentioned wanting to work in a TDD environment on my resume.


    On the FIRST LINE of my resume.



  • @token_woman said:

    How are you guys getting line breaks in your posts? CS must like you better than me. It wouldn't let me have any!

    I get not just line breaks but also two different kinds of them:

    paragraph breaks

    like that

    and line breaks
    like this

    I'm using Firefox, and I have scripting enabled, and I just press CR or shift+CR in the rich-text editor.



  • @dohpaz42 said:

    I have a gel mouse wrist wrest on my mousepad, and I have it turned around so that my wrist doesn't wrest on it...

    So you're saying that you've wrested the rest from your wrists.

     



  • Cheers for your help on the line breaks; I'm using Chrome and have always posted in Firefox before.

    In Chrome, CS only seems to give you one simple editor. But I've realised you can put HTML in it anyway, so I am manually putting them in.



  • Shocking news: horses led to water can not be forced to drink!



  • @token_woman said:

    @Justice said:

    So they start with unjustified, pointless dogma and then foolishly circumvent it by selling indulgences leaving out assertions.

    This sounds like somebody tried to enforce a test-driven process on a cowboy culture, and the barbarians played along but really just kept doing things their own way.

    Obviously given what I said about TDD I personally disagree with the remark about pointless dogma.

    But I have no issue with anyone choosing not to go the test-driven way.

    What bugs the bejeezus out of me is that they spend time writing all this "coverage" and don't spend the extra few minutes to make it actually do something. Might as well, ya know, since it's THERE...??

    (edited to hand-code line breaks in)

     

    I have no issue with test-driven development; quite the contrary, I think it's a fine idea.  The pointless dogma I was referring to is basing it on meaningless metrics, i.e. requiring 75% code coverage (I should have said something like "defeat the point" rather than circumvent).

    You could get 95% code coverage in a lot of situations by testing the normal path of execution where everything works and the end product is puppies and rainbows, but the important, difficult part is the 5% of the code that handles edge cases and error conditions. Unit tests are great, and TDD has its merits, but the way to enforce it is through code reviews and development practices, not having some automated system block you from checking in anything below an arbitrary code coverage threshold.
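    A hypothetical illustration of that point (function and figures invented): a happy-path test alone reports most lines as covered, while the single uncovered branch is exactly where the bug hides.

```java
// Hypothetical illustration (function and figures invented): the happy path
// covers most of the lines; the one uncovered branch is where the bug hides.
public class EdgeCaseDemo {
    static double withdraw(double balance, double amount) {
        if (amount > balance) {
            // The "5%": an edge-case branch a happy-path suite never executes.
            // Bug: silently zeroes the balance instead of rejecting the request.
            return 0.0;
        }
        return balance - amount;
    }

    public static void main(String[] args) {
        // Happy-path test: passes, and alone it reports most lines as covered.
        if (withdraw(100.0, 30.0) != 70.0) {
            throw new AssertionError("happy path broken");
        }
        // Edge-case test: this is the one that exposes the bug, because
        // overdrawing should be rejected, not settled at zero.
        System.out.println("overdraw result: " + withdraw(10.0, 25.0));
    }
}
```

    The coverage number looks healthy either way; only the edge-case test tells you the overdraw handling is wrong.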



  • @blakeyrat said:

    Shocking news: horses led to water can not be forced to drink!

    Sure, but if you don't want to drink, why go to the water?

    If you don't want to really do unit tests why start a company working exclusively on a platform that forces you to write them?

    And why TF would you employ someone with "Looking to work in a test driven environment" at the top of their resume - wouldn't "They will discover the sham and leave" suggest itself at some point in the hiring procedure?



  • @Justice said:

    I have no issue with test-driven development; quite the contrary, I think it's a fine idea.  The pointless dogma I was referring to is basing it on meaningless metrics, i.e. requiring 75% code coverage (I should have said something like "defeat the point" rather than circumvent).

    You could get 95% code coverage in a lot of situations by testing the normal path of execution where everything works and the end product is puppies and rainbows, but the important, difficult part is the 5% of the code that handles edge cases and error conditions. Unit tests are great, and TDD has its merits, but the way to enforce it is through code reviews and development practices, not having some automated system block you from checking in anything below an arbitrary code coverage threshold.

    +1. Sorry I misunderstood you before. In fact you've just described, better than I could, the lesson I've learned from this whole sorry business.



  • @token_woman said:

    If you don't want to really do unit tests why start a company working exclusively on a platform that forces you to write them?

    And why TF would you employ someone with "Looking to work in a test driven environment" at the top of their resume - wouldn't "They will discover the sham and leave" suggest itself at some point in the hiring procedure?

    Perhaps because they believe that they're actually testing things? Sure, it sounds stupid and unlikely to you, but it is amazing what some people can get up to when they don't think things through.



  • @token_woman said:

    And why TF would you employ someone with "Looking to work in a test driven environment" at the top of their resume - wouldn't "They will discover the sham and leave" suggest itself at some point in the hiring procedure?

    So are you going to leave?

    Answer carefully, this might be a trick question.



  • @blakeyrat said:

    Shocking news: horses led to water can not be forced to drink!
     

     

    Am I the only one to be surprised that blakeyrat understands this topic without anyone having to point out the exact meaning of the term 'assertions' within a test environment context?



  • @Helix said:

    @blakeyrat said:

    Shocking news: horses led to water can not be forced to drink!
     

     

    Am I the only one to be surprised that blakeyrat understands this topic without anyone having to point out the exact meaning of the term 'assertions' within a test environment context?

     

    I think you are. Sorry.



  • @Helix said:

    @blakeyrat said:

    Shocking news: horses led to water can not be forced to drink!
     

     

    Am I the only one to be surprised that blakeyrat understands this topic without anyone having to point out the exact meaning of the term 'assertions' within a test environment context?

    LOL


  • @blakeyrat said:

    So are you going to leave?

    Certainly, unless you happen to be my employer and to have discovered my identity, in which case I take it all back! We all have to pay the bills!



  • @token_woman said:

    @blakeyrat said:
    So are you going to leave?

    Certainly,

    I approve. Carry on.



  • Just to throw oil on a fire...

    "Tests" which do nothing but exercise the code (not checking any results) can find a surprisingly large number of real errors. In many cases, I (using automation) write such tests to establish a baseline (and typically 90% to 95% of "raw" coverage), then layer on the assertions that enforce the "requirements". About 30% of the defects discovered are discovered before any assertions have been put in place.
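    That matches how most test runners treat unhandled exceptions: even with no assertions at all, a test fails if the code under test throws. A hypothetical sketch (names invented):

```java
// Hypothetical sketch (names invented): an assertion-free test still fails
// when the code under test throws, because the exception propagates to the
// test runner.
public class SmokeTestDemo {
    static String firstWord(String s) {
        // Bug: throws NullPointerException on null input.
        return s.trim().split("\\s+")[0];
    }

    public static void main(String[] args) {
        // "Raw coverage" call with no assertions at all: passes silently.
        firstWord("hello world");
        // The same style of test finds the null-handling defect anyway:
        try {
            firstWord(null);
            System.out.println("no defect surfaced");
        } catch (NullPointerException e) {
            System.out.println("assertion-free test still failed: " + e);
        }
    }
}
```

    That class of defect (crashes, unhandled exceptions) is plausibly where the "30% before any assertions" figure comes from; wrong-but-non-crashing results still need the assertions layered on top.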



  • @TheCPUWizard said:

    Just to throw oil on a fire...

    "Tests" which do nothing but exercise the code (not checking any results) can find a surprisingly large number of real errors. In many cases, I (using automation) write such tests to establish a baseline (and typically 90% to 95% of "raw" coverage), then layer on the assertions that enforce the "requirements". About 30% of the defects discovered are discovered before any assertions have been put in place.

    That's all fine and good, but most likely that 30% of bugs would have been found and fixed anyway. It's the other 70% that you really should be concerned about, because they could go undiscovered for a very long time, and cause much worse problems than simple program execution problems. Say for example you collect a shit-ton of data (just a smidgen larger than crap-ton, but smaller than a holy-shit-ton) that has some complex algorithms applied to it, which in turn feeds some other application that processes them to determine what kind of annual pay rise you receive. Let's hypothetically say that the calculation undervalues the data, so that you get a fraction of the pay rise you are rightfully entitled to get each year. One day this bug is discovered, but because the data has been mangled by the bug, there is no way to go back and retro your pay. A few simple unit tests could have easily discovered this problem, but now you're out all of this fictional money.

    Yes, this is an "edge case" that I dreamed up, but that's kind of what tests are there for; among other things.

    P.S. I know that you're not disagreeing with testing, but I just wanted to add more oil to the fire that you added oil to, so my reply was not particularly aimed at you.



  • @TheCPUWizard said:

    Just to throw oil on a fire...

    "Tests" which do nothing but exercise the code (not checking any results) can find a surprisingly large number of real errors. In many cases, I (using automation) write such tests to establish a baseline (and typically 90% to 95% of "raw" coverage), then layer on the assertions that enforce the "requirements". About 30% of the defects discovered are discovered before any assertions have been put in place.

    Sounds like a good idea, especially if you have the automation available (what do you use by the way and how little human effort is involved?)

    Claiming your raw coverage figure as test coverage as that is understood in the business would be fraudulent, of course - but to use it as a baseline just sounds sensible.



  • Right now I am using custom tooling that has been developed in-house by my firm; we are in the process of "productizing" it, but that is not scheduled for release until Q3 2012.

    I am not sure about the "fraud" part. I look at coverage from two distinct angles: code and requirements. The business really only cares about what percentage of the requirements are being tested, and typically these are not done with unit tests.

    Consider the requirement that "Customer balance cannot exceed credit limit". There are thousands of ways that individual methods, properties, etc. could be combined to achieve this goal. From a unit testing perspective, the determination of whether a specific method does what it should is largely irrelevant to the business goal.

    Put another way: every refactoring of the code should impact multiple unit tests, but if the intention is that the refactoring should not change behaviour, then the behavioural tests should not require any updating.



  • By "in the business" I meant the programming business, not the [i]business[/i] business. Sorry to be confusing. "In the trade" would have made more sense in the context.



  • @token_woman said:

    +1. Sorry I misunderstood you before. In fact you've just described, better than I could, the lesson I've learned from this whole sorry business.
     

    No sweat, sorry I wasn't clear about it.  How long have you been at the new gig?  Maybe you have an opportunity to lead by example.

    @TheCPUWizard said:

    Consider the requirement that "Customer balance cannot exceed credit limit". There are thousands of ways that individual methods, properties, etc. could be combined to achieve this goal. From a unit testing perspective, the determination of whether a specific method does what it should is largely irrelevant to the business goal.

    Put another way: every refactoring of the code should impact multiple unit tests, but if the intention is that the refactoring should not change behaviour, then the behavioural tests should not require any updating.

    Not sure I'm following you here...are you separating the "mock up an appropriate program state to test each function" sort of unit tests from automated behavioral testing?  I've always just lumped them both under the "unit test" label.  It seems like the former isn't particularly useful for ensuring that refactoring won't break existing functionality, and is more for testing complicated logic (math functions for instance).  Having automated tests for that sort of thing is useful of course, but even then I'm not sure why refactoring would necessarily break that sort of thing; I would think those sorts of methods would rarely (if ever) be subject to refactoring.

    (Not trying to argue against you here, I'm genuinely curious as to how your unit/automated testing is set up.)



  • @Justice said:

    (Not trying to argue against you here, I'm genuinely curious as to how your unit/automated testing is set up.)

    No worries. I have found that very granular unit tests [G.U.T.] have a lot of value. This has been an empirical determination from running my software development company for 27+ years. I keep these distinct from requirements/functional testing, and also from design rule tests.

    Since the focus of GUT testing is what the method/property/etc. does, rather than the context in which it does it [hope that makes sense], it is usually trivial to set up the "mock environment". In fact, the ease of setting up the mock environment is a good (but not absolute) indicator of whether the item under test is too tightly coupled.

    Others discount the value of this testing, but I have run many projects with many clients (therefore different programming teams) and in over 90% of the cases, the ones where GUT was adopted had much more stable and predictable development and maintenance cycles than the ones that did not.

    Yes, as others pointed out, many of the bugs would have been caught...eventually. But (statistically) the faster a bug is found, the easier and cheaper it is to fix.



  • @token_woman said:

    @TheCPUWizard said:

    Just to throw oil on a fire...

    "Tests" which do nothing but exercise the code (not checking any results) can find a surprisingly large number of real errors. In many cases, I (using automation) write such tests to establish a baseline (and typically 90% to 95% of "raw" coverage), then layer on the assertions that enforce the "requirements". About 30% of the defects discovered are discovered before any assertions have been put in place.

    Sounds like a good idea, especially if you have the automation available (what do you use by the way and how little human effort is involved?)
    Claiming your raw coverage figure as test coverage as that is understood in the business would be fraudulent, of course - but to use it as a baseline just sounds sensible.

     

    I agree some bugs could be caught even before the assertions.  But at my company, before I joined, most of the tests were written as:

        public void testFoo() {

            try {

                // entire method body here

            } catch (Exception e) {

                // either an actual no-op, or System.out.println("something failed, but I won't say what it was")

            }

        } 

    In addition to that, they are almost always functional tests (nothing mocked out) with hard-coded user credentials, environment URLs, and hard-coded primary keys.  So when I try to improve our test quality by removing the try-catch wrapper, they are almost always failing.
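    For comparison, a minimal sketch of the fix (method names invented): delete the swallowing try-catch so exceptions propagate and fail the run, and assert on the result so a wrong answer fails too.

```java
// Hypothetical rewrite (method names invented): the same style of test with
// the swallowing try-catch deleted, plus an explicit assertion on the result.
public class FooTest {
    // Stand-in for the code under test.
    static int parsePort(String s) {
        return Integer.parseInt(s.trim());
    }

    static void testParsePort() {
        // No try-catch wrapper: a thrown exception now fails the test.
        int port = parsePort(" 8080 ");
        // And an assertion: a wrong result now fails the test too.
        if (port != 8080) {
            throw new AssertionError("expected 8080 but got " + port);
        }
    }

    public static void main(String[] args) {
        testParsePort();
        System.out.println("testParsePort passed");
    }
}
```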

    Even more than copy/paste engineering, *this* has been my biggest pet peeve at my current company, and several times while looking at the work of team members, I realized I made a mistake joining this team. After lurking for 5 years, I finally signed up just to say that.

     

    And we don't even have an excuse of trying to improve code coverage, because no one (not management, not project lead, not the developers) cared how lousy the coverage rate was.



  • @blakeyrat said:

    Shocking news: horses led to water can not be forced to drink!
     

    More shocking news: yes, they can.  All it takes is a piece of hose, a suitable squishable container and something to connect the two together (though there are specific apparatuses that will do better).  I've done it with sick cows, and horses are surely the same. 



  • @mahlerrd said:

    I've done it with sick cows, and horses are surely the same. 

    They're not.  And don't call me Shirley.



  • @mahlerrd said:

    @blakeyrat said:

    Shocking news: horses led to water can not be forced to drink!
     

    More shocking news: yes, they can.  All it takes is a piece of hose, a suitable squishable container and something to connect the two together (though there are specific apparatuses that will do better).  I've done it with sick cows, and horses are surely the same. 

    And once more I am amazed by the variety and inventiveness of pedantic dickweedery. Bravo.



  • @Sutherlands said:

    And don't call me Shirley.
     

    rimshot



  • @blakeyrat said:

    And once more I am amazed by the variety and inventiveness of pedantic dickweedery.
     

    Advanced pedantic dickweedery, as we've just concluded in a different thread.

     



  • @TheCPUWizard said:

    G.U.T. testing

    Looks useful. Alas, "gut testing" seems to be the prevalent method - "Hmm, from the look of it, I don't think that it's actually broken; at least not too much." ;)



  • @piskvorr said:

    @TheCPUWizard said:
    G.U.T. testing

    Looks useful. Alas, "gut testing" seems to be the prevalent method - "Hmm, from the look of it, I don't think that it's actually broken; at least not too much." ;)

    Agreed that as a sole test approach it is not sufficient, however as one component of a well designed suite of test methodologies I am convinced that the ROI is significant. As one (probably) last example. Consider a situation where you have functional/behavioural tests and may or may not have GUT tests. A change is made and a functional test fails (due to a bug in the change). Tracking this down may involve looking at a number of areas. If a GUT test also fails, that isolates a specific method as having changed behaviour, this can cut the failure diagnosis time by "an order of magnitude" and lead to faster resolution.



  • The only problem here is that you care too much.  If they want to do sloppy development in the company, that is what they are going to do, and you should go with the flow.  In a lot of businesses, there is a lot more profit in writing bad code and not testing it.



  • @Salami said:

    The only problem here is that you care too much.  If they want to do sloppy development in the company, that is what they are going to do, and you should go with the flow.  In a lot of businesses, there is a lot more profit in writing bad code and not testing it.

    This. If you're selling consulting for companies that want to spend the least amount necessary for something, this is what they actually want. They know they will not get an engineering masterpiece, but something that will get the job done.

    They don't want perfect or even good, they just want 'good enough'.

