We have automated tests so our code is reliable



  • On a new project, the manager proudly boasts that his team's software is very reliable because they have a VERY large suite of JUnit test cases/suites that cover a very large percentage of the code, and check for all possible exceptions/errors. It is run every 30 minutes, to guarantee that all code is tested and validated virtually immediately upon being checked in. Ooooooookay! A quick perusal shows that every single one of the test cases has the following format:

    public class XxxTest extends TestCase {

        // Test feature x
        public void testFeatureX() {
            try {
                ... // if the code doesn't throw any exceptions, it just returns, implicitly passing the test
            } catch (SomeException se) {
                se.printStackTrace();
                assertTrue(true);
            } catch (SomeOtherException soe) {
                soe.printStackTrace();
                assertTrue(true);
            } catch (Exception e) {
                e.printStackTrace();
                assertTrue(true);
            } catch (Throwable t) {
                t.printStackTrace();
                assertTrue(true);
            }
        }
    }

    There are thousands of test cases in hundreds of classes - every single one exactly like this one. sigh
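
    For contrast, a test of the same shape that actually verified behavior would look more like the sketch below (the class under test and the expected value are made up for illustration):

    public void testFeatureX() throws Exception {
        // assert on an actual result; any unexpected exception propagates,
        // so JUnit reports it as an error instead of a silent pass
        assertEquals(42, new Widget().featureX()); // Widget is hypothetical
    }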
     



  • Of course, the fact that your software doesn't throw any exceptions doesn't mean that it is actually doing the RIGHT thing.

     

    public int incrementByFive(int theNumber) {
        try {
            theNumber += 10;
        } catch (Exception e) {
            // swallow everything
        }
        return theNumber;
    }

     

    will pass just fine
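
    A test that asserted on the return value would catch it at once - a quick sketch using JUnit's assertEquals:

    public void testIncrementByFive() {
        // fails against the method above: expected 47 but was 52
        assertEquals(47, incrementByFive(42));
    }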

     

     



  • What're those assertTrue(true) lines all about?



  • Precisely my point! Thousands of tests that effectively return "pass" under all circumstances - pointless!



  • Wow... so let's see.  Unless I'm mistaken there are two possible outcomes, and both are shining examples of WTFery:

    1) The code executes in the try block, and if no exceptions occur there are no assertions made, so the test passes... without actually asserting that things are what they should be.

    2) If an exception occurs... it prints the exception to the console.. and asserts that True is True, so the test passes.

    Now I've not used JUnit, but I've used NUnit.  This seems to be as "effective" as not running any tests at all, since from the look of things it'll "pass" the test regardless of what happened.
     



  • @SpoonMeiser said:

    What're those assertTrue(true) lines all about?
    I have no idea - they're effectively a no-op, but from what I can tell of the person who wrote this, it was his attempt at saying "pass the test anyway" - which the test would do even without them.



  • @TheRubyWarlock said:

    Now I've not used JUnit, but I've used NUnit.  This seems to be as "effective" as not running any tests at all, since from the look of things it'll "pass" the test regardless of what happened.



    Hopefully, at least, someone will at some point look at the console and think to do something about all the printed stack traces. So it's a little more effective than not having the tests.

    But it seems like just a stubborn refusal to let the tool do its job. Unfortunately, I get the bad feeling that at some point those assertTrue(true)s were assertTrue(false), and someone took the initiative to "fix" all the broken tests by replacing false with true.



  • @campkev said:

    Of course, the fact that your software doesn't throw any exceptions doesn't mean that it is actually doing the RIGHT thing.

     

    public int incrementByFive(int theNumber) {
        try {
            theNumber += 10;
        } catch (Exception e) {
            // swallow everything
        }
        return theNumber;
    }

     

    will pass just fine

     

     

     

    I got the impression that the tests were more like

    try {
        if (incrementByFive(42) != 47) throw [something];
    } catch {
        // if there are exceptions, the test didn't pass
    }

    Regardless, the model seems to lead to some absurdity when the feature you're looking for is making sure that the code throws an exception for invalid input.



  • @TheRubyWarlock said:

    Wow... so let's see.  Unless I'm mistaken there are two possible outcomes, and both are shining examples of WTFery:

    1) The code executes in the try block, and if no exceptions occur there are no assertions made, so the test passes... without actually asserting that things are what they should be.

    2) If an exception occurs... it prints the exception to the console.. and asserts that True is True, so the test passes.

    Now I've not used JUnit, but I've used NUnit.  This seems to be as "effective" as not running any tests at all, since from the look of things it'll "pass" the test regardless of what happened.

    Quite correct! 3,658 tests, all passing every single time. I've already found a couple, via a random scan, that will fail if the try/catches are removed and the exceptions are allowed to occur.

    Interesting aside about JUnit. If you try/catch some exception, that doesn't make the test fail (only uncaught exceptions are considered failures). If you catch the exception, you actually need to do something to tell it to fail. For instance, sometimes you want to do:

     

    try {
        process_1_to_10(10);
        // if it gets here, the test passed
    } catch (OutOfRangeException o) {
        fail("should not have gotten exception");
    }

    try {
        process_1_to_10(11); // throws exception
        fail("expected exception not received");
    } catch (OutOfRangeException o) {
        // test passes: got expected exception
    }

    The point is that they swallowed the exceptions, made useless assertions, and then made claims of how great it was!



  • @snoofle said:

    @TheRubyWarlock said:

    Wow... so let's see.  Unless I'm mistaken there are two possible outcomes, and both are shining examples of WTFery:

    1) The code executes in the try block, and if no exceptions occur there are no assertions made, so the test passes... without actually asserting that things are what they should be.

    2) If an exception occurs... it prints the exception to the console.. and asserts that True is True, so the test passes.

    Now I've not used JUnit, but I've used NUnit.  This seems to be as "effective" as not running any tests at all, since from the look of things it'll "pass" the test regardless of what happened.

    Quite correct! 3,658 tests, all passing every single time. I've already found a couple, via a random scan, that will fail if the try/catches are removed and the exceptions are allowed to occur.

    Interesting aside about JUnit. If you try/catch some exception, that doesn't make the test fail (only uncaught exceptions are considered failures). If you catch the exception, you actually need to do something to tell it to fail. For instance, sometimes you want to do:

     

    try {
        process_1_to_10(10);
        // if it gets here, the test passed
    } catch (OutOfRangeException o) {
        fail("should not have gotten exception");
    }

    try {
        process_1_to_10(11); // throws exception
        fail("expected exception not received");
    } catch (OutOfRangeException o) {
        // test passes: got expected exception
    }

    The point is that they swallowed the exceptions, made useless assertions, and then made claims of how great it was!

     

    That's why when I write tests, I don't wrap them in a try/catch block. This way, if there is an exception, the test will fail, just like if the condition I set out for it is not met. 
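
    Something like this minimal sketch (doWork is a made-up method under test):

    public void testDoWork() throws Exception {
        // no try/catch: an unexpected exception propagates and JUnit
        // reports it as an error, stack trace and all
        assertEquals("expected", doWork("input"));
    }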



  • @snoofle said:

    On a new project, the manager proudly boasts that his team's software is very reliable because they have a VERY large suite of JUnit test cases/suites that cover a very large percentage of the code, and check for all possible exceptions/errors. It is run every 30 minutes, to guarantee that all code is tested and validated virtually immediately upon being checked in. Ooooooookay! A quick perusal shows that every single one of the test cases has the following format:

    public class XxxTest extends TestCase {

        // Test feature x
        public void testFeatureX() {
            try {
                ... // if the code doesn't throw any exceptions, it just returns, implicitly passing the test
            } catch (SomeException se) {
                se.printStackTrace();
                assertTrue(true);
            } catch (SomeOtherException soe) {
                soe.printStackTrace();
                assertTrue(true);
            } catch (Exception e) {
                e.printStackTrace();
                assertTrue(true);
            } catch (Throwable t) {
                t.printStackTrace();
                assertTrue(true);
            }
        }
    }

    There are thousands of test cases in hundreds of classes - every single one exactly like this one. sigh
     

     

    I'm dying to see the console output!! Or at least the line count... is there any way you could get that??



  • So the only code policy in your company is that code MUST NOT under any circumstance throw exceptions?



    Someone else may have explained this already, but basically what an Assert() is supposed to be used for is to let the developer "assert" facts that he knows should be true. If the Assert() fails, it pops up a nasty warning and bombs the program with a stack trace. Asserts only run in code compiled in Debug mode; compiling in Release mode skips them entirely.

    An example of this would be

    void SomeFunc(object Xy)
    {
        Assert(Xy != null);
        // then do something with Xy
    }

    It's basically a developer's tool. Asserts work much like exceptions, except you can't catch an Assert. The idea is that 99.9% of the time the Asserts will pass. So in the test functions above, writing Assert(true) is a complete waste of typing and may even be optimized out at compile time. It's like saying if (true == true). The even funnier thing would be if they're compiling these test cases in Release mode.
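
    In Java, the nearest equivalent is the built-in assert statement, which is disabled by default and only runs when the JVM is started with -ea (a minimal sketch):

    void someFunc(Object xy) {
        // no-op unless the JVM is launched with: java -ea MyApp
        assert xy != null : "xy must not be null";
        // then do something with xy
    }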



  • @snoofle said:

    Interesting aside about JUnit. If you try - catch some exception, that doesn't make the test fail (uncaught exceptions are considered failures). If you catch the exception, you actually need to do something to tell it to fail.

    This is exactly what you want.

    You try your test. By default, if you get any exception at all, it will fail. However, certain exceptions require a different response. An example I used when training a tester recently was:

    try {
        // send a string of a particular format
    }
    catch(InvalidFormatException ife) {
        // set the "skip" toggle
    }
    if(!skip) {
        // other test cases using this format
    }

    For example, we might send a Japanese localised string to start. Having done so, we have specific tests to make regarding how kanji and kana are handled in the parse mechanism. But since we're building the test long before the actual implementation of the Japanese build is done, using the spec as our guide, it is expected that for several months our initial Japanese string will throw an InvalidFormatException. Meanwhile, our test results will show:

        US English locale - Passed
        UK English locale - Passed
        Japanese locale - Skipped
        Russian locale - Passed

    Once the Japanese localised string support is completed, the tests will automagically detect it and run the tests. Occasionally a developer thinks "I'm here at 9 PM on Friday and none of the testers are here, I'll sneak this worthless crap into the build and look like I'm in good shape until Monday"... and then he gets paged Saturday morning at 4 AM because he broke the build. You'd be surprised how many otherwise smart people are just convinced that "Skipped" has been hardcoded in the test and doesn't really mean we skipped a working test.
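
    Fleshed out, the skip toggle might look something like the sketch below - the helper method, the sample string, and the InvalidFormatException are stand-ins for whatever the harness actually provides:

    public void testJapaneseLocale() {
        boolean skip = false;
        try {
            sendLocalizedString("ja-JP", KANJI_SAMPLE); // hypothetical helper
        } catch (InvalidFormatException ife) {
            skip = true; // support not implemented yet: report "Skipped", don't fail
        }
        if (!skip) {
            // kanji/kana parsing assertions for the Japanese build go here
        }
    }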

    An interesting theory of human nature results: people expect you to do what they would do. Whenever someone expects you to do a half-assed job, it's generally because he himself is doing a half-assed job, and the reason he's so surprised when he gets caught is that he expected his security and quality gates to be doing a half-assed job too.

    If you're inclined to be evil, this means that once you go past certain social boundaries, you can always get away with it because people are simply unable to believe you did it. The canonical example from Douglas Adams (Long Dark Tea-Time of the Soul) is that as you walk into a diner, you are perfectly safe simply taking someone else's coffee off their table as you pass. It's so outrageous, their brains just can't process it, and they invent some other explanation for the sudden disappearance of their coffee. In my own experience, roughly 15% of people will verbally object as you take the coffee, but only about 4% will persist in their objection if you ignore it.

    Well, I had to try it. It just seems too ludicrous to be true... but it is.



  • @Strider said:

    I'm dying to see the console output!! Or at least the line count... is there any way you could get that??

    The console output basically scrolls intentionally forced exceptions (testing negative cases) as fast as it can do the I/O. I'm certain there are a few real errors mixed in, but no human would ever spot them. Line count is > 100K.

    The rationale, as explained to me, is that we only need to know that the tests pass; we can ignore the console output.



  • @snoofle said:

    The rationale, as explained to me, is that we only need to know that the tests pass; we can ignore the console output.

    That's even better! Did you... oh, I dunno... explain that the tests are designed to "pass" no matter what, even if they don't work, so you're not actually testing anything? Then again, don't do that. They're liable to label you a troublemaker and let you go, and then I'd feel bad.



  • @CDarklock said:

    If you're inclined to be evil, this means that once you go past certain social boundaries, you can always get away with it because people are simply unable to believe you did it. The canonical example from Douglas Adams (Long Dark Tea-Time of the Soul) is that as you walk into a diner, you are perfectly safe simply taking someone else's coffee off their table as you pass. It's so outrageous, their brains just can't process it, and they invent some other explanation for the sudden disappearance of their coffee. In my own experience, roughly 15% of people will verbally object as you take the coffee, but only about 4% will persist in their objection if you ignore it.

    Well, I had to try it. It just seems too ludicrous to be true... but it is.

    You are unbelievably awesome. That just made my day.

    Thanks! :D



  • @campkev said:

    Of course, the fact that your software doesn't throw any exceptions doesn't mean that it is actually doing the RIGHT thing.

     

    public int incrementByFive(int theNumber) {
        try {
            theNumber += 10;
        } catch (Exception e) {
            // swallow everything
        }
        return theNumber;
    }

     

    will pass just fine

     

     

    This is the perfect example showing why code cannot be used to indicate what code is supposed to do!

    Code can only indicate what it does, not its intent.  Other documentation is required to indicate what it is supposed to do. Comments at the very least, and preferably some requirements and/or design documentation.

    If you write code without stating somewhere what the code is supposed to do...well, you might get lucky and have the code do what you want, but only sometimes. 



  • @too_many_usernames said:

    This is the perfect example showing why code cannot be used to indicate what code is supposed to do!

    Code can only indicate what it does, not its intent.  Other documentation is required to indicate what it is supposed to do. Comments at the very least, and preferably some requirements and/or design documentation.

    If you write code without stating somewhere what the code is supposed to do...well, you might get lucky and have the code do what you want, but only sometimes. 

    The names of variables, methods, classes, etc. can and should indicate what the code is supposed to do. Of course they can lie, but so can every other kind of documentation.
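
    The incrementByFive from earlier in the thread is this point in miniature - the name documents an intent the body doesn't honor:

    // the name promises +5, the body delivers +10: without a statement
    // of intent somewhere, you can't even say which one is the bug
    public int incrementByFive(int theNumber) {
        return theNumber + 10;
    }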

