Testing creates problems later



  • Recently we have been going through a bit of a management change and I have been slightly involved in discussing our new development procedures. By involved I mean I raise questions and give suggestions, which are then ignored. One of these suggestions was that unit testing be 'mandatory', i.e. some form of metric for what constitutes 'enough tests' for a piece of code, TBD.

     The reply I got back from another member of the team (loved by management because he delivers flashy UI things):

    "<MyName>, under the unit tests I understand testing of each class or even procedure in the code individually. Do not know if we understand in the same way unit tests that you want to do. I do not recommend this type of testing. It takes too much time and creates all kinds of problems with reworking the code later. After each change of code, you have to change the unit test."

    As a result I was told that I should restrict myself to " 'smoke' tests as well as simplified unit testing" WTFTM. Mind you that does leave the definition of simplified up to me I guess.



  • I would say if you're writing unit tests for every class and method, then yeah, you're testing way too much. You're adding a significant amount of code for little-to-no purpose. Conversely, if you're testing nothing you're also doing it wrong. I generally unit test complex pieces of logic and do automated acceptance testing for everything else.



  • @SandGroper said:

    Recently we have been going through a bit of a management change and I have been slightly involved in discussing our new development procedures. By involved I mean I raise questions and give suggestions, which are then ignored. One of these suggestions was that unit testing be 'mandatory', i.e. some form of metric for what constitutes 'enough tests' for a piece of code, TBD.

     The reply I got back from another member of the team (loved by management because he delivers flashy UI things):

    "<MyName>, under the unit tests I understand testing of each class or
    even procedure in the code individually. Do not know if we understand in
    the same way unit tests that you want to do. I do not recommend this
    type of testing. It takes too much time and creates all kinds of
    problems with reworking the code later. After each change of code, you
    have to change the unit test."

    As a result I was told that I should restrict myself to " 'smoke' tests as well as
    simplified unit testing
    " WTFTM. Mind you that does leave the definition of simplified up to me I guess.

    I would have responded (and in fact have responded) that "Code I am responsible for will be properly tested, including an appropriate set of unit tests that verifies the intended functionality of each exposed class/method/property/event. It is your (the manager's) choice if the code I am responsible for will be at this organization or elsewhere. Which would you prefer?"



  • @morbiuswilters said:

    I would say if you're writing unit tests for every class and method, then yeah, you're testing way too much. You're adding a significant amount of code for little-to-no purpose. Conversely, if you're testing nothing you're also doing it wrong. I generally unit test complex pieces of logic and do automated acceptance testing for everything else.

    Have you *ever* had a piece of simple code that "was not worth testing", then have someone make a change to it, such that it evolves to the point where testing could reveal a defect, but no one realized that such tests were needed????

    If the answer is yes, then I would argue that you are testing too little and that you saved an insignificant amount of time (a few thousand unit tests for trivial items can be automatically generated in under an hour) in exchange for a very great risk.



  • @TheCPUWizard said:

    Have you ever had a piece of simple code that "was not worth testing", then have someone make a change to it, such that it evolves to the point where testing could reveal a defect, but no one realized that such tests were needed????

    If a method evolves to the point where it needs to be tested, then a test should be added. You know, it's also possible a method might be added and that somebody might not have written a test for it; better unit test even more!!!

    @TheCPUWizard said:

    If the answer is yes, then I would argue that you are testing too little and that you saved an insignificant amount of time (a few thousand unit tests for trivial items can be automatically generated in under an hour) for a very-great-risk.

    You know, you'd think somewhere around the point where you were writing a script to generate thousands of useless unit tests you might have looked at your life and realized you'd become a cargo cultist. Here's a good standard: unit tests that can be generated en masse with a script have no value whatsoever; you're testing trivial logic. Not only that, they're counter-productive: you waste time on it, it gives you a false sense of security and it makes it more likely you aren't focusing on the things you should be testing. If somebody handed in unit tests like that I'd consider having them fired because it's obvious that they aren't using their brain and instead are looking for an easy way out. Unit testing is the new static typing; all those dumbasses who insisted the compiler could catch their mistakes are now convinced 100% test coverage will catch their mistakes.

    I'm sick of this fucking industry. I'm going to switch to a field where "engineering" doesn't mean "I saw some guy talk about it once at a conference and now I just do it blindly, ad nauseam, until the next fad catches on."



  • @morbiuswilters said:

    @TheCPUWizard said:
    Have you *ever* had a piece of simple code that "was not worth testing", then have someone make a change to it, such that it evolves to the point where testing could reveal a defect, but no one realized that such tests were needed????

    If a method evolves to the point where it needs to be tested, then a test should be added. You know, it's also possible a method might be added and that somebody might not have written a test for it; better unit test even more!!!

    @TheCPUWizard said:

    If the answer is yes, then I would argue that you are testing too little and that you saved an insignificant amount of time (a few thousand unit tests for trivial items can be automatically generated in under an hour) for a very-great-risk.

    You know, you'd think somewhere around the point where you were writing a script to generate thousands of useless unit tests you might have looked at your life and realized you'd become a cargo cultist. Here's a good standard: unit tests that can be generated en masse with a script have no value whatsoever; you're testing trivial logic. Not only that, they're counter-productive: you waste time on it, it gives you a false sense of security and it makes it more likely you aren't focusing on the things you should be testing. If somebody handed in unit tests like that I'd consider having them fired because it's obvious that they aren't using their brain and instead are looking for an easy way out. Unit testing is the new static typing; all those dumbasses who insisted the compiler could catch their mistakes are now convinced 100% test coverage will catch their mistakes.

    I'm sick of this fucking industry. I'm going to switch to a field where "engineering" doesn't mean "I saw some guy talk about it once at a conference and now I just do it blindly, ad nauseam, until the next fad catches on."

    I hope you were not referring to me in the last sentence..... EVERYTHING I do related to technology is because I have first-hand experience [35 years total professional, 28 years running my own firm] of what works and what causes problems (since I deal with many clients per year, this experience covers a wide range of environments). Some things I have tried work fantastically, and I keep them; some were abject failures, and I learned how to avoid those types of failures.

    As far as "If a method evolves to the point where it needs to be tested, then a test should be added", there is simply too much risk (for my scenarios, in my experience) that it will not be. On the other hand, if you prevent a check-in to the repository unless the code is tested, you have eliminated the risk of someone missing something. So I much prefer making it IMPOSSIBLE that "a method might be added and that somebody might not have written a test for it".

    Finally, "now convinced 100% test coverage will catch their mistakes"... I have never made such a claim. However, when a mistake is found at any point in the chain, my first step is always to add a unit test that catches this mistake (before making any changes to the codebase). The development of this test quite often reveals other conditions which could lead to other issues, and those tests are added also.



  • @morbiuswilters said:

    I'm sick of this fucking industry. I'm going to switch to a field where "engineering" doesn't mean "I saw some guy talk about it once at a conference and now I just do it blindly, ad nauseum, until the next fad catches on."
    Please, can I marry you and bear your children?



  •  @morbiuswilters said:

    ... [thousands of] unit tests ... generated en masse with a script ...

    And who's checking that the generated tests are themselves correct - and how are they doing it? Has the generator script been properly unit tested?



  • @TheCPUWizard said:

    However, when a mistake is found at any point in the chain, my first step is always to add a unit test that catches this mistake (before making any changes to the codebase). The development of this test quite often reveals other conditions which could lead to other issues, and those tests are added also.
      Please, can I marry you and bear your children?

    I mean, aren't you two fighting at too high a level here? Programming needs to be done with a brain, not with implementing the latest fad. And how can development be done any better than with what you describe (write a test that catches a bug before tackling the bug itself)?

     



  • @Watson said:

     @morbiuswilters said:

    ... [thousands of] unit tests ... generated en masse with a script ...

    And who's checking that the generated tests are themselves correct - and how are they doing it? Has the generator script been properly unit tested?

    A very valid concern. The generator we have developed took quite some time, and yes, it does have its own set of unit tests, as well as functional tests against a codebase that has a number of known issues. If the generator does not catch the issues, it fails its own tests. If (when) we find a defect [in real code that the generator should have caught], then we update the tests for the generator, and finally update the generator.

    As I believe I have said before, the investment is not small; but when one is developing large amounts of software for customers who demand extremely high quality, then the ROI becomes significant.



  • @SandGroper said:

    Recently we have been going though a bit of a management change ...

     The reply I got back from another member of the team...

    In situations such as those, I tend to request clarification and formulation of a clear policy that I can stab back in their eyes when it all inevitably fails.



  •  I test by hitting F5 and seeing if it blows up.



  • @morbiuswilters said:

    Unit testing is the new static typing; all those dumbasses who insisted the compiler could catch their mistakes are now convinced 100% test coverage will catch their mistakes.
     

    You're snarking here, but you're not too far from the truth.  Unit testing (along with a bunch of other cargo-cult practices) was developed in Smalltalk, and one of the main things it was used for was catching the type errors that Smalltalk, being a dynamic language, could not find for you.



  • "when one is developing large amounts of software for customers who demand extremely high quality, then the ROI perceived business value of forwarding a TAP output file with 10,000 lines of 'ok n - meaningless test case passed' becomes significant."



  •  I deleted a production database... in a unit test... So yes, badly coded tests or a lack of separation between dev and production can be problematic... sometimes :p



  • @Rootbeer said:

    "when one is developing large amounts of software for customers who demand extremely high quality, then the ROI perceived business value of forwarding a TAP output file with 10,000 lines of 'ok n - meaningless test case passed' becomes significant."

     Who said anything about "meaningless tests"????  Consider the following:

    enum PrimaryColor { Red, Green, Blue }
    PrimaryColor MyColor { get; set; }

    Some would argue that testing MyColor is "meaningless".

     But consider if (an automatically generated) test

    .... looped through all of the enum values and verified that the same value was returned. Now someone makes a change such that, under certain circumstances, the get value is not the set value. The behaviour has changed, and a test fails.
    .... verified that a default instance returned Red... and someone "innocently" put a new value at the start of the list (unintentionally changing the default behaviour).

     I have seen both of these conditions occur in the real world, where they were not caught (and my testing philosophy was not applied), which resulted in significant issues. Much better to reduce the possibility by several orders of magnitude...



  • @tchize said:

     I deleted a production database.... In a unit test....So yes, badly coded test or lack of separation between dev and production can be problematic... sometimes :p

    I would argue the exact opposite: your test found a real problem (the lack of separation). Now...

    1) Add a test that detects talking to a production database (test should fail)
    2) Go fix your environment (test will now pass)

    And return to your regularly scheduled operations...

    ps: the above are NOT "unit tests", but do illustrate the impact of a comprehensive testing policy.



  • @TheCPUWizard said:

    @Rootbeer said:

    "when one is developing large amounts of software for customers who demand extremely high quality, then the ROI perceived business value of forwarding a TAP output file with 10,000 lines of 'ok n - meaningless test case passed' becomes significant."

     Who said anything about "meaningless tests"????  Consider the following:

    enum PrimaryColor { Red, Green, Blue }
    PrimaryColor MyColor { get; set; }

    Some would argue that testing MyColor is "meaningless".

     But consider if (an automatically generated) test

    .... looped through all of the enum values and verified that the same value was returned. Now someone makes a change such that, under certain circumstances, the get value is not the set value. The behaviour has changed, and a test fails.
    .... verified that a default instance returned Red... and someone "innocently" put a new value at the start of the list (unintentionally changing the default behaviour).

     I have seen both of these conditions occur in the real world, where they were not caught (and my testing philosophy was not applied), which resulted in significant issues. Much better to reduce the possibility by several orders of magnitude...

    If your property does, indeed, look like that, then yes, testing it is completely pointless.

    On the other hand, if your property actually has human-written code inside it then testing it is worthwhile.



  • @pkmnfrk said:

    If your property does, indeed, look like that, then yes, testing it is completely pointless.

    On the other hand, if your property actually has human-written code inside it then testing it is worthwhile.

    That is exactly what I find to be the fundamental flaw in most people's approaches. How can you GUARANTEE that the code will not change from the first case (an automatic property) to the second at some point in time?

    Things like code reviews help, but we all know that at least one change has slipped past review without being carefully analyzed. Even if the analysis is complete, is it viable to MANUALLY review every element to see if a test has potentially been missed? What if the property currently has code inside and is tested, and over time the code becomes simpler and simpler (and the existing test keeps passing), until it is converted back to an automatic property? If this is not caught (and the test removed), then you have an inconsistent state where some automatic properties have test cases and others don't (and in the more general case, how do you validate that something does NOT have a test?!)



  • @TheCPUWizard said:

    .... verified that a default instance returned Red... and someone "innocently" put a new value at the start of the list (unintentionally changing the default behaviour).
    How does your test generator know that Red should be the default? Is the first version of something automatically considered correct? That seems to defeat the point of tests.



  • @Jaime said:

    @TheCPUWizard said:
    .... verified that a default instance returned Red... and someone "innocently" put a new value at the start of the list (unintentionally changing the default behaviour).
    How does your test generator know that Red should be the default? Is the first version of something automatically considered correct? That seems to defeat the point of tests.

    In .NET, the first declared value in an enum is numerically 0. Value types are initialized to 0. Enums are value types. Therefore an enum is initialized to the first value in the declaration list, unless one explicitly assigns values.

    Since there is no requirement for explicit assignment when declaring enums, the following:

    enum MyEnum {.....}
    MyEnum myVariable;  // Will automatically be initialized to the first value in the above list.

    Thus a change in the order of elements in the enum declaration (again, without explicit value assignment) will change the value of myVariable. If code depends on myVariable having a specific value, and that behaviour is tested, then the test in question will fail. But it will not fail with an EXPLICIT message that "the default value for enumeration xxx has changed". If the code happens to be missing a test, it can easily be missed completely.



  • @SandGroper said:

    some BS from management...

    As a result I was told that I should restrict myself to " 'smoke' tests as well as
    simplified unit testing
    " WTFTM. Mind you that does leave the definition of simplified up to me I guess.

    They're totally wrong. That said, there is a lot of value in just confirming that a page renders without error. Even writing a UI test that simple can be really hard, depending on your framework. I've had the chance to work with Wicket lately (a Java framework) and those tests are basically two-liners once you set up your dependencies.

    WicketTester wt = new WicketTester();
    wt.startPage(WidgetDisplayPage.class);
    wt.assertRenderedPage(WidgetDisplayPage.class);

    Regression test suites don't come much cheaper.



  • @TheCPUWizard said:

    @Jaime said:

    @TheCPUWizard said:
    .... verified that a default instance returned Red... and someone "innocently" put a new value at the start of the list (unintentionally changing the default behaviour).
    How does your test generator know that Red should be the default? Is the first version of something automatically considered correct? That seems to defeat the point of tests.

    In .NET, the first declared value in an enum is numerically 0. Value types are initialized to 0. Enums are value types. Therefore an enum is initialized to the first value in the declaration list, unless one explicitly assigns values.

    Since there is no requirement for explicit assignment when declaring enums, the following:

    enum MyEnum {.....}
    MyEnum myVariable;  // Will automatically be initialized to the first value in the above list.

    Thus a change in the order of elements in the enum declaration (again, without explicit value assignment) will change the value of myVariable. If code depends on myVariable having a specific value, and that behaviour is tested, then the test in question will fail. But it will not fail with an EXPLICIT message that "the default value for enumeration xxx has changed". If the code happens to be missing a test, it can easily be missed completely.

    I understand the technical issues here. What I don't understand is how your "automatic test generator" can decide that "Red" is the correct default and fail the test when "Blue" is put in the first position. In other words, the generated test is simply testing if something has changed, not whether it's actually correct. Since the purpose of changing code is to change the output, a test suite generated this way will simply point out the effects of a given change, but not evaluate the correctness of those changes.

    Worse yet, if you ever regenerate your test suite, they will all generate in a passed state. Whatever evaluated "Red" as correct, will now evaluate "Blue" as correct after the coding change above since the code itself is used as a reference for what is correct.



  • @Jaime said:

    I understand the technical issues here. What I don't understand is how your "automatic test generator" can decide that "Red" is the correct default and fail the test when "Blue" is put in the first position. In other words, the generated test is simply testing if something has changed, not whether it's actually correct. Since the purpose of changing code is to change the output, a test suite generated this way will simply point out the effects of a given change, but not evaluate the correctness of those changes.

    Worse yet, if you ever regenerate your test suite, they will all generate in a passed state. Whatever evaluated "Red" as correct, will now evaluate "Blue" as correct after the coding change above since the code itself is used as a reference for what is correct.

    A large part is looking for unintended impacts of changes, as this has one of the highest root-cause metrics for bugs. I have found excellent value in having a set of tests that comprehensively measure the current behaviour and alert me to any changes in that behaviour. If the change is expected, then the impact is acknowledged; if not, the failed test prevents the unintended impact from escaping.

    You are 100% correct that doing a "fresh" (from-scratch) re-generation of tests would have a severe negative impact, unless some other form of comprehensive testing was done on that version to validate that all current behaviours appeared correct. 99.99% of the time (just made that number up, but I would not be surprised if it was accurate) an INCREMENTAL generation is done.

    If you have ever made a code change to alter specific behaviour, and then later realized that there was a side-effect that caused problems, you should see the value here. The enum example is quite trivial (but I use it as a common example because it is easy to understand).


  • Discourse touched me in a no-no place

    @TheCPUWizard said:

    enum MyEnum {.....}
    MyEnum myVariable;  // Will automatically be initialized to the first value in the above list.
    enum MyEnum { foo=1, bar=2 };
    MyEnum myVariable;


  • ♿ (Parody)

    @cmccormick said:

    That said, there is a lot of value just confirming that a page renders without error.

    Yes, and while it's obviously not comprehensive, it gives you a starting point for more detailed tests later on.



  • @morbiuswilters said:

    I'm sick of this fucking industry. I'm going to switch to a field where "engineering" doesn't mean "I saw some guy talk about it once at a conference and now I just do it blindly, ad nauseam, until the next fad catches on."

    I'm afraid almost every corner of the computing industry is like this. I've known 3D artists who flip the green channel of normal maps (basically inverting the direction of lighting details on one axis) religiously, without checking or thought, because it "fixes problems". This is only necessary when going between some software packages, because not all of them orient the lighting tangents the same way. Most of the time it doesn't need flipping, and the result is, say, stones with highlights on the wrong side.

    Heck, even Epic's animation export tool documentation describes the "superstition" among animators of moving to frame zero to "prevent crashes", even though the current frame doesn't matter at all.

    And then there's the matter of "EVIL NGONS!" (all surfaces must be quads or triangles) used completely blindly even though as long as the surface is convex and planar it should not be an issue.

    Don't even get me started on guitarists.

    Every industry you could enter is full of ridiculous superstition and practice.



  • @nexekho said:

    Every industry you could enter is full of ridiculous superstition and practice.

    Talking about MacOS 7.5 in the other thread reminded me of "rebuilding the desktop database".

    Problems this actually fixed: Icons look wonky.

    Problems Mac users thought it fixed: Anything up to and including a failed space shuttle launch.

    Similarly, there was "resetting the PRAM" which did basically nothing except placebo. I guess your next boot would take .5 seconds longer.



  • @blakeyrat said:

    @nexekho said:
    Every industry you could enter is full of ridiculous superstition and practice.

    Talking about MacOS 7.5 in the other thread reminded me of "rebuilding the desktop database".

    Problems this actually fixed: Icons look wonky.

    Problems Mac users thought it fixed: Anything up to and including a failed space shuttle launch.

    Similarly, there was "resetting the PRAM" which did basically nothing except placebo. I guess your next boot would take .5 seconds longer.

    Or, in the field of web development, "clearing your cache and cookies".

    Problems actually fixed: Stale image/pages, bad cookie values

    Problems thought to be fixed: Items missing from page, incorrect values of all kinds, logic errors, exceptions, incorrect passwords, incorrect usernames, 404 and 500 errors, etc.


  • Trolleybus Mechanic

    @TheCPUWizard said:

    Have you *ever* had a piece of simple code that "was not worth testing", then have someone make a change to it, such that it evolves to the point where testing could reveal a defect, but no one realized that such tests were needed????
     

    Code:

    public int OverridePoints { get { return (int)DataBaseRow("override_points"); } }

    Change done later:

    ALTER TABLE point_shit ALTER override_points decimal(18,2)

    Me:

    @Lorne Kates said:

    Why the otherfucking motherfucking is this fucking a mother?!?

     



  • @Lorne Kates said:

    @TheCPUWizard said:

    Have you ever had a piece of simple code that "was not worth testing", then have someone make a change to it, such that it evolves to the point where testing could reveal a defect, but no one realized that such tests were needed????
     

    Code:

    public int OverridePoints { get { return (int)DataBaseRow("override_points"); } }

    Change done later:

    ALTER TABLE point_shit ALTER override_points decimal(18,2)

    Me:

    @Lorne Kates said:

    Why the otherfucking motherfucking is this fucking a mother?!?

     

    Huh, I made a very similar mistake very recently. I changed an int column to decimal. I even remembered to change the places that cast to int. Didn't need to bother testing the actual functionality, since I was just using this column as a multiplier against another already-decimal column, so no worries there.

    Client: It's not working, it still only accepts integers.

    Me: Really? Huh, let me check. Yeah, you're right, what the heck?

    Firing up SSMS and inspecting the table, it turns out that if you just say decimal, SQL Server really means decimal(18,0). Whoops!



  • @PJH said:

    @TheCPUWizard said:
    enum MyEnum {.....}
    MyEnum myVariable;  // Will automatically be initialized to the first value in the above list.
    enum MyEnum { foo=1, bar=2 };
    MyEnum myVariable;

    Yes, thank you.  If there's no value for 0, then the value 0 won't represent a value.  Very helpful.

    @pkmnfrk said:

    Client: It's not working, it still only accepts integers.

    Me: Really? Huh, let me check. Yeah, you're right, what the heck?

    So you didn't test it at all?...


  • @Sutherlands said:

    @PJH said:

    @TheCPUWizard said:
    enum MyEnum {.....}
    MyEnum myVariable;  // Will automatically be initialized to the first value in the above list.
    enum MyEnum { foo=1, bar=2 };
    MyEnum myVariable;

    Yes, thank you.  If there's no value for 0, then the value 0 won't represent a value.  Very helpful.

    The point is that the test generator cannot assume that the default value will be the first enum item. The default for any numeric type (including enumerations) is 0. Even if that is not a valid enumeration value.

    @Sutherlands said:

    @pkmnfrk said:

    Client: It's not working, it still only accepts integers.

    Me: Really? Huh, let me check. Yeah, you're right, what the heck?

    So you didn't test it at all?...

    Decimal numbers are a superset of integer numbers. I tested it by re-saving the relevant form and ensuring it didn't crash. So, I tested it a tiny bit, but not nearly enough.



  • @pkmnfrk said:

    The point is that the test generator cannot assume that the default value will be the first enum item. The default for any numeric type (including enumerations) is 0. Even if that is not a valid enumeration value.

    It doesn't have to.  I assume it would do something like this:

    var val = Enum.GetValues(typeof(MyEnum)).GetValue(0); (or var theEnum = default(MyEnum);)

    CreateTestWithValue(val);



  • @Sutherlands said:

    @pkmnfrk said:

    The point is that the test generator cannot assume that the default value will be the first enum item. The default for any numeric type (including enumerations) is 0. Even if that is not a valid enumeration value.

    It doesn't have to.  I assume it would do something like this:

    var val = Enum.GetValues(typeof(MyEnum)).GetValue(0); (or var theEnum = default(MyEnum);)

    CreateTestWithValue(val);

    Please note that all of my posts stated "without a value assignment"; the case being discussed here is different (and caught by a different test), where the default value is not a legal value for the enum.

