Effective exception handling



  • @Mikademus said:

    but in your code example you are in fact doing exactly what the original code sought to avoid (ignoring, of course, that the original code was also designed to be confusing): you are incurring overheads.

    Parsing text with validation needs overhead. The original code only works when given valid data and performs oddly when you give it bad data. "A0" is parsed as 170, rather than being flagged as an error or exception.

    The only time you'd need the original code is when you know you'll never see anything but valid data, and the cases where that's true are vanishingly small in anything that interacts with people or other software or systems.

    Even in Java, that check only costs a couple of extra clock cycles per call.  Omitting it over the course of 1,000,000 runs of that method only reduces the total runtime from around 190ms to 160ms on my workstation (see the sketch of both variants at the end of this post).  Given that it does the right thing when presented with invalid data, that seems like a good tradeoff to me.

    This is in stark contrast to the catch-exception-to-avoid-an-if thing that CDarklock was so sure was an optimization, which increased the runtime by a factor of 100, from a fifth of a SECOND to a third of a MINUTE.

    "More computing sins are committed in the name of efficiency
    (without necessarily achieving it) than for any other single reason -
    including blind stupidity."
    - W.A. Wulf
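
    The method under discussion isn't quoted in this thread, so the following is only a sketch of the shape being compared: an unchecked character-arithmetic parse next to one that validates each digit. Because 'A' - '0' is 17, the unchecked version quietly turns "A0" into 170; the checked one throws instead. Class and method names here are illustrative.

        // Sketch only: an assumed reconstruction of the two variants being compared.
        public class ParseSketch {

            // Unchecked: assumes the caller has already validated the input.
            // 'A' - '0' == 17, so parseUnchecked("A0") quietly returns 170.
            static int parseUnchecked(String s) {
                int result = 0;
                for (int i = 0; i < s.length(); i++) {
                    result = result * 10 + (s.charAt(i) - '0');
                }
                return result;
            }

            // Checked: one range test per character, but bad input is flagged
            // instead of silently producing a nonsense value.
            static int parseChecked(String s) {
                int result = 0;
                for (int i = 0; i < s.length(); i++) {
                    char c = s.charAt(i);
                    if (c < '0' || c > '9') {
                        throw new NumberFormatException("Not a digit in \"" + s + "\"");
                    }
                    result = result * 10 + (c - '0');
                }
                return result;
            }

            public static void main(String[] args) {
                System.out.println(parseUnchecked("A0")); // prints 170
                System.out.println(parseChecked("A0"));   // throws NumberFormatException
            }
        }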



  • Hence, of course, my "clearly defined and limited environment" qualification. There are still very low-level and resource-limited situations.



  • @Thuktun said:

    Parsing text with validation *needs* overhead. The original code only works when given valid data and performs oddly when you give it bad data.

    Perfectly correct and one of the caveats of moving code around haphazardly: the assumption here is that validation has been performed elsewhere. When this is true, all is well. When it is not...

    "A0" is parsed as 170, rather than being flagged as an error or exception.

    And that's a risk you have to take. When you control the input, you simply... don't do that. If the user controls the input, I generally prefer to validate on the fly so the user gets told "this must be numeric" before he even clicks OK (see the first sketch at the end of this post).

    Anyone who decides to directly edit the string in memory with a rogue out-of-process thread or some similar mechanism deserves what he gets. And before you go whining about security, remember that this attack vector requires running arbitrary code on the local machine as the logged-in user anyway.

    This is in stark contrast to the catch-exception-to-avoid-an-if thing that CDarklock was so sure was an optimization, which increased the runtime by a factor of 100, from a fifth of a SECOND to a third of a MINUTE.


    I was not "so sure" of anything. I simply said it LOOKS LIKE an optimisation. Your response was that when you ran fake data through the loop a million times, it was a lot slower than running the same fake data through another loop a million times.

    When I pointed out that you still needed to run fake data through both loops a million times when it WOULDN'T throw exceptions, and then identify how frequently the code would encounter each scenario in real practical usage, you got abusive and nasty about it. (The second sketch at the end of this post shows that two-scenario comparison.)

    When your hypothesis is flawed, you must either fix it or discard it. That is the law. To do otherwise is unacceptable.
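
    To make the "validate on the fly" idea concrete, here is a rough sketch using a Swing DocumentFilter; the toolkit choice and all names are illustrative rather than taken from any code in this thread. Non-digit keystrokes never reach the field, so the user finds out "this must be numeric" long before clicking OK.

        import javax.swing.*;
        import javax.swing.text.*;

        // Illustrative only: reject non-numeric characters as they are typed.
        public class NumericFieldSketch {
            public static void main(String[] args) {
                JTextField field = new JTextField(10);
                ((AbstractDocument) field.getDocument()).setDocumentFilter(new DocumentFilter() {
                    @Override
                    public void replace(FilterBypass fb, int offset, int length,
                                        String text, AttributeSet attrs) throws BadLocationException {
                        if (text == null || text.chars().allMatch(Character::isDigit)) {
                            super.replace(fb, offset, length, text, attrs);
                        } else {
                            // Bad input is refused immediately rather than being
                            // caught by a validation pass when OK is clicked.
                            java.awt.Toolkit.getDefaultToolkit().beep();
                        }
                    }
                });

                JFrame frame = new JFrame("Numeric input only");
                frame.add(field);
                frame.pack();
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            }
        }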
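
    And since the disputed measurement isn't shown here either, the following is only a sketch of the kind of comparison being argued about: the same million calls timed over valid input (no exceptions thrown) and over invalid input (one exception per call for the try/catch style). The inputs, names, and iteration count are illustrative; the fifth-of-a-second versus third-of-a-minute figures quoted above presumably come from the invalid-input case.

        // Illustrative micro-benchmark: catch-the-exception versus check-first,
        // run over both valid and invalid input.
        public class CatchVsCheckSketch {

            // Style A: let parseInt throw and catch the exception.
            static int viaCatch(String s) {
                try {
                    return Integer.parseInt(s);
                } catch (NumberFormatException e) {
                    return -1;
                }
            }

            // Style B: check first, then parse.
            static int viaCheck(String s) {
                if (s == null || s.isEmpty() || !s.chars().allMatch(Character::isDigit)) {
                    return -1;
                }
                return Integer.parseInt(s);
            }

            static volatile int sink; // consumed so the JIT can't discard the loops

            static long timeMillis(java.util.function.ToIntFunction<String> f, String input) {
                long start = System.nanoTime();
                for (int i = 0; i < 1_000_000; i++) {
                    sink += f.applyAsInt(input);
                }
                return (System.nanoTime() - start) / 1_000_000;
            }

            public static void main(String[] args) {
                // Both scenarios matter, as does how often each occurs in real use.
                for (String input : new String[] {"12345", "A0"}) {
                    System.out.println("input=" + input
                            + "  catch: " + timeMillis(CatchVsCheckSketch::viaCatch, input) + " ms"
                            + "  check: " + timeMillis(CatchVsCheckSketch::viaCheck, input) + " ms");
                }
            }
        }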

