The Slow Math



  • Just found this in a math library in our software. It's a function to return 1 if the input is greater than 0, -1 if the input is less than 0, and 0 if the input is 0.


    Int16	Sign(Double value)
    {
    	if (value)
    		return (value/fabs(value));
    	else
    		return (0);
    }
    


  • Wow I must be really rusty in C++, because unless fabs() doesn't do what I think it does, I really don't see why this is so bad...

    Can anyone explain?



  • Looks okay to me.  I must be missing something.

    I guess another way would be something like

    Int16 Sign(Double value)
    {
      return (  value ? (value/fabs(value)) : 0);
    }

    But that's just replacing the "if" with a ternary op.

     



  • @clively said:

    Looks okay to me.  I must be missing something.

    I guess another way would be something like

    Int16 Sign(Double value)
    {
      return (  value ? (value/fabs(value)) : 0);
    }

    But that's just replacing the "if" with a ternary op.

     

    Right, I am sure there are a couple of ways that this could be done (better?), but I don't see why this is a WTF. But I don't use C++ every day either.



  • @MasterPlanSoftware said:

    Wow I must be really rusty in C++, because unless fabs() doesn't do what I think it does, I really don't see why this is so bad...

    Can anyone explain?

    It's got two problems:

    1. It's assuming that rounding will never cause x/abs(x) to be less than 1 -- probably a safe assumption, but you can't be sure.
    2. Floating-point division is much slower than floating-point comparison.
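    If you want to poke at assumption 1 empirically, a throwaway check along these lines would do it (my own sketch, nothing from the library in question; a few million random values is evidence, not proof):

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        // Hammer x/fabs(x) with random magnitudes and report anything
        // that isn't exactly +1 or -1.
        for (int i = 0; i < 10000000; ++i)
        {
            double x = ((double)rand() / RAND_MAX - 0.5) * 1e300;
            double q = x / fabs(x);
            if (x != 0.0 && q != 1.0 && q != -1.0)
                printf("counterexample: %.17g -> %.17g\n", x, q);
        }
        return 0;
    }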


  • @Carnildo said:

    @MasterPlanSoftware said:

    Wow I must be really rusty in C++, because unless fabs() doesn't do what I think it does, I really don't see why this is so bad...

    Can anyone explain?

    It's got two problems:

    1. It's assuming that rounding will never cause x/abs(x) to be less than 1 -- probably a safe assumption, but you can't be sure.
    2. Floating-point division is much slower than floating-point comparison.
     

    I see... That is kind of what I figured, but it still seems pretty weak to me...

    Oh well.



  • @Carnildo said:

    It's got two problems:
    1) It's assuming that rounding will never cause x/abs(x) to be less than 1 -- probably a safe assumption, but you can't be sure.
    2) Floating-point division is much slower than floating-point comparison.
     

    It's as though the code was written by a mathematician, first and foremost, not a programmer who understands the subtleties of floating-point implementation in C/C++.



  • @CodeSimian said:

    @Carnildo said:

    It's got two problems:

    1. It's assuming that rounding will never cause x/abs(x) to be less than 1 -- probably a safe assumption, but you can't be sure.
    2. Floating-point division is much slower than floating-point comparison.
     

    It's as though the code was written by a mathematician, first and foremost, not a programmer who understands the subtleties of floating-point implementation in C/C++.

    I hadn't thought of it that way, but you're right: it was originally written by a mathematician.



  • It's not that the function doesn't do what it's supposed to; it's just that it does so stupidly.  I'd bet that this:

    Int16 Sign(Double value)
    {
        if (value > 0)
            return 1;
        else if (value < 0)
            return -1;
        else
            return 0;
    }

    Would be faster, because it doesn't require a divide or a function call; it just requires examining the sign bit.
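    If anyone cares to test that bet, a throwaway harness along these lines should settle it (the names and sizes are mine; the exact ratio will depend on compiler, flags, and CPU):

    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    static int SignDivide(double v)  { return v ? (int)(v / fabs(v)) : 0; }
    static int SignCompare(double v) { return v > 0 ? 1 : (v < 0 ? -1 : 0); }

    // Time one implementation over a buffer of inputs; the accumulated
    // sum keeps the optimizer from throwing the whole loop away.
    template <typename F>
    static double Time(F sign, const std::vector<double>& xs)
    {
        auto t0 = std::chrono::steady_clock::now();
        long long sum = 0;
        for (double x : xs)
            sum += sign(x);
        auto t1 = std::chrono::steady_clock::now();
        printf("(checksum %lld)\n", sum);
        return std::chrono::duration<double>(t1 - t0).count();
    }

    int main()
    {
        std::vector<double> xs;
        for (int i = -1000000; i <= 1000000; ++i)
            xs.push_back(i * 0.001);
        double td = Time(SignDivide, xs);
        printf("divide:  %f s\n", td);
        double tc = Time(SignCompare, xs);
        printf("compare: %f s\n", tc);
        return 0;
    }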



  • @Heron said:

    It's not that the function doesn't do what it's supposed to; it's just that it does so stupidly.  I'd bet that this:

    Int16 Sign(Double value)
    {
        if (value > 0)
            return 1;
        else if (value < 0)
            return -1;
        else
            return 0;
    }

    Would be faster, because it doesn't require a divide or a function call; it just requires examining the sign bit.

    It's not stupid. It's just slow. Until profile time, I wouldn't care about either implementation, as long as it works as advertised. Then again, I am a mathematician.



  • It's super slow: a function call, a double division, a double-to-int conversion, and a comparison. Also, it will not handle +inf and -inf properly (which the two-comparison version does). And the risk that x / -x is just a little bit less than -1 is not zero; perhaps even x/x isn't always 1. So it's bad code, but not a TWTF: it's obvious that x/|x| should yield sgn(x) when x != 0.
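    To make the infinity point concrete (a standalone sketch, not the library's code): inf/fabs(inf) is inf/inf, which is NaN, and converting NaN to an integer type is undefined behaviour.

    #include <cmath>
    #include <cstdio>

    int main()
    {
        double inf = INFINITY;
        // inf/inf is NaN; the original function would then convert NaN
        // to Int16, which is undefined behaviour.
        printf("Sign(+inf) divides to: %f\n",  inf / fabs(inf));   // nan
        printf("Sign(-inf) divides to: %f\n", -inf / fabs(-inf));  // nan
        return 0;
    }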



  • @TGV said:

    And the risk that x / -x is just a little bit less than -1 is not zero; perhaps even x/x isn't always 1.

    Actually it's exactly 1, or -1. x and -x differ only by sign bit and x==x. There is no tricky math - it will work.
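    You can watch the bits to see it (this assumes IEEE 754 doubles, which is a safe bet on anything current):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        // Negation flips only the top (sign) bit; mantissa and exponent
        // are untouched, so x / -x divides two identical magnitudes.
        double x = 4.689132;
        double nx = -x;
        uint64_t xbits, nxbits;
        memcpy(&xbits, &x, sizeof x);
        memcpy(&nxbits, &nx, sizeof nx);
        printf(" x:  %016llx\n", (unsigned long long)xbits);
        printf("-x:  %016llx\n", (unsigned long long)nxbits);
        printf("xor: %016llx\n", (unsigned long long)(xbits ^ nxbits)); // 8000000000000000
        return 0;
    }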



  • @viraptor said:

    @TGV said:

    And the risk that x / -x is just a little bit less than -1 is not zero; perhaps even x/x isn't always 1.

    Actually it's exactly 1, or -1. x and -x differ only by sign bit and x==x. There is no tricky math - it will work.

     

    Actually, on some architectures/compilers/languages, x / -x may return something slightly different from 1 or -1, if x is a floating-point number.

    For instance, the floating-point registers on many architectures (including x86) are not the same size as floats or doubles. This means that values are truncated when copied into RAM. However, if a value never gets copied into RAM, it may still retain its original precision. The machine code that the compiler outputs for "x / -x" could therefore end up dividing a truncated value by an untruncated value (or vice versa), and may then give a value slightly different from 1 or -1.
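    The classic way to try to catch the register-vs-memory effect uses addition rather than division. Whether it fires at all depends entirely on compiler and flags (e.g. gcc generating x87 math versus SSE2, which rounds every step to 64 bits), so treat this as a sketch of the phenomenon, not a guaranteed repro:

    #include <cstdio>

    int main()
    {
        // volatile blocks constant folding so the addition really
        // happens at run time.
        volatile double a = 0.1, b = 0.3;
        double sum = a + b;   // stored to memory: rounded to 64 bits
        // On x87, the right-hand side below may be evaluated in an
        // 80-bit register, so the comparison can come out false.
        if (sum == a + b)
            printf("equal (SSE math, or the compiler spilled anyway)\n");
        else
            printf("not equal (extended-precision intermediate)\n");
        return 0;
    }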



  • @poopdeville said:

    @Heron said:

    It's not that the function doesn't do what it's supposed to; it's just that it does so stupidly.  I'd bet that this:

    Int16 Sign(Double value)
    {
        if (value > 0)
            return 1;
        else if (value < 0)
            return -1;
        else
            return 0;
    }

    Would be faster, because it doesn't require a divide or a function call; it just requires examining the sign bit.

    It's not stupid. It's just slow. Until profile time, I wouldn't care about either implementation, as long as it works as advertised. Then again, I am a mathematician.

     

    While I'm against premature optimization, it's thinking like this that produces programs that are slow, but never quite slow enough for anyone to go back and profile them.  It takes basically no extra effort to choose the faster option, and in such cases the faster one should be chosen unless there is a compelling reason not to.



  •  @tster said:

    While I'm against premature optimization, it's thinking like this that produces programs that are slow, but never quite slow enough for anyone to go back and profile them.  It takes basically no extra effort to choose the faster option, and in such cases the faster one should be chosen unless there is a compelling reason not to.

    Having spent today optimizing some CPU-limited code, I agree.  The advice not to optimize prematurely is fine when optimization means doing a lot of work and making the code fragile and inscrutable.  But often you are faced with many different algorithms for accomplishing a task, and spending a few seconds to pick the right one saves both time and clarity.

     Working by hand, the first algorithm says "To find the sign of -4.689132, divide -4.689132 by ( -4.689132 > 0.0 ? +(-4.689132) : -(-4.689132) )."  The saner algorithm is "Return ( -4.689132 > 0.0 ? +1 : ( -4.689132 < 0.0 ? -1 : 0 )  )."
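    There's also the old branch-free spelling of the comparison version, for what it's worth (not the library's code, just the classic idiom):

    #include <cstdio>

    // (v > 0) and (v < 0) each evaluate to 1 or 0, so their difference
    // is exactly the sign: -1, 0, or +1.  No divide, no branches.
    static int Sign(double v)
    {
        return (v > 0.0) - (v < 0.0);
    }

    int main()
    {
        printf("%d %d %d\n", Sign(-4.689132), Sign(0.0), Sign(2.5)); // -1 0 1
        return 0;
    }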

     



  • Of course, depending on architecture/language/library/etc., comparing floats to zero is dangerous anyway; that's thanks to the wonderful rounding nonsense. Floats are inherently rough estimates. In floatland, (5+2-3)/4 != 0 .

    Where a -0.000000000000000001 produced by rounding errors "should" be 0, the above solutions would return -1, when for most people, for most purposes, the answer you're actually looking for is 0.

    Bah, I just hate floats. Most places I see them used, what's really called for is something like Decimal.
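    If what you actually want is "treat anything near zero as zero", the function has to be told what "near" means; something like this sketch (the epsilon is purely illustrative -- there's no one-size-fits-all value):

    #include <cmath>

    // Sign with a dead zone: anything within eps of zero counts as zero.
    // The right eps depends entirely on where the inputs come from.
    static int SignWithTolerance(double value, double eps)
    {
        if (fabs(value) <= eps)
            return 0;
        return value > 0.0 ? 1 : -1;
    }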



  • That's Double with a capital D, so I'm not sure what kind of carnage they've added, but for regular C++ implementations, floating-point comparison is tricky business.

    double a = 1.0;
    double b = 1.0;
    double c = a - b;

    if(c == 0)
         printf("you'd expect to see this");
    else
         printf("but you might see this, because c is really 0.00000000000000000000000000001 or something.");



  • @RayS said:

    In floatland, (5+2-3)/4 != 0 .

    In Intland and Mathland the same is true.



  • @tster said:

    @poopdeville said:

    @Heron said:

    It's not that the function doesn't do what it's supposed to; it's just that it does so stupidly.  I'd bet that this:

    Int16 Sign(Double value)
    {
        if (value > 0)
            return 1;
        else if (value < 0)
            return -1;
        else
            return 0;
    }

    Would be faster, because it doesn't require a divide or a function call; it just requires examining the sign bit.

    It's not stupid. It's just slow. Until profile time, I wouldn't care about either implementation, as long as it works as advertised. Then again, I am a mathematician.

     

    While I'm against premature optimization, it's thinking like this that produces programs that are slow, but never quite slow enough for anyone to go back and profile them.  It takes basically no extra effort to choose the faster option, and in such cases the faster one should be chosen unless there is a compelling reason not to.

     

    I don't disagree.  Still, if it's already been implemented and works acceptably, the time taken to re-implement will often be longer than the time saved.  Obviously this depends on the project -- I would hope Microsoft, or the Linux kernel team, or other large, popular projects would perform this kind of optimization on "all" their code (meaning they'd replace obviously suboptimal functions as they found them).  A hundred clock cycles per call, multiplied across millions of users, comes to about a tenth of a second saved.  Embedded hardware is another domain where I would expect more aggressive optimization.



  • @Lingerance said:

    @RayS said:

    In floatland, (5+2-3)/4 != 0 .

    In Intland and Mathland the same is true.

    RayS lives in Zeroequalsoneland, so it checks out.



  • Sure, it works, but it's just a bit too clever/weird. It's the kind of asinine thing you had to do in college: "write a function that returns the sign of a float without using relational operators."



  • The real WTF is that they're devoting two full bytes to a return value that would fit in two bits. =)



  • @Welbog said:

    @Lingerance said:

    @RayS said:

    In floatland, (5+2-3)/4 != 0 .

    In Intland and Mathland the same is true.

    RayS lives in Zeroequalsoneland, so it checks out.

    Actually, both this example (after correcting for 1 != 0), and 

    @vt_mruhlin said:

    double a = 1.0;
    double b = 1.0;
    double c = a - b;

    if(c == 0)
         printf("you'd expect to see this");
    else
         printf("but you might see this, because c is really 0.00000000000000000000000000001 or something.");


    are completely wrong, because integers can be represented exactly in floating-point notation as long as they have at most (mantissa size) significant bits.



  • @vt_mruhlin said:

    That's Double with a capital D, so I'm not sure what kind of carnage they've added, but for regular C++ implementations, floating-point comparison is tricky business.

    double a = 1.0;
    double b = 1.0;
    double c = a - b;

    if(c == 0)
         printf("you'd expect to see this");
    else
         printf("but you might see this, because c is really 0.00000000000000000000000000001 or something.");

     

    I appreciate the principle you are trying to state, but I must point out that floating point represents many integers exactly.  For example, a 64-bit double can (easily) represent every 32-bit integer precisely.  So in the above example, c will always be exactly zero.  Period.

    Where you might get into trouble is summing non-integers (i.e., the fractional part), or too-large integers.  For example, in Python:

    >>> 0.3 + 0.3 + 0.3 == .9

    False

    >>> 3 * 0.3 == .9

    False
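    And the C++ flavour of the "too-large integers" caveat, if anyone wants to see exactly where it breaks (assuming IEEE 754, a double's contiguous integer range runs out at 2^53):

    #include <cstdio>

    int main()
    {
        double big = 9007199254740992.0;   // 2^53: still exactly representable
        printf("%d\n", big + 1.0 == big);  // 1: 2^53 + 1 rounds back down to 2^53
        printf("%d\n", big - 1.0 == big);  // 0: 2^53 - 1 is representable
        return 0;
    }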

     



  • @vt_mruhlin said:

    That's Double with a capital D, so I'm not sure what kind of carnage they've added, but for regular C++ implementations, floating-point comparison is tricky business.

    Some of the compilers this software used to support used something other than "double" for their high-precision floating-point datatype. These days, "Double" is just a synonym for "double".



  • @ChadN said:

    I appreciate the principle you are trying to state, but I must point out that floating point represents many integers exactly.  For example, a 64-bit double can (easily) represent every 32-bit integer precisely.  So in the above example, c will always be exactly zero.  Period.

    Where you might get into trouble is summing non-integers (i.e., the fractional part), or too-large integers.  For example, in Python:

    >>> 0.3 + 0.3 + 0.3 == .9

    False

    >>> 3 * 0.3 == .9

    False

     

    Yup, everyone, I'm the real WTF for the 1-vs-0 typo...

    But yeah, I wasn't trying to make a point about any particular 1 != 1 scenario. The general point is that this function appears to be generic in purpose, so who knows what kind of rounding errors have accumulated in the value before it's passed to you to be checked for sign.

    People using exact comparisons for floats often run into unexpected behaviour when things don't quite add up, or are just plain abusing floats in the first place.

