Javascript calculation



  • why does javascript give me results with lots of trailing digits
    when i need to calculate x*y?
    when i do 11 * 1.1 it gives me 12.100000000000001 instead of 12.1
    is there any solution?
    x and y can be any value between 2000.00 and -1000.00, and "usually" between -50 and 50

    help?
    must be in javascript and hopefully short



  • this is due to an inaccuracy in floats, which i first encountered in C++.

    eg:

    i = 0.1 + 0.2;

    i == 0.3 ? FALSE

    as for a workaround... i always liked multiplying everything by 10 for accuracy and using integers.

    ie: 11 * 1.1 becomes 11 * 11 = 121, then shift the decimal back to get 12.1

    i hate floats.
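    The scaled-integer workaround described above could be sketched like this (multiplyScaled is a hypothetical helper, not a library function; the scale factor is whatever precision you need):

```javascript
// Work in tenths (or hundredths) so every value is a whole number,
// then shift the decimal point back once at the end.
function multiplyScaled(x, y, scale) {
  // Math.round guards against inputs like 1.1 * 10 === 11.000000000000002
  var xi = Math.round(x * scale);
  var yi = Math.round(y * scale);
  return (xi * yi) / (scale * scale);
}

var result = multiplyScaled(11, 1.1, 10); // 110 * 11 = 1210, then / 100
console.log(result); // 12.1
```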



  • Loving javascript at the moment...

    On modern(ish) browsers you can use toFixed(precision) ie:


       var foo = 11 * 1.1;
       if (foo.toFixed) {
           foo = foo.toFixed(2); // note: toFixed returns a string
       }

    That'll only work on IE 5.0+. Works on Firefox. Not sure about any of those freaky other browsers people use.
    Hope this helps


  • Had the same issue a while ago:






    It's almost a proper reason to write your own arithmetic functions.

    add(a, b)
    subtract(a, b)
    multiply(a, b)
    divide(a, b)

    Back to Lisp!

    One day I will properly learn to use toFixed, toPrecision and toExponential.
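    One possible shape for those four helpers (a rough sketch, not a full library; PLACES and SCALE are assumptions you'd tune): pick how many decimal places matter, do the work on scaled integers, and convert back at the end.

```javascript
// Fixed two-decimal arithmetic on scaled integers.
var PLACES = 2;
var SCALE = Math.pow(10, PLACES); // 100

function add(a, b) {
  return (Math.round(a * SCALE) + Math.round(b * SCALE)) / SCALE;
}

function subtract(a, b) {
  return (Math.round(a * SCALE) - Math.round(b * SCALE)) / SCALE;
}

function multiply(a, b) {
  // two scaled operands means the product carries SCALE twice
  return (Math.round(a * SCALE) * Math.round(b * SCALE)) / (SCALE * SCALE);
}

function divide(a, b) {
  // the scales cancel, so no correction factor is needed
  return Math.round(a * SCALE) / Math.round(b * SCALE);
}

console.log(0.1 + 0.2);     // 0.30000000000000004
console.log(add(0.1, 0.2)); // 0.3
```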



  • @needhelp said:

    why does javascript give me results with lots of trailing digits
    when i need to calculate x*y?
    when i do 11 * 1.1 it gives me 12.100000000000001 instead of 12.1
    is there any solution?
    x and y can be any value between 2000.00 and -1000.00, and "usually" between -50 and 50

    help?
    must be in javascript and hopefully short

    14 years ago, Goldberg wrote a paper called What Every Computer Scientist Should Know About Floating-Point Arithmetic.

    Please, go fucking read it



  • Leave out the 'fucking' and I'll give you a 'thanks'.



  • Come on guys, give him a break... script programmers under the age of
    30 often don't know binary and have never programmed in assembler - it's
    a real shame but that's reality. In case you haven't time to read up,
    friend, here's the 30 sec explanation:



    Numbers are of course represented as binary in memory, and decimal 0.1
    in binary is a repeating fraction (it can't be stored exactly in a finite
    number of bits) - try using windows calculator in scientific mode to
    convert it.  So javascript converts it to binary and back
    again, and the conversion back isn't exactly what you put in.



    Learn binary and learn the javascript functions parseFloat(), toFixed() and so on - it won't take long.
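    The round-trip effect described above is easy to see from the console: asking toFixed for more digits than 0.1 really has exposes the stored binary value, and a toFixed/parseFloat round trip is one way to tidy a result (a quick sketch):

```javascript
// The stored double is only *near* 0.1 -- printing extra digits shows it.
console.log((0.1).toFixed(20)); // "0.10000000000000000555"

// A common tidy-up: format to the precision you care about,
// then parse back if you need a number rather than a string.
var foo = 11 * 1.1;       // 12.100000000000001
var str = foo.toFixed(2); // "12.10" (a string!)
var num = parseFloat(str);
console.log(num);         // 12.1
```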



  • @sao said:

    this is due to an inaccuracy in floats, which i first encountered in C++.

    eg:

    i = 0.1 + 0.2;

    i == 0.3 ? FALSE

    as for a workaround... i always liked multiplying everything by 10 for accuracy and using integers.

    ie: 11 * 1.1 becomes 11 * 11 = 121, then shift the decimal back to get 12.1

    i hate floats.


    I think the correct way is to use "Math.abs(a - b) < 0.01", where a and b are the floats and 0.01 is the amount they can differ by.

    IIRC money should always be handled using an integer for the number of /cents/.
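    The comparison above, written out in JavaScript (nearlyEqual is a made-up helper name, and EPSILON here is a tolerance chosen for two-decimal values, not a universal constant):

```javascript
// Tolerance-based equality instead of ===.
var EPSILON = 0.005;

function nearlyEqual(a, b) {
  return Math.abs(a - b) < EPSILON;
}

console.log(11 * 1.1 === 12.1);           // false
console.log(nearlyEqual(11 * 1.1, 12.1)); // true

// And the cents idea: keep money as an integer number of cents,
// converting to dollars only for display.
var priceCents = 1210; // $12.10
console.log((priceCents / 100).toFixed(2)); // "12.10"
```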


  • @versatilia said:

    Come on guys, give him a break... script programmers under the age of 30 often don't know binary and have never programmed in assembler - it's a real shame but that's reality. In case you haven't time to read up, friend, here's the 30 sec explanation:

    Numbers are of course represented as binary in memory, and decimal 0.1 in binary is a repeating fraction (it can't be stored exactly in a finite number of bits) - try using windows calculator in scientific mode to convert it.  So javascript converts it to binary and back again, and the conversion back isn't exactly what you put in.

    Learn binary and learn the javascript functions parseFloat(), toFixed() and so on - it won't take long.

    Pah, I DO know Binary, and I have programmed in assembler.. and I'm only 29...

    (but I never really thought about why floats went wrong [:$])

    Drak



  • @paranoidgeek said:


    I think the correct way is to use "Math.abs(a - b) < 0.01", where a and b are the floats and 0.01 is the amount they can differ by.




    That's what I try to do. I hope that's right. Now I'm off to read that pdf...



  • This is kind of off topic, but do C# and .NET in general use IEEE floating point formats?

    I have noticed quite a bit more consistency in floating point numbers in C#, but wouldn't that mean no math-coprocessor help?

    Meh, whatever. I hope someday we have floating point numbers entirely consistent with DECIMAL operations... A man can dream...



  • I've been programming micros in assembly for years.  I've never
    encountered floating point numbers.  Always when I need decimal
    precision I find a way to represent it using ints.  Now that I
    know how floats work I think they're bs.  If only C++ did it the
    way I do it in assembly, the top 3 threads here wouldn't be complaining
    about decimal numbers not comparing properly.  I'm a beginner at
    C++ so correct me if there's already this decimal datatype I'm talking
    about.



  • When I've done floating-point calculations that had to be accurate, I
    would set up the code to round off after every few operations, or I
    would go the "decimal" route: multiply by a scaling factor and use
    ints.  Occasionally, I've re-arranged equations to reduce the
    impact of round-offs, as mentioned in the "What Every Computer Scientist
    Should Know About Floating-Point Arithmetic" article.
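    The "round off after every few operations" idea above might look like this in JavaScript (round2 is an illustrative helper; the two-decimal precision is an assumption):

```javascript
// Snap a value back to two decimals between steps of a calculation.
function round2(x) {
  return Math.round(x * 100) / 100;
}

var total = 0;
for (var i = 0; i < 10; i++) {
  total = round2(total + 0.1); // re-round after each accumulation
}
console.log(total); // 1 (plain summation gives 0.9999999999999999)
```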

