0 = 3,1E-15?

And you thought Excel 2007 merely could not count to 100000. This product, used by thousands of businesses to calculate things like your paycheck and safety margins on nuclear failsafes, does not seem to think that 0,2 minus 0,2 is 0.
How to recreate
· Open Excel 2007 SP1.
· Make new Excel 2007 spreadsheet.
· Enter '6' in any field.
· Go one field down, enter '=' > arrow up > '-0,2' (*)
· Copy/paste formula many times below it.
· Wait until the count hits 0... or rather, 3,1E-15.
· Expand the cell width and the number of significant digits shown to see the real number: 0,000000000000003053113317719180 !!
(*) The real WTF is that some countries use the point instead of the comma as a decimal marker.
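For anyone who wants to see the drift outside Excel, here is a minimal Python sketch of the same kind of repeated subtraction in IEEE doubles (using 0.1, which, like 0.2, has no terminating binary expansion):

```python
# 0.1, like 0.2, has no exact binary representation, so each subtraction
# introduces a tiny rounding error; they accumulate instead of reaching 0.
x = 1.0
for _ in range(10):
    x -= 0.1
print(x)          # a tiny nonzero residue, not exactly 0.0
print(0.1 + 0.2)  # 0.30000000000000004, the classic example
```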

Same problem occurs in Excel 2000. Must not be using binary-coded decimal.

@Brother Laz said:
(*) The real WTF is that some countries use the point instead of the comma as a decimal marker.
No, the real WTF is that some countries use a symbol for the decimal POINT that is other than a POINT.
How do countries that use the comma say the numbers anyway? Because we'd say something like "three point one four". Would the French, or Germans, or whoever use the word that they use for the comma?

@m0ffx said:
How do countries that use the comma say the numbers anyway? Because we'd say something like "three point one four". Would the French, or Germans, or whoever use the word that they use for the comma?
Yes. What's your point?

@Brother Laz said:
The real WTF is that some countries use the point instead of the comma as a decimal marker.
Actually it isn't determined by country, Canada's English speakers will use a point for a decimal point and a comma to separate triplets ($1,000,000.00) whereas the French speaking parts will use commas for decimal points and spaces for separating triplets (1 000 000,00$). Also French speakers place the dollar sign in a sane location, I end up using the French style except for the decimal point which I use a period, I find that most readable especially when using coordinates. Nothing inhibits one from using any method really, people know what you mean when you show them a number in most formats. TRWTF is French style coordinates (2,3,6), is that x 2,3 and y 6, x 2 and y 3,6 or x 2, y 3 and z 6?

We say ...
We say "three comma fourteen", except not in English. Imagine the pain of writing a tuple of real numbers:
(1,2, 3,4, 7, 8,9) "One comma two comma three comma four comma seven comma eight comma nine". Good luck figuring out which are decimal separators and which are element separators!
It's so unbelievably stupid to have two different notations for something this common. I promise we'll ditch the comma as soon as you people discover the metric system.
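The ambiguity bites software too, not just readers; a small Python illustration (the `parse_decimal` helper is just for this sketch):

```python
def parse_decimal(s):
    # naive fix-up for comma-decimal input; real code should use
    # locale-aware parsing instead of blind character replacement
    return float(s.replace(",", "."))

# the comma form is rejected outright by a point-decimal parser
try:
    float("3,14")
except ValueError:
    pass

print(parse_decimal("3,14"))  # 3.14
```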

Awww, someone doesn't understand floating point. How cute.

@ahnfelt said:
We say "three comma fourteen", except not in English. Imagine the pain of writing a tuple of real numbers:
(1,2, 3,4, 7, 8,9) "One comma two comma three comma four comma seven comma eight comma nine". Good luck figuring out which are decimal separators and which are element separators!
It's so unbelievably stupid to have two different notations for something this common. I promise we'll ditch the comma as soon as you people discover the metric system.
I live in the UK. We use a mix of metric and imperial measures. Products sold in shops now have to be sold in metric (although quantities like 454 grams or 2.27 liters are still common). Most younger people use metric for short distances (older people are more likely to use imperial), but miles for longer ones, and miles per hour for road speeds. Personal weights are normally in stones and pounds (14 pounds = 1 stone), and personal heights often in feet and inches, as are vehicle heights and widths. Scientists almost universally use SI (basically metric). The Imperial 'ton' used here is the long ton, and is very close to the metric tonne (within 2%) so makes no real difference; nowadays the laws refer to the metric. I expect by 2100 Imperial units will fall largely out of use in the UK, with the possible exception of road distances and speeds.

@Brother Laz said:
And you thought Excel 2007 merely could not count to 100000. This product, used by thousands of businesses to calculate things like your paycheck and safety margins on nuclear failsafes, does not seem to think that 0,2 minus 0,2 is 0.
Looks like someone isn't familiar with floating point's accuracy problems. Yet another one. Sigh.

@realmerlyn said:
@Brother Laz said:
And you thought Excel 2007 merely could not count to 100000. This product, used by thousands of businesses to calculate things like your paycheck and safety margins on nuclear failsafes, does not seem to think that 0,2 minus 0,2 is 0.
Looks like someone isn't familiar with floating point's accuracy problems. Yet another one. Sigh.
Interestingly enough the vast majority of the millions of people who use Excel don't either.

The RWTF is that we all don't use "·" (ASCII 250) for the decimal point, like everyone does when handwriting. But somebody back in ancient times decided to leave it off the keyboard, and so it was lost to humankind. (They also forgot to include it in standard low ASCII, to make room for the so-often-used punctuation mark, the tilde, one assumes)

You can force Excel to use fixedpoint for specific fields if the floatingpoint inaccuracies are a problem. See: http://support.microsoft.com/kb/214118
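For illustration, rounding every intermediate result to the displayed number of decimals (roughly what the "Precision as displayed" option in that KB article does) stops the drift from accumulating; a Python sketch:

```python
# round each intermediate to a fixed number of decimals, the way a
# precision-as-displayed policy would, so rounding errors can't pile up
x = 6.0
for _ in range(30):
    x = round(x - 0.2, 10)
print(x)  # 0.0
```

The trade-off is that you are deliberately throwing away precision at every step, which is exactly what the KB article warns about.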

@robbak said:
The RWTF is that we all don't use "·" (ASCII 250) for the decimal point, like everyone does when handwriting. But somebody back in ancient times decided to leave it off the keyboard, and so it was lost to humankind. (They also forgot to include it in standard low ASCII, to make room for the so-often-used punctuation mark, the tilde, one assumes)
The use of a period as a decimal separator on computers predates ASCII and keyboards. It reaches back to the days of punched cards, when there were only about 60 characters in the character set and adding more would mean building larger and more expensive card readers. It has merely been carried forwards from there.

@ahnfelt said:
We say "three comma fourteen", except not in English Imagine the pain of writing a tuple of real numbers:
(1,2, 3,4, 7, 8,9) "One comma two comma three comma four comma seven comma eight comma nine". Good luck figuring out which are decimal separators and which are element separators!
It's so unbelievably stupid to have two different notations for something this common. I promise we'll ditch the comma as soon as you people discover the metric system.
Many more countries use a period as a decimal point than those that use the imperial system...

@damncrackmonkey said:
Awww, someone doesn't understand floating point. How cute.
I understand floats all right, but two things I don't understand:
Why Excel is still using floats. I could see it if it was in a program that regularly handles tens of millions of data points, or if most users were programmers and could be expected to know about floats. Excel is neither: it's a program aimed at everyday users who expect their computer to be accurate with numbers, and it usually processes data sets small enough that an arbitrary-precision default number format wouldn't hurt. Even Windows Calc uses bignums as of XP!
How 65534+1 could possibly evaluate to 100000 under ANY computer numerical representation, floating point or no.

@Lingerance said:
TRWTF is French style coordinates (2,3,6), is that x 2,3 and y 6, x 2 and y 3,6 or x 2, y 3 and z 6?
TRWTF is that you don't know French-style coordinates and still rant about them!
x = 2,3 and y = 6 ==> (2,3;6)
x = 2 and y = 3,6 ==> (2;3,6)
x = 2, y = 3 and z = 6 ==> (2;3;6)

@joemck said:
How 65534+1 could possibly evaluate to 100000 under ANY computer numerical representation, floating point or no.
Excel uses several different internal representations of numbers depending on what the value of the number is, and they botched the conversion between two of them. Yes, this is a silly way to implement mathematics.

@robbak said:
The RWTF is that we all don't use "·" (ASCII 250) for the decimal point, like everyone does when handwriting.
You just used a middot (&middot;). That's for when you have a modest math background and do multiplication on paper. Some people do that. In the Netherlands, children in school learn ×, and most people with keyboards use * for multiplication, unless they're not mathy, in which case they use a plain x.
The middot can also be used as a typographic field separation symbol. Or at least I'm using it as such.
The middot is not a decimal separator.
2*3 = 2 · 3 = 2×3 = 6

@shawnz said:
@ahnfelt said:
We say "three comma fourteen", except not in English. Imagine the pain of writing a tuple of real numbers:
(1,2, 3,4, 7, 8,9) "One comma two comma three comma four comma seven comma eight comma nine". Good luck figuring out which are decimal separators and which are element separators!
It's so unbelievably stupid to have two different notations for something this common. I promise we'll ditch the comma as soon as you people discover the metric system.
Many more countries use a period as a decimal point than those that use the imperial system...
Living in a country that commonly uses the imperial system (though allegedly the EU is trying to rectify this), I was unaware that we should be using a comma instead of a point for the radix point. Citation please?

As for how countries that use the comma as a decimal point separate numbers in a list: they use the semicolon. They even decided to drop support for comma-separated values files in the Swedish version of Excel. We got semicolon-separated values files instead.
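Python's csv module shows the workaround in action; with ';' as the field delimiter, decimal commas survive intact:

```python
import csv
import io

# "CSV" as exported by comma-decimal locales: ';' between fields,
# ',' inside the numbers themselves
data = io.StringIO("name;value\nwidget;3,14\n")
rows = list(csv.reader(data, delimiter=";"))
print(rows)  # [['name', 'value'], ['widget', '3,14']]
```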

@henke37 said:
They even decided to drop support for comma separated values files in the Swedish version of Excel. We got semicolon separated values files instead.
Smells like a WTF.
Here in Hungary we too use comma for the decimal point, but the Hungarian Excel supports csv properly.
Just checked that; well, not exactly properly. When I saved a table with some decimals in it, I could select CSV as a "values separated with semicolons" format. Looking in the file, the decimal separators were points and the field separator was a comma, not a semicolon.

@Brother Laz said:
(*) The real WTF is that some countries use the point instead of the comma as a decimal marker.
Jesus, I'm glad you threw in that reminder at the end... For a minute there I thought you were doing some kind of crazy coordinate point subtraction, or some other arithmetic on sets of numbers. Seriously.

@tray said:
@henke37 said:
They even decided to drop support for comma separated values files in the Swedish version of Excel. We got semicolon separated values files instead.
Smells like a WTF.
One which is great fun when your business operates in several countries in Europe and they expect to be able to pass files around. I keep telling them "they're not compatible" and they just stare blankly.
I do have good reasons for my low opinion of Excel.

Sigh.
0.2 does not have an exact binary floating-point representation. Quite simply, the 0.2 you type is not the 0.2 that gets stored in IEEE floating point. I'm not entirely positive whether Excel uses single-precision floats with 23 bits of mantissa, or double-precision with 52 bits, but in either case it only displays the first 15 significant digits. You can test this easily; try typing 1.00000000000002 into a cell; you'll see that it "sticks" (it displays as "1", but the full value is shown at the top). Then try typing 1.000000000000002; you'll see that it doesn't stick, and just becomes "1", even in the top input box.
Excel doesn't discard the extra digits as the result of a calculation, it just doesn't display them. As per the aforementioned example, 1.00000000000002 just displays as 1. However, the displayed value always starts at the first nonzero digit; so when the zeros don't trail anything, and you just have 0.000000000000003053113317719180 as you yourself have "discovered", then the leading part is 3.053113317719180. Not coincidentally, that's 15 significant digits.
You can see this inaccuracy long before you hit zero. If you Copy/Paste Values on the "1" cell in your example, you'll see that it is actually 0.999999999999997. If you do this on the "2" instead, you won't see any change in the cell value, but it will still change the end result. Why? Because it wasn't exactly "2" stored in that cell, it was 2 followed by at least 14 zeros followed by something else.
If you have any inclination to try to save face here by telling us that the Real WTF™ is how only 15 digits of precision are displayed, then don't bother, because you yourself have just demonstrated that Excel actually IS keeping the correct precision internally, so it really doesn't matter what's displayed for an intermediate result as long as the end result is correct.
Yes, this type of arithmetic system (albeit with higher precision) is used to calculate safety margins on nuclear failsafes and rocket engines. There's not a thing wrong with that. It's a whole lot more accurate than using fixed-point arithmetic and losing half your precision when you take the reciprocal of some very small number. Not like a 1/300000000000000th margin of error would make much of a difference anyway, since safety margins are usually on the order of 50%; that's why they're called "safety margins".
Thanks for comin' out  now quit wasting your time on this garbage and get back to work.
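The exact stored value described above is easy to inspect; in Python, converting a float to Decimal exposes the full binary value of the double nearest to 0.2:

```python
from decimal import Decimal

# the IEEE-754 double nearest to 0.2, written out exactly
print(Decimal(0.2))
# 0.200000000000000011102230246251565404236316680908203125
```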

@Aaron said:
Yes, this type of arithmetic system (albeit with higher precision) is used to calculate safety margins on nuclear failsafes and rocket engines.
No, those use a different, more reliable type of arithmetic system, known as arbitrary-precision arithmetic. Unlike fixed-point and floating-point systems, it does not accumulate representation errors; it always yields the answer correct to the requested degree of precision.
It's a whole lot more accurate than using fixedpoint arithmetic and losing half your precision when you take the reciprocal of some very small number.
Actually, the maximum error in floating-point elementary arithmetic (such as simple subtraction or division) is equal to the value; every single bit in the answer can be wrong. Extreme care and understanding is required to prevent this from happening. Fixed-point arithmetic has a far smaller maximum error, and a better amortised error for most common operations. Floating point is not more precise, except in those certain very specialised cases for which it was designed (primarily relating to things which model physics and statistics).
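Exact rational arithmetic sidesteps the problem entirely; Python's fractions module is a handy stand-in for an arbitrary-precision engine in this sketch:

```python
from fractions import Fraction

# 2/10 is stored exactly as a rational, so thirty subtractions
# land exactly on zero instead of on a rounding residue
x = Fraction(6)
for _ in range(30):
    x -= Fraction(2, 10)
print(x)  # 0
```

The cost is speed and memory: the numerator and denominator can grow without bound, which is why spreadsheets don't do this by default.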

@Aaron said:
Sigh.
What every computer scientist should know about floating-point arithmetic. <snip>
Perhaps you should write it up and publish it.

@seaturnip said:
@m0ffx said:
How do countries that use the comma say the numbers anyway? Because we'd say something like "three point one four". Would the French, or Germans, or whoever use the word that they use for the comma?
Yes. What's your point?
My comma is that points are used to represent every 3 digits, commas are decimals... no wait, commas every 3 digits, points decimals... no wait dollar signs slash space comma point AAAAAGH!!!

@robbak said:
The RWTF is that we all don't use "·" (ASCII 250) for the decimal point
The real WTF is that you think a character value of 250 is valid ASCII.

@realmerlyn said:
Looks like someone isn't familiar with floating point's accuracy problems. Yet another one. Sigh.
People don't care about floating point, they just want their €300 package to display 0,2 minus 0,2 as 0, which by the way is not rocket science to implement because freaking calc.exe gets it right. So I guess if you want your Excel calculations to be accurate, you had better compute out everything using calc?
As if the fact that function names get translated isn't enough reason to hate Excel. (I'm using the Dutch version at home, and the English version at work. TEKST.SAMENVOEGEN() is what exactly in English?)
......
@Aaron said:
0.2 does not have an exact binary floating-point representation. Quite simply, 0.2 != 0.2 in IEEE floating point. (...) Yes, this type of arithmetic system (albeit with higher precision) is used to calculate safety margins on nuclear failsafes and rocket engines. There's not a thing wrong with that. It's a whole lot more accurate than using fixed-point arithmetic and losing half your precision when you take the reciprocal of some very small number. Not like a 1/300000000000000th margin of error would make much of a difference anyway, since safety margins are usually on the order of 50%; that's why they're called "safety margins".
Thanks for comin' out; now quit wasting your time on this garbage and get back to work.
My point is that Excel gets it wrong and everything else (including calc.exe and Google calc) gets it right. And such a small deviation can have a major impact when you're checking whether some field is 0 or not. This field will probably not be considered 0.
I've come to the conclusion that comparing the result of any calculation to a number is a dangerous proposition in Excel, because apparently math is too hard or something.
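The standard defensive move when comparing floating-point results, sketched in Python:

```python
import math

x = 6.0
for _ in range(30):
    x -= 0.2  # accumulates rounding error, as in the spreadsheet

# never test floats with ==; allow a small absolute tolerance instead
print(math.isclose(x, 0.0, abs_tol=1e-9))  # True
```

Spreadsheet formulas need the same treatment, e.g. testing ABS(A31) against a small threshold instead of comparing to 0 directly.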

Yeah, a more sophisticated (if slower) real-number format, i.e. neither floating point nor fixed point, would definitely be a better default for Excel. My guess is that this is yet another of those things that date from the ancient history of Excel and make it generally weird and inelegant crap.

@Brother Laz said:
People don't care about floating points, they just want their €300 package to display 0,2 minus 0,2 as 0, which by the way is not rocket science to implement because freaking calc.exe gets it right. So I guess if you want your Excel calculations to be accurate, you had better compute out everything using calc?
calc.exe uses an arbitrary-precision calculation engine because people kept whining about how it got calculations wrong. It is slower than just using floats. People also want their €300 package to crunch hundreds of thousands of numbers right away. Try doing that with Google.

It will be fine with me if my paycheck is off by a margin of 3.1*10^-15.

@asuffield said:
One which is great fun when your business operates in several countries in Europe and they expect to be able to pass files around. I keep telling them "they're not compatible" and they just stare blankly.
I do have good reasons for my low opinion of Excel.
Usually you don't even have to pass files to other countries, and Excel is definitely not the only program with such problems. Try connecting computers with e.g. a Swedish OS to international instruments which use a text-based protocol for communication. Even with software that is supposed to simplify stuff like this (e.g. LabVIEW), there's a good chance that you'll run into some retarded problems with periods and commas.
This is especially fun if it's a low-level driver messing this up, and the only indication that something did not quite work out is the fact that your results are really off and that the sort-of-expensive equipment you are using is beeping constantly.

@cvi said:
Try connecting computers with e.g. a Swedish OS to international instruments which use a text-based protocol for communication.
I am aware of only one company who makes an OS that has to have an entirely different version for each country.

@joemck said:
How 65534+1 could possibly evaluate to 100000 under ANY computer numerical representation, floating point or no.
There have been some very good write-ups on exactly how and what the Excel bug really was. And in the end, in spite of all the doom and gloom and "OMG EXCEL CRAHSES THE WORLDZORZ@#$@$@!!!!11111one" panic blog posts, it comes out that internally the proper value was always stored; it was purely a DISPLAY BUG.
Go and do some digging about it. Joel on Software had a link to a very good and detailed write-up on the thing. It was a bug that came out when Excel went from 16-bit to 32-bit code internally in the formatting routines, and the developer(s) forgot to take into account the high 16 bits of a 32-bit register, filling only the lower 16. On only SOME floating-point values, a bit in the AX (or whatever it really was) register would overflow into the full EAX and throw off the formatting. It's no surprise that the bug didn't get caught in testing. The code path in question triggered on ~10 out of 32 billion possible values.
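That failure mode can be caricatured in a few lines; assume (hypothetically) a formatting step that writes back only the low 16 bits of a 32-bit intermediate:

```python
def buggy_store(value):
    # hypothetical caricature of the bug class: the high 16 bits of a
    # 32-bit intermediate are forgotten when the result is stored
    return value & 0xFFFF

# values below 65536 survive untouched...
print(buggy_store(65535))      # 65535
# ...but the moment a carry reaches bit 16, the stored value is garbage
print(buggy_store(65535 + 1))  # 0
```

This is only an analogy for the general 16-to-32-bit porting hazard, not the literal Excel code path, which lived in the floating-point formatting routines.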

If we are going to worry about compatibility, what about the fact that some applications, like Excel, change the names of the mathematical functions depending on the localization used?
Yes, in the USA it is SUM, but here in Sweden it is called SUMMA.
Let's hope they did make a mapping table for the inevitable situation that somebody sends a file using a function to another country.

@MarcB said:
The code path in question triggered on ~10 out of 32 billion possible values.
You obviously pointed out the 32 billion to make us think the developers would have had to blindly and manually check the correctness of every single value. However, 65535 isn't just some random unexpected number; it's an obvious edge case that should have been tested by the developers. It's no surprise that the bug didn't get caught during the user trials, but the developers should have intuitively known which values to test.

@imikedaman said:
@MarcB said:
The code path in question triggered on ~10 out of 32 billion possible values.
You obviously pointed out the 32 billion to make us think the developers would have had to blindly and manually check the correctness of every single value. However, 65535 isn't just some random unexpected number; it's an obvious edge case that should have been tested by the developers. It's no surprise that the bug didn't get caught during the user trials, but the developers should have intuitively known which values to test.
Test, hell. They should have written the algorithm correctly in the first place. Bugs in deployed software are not "testing failures", they are bloody bugs in the code.

@asuffield said:
Test, hell. They should have written the algorithm correctly in the first place.
How exactly do you expect people to write perfect code on their first attempt, and every time? And how would you know whether the code is correct without any testing?

@imikedaman said:
How exactly do you expect people to write perfect code on their first attempt, and every time?
If you don't expect exactly that, people will never get code right on any attempt. People who do not strive to get things right do not ever get things right.

Here's the link to the detailed analysis (PDF).
Ok, so they should test the 32 billion possible values. And then let's expand this to having to test all places where "math" is done in Excel and test all 2^16, 2^32, or 2^64 possible values in each of those. And let's not forget the chaining factor. Might be done correctly in part A but results of part B,C,...Z will fail because of particular value of A.
Doesn't seem quite as simple then. A complete exhaustive test of all possible inputs and outputs of a program, especially for something the size and complexity of Excel is quite literally impossible.
I'm not defending Microsoft. This particular case probably could've been caught if a proper test had been written. But that's the problem with unit tests. They only look for stuff the programmer can think of in advance, and a test most likely wouldn't have caught this in the first place, since it was a DISPLAY bug. Any code that would add a couple numbers that triggered the fault would still have seen the proper value stored in memory, because the only place the problem showed up was ONSCREEN.
Now that they know about this, I'm sure they'll add some more stuff to their standard tests to compare inmemory values with onscreen displayed values.

@MarcB said:
But that's the problem with unit tests.
Precisely. Testing is not the answer. Thinking is the answer.

@asuffield said:
If you don't expect exactly that, people will never get code right on any attempt. People who do not strive to get things right do not ever get things right.
You're joking, right? That sounds a lot more like one of those B.S. motivational posters a clueless manager puts on the wall than something a developer would ever say.
@MarcB said:
Here's the link to the detailed analysis (PDF).
Ok, so they should test the 32 billion possible values.
"The site claimed exactly 12 of the 9.2*10^18 possible 64-bit floating-point values suffer from this bug, with six values between 65534.99999999995 and 65535, and six between 65535.99999999995 and 65536."
Notice anything familiar about those numbers? It's the 65535 from the last post; the one edge case that exists in a conversion from 16 to 32 bits. You can intuitively determine that only the values near the edge case would need to be tested. I'm guessing the developer who worked on that code had a ridiculous amount of work left on the rest of Excel and simply forgot to test the code. After all, no one's perfect.

@imikedaman said:
I'm guessing the developer who worked on that code had a ridiculous amount of work left on the rest of Excel and simply forgot to test the code.
He was writing it in assembler, so my bet is that he was a moron. Don't write stuff like that in assembler. That's just dumb. Also don't write idiotically complex bit-twiddling hacks for code that is not performance-intensive. This is the display routine, for pity's sake. Something written in three lines of C would have been better.

@asuffield said:
He was writing it in assembler, so my bet is that he was a moron. Don't write stuff like that in assembler. That's just dumb. Also don't write idiotically complex bit-twiddling hacks for code that is not performance-intensive. This is the display routine, for pity's sake. Something written in three lines of C would have been better.
Okay man, I'm not sure what your deal is. First you spout off WTF-worthy gibberish, and now you're calling top-notch developers morons for making a single mistake in a massive legacy code base. There's no way you're in any position to be saying these things.
Excel has been around for over [b]two decades[/b] now, and back then you wrote code in assembly and used 16bit data types. Excel is probably stuffed to the brim with legacy code from a previous era. Even today it's still in their best interest to keep the code running as fast as possible since Excel can run on old hardware and needs to format hundreds of fields at a time. The faster it is, the smoother the experience will be for the end user.

You forgot your tag.

@imikedaman said:
the one edge case that exists in a conversion from 16 to 32 bits.
There's also the subtraction case. Nothing like ending up with 0xFFFFFFFF instead of the expected 0xFFFF when subtracting one.
In any case, stupid bug, but not one that would've been caught in any "sensible" automated test, and probably not caught in manual testing either, unless the coder's intimately familiar with all aspects of handling floating-point numbers in assembler.
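The subtraction case mentioned above, in miniature (Python masks standing in for 16-bit and 32-bit register widths):

```python
# subtracting 1 from 0 wraps differently depending on register width;
# code that mixes widths can end up with 0xFFFFFFFF where it expected 0xFFFF
print(hex((0 - 1) & 0xFFFF))      # 0xffff
print(hex((0 - 1) & 0xFFFFFFFF))  # 0xffffffff
```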


@MarcB said:
In any case, stupid bug, but not one that would've been caught in any "sensible" automated test, and probably not caught in manual testing either, unless the coder's intimately familiar with all aspects of handling floating point numbers in assembler.
I think that was asuffield's point: if you're working on formatting floating-point numbers in assembler, you have no business not knowing all aspects of floating point numbers.