Rewriting a legacy system is no trivial matter, and this particular system is a story on its own, but it is the reason I learned about the "packed decimal" number format.
To put it simply, packed decimal is a scheme for storing numbers in a "packed", or compressed, form. This is achieved by eliminating the decimal point: you always assume two decimal places when converting. More importantly (read: worse), the last digit can be either a number or a letter.
To skip all the details, the idea is this:
If the last digit is an alpha character, take its ASCII value, subtract 16, and take the ASCII character of the result: that is the value of the last digit. Oh, and make the whole number negative (because it contained an alpha character).
Thus, 1234A becomes -12341 (ASCII 'A' is 65; 65 - 16 is 49, which is the ASCII code for '1').
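Here's a minimal sketch of that rule in Python; the function name, the uppercase-only handling, and returning a float with the two implied decimal places already applied are my own choices, not anything from the original spec.

```python
def decode_packed(field: str) -> float:
    """Decode one 'packed decimal' field: two implied decimal places,
    and a trailing letter marks the value as negative."""
    sign = 1
    last = field[-1]
    if last.isalpha():
        # The rule above: take the letter's ASCII value, subtract 16,
        # and the resulting character is the real last digit.
        # (Only 'A' through 'I' map back onto a digit this way.)
        last = chr(ord(last) - 16)
        sign = -1
    # Two decimal places are always implied, so divide by 100.
    return sign * int(field[:-1] + last) / 100

print(decode_packed("1234A"))   # -123.41
print(decode_packed("123456"))  # 1234.56
```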
Why?! you ask. Well, to save space of course! Back in the day storage was a big issue, so I understood why they did what they did.
The perversion in all this? Our system had to import records from a fixed-width flatfile that used this format. I wrote a pre-processor to read in the flatfile, calculate the correct values, and write them out to a new text file. Immediately I noticed the new file was dramatically smaller than the original: 8 MB vs 40 MB (and that was a test file; live data runs up to 1 GB).
"Oh crap" I thought, "It's probably missing some rows". Confused as I was, after verifying all rows and columns accounted for, I clicked what it was: The original is a fixed column length format, I wrote a tab-delimited file (which is preferred for our system).
The fixed-length file adds superfluous whitespace which bloats the file like you won't believe, about 5 times bigger than it needs to be.
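For the curious, here is roughly what that pre-processor boils down to, reusing the decode_packed sketch from above. The column layout and file names are hypothetical; the real record format is beside the point.

```python
# Hypothetical layout: (start, end, is_packed) for each column in a record.
LAYOUT = [(0, 10, False), (10, 22, True), (22, 34, True)]

def convert(infile: str, outfile: str) -> None:
    """Read a fixed-width flatfile and write a tab-delimited version,
    decoding any 'packed decimal' columns along the way."""
    with open(infile) as src, open(outfile, "w") as dst:
        for record in src:
            fields = []
            for start, end, packed in LAYOUT:
                raw = record[start:end].strip()  # drop the fixed-width padding
                fields.append(str(decode_packed(raw)) if packed else raw)
            dst.write("\t".join(fields) + "\n")

convert("legacy_export.dat", "import_ready.txt")
```

Stripping the padding as each field is sliced out is where the roughly five-fold shrink comes from.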
"WHY?!" I shout at the roof, "Why mission to use the 'packed decimal' number format when tab delimited was clearly the solution?" With storage being such a big issue, they still used the most bloated form of transferring data.