
Scientific notation - no rounding errors?

P: n/a
Hi all,

Math is not my strongest area, so forgive me if I use some of the wrong terminology. It seems that scientific notation is immune to rounding errors. For example:

(4.98 * 100) + 5.51  // returns 503.51000000000005, rounding error!
4.98e2 + 5.51        // returns 503.51, correct!

Why are scientific notation numbers not affected? And if this is true, does this mean that scientific notation would be safe to use in a floating-point addition function? For example, 4.98 + 0.2, which comes out to 5.180000000000001 (incorrect!), would become (4.98e2 + 0.2e2) / 1e2, which comes out to 5.18 (correct!)

Any insight would be appreciated...

-- Joe
May 16 '06 #1
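[A quick way to see what is going on in the two examples above: the literal 4.98e2 is parsed directly to the integer 498, which is exactly representable in binary, whereas 4.98 * 100 multiplies the inexact double nearest to 4.98 by 100, so the representation error survives into the product. A minimal console check, using nothing beyond the expressions in the question:]

```javascript
// 4.98e2 is a single literal: it parses straight to 498 (exact).
console.log(4.98e2);                // 498
console.log(4.98e2 === 498);        // true

// 4.98 * 100 computes with the inexact stored value of 4.98,
// so the product is NOT exactly 498.
console.log(4.98 * 100 === 498);    // false

// Hence the two sums from the question differ:
console.log(4.98e2 + 5.51);         // 503.51
```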
9 Replies

P: n/a
Joe Attardi wrote on 16 mei 2006 in comp.lang.javascript:

> Math is not my strongest area so forgive me if I use some of the wrong
> terminology. It seems that scientific notation is immune to rounding
> errors. For example:
> (4.98 * 100) + 5.51  // returns 503.51000000000005, rounding error!
> 4.98e2 + 5.51        // returns 503.51, correct!
> Why are scientific notation numbers not affected?

Wrong, it is untrue.

> And if this is true, does this mean that scientific notation would be
> safe to use in a floating-point addition function?

Unanswerable.

> For example, 4.98 + 0.2, which comes out to 5.180000000000001
> (incorrect!), would become (4.98e2 + 0.2e2) / 1e2, which comes out to
> 5.18 (correct!)

Only a single negative example is needed to prove a point incorrect, while any finite number of positive examples is not enough to prove it correct.

> Any insight would be appreciated...

JavaScript arithmetic and storage use binary [= base 2] numbers. Any fractional number or result that is not an exact binary floating-point value can introduce rounding errors, seen from a decimal world, or through overflow of the mantissa. So decimal 0.5 and 5e-1 are both exactly binary 0.1 and will cause no problems. 0.2 or 2e-1 [1/5] already gives me the creeps in binary.

--
Evertjan.
The Netherlands.
(Please change the x'es to dots in my email address)

May 16 '06 #2
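[Evertjan's point about 0.5 versus 0.2 can be checked directly in a console. This sketch prints each value to 20 significant digits, which exposes the stored double behind the decimal literal:]

```javascript
// 0.5 is exactly representable in binary (it is 2^-1), so the
// stored double is exactly one half:
console.log((0.5).toPrecision(20));  // "0.50000000000000000000"

// 0.2 (one fifth) has no finite binary expansion, so JavaScript
// stores only the nearest double, which is slightly too large:
console.log((0.2).toPrecision(20));  // "0.20000000000000001110"

// That tiny error is what surfaces in arithmetic:
console.log(0.1 + 0.2 === 0.3);      // false
```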

P: n/a
Evertjan. wrote:

>> (4.98 * 100) + 5.51  // returns 503.51000000000005, rounding error!
>> 4.98e2 + 5.51        // returns 503.51, correct!
>> Why are scientific notation numbers not affected?
> Wrong, it is untrue.

If it is untrue, then why do the two examples come out with different values?

> Only a single negative example can prove a point [to be incorrect],
> while any finite number of positive examples is not enough for
> correctness.

I realize that. I'm not trying to do a proof here, I'm just asking for advice.

> JavaScript arithmetic and storage use binary [= base 2] numbers. Any
> fractional number or result that is not an exact binary floating-point
> value can introduce rounding errors.

So what can I do to properly show the sum of an addition? The point of trying to use scientific notation is to do what I think is called scaled integer arithmetic: that is, 4.98 + 0.2 becomes 498 + 20, the addition is done on integers, and then the result is divided back down to get the correct answer. If using multiplication and division to move the decimal point won't work, due to the floating-point inaccuracies, what about (I know this sounds messy) converting the numbers to strings, counting the decimal places in the strings, removing the decimal points, converting back to numbers, performing the calculation, and then merely inserting a decimal point into the string of the result? I know that sounds inefficient, but at least it would get a proper result, I think.

May 16 '06 #3
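[A sketch of the string-based scaled-integer approach Joe describes. The helper name `addExact` is made up for illustration; it counts decimal places, drops the decimal points, adds as integers, and re-inserts the point in the result string. It assumes the operands fit in the integer-safe range once scaled:]

```javascript
// Add two decimal numbers given as strings, exactly, by scaling
// both to integers before adding.
function addExact(a, b) {
  const decimals = s => (s.split('.')[1] || '').length;
  const places = Math.max(decimals(a), decimals(b));

  // Scale an operand to an integer by padding its fraction
  // out to `places` digits and removing the decimal point.
  const toInt = s => {
    const [whole, frac = ''] = s.split('.');
    return parseInt(whole + frac.padEnd(places, '0'), 10);
  };

  const sum = toInt(a) + toInt(b);   // exact integer addition
  if (places === 0) return String(sum);

  // Re-insert the decimal point `places` digits from the right.
  const digits = String(Math.abs(sum)).padStart(places + 1, '0');
  const sign = sum < 0 ? '-' : '';
  return sign + digits.slice(0, -places) + '.' + digits.slice(-places);
}

console.log(addExact('4.98', '0.2'));   // "5.18"
console.log(addExact('0.1', '0.2'));    // "0.3"
```

This sidesteps binary fractions entirely for the addition itself; the cost is string handling and the usual caveat that the scaled operands must stay within integer-safe magnitude.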

P: n/a
This problem is not limited to JavaScript. You may want to bone up on the subject in a general CS book. There are a number of solutions for handling rounding errors, and I'm sure I don't know half of them.

Your solution of scaling the number, truncating to an integer, and then scaling back is one; the truncation hides the round-off error.

The best solution is to format the numbers to show only the number of significant digits your application requires. This hides the rounding errors. It's up to you to design your application so that these errors do not accumulate until a noticeable error occurs.

If you use a spreadsheet, these same factors apply; spreadsheets just don't show you all the digits by default. That's how the errors are hidden there.

Rob:-]

May 16 '06 #4

P: n/a
Joe Attardi wrote:

> Hi all, Math is not my strongest area so forgive me if I use some of
> the wrong terminology. It seems that scientific notation is immune to
> rounding errors. [...]
> Any insight would be appreciated...

Modern computers use number systems based on 2 or powers thereof, such as 2 (binary), 4, 8 (octal), 16 (hex), 32, 64, 128, and so on. Most Western math is done in a base 10 system. Any time you convert from one number system to another, some errors can be introduced, since some numbers have no exact representation in the target system. If you use decimals and allow division, some numbers cannot be represented exactly in some systems. Consider that in the ordinary base 10 number system, dividing 1 by 3 gives the decimal 0.33333... repeating to infinity, which cannot be written exactly.

The reason for scientific notation is that very large and very small numbers are often used, and you do not want to write long runs of leading or trailing zeros to do calculations. Some scientific calculations, if not very carefully programmed, will cause underflows or overflows even if the computer can handle exp(100).

The "funny" round-offs have been with computing for well over 50 years, ever since digital computers based on a binary system were introduced, and methods for handling them have been around just as long. People who did money calculations soon started using cents rather than dollars in the US, so that fractions were avoided in additions and subtractions and "funny" numbers such as $1.00000001 would not disturb the bean counters.

In fact, IBM had two different programming systems that were widely used: Fortran for scientific calculations, and COBOL for money calculations.

May 16 '06 #5

 P: n/a Here's a nice set of JavaScript techniques to deal with the problem: http://www.mredkj.com/javascript/numberFormat.html Or for a more rigorous treatment of the subject: http://docs.sun.com/source/806-3568/ncg_goldberg.html May 16 '06 #6

P: n/a
var num = 10;
var result = num.toFixed(2);     // result will equal "10.00"

num = 930.9805;
result = num.toFixed(3);         // result will equal "930.981"

num = 500.2349;
result = num.toPrecision(4);     // result will equal "500.2"

num = 5000.2349;
result = num.toPrecision(4);     // result will equal "5000"

num = 555.55;
result = num.toPrecision(2);     // result will equal "5.6e+2"

(Note that toFixed and toPrecision return strings, not numbers.)

May 16 '06 #7

P: n/a
Rob wrote:

> The best solution is to format the numbers to show only the number of
> significant digits your application requires.

I basically need to compare a maximum to a computed total, and the maximum isn't known until runtime. So, to get the number of decimal places I need, could I just count the number of decimal places in the maximum and round the computed total to that many places?

May 16 '06 #8
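[A sketch of that comparison. It assumes the runtime maximum arrives as a string, so its decimal places are unambiguous; the helper names `decimalsOf` and `withinMax` are made up for illustration:]

```javascript
// Count the digits after the decimal point in a numeric string.
function decimalsOf(str) {
  return (String(str).split('.')[1] || '').length;
}

// Round the computed total to the maximum's decimal places,
// then compare. toFixed both rounds and hides the binary noise.
function withinMax(total, maxStr) {
  const places = decimalsOf(maxStr);
  const rounded = Number(total.toFixed(places));
  return rounded <= Number(maxStr);
}

console.log(withinMax(4.98 + 0.2, '5.18'));  // true: 5.180000000000001 rounds to 5.18
console.log(withinMax(4.98 + 0.2, '5.17'));  // false
```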