Bytes | Software Development & Data Engineering Community

Why does 1.2 % 0.01 not equal zero?

The remainder is non zero due to rounding errors. How can I remove the
rounding errors?
Aug 3 '08 #1
On Sun, 03 Aug 2008 14:54:26 -0700, Bernard.Mangay <fc****@googlemail.com>
wrote:
The remainder is non zero due to rounding errors. How can I remove the
rounding errors?
The same way you deal with rounding errors with any floating point
calculation: don't expect exact results. Instead, any time you want to do
a comparison, you need to decide how close is "good enough" and
incorporate that into your comparison.

For example:

double result = 1.2 % 0.01;

if (result < 0.000001)
{
// might as well be zero!
}
Aug 3 '08 #2
On Aug 3, 11:01 pm, "Peter Duniho" <NpOeStPe...@nnowslpianmk.com>
wrote:
On Sun, 03 Aug 2008 14:54:26 -0700, Bernard.Mangay <fcm...@googlemail.com>
wrote:
The remainder is non zero due to rounding errors. How can I remove the
rounding errors?

The same way you deal with rounding errors with any floating point
calculation: don't expect exact results. Instead, any time you want to do
a comparison, you need to decide how close is "good enough" and
incorporate that into your comparison.

For example:

    double result = 1.2 % 0.01;

    if (result < 0.000001)
    {
        // might as well be zero!
    }

I thought that might work, but sometimes it gives me a remainder of
0.0099999999, which fails the fix you suggested.
Aug 3 '08 #3
Bernard.Mangay <fc****@googlemail.com> wrote:
The remainder is non zero due to rounding errors. How can I remove the
rounding errors?
Use decimal instead of double if you want accurate representations of
numbers like this.

See http://pobox.com/~skeet/csharp/floatingpoint.html for more
information.
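A quick illustration of the difference (a minimal sketch; the m suffix marks decimal literals):

```csharp
using System;

class Demo
{
    static void Main()
    {
        // double: neither 1.2 nor 0.01 has an exact binary representation,
        // so the remainder lands a hair below 0.01 instead of at 0
        Console.WriteLine(1.2 % 0.01);          // something like 0.00999999999999...

        // decimal: both values are stored exactly in base 10
        Console.WriteLine(1.2m % 0.01m == 0m);  // True
    }
}
```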

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Aug 3 '08 #4
Bernard.Mangay <fc****@googlemail.com> wrote:
For example:

    double result = 1.2 % 0.01;

    if (result < 0.000001)
    {
        // might as well be zero!
    }

I thought that might work, but sometimes it gives me a remainder of
0.0099999999, which fails the fix you suggested.
Only because it's an error on the other side - easy to fix:

if (result < 0.000001 || result > 0.01 - 0.000001)
{
// Might as well be zero
}
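Put together as a runnable sketch (the epsilon and divisor values come from the examples above; the helper name is mine):

```csharp
using System;

class NearZeroRemainder
{
    const double Epsilon = 0.000001;

    // A remainder just below the divisor is really a rounding error
    // that "wrapped around", so treat both ends of the range as zero.
    static bool IsEffectivelyZero(double remainder, double divisor) =>
        remainder < Epsilon || remainder > divisor - Epsilon;

    static void Main()
    {
        Console.WriteLine(IsEffectivelyZero(1.2 % 0.01, 0.01));   // True
        Console.WriteLine(IsEffectivelyZero(1.234 % 0.01, 0.01)); // False: remainder is ~0.004
    }
}
```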

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Aug 3 '08 #5
Tom
How about forcing integer % implementation?

int mult = 100000;
int remainder = (Int32)(1.2 * mult) % (Int32)(0.01 * mult);

I like nice clean integer zeros for control purposes ... but then the
Int32.MaxValue issue might enter onto the scene. Next up is Int64?

Another benefit: for speed of calculation, integer math wins the race
by a *large* margin on calculation-intensive operations. Perhaps most
noticeable when doing large data count summations.

Skeet's link is a very good read. The quantification and deep
understanding of error propagation is certainly not a simple topic.
Makes my head hurt in a hurry ... but if you have good math skills and
a love for extreme precision you will find the calculus and
statistical exploration of this topic perhaps of great personal
interest. Advanced engineering and physics courses often delve deeply
into this area.

-- Tom
Aug 5 '08 #6
On Mon, 04 Aug 2008 23:57:25 -0700, Tom <Th********@earthlink.net> wrote:
[...]
Another benefit: for speed of calculation, integer math wins the race
by a *large* margin on calculation-intensive operations. Perhaps most
noticeable when doing large data count summations.
I guess that depends on what one means by "large margin" and "large data".

It's true that the floating point version winds up about half as fast as
the integer version (takes 90% longer). But on my computer, it can
perform 100,000,000 such calculations in about a second (plus a little for
floating point, minus a little for integer). That's 100 _million_.

And of course, you will only realize that sort of speed-up if the _only_
thing you are doing is calculating remainders. In a real-world scenario,
your i/o is going to account for the bulk of your processing time (even
memory i/o). You'd be hard-pressed to come up with a production example
where the difference is significant.

That's all not even counting the difficulty in making reasonable
judgements on performance differences using simple test programs. For
example, the program I wrote actually does the "is zero" calculation.
When I commented that part of the calculation out, the loop got _slower_,
not faster as one would expect. I checked the IL, and confirmed that
removing the check had _only_ the direct effect of eliminating a portion
of the generated code. But something about the change in the method
causes it to run significantly slower _without_ the extra line of code in
it.

Performance tuning is very tricky, and often counter-intuitive.

The "transform into integer space" trick is good to know, but most code
should not be rewritten to use it.

(Note also, on top of all the other problems, that it only enjoys that
large an improvement when you only care about "equal to zero". If you
have to convert back to floating point, a lot of the speed is lost again
("only" about 40% faster in that case)).

Pete
Aug 5 '08 #7
Tom
Hey Pete --

My integer method is sort of a back door cheat. It avoids the real
issue of imprecise binary representation of rational numbers. If the
question was posed by a student who needs to convince their instructor
of sufficient understanding ... I much prefer your approach. If on the
other hand it is part of a control/decision block of code ... I
prefer the simplification my method represents, and it is one I have
used several times. I consider double % double a trap of sorts that
introduces results often forgotten by those who have not used it
recently. Neither approach is bulletproof; however, the inconsistent
results of the % operator on floats or doubles are so insidious ... I
have a hard time convincing myself that it is a good feature within
C#.

What happens when one has >>

1.20000001 % 0.01

In the above case the 0.00000001 remainder might be very important?
So checks against the RHS value of 0.01 are not a good practice. In
this case the multiplier would need to be 100,000,000 and
Int32.MaxValue = 2,147,483,647 issues start to emerge.

Eventually even the 128-bit decimal representation fails to provide
sufficient accuracy. Numbers such as 1/3 (rational, but with no finite
decimal expansion) and irrational numbers such as Pi are not
accurately represented even with decimal utilization. I think it false
to make statements such as: "Usage of decimal type assures
_sufficient_ accuracy."

Int32.MaxValue = 2147483647
Int64.MaxValue = 9223372036854775807
decimal.MaxValue = 79228162514264337593543950335
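A small sketch of the 1/3 case: decimal keeps 28-29 significant digits and then stops, so the error is tiny but real.

```csharp
using System;

class DecimalLimits
{
    static void Main()
    {
        decimal third = 1m / 3m;        // cut off after 28 digits
        Console.WriteLine(third);       // 0.3333333333333333333333333333
        Console.WriteLine(third * 3m);  // 0.9999999999999999999999999999, not 1
    }
}
```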

In summary:

1) A thorough understanding of the entire mathematical methodology
used and its inherent precision is necessary. A deep understanding of
the problem domain is mandatory and this is not a trivial task. It
requires the researcher(s) (physicist, engineer, mathematician, etc.)
and the computer scientist to work together.

2) If you see float % float or double % double in code ... become
very concerned.

-----------------------------------------------

My observed speed enhancement converting to integers was associated
with huge stock data files where the data was fully loaded into ram to
avoid hard drive i/o speed issues. In this specific case there was a
lot of numerical summation work being done (integrations, averages,
statistics, etc.) in a dynamic tuning algorithm with lots of looping
within the optimization routine.

Once values such as 987.125 were converted to 987125 the rewritten
program ran in approximately 1/5'th the time as the program using
doubles for data storage. I was thrilled to have results in 12 minutes
vs 1 hour. Situations where such improvements can be had are rare ...
but in my case it involves code in continuous operation and is one of
the best performance enhancements I have stumbled onto. In the texts
and courses I have been exposed to ... the speed differences between
integer math and doubles (or floats) have been ignored. (I have never
learned assembly nor been exposed to the very low level compiler
optimization methods.)

I encourage those where performance is critical to evaluate the
possibility of using integers. The conversion is not a simple process.
It is tricky enough where I suggest side-by-side testing between
integer code and doubles code until all ambiguities are understood and
removed.

-- Tom


Aug 6 '08 #8
On Tue, 05 Aug 2008 17:27:54 -0700, Tom <Th********@earthlink.net> wrote:
Hey Pete --
Hey Tom! :)

Seriously though...I don't disagree with the _philosophical_ points you've
made. However, I see the implications differently, at least to some
extent, and I do think your conclusions are at least a little off. I'll
try to touch on just a few points:
[...]
What happens when one has >>

1.20000001 % 0.01

In the above case the 0.00000001 remainder might be very important?
Nope. If you are using a value such as 0.00000001 as your threshold for
"essentially zero", then if you started with a value of 1.20000001,
that's considered, by definition, equal to 1.2.

You do have to pick the correct epsilon for your application, but once
you've done so, there are no worries such as the above. If the difference
between 1.20000001 and 1.2 is significant, then 0.00000001 is too large an
epsilon for your application, by definition.
So checks against the RHS value of 0.01 are not a good practice. In
this case the multiplier would need to be 100,000,000 and
Int32.MaxValue = 2,147,483,647 issues start to emerge.

Eventually even the 128 bit decimal representation fails to provide
sufficient accuracy. Irrational numbers such as 1/3'rd and Pi are not
accurately represented even with decimal utilization. I think it false
to make statements such as: "Usage of decimal type assures
_sufficient_ accuracy."
All of that is true. But it's also true if you are converting floating
point to integer for the purpose of the calculation. Doing that doesn't
avoid those problems.
[...]
2) If you see float % float or double % double in code ... become
very concerned.
No question. I'm just saying that the answer isn't necessarily to change
the code to convert to ints and back again. Such code could be less
readable, and/or more bug-prone (different kinds of overflow
possibilities, for example).

The fact is, I think if one finds oneself doing the remainder operation on
floating point values, it's possible that there's a more fundamental
problem. For example, one is probably using floating point when a
different type would be more appropriate (e.g. decimal, as Jon suggested
in this case).

Fixing a theoretical performance problem is always a bad idea anyway.
Make sure it's a _real_ performance problem before you fix it. But beyond
that, one can become distracted by the performance problem (the trees),
and fail to see a more significant design/correctness problem (the forest).

Yes, if you see a floating point remainder calculation, that's cause for
concern. But I don't think that rewriting the code to improve the
computational performance is really what one ought to be focusing on.
-----------------------------------------------

My observed speed enhancement converting to integers was associated
with huge stock data files where the data was fully loaded into ram to
avoid hard drive i/o speed issues.
Did that data fit entirely into the CPU cache?

If not, then you're still dealing with serious memory i/o problems (as I
mentioned before), and any _computational_ speedup (being just part of the
total performance cost) converting to integer calculations is unlikely to
be nearly as significant as the simple test I did might suggest.
[...]
Once values such as 987.125 were converted to 987125 the rewritten
program ran in approximately 1/5'th the time as the program using
doubles for data storage.
Assuming the _only_ change was from using floating point to integers, a 5x
improvement doesn't sound possible given my test.

Now, if you were converting to Int32 from double, per your example, _that_
would be significant. After all, it would have halved the size of your
data, resulting in much less memory i/o and better cache performance.
That _in conjunction with_ the less-than-2x computational improvement
might have accounted for a 5x improvement.

But note that converting from double to int is actually wrong, as those
are not the same sized storage or precision. You run the risk of
overflowing your ints. Or conversely, you could have just been using
floats and enjoyed the same memory bandwidth improvements that switching
from a 64-bit to 32-bit storage would have regardless.
I was thrilled to have results in 12 minutes
vs 1 hour. Situations where such improvements can be had are rare ...
Indeed they are. Like I said, your suggestion isn't completely useless.
I just think in the vast majority of cases, it's a premature optimization.

Pete
Aug 6 '08 #9
Tom wrote:
How about forcing integer % implementation?

int mult = 100000;
int remainder = (Int32)(1.2 * mult) % (Int32)(0.01 * mult);
Because it _doesn't solve the problem_.

The binary representation of .01 may be a little more or less than .01.
When you multiply by 100000, the result may be a little more or less than
1000.00. If it is less, when you cast to integer without any rounding, you
will get 999. Same goes for the other operand.
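For example, with the thread's own operands the cast already goes wrong (a minimal sketch):

```csharp
using System;

class CastTruncation
{
    static void Main()
    {
        int mult = 100000;

        // 1.2 is stored as a value slightly below 1.2, so 1.2 * 100000
        // comes out just under 120000 and the cast truncates it
        Console.WriteLine((int)(1.2 * mult));                       // 119999
        Console.WriteLine((int)(1.2 * mult) % (int)(0.01 * mult));  // 999, not 0

        // rounding the scaled values before casting repairs this
        Console.WriteLine(
            (int)Math.Round(1.2 * mult) % (int)Math.Round(0.01 * mult)); // 0
    }
}
```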

Furthermore, you are still doing more operations than necessary. You've
eliminated the FP division, but you have two FP multiplies. You're better
off (at least in the case of constants) multiplying by the reciprocal,
rounding, subtracting -- cheap FP operations, probably cheaper than integer
modulo -- and performing an equal-with-tolerance test.

I.e.

double x = 1.2;
const double divisor = 0.01;
const double divisor_recip = 1.0 / divisor; // evaluated at compile-time

double divided = x * divisor_recip;
// may replace Math.Round with Math.Floor(divided + .5) if faster
double remainderoverdivisor = divided - Math.Round(divided);

// compile-time expression again; introduce another "const" variable as
// needed to force compile-time evaluation
if (Math.Abs(remainderoverdivisor) < (.0001 * divisor_recip)) ...
Even when divisor is not known at compile-time, this method may still yield
considerable savings if the reciprocal operation can be moved outside a loop
for example.
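Assembled into a runnable form (the class and method names are mine; the constants are as above):

```csharp
using System;

class ReciprocalCheck
{
    const double divisor = 0.01;
    const double divisor_recip = 1.0 / divisor;      // folded at compile time
    const double tolerance = .0001 * divisor_recip;  // scaled comparison threshold

    // True when x is, within tolerance, an exact multiple of divisor.
    static bool IsMultiple(double x)
    {
        double divided = x * divisor_recip;
        double remainderOverDivisor = divided - Math.Round(divided);
        return Math.Abs(remainderOverDivisor) < tolerance;
    }

    static void Main()
    {
        Console.WriteLine(IsMultiple(1.2));     // True
        Console.WriteLine(IsMultiple(3.45));    // True
        Console.WriteLine(IsMultiple(1.2003))//  False
        ;
    }
}
```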

Aug 6 '08 #10
