
Bug in floating-point addition: is anyone else seeing this?

On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:

dickinsm@weyl:~> python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.999   # gives expected result
9999999999999998.0
>>> a+0.9999  # doesn't round correctly.
10000000000000000.0

The last result here should be 9999999999999998.0,
not 10000000000000000.0. Is anyone else seeing this
bug, or is it just a quirk of my system?

Mark
Jun 27 '08 #1
31 Replies


Mark Dickinson wrote:
On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:

dickinsm@weyl:~> python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.999   # gives expected result
9999999999999998.0
>>> a+0.9999  # doesn't round correctly.
10000000000000000.0

The last result here should be 9999999999999998.0,
not 10000000000000000.0. Is anyone else seeing this
bug, or is it just a quirk of my system?
It is working under OSX:

(TG1044)mac-dir:~/projects/artnology/Van_Abbe_RMS/Van-Abbe-RMS deets$ python
Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04)
[GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Welcome to rlcompleter2 0.96
for nice experiences hit <tab> multiple times
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.9999
9999999999999998.0
>>>
But under linux, I get the same behavior:

Python 2.5.1 (r251:54863, May 2 2007, 16:56:35)
[GCC 4.1.2 (Ubuntu 4.1.2-0ubuntu4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Welcome to rlcompleter2 0.96
for nice experiences hit <tab> multiple times
>>> a = 1e16-2.
>>> a+0.9999
10000000000000000.0
>>>

So - it seems to me it's a Linux thing. I don't know enough about
IEEE floats to make any assumptions about the reasons for that.

Diez
Jun 27 '08 #2

On May 21, 11:38 am, Mark Dickinson <dicki...@gmail.com> wrote:
On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:

dickinsm@weyl:~> python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.>>a = 1e16-2.
>a
9999999999999998.0
>a+0.999 # gives expected result
9999999999999998.0
>a+0.9999 # doesn't round correctly.

10000000000000000.0

The last result here should be 9999999999999998.0,
not 10000000000000000.0. Is anyone else seeing this
bug, or is it just a quirk of my system?

Mark
I see it too
Jun 27 '08 #3

Mark Dickinson <di******@gmail.com> wrote:
On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:

dickinsm@weyl:~> python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.999   # gives expected result
9999999999999998.0
>>> a+0.9999  # doesn't round correctly.
10000000000000000.0

The last result here should be 9999999999999998.0,
not 10000000000000000.0. Is anyone else seeing this
bug, or is it just a quirk of my system?
On my system, it works:

Python 2.5.2 (r252:60911, May 21 2008, 18:49:26)
[GCC 4.1.2 (Gentoo 4.1.2 p1.0.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16 - 2.; a
9999999999999998.0
>>> a + 0.9999
9999999999999998.0

Marc
Jun 27 '08 #4

On May 21, 3:22 pm, Marc Christiansen <use...@solar-empire.de> wrote:
On my system, it works:

Python 2.5.2 (r252:60911, May 21 2008, 18:49:26)
[GCC 4.1.2 (Gentoo 4.1.2 p1.0.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16 - 2.; a
9999999999999998.0
>>> a + 0.9999
9999999999999998.0

Marc
Thanks for all the replies! It's good to know that it's not just
me. :-)

After a bit (well, quite a lot) of Googling, it looks as though this
might be a known problem with gcc on older Intel processors: those using
an x87-style FPU instead of SSE2 for floating-point. This gcc
'bug' looks relevant:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323

Now that I've got confirmation I'll open a Python bug report: it's
not clear how to fix this, or whether it's worth fixing, but it
seems like something that should be documented somewhere...

Thanks again, everyone!

Mark
Jun 27 '08 #5

On May 21, 12:38 pm, Mark Dickinson <dicki...@gmail.com> wrote:
>>> a+0.999   # gives expected result
9999999999999998.0
>>> a+0.9999  # doesn't round correctly.
10000000000000000.0
Shouldn't both of them give 9999999999999999.0?

I wrote the same program under Flaming Thunder:

Set a to 10^16-2.0.
Writeline a+0.999.
Writeline a+0.9999.

and got:

9999999999999998.999
9999999999999998.9999

I then set the precision down to 16 decimal digits to emulate Python:

Set realdecimaldigits to 16.
Set a to 10^16-2.0.
Writeline a+0.999.
Writeline a+0.9999.

and got:

9999999999999999.0
9999999999999999.0
Jun 27 '08 #6

On Wed, May 21, 2008 at 4:34 PM, Dave Parker
<da********@flamingthunder.com> wrote:
On May 21, 12:38 pm, Mark Dickinson <dicki...@gmail.com> wrote:
>>> a+0.999   # gives expected result
9999999999999998.0
>>> a+0.9999  # doesn't round correctly.
10000000000000000.0

Shouldn't both of them give 9999999999999999.0?
My understanding is no, not if you're using IEEE floating point.
I wrote the same program under Flaming Thunder:

Set a to 10^16-2.0.
Writeline a+0.999.
Writeline a+0.9999.

and got:

9999999999999998.999
9999999999999998.9999
You can get the same results by using Python's decimal module, like this:
>>> from decimal import Decimal
>>> a = Decimal("1e16")-2
>>> a
Decimal("9999999999999998")
>>> a+Decimal("0.999")
Decimal("9999999999999998.999")
>>> a+Decimal("0.9999")
Decimal("9999999999999998.9999")
>>>
--
Jerry
Jun 27 '08 #7

On May 21, 2:44 pm, "Jerry Hill" <malaclyp...@gmail.com> wrote:
My understanding is no, not if you're using IEEE floating point.
Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always get correctly rounded
results for +, -, *, and /, like Flaming Thunder gives. Correct
rounding and accurate results are fairly crucial to mathematical and
scientific programming, in my opinion.
Jun 27 '08 #8

Dave Parker wrote:
On May 21, 2:44 pm, "Jerry Hill" <malaclyp...@gmail.com> wrote:
> My understanding is no, not if you're using IEEE floating point.

Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always get correctly rounded
results for +, -, *, and /, like Flaming Thunder gives. Correct
rounding and accurate results are fairly crucial to mathematical and
scientific programming, in my opinion.
Who says that rounding in base 10 is more correct than rounding in base 2?

And in scientific programming, speed matters - which is why e.g. the
Cell processor is due to gain a double-precision float ALU. And generally
supercomputers use floats, not arbitrary-precision BCD or even rationals.
Diez
Jun 27 '08 #9

On Wed, May 21, 2008 at 3:56 PM, Dave Parker
<da********@flamingthunder.com> wrote:
On May 21, 2:44 pm, "Jerry Hill" <malaclyp...@gmail.com> wrote:
> My understanding is no, not if you're using IEEE floating point.

Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always get correctly rounded
results for +, -, *, and /, like Flaming Thunder gives. Correct
rounding and accurate results are fairly crucial to mathematical and
scientific programming, in my opinion.
--
If you're going to use every post and question about Python as an
opportunity to pimp your own pet language, you're going to irritate even
more people than you have already.
Jun 27 '08 #10

On Wed, May 21, 2008 at 4:56 PM, Dave Parker
<da********@flamingthunder.com> wrote:
On May 21, 2:44 pm, "Jerry Hill" <malaclyp...@gmail.com> wrote:
> My understanding is no, not if you're using IEEE floating point.

Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always get correctly rounded
results for +, -, *, and /, like Flaming Thunder gives. Correct
rounding and accurate results are fairly crucial to mathematical and
scientific programming, in my opinion.
However, this is not an issue of language correctness, it's an issue
of specification and/or hardware. If you look at the given link, it
has to do with the x87 being peculiar and performing 80-bit
floating-point arithmetic even though that's larger than the double
spec. I assume this means FT largely performs floating-point
arithmetic in software rather than using the FP hardware (unless of
course you do something crazy like compiling to SW on some machines
and HW on others depending on whether you trust their functional
units).

The fact is, sometimes it's better to get it fast and be good enough,
where you can use whatever methods you want to deal with rounding
error accumulation. When accuracy is more important than speed of
number crunching (and don't argue to me that your software
implementation is faster than, or probably even as fast as, gates in
silicon) you use packages like Decimal.

Really, you're just trying to advertise your language again.
Jun 27 '08 #11

On May 21, 3:17 pm, "Chris Mellon" <arka...@gmail.com> wrote:
If you're going to use every post and question about Python as an
opportunity to pimp your own pet language, you're going to irritate even
more people than you have already.
Actually, I've only posted on 2 threads that were questions about
Python -- this one, and the one about for-loops where the looping
variable wasn't needed. I apologize if that irritates you. But maybe
some Python users will be interested in Flaming Thunder if only to
check the accuracy of the results that they're getting from Python,
like I did on this thread. I think most people will agree that having
two independent programs confirm a result is a good thing.
Jun 27 '08 #12

On May 21, 3:19 pm, "Dan Upton" <up...@virginia.edu> wrote:
The fact is, sometimes it's better to get it fast and be good enough,
where you can use whatever methods you want to deal with rounding
error accumulation.
I agree.

I also think that the precision/speed tradeoff should be under user
control -- not at the whim of the compiler writer. So, for example,
if a user says:

Set realdecimaldigits to 10.

then it's okay to use hardware double precision, but if they say:

Set realdecimaldigits to 100.

then it's not. The user should always be able to specify the
precision and the rounding mode, and the program should always provide
correct results to those specifications.
Jun 27 '08 #13

On Wed, May 21, 2008 at 4:29 PM, Dave Parker
<da********@flamingthunder.com> wrote:
On May 21, 3:17 pm, "Chris Mellon" <arka...@gmail.com> wrote:
>If you're going to use every post and question about Python as an
opportunity to pimp your own pet language, you're going to irritate even
more people than you have already.

Actually, I've only posted on 2 threads that were questions about
Python -- this one, and the one about for-loops where the looping
variable wasn't needed. I apologize if that irritates you. But maybe
some Python users will be interested in Flaming Thunder if only to
check the accuracy of the results that they're getting from Python,
like I did on this thread. I think most people will agree that having
two independent programs confirm a result is a good thing.
--
Please don't be disingenuous. You took the opportunity to pimp your
language because you could say that you did this "right" and Python
did it "wrong". When told why you got different results (an answer you
probably already knew, if you know enough about IEEE to do the
auto-conversion you alluded to) you treated it as another opportunity
to (not very subtly) imply that Python was doing the wrong thing. I'm
quite certain that you did this intentionally and with full knowledge
of what you were doing, and it's insulting to imply otherwise.

You posted previously that you wrote a new language because you were
writing what you wanted every other language to be. This is very
similar to why Guido wrote Python and I wish you the best of luck. He
was fortunate enough that the language he wanted also happened to be
the language that lots of other people wanted. You don't seem to be so
fortunate, and anti-social behavior on newsgroups dedicated to other
languages is unlikely to change that. You're not the first and you
won't be the last.
Jun 27 '08 #14

On May 21, 3:41 pm, "Chris Mellon" <arka...@gmail.com> wrote:
When told why you got different results (an answer you
probably already knew, if you know enough about IEEE to do the
auto-conversion you alluded to) ...
Of course I know a lot about IEEE, but you are assuming that I also
know a lot about Python, which I don't. I assumed Python was doing
the auto-conversion, too, because I had heard that Python supported
arbitrary precision math. Jerry Hill explained that you had to load a
separate package to do it.
you treated it as another opportunity
to (not very subtly) imply that Python was doing the wrong thing.
The person who started this thread posted the calculations showing
that Python was doing the wrong thing, and filed a bug report on it.

If someone pointed out a similar problem in Flaming Thunder, I would
agree that Flaming Thunder was doing the wrong thing.

I would fix the problem a lot faster, though, within hours if
possible. Apparently this particular bug has been lurking on Bugzilla
since 2003: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323
Jun 27 '08 #15

Dave Parker wrote:
On May 21, 3:19 pm, "Dan Upton" <up...@virginia.edu> wrote:
>The fact is, sometimes it's better to get it fast and be good enough,
where you can use whatever methods you want to deal with rounding
error accumulation.

I agree.

I also think that the precision/speed tradeoff should be under user
control -- not at the whim of the compiler writer. So, for example,
if a user says:

Set realdecimaldigits to 10.

then it's okay to use hardware double precision, but if they say:

Set realdecimaldigits to 100.

then it's not. The user should always be able to specify the
precision and the rounding mode, and the program should always provide
correct results to those specifications.
Which is exactly what the python decimal module does.
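
For instance, a minimal sketch (the precision and rounding values here are
just illustrative, not anything the original poster asked for):

from decimal import Decimal, getcontext, ROUND_HALF_EVEN

ctx = getcontext()
ctx.prec = 100                   # user-chosen number of significant digits
ctx.rounding = ROUND_HALF_EVEN   # user-chosen rounding mode

a = Decimal(10) ** 16 - 2
print(a + Decimal("0.9999"))     # 9999999999999998.9999, exact at this precision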

Diez
Jun 27 '08 #16

On May 21, 4:21 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
Which is exactly what the python decimal module does.
Thank you (and Jerry Hill) for pointing that out. If I want to check
Flaming Thunder's results against an independent program, I'll know to
use Python with the decimal module.
Jun 27 '08 #17

On May 21, 3:28 pm, Dave Parker <davepar...@flamingthunder.com> wrote:
On May 21, 4:21 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
Which is exactly what the python decimal module does.

Thank you (and Jerry Hill) for pointing that out. If I want to check
Flaming Thunder's results against an independent program, I'll know to
use Python with the decimal module.
Utterly shameless.
Jun 27 '08 #18

On May 21, 4:56 pm, Dave Parker <davepar...@flamingthunder.com> wrote:
On May 21, 2:44 pm, "Jerry Hill" <malaclyp...@gmail.com> wrote:
My understanding is no, not if you're using IEEE floating point.

Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always get correctly rounded
results for +, -, *, and /, like Flaming Thunder gives. Correct
rounding and accurate results are fairly crucial to mathematical and
scientific programming, in my opinion.
Having done much mathematical and scientific programming in my day, I
would say your opinion is dead wrong.

The crucial thing is not to slow down the calculations with useless
bells and whistles. Scientists and engineers are smart enough to use
more precision than we need, and we don't really need that much. For
instance, the simulations I run at work all use single precision (six
decimal digits) even though double precision is allowed.
Carl Banks
Jun 27 '08 #19

On May 21, 7:01 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
The crucial thing is not to slow down the calculations with useless
bells and whistles.
Are you running your simulations on a system that does or does not
support the "useless bell and whistle" of correct rounding? If not,
how do you prevent regression towards 0?

For example, one of the things that caused the PS3 to be in 3rd place
behind the Wii and XBox 360 is that to save a cycle or two, the PS3
cell core does not support rounding of single precision results -- it
truncates them towards 0. That led to horrible single-pixel errors in
the early demos I saw, which in turn helped contribute to game release
delays, which has turned into a major disappointment for Sony.
Jun 27 '08 #20

On May 21, 11:27 pm, Dave Parker <davepar...@flamingthunder.com>
wrote:
On May 21, 7:01 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
The crucial thing is not to slow down the calculations with useless
bells and whistles.

Are you running your simulations on a system that does or does not
support the "useless bell and whistle" of correct rounding? If not,
how do you prevent regression towards 0?
The "useless bell and whistle" is switching to multiprecision.

I'm not sure whether our hardware has a rounding bias or not but I
doubt it would matter if it did.

For example, one of the things that caused the PS3 to be in 3rd place
behind the Wii and XBox 360 is that to save a cycle or two, the PS3
cell core does not support rounding of single precision results -- it
truncates them towards 0. That led to horrible single-pixel errors in
the early demos I saw, which in turn helped contribute to game release
delays, which has turned into a major disappointment for Sony.
And you believe that automatically detecting rounding errors and
switching to multi-precision in software would have saved Sony all
this?
Carl Banks
Jun 27 '08 #21

On May 21, 3:38 pm, Mark Dickinson <dicki...@gmail.com> wrote:
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.999   # gives expected result
9999999999999998.0
>>> a+0.9999  # doesn't round correctly.
10000000000000000.0
Notice that 1e16-1 doesn't exist in IEEE double precision:
1e16-2 == 0x1.1c37937e07fffp+53
1e16 == 0x1.1c37937e08p+53

(that is, the hex representation ends with "7fff", then goes to
"8000").

So, it's just rounding. It could go up, to 1e16, or down, to 1e16-2.
This is not a bug, it's a feature.
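
A quick way to check those hex values from Python itself (a small sketch;
note that float.hex() only appeared in Python 2.6):

a = 1e16 - 2.0
print(a.hex())             # 0x1.1c37937e07fffp+53
print((1e16).hex())        # 0x1.1c37937e08000p+53
print((a + 0.9999).hex())  # shows which neighbour the sum was rounded to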
Jun 27 '08 #22

On May 22, 1:26 am, Henrique Dante de Almeida <hda...@gmail.com>
wrote:
On May 21, 3:38 pm, Mark Dickinson <dicki...@gmail.com> wrote:
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.999   # gives expected result
9999999999999998.0
>>> a+0.9999  # doesn't round correctly.
10000000000000000.0

Notice that 1e16-1 doesn't exist in IEEE double precision:
1e16-2 == 0x1.1c37937e07fffp+53
1e16 == 0x1.1c37937e08p+53

(that is, the hex representation ends with "7fff", then goes to
"8000").

So, it's just rounding. It could go up, to 1e16, or down, to 1e16-2.
This is not a bug, it's a feature.
I didn't answer your question. :-/

Adding a small number to 1e16-2 should round to nearest (1e16-2)
by default. So that's strange.

The following code compiled with gcc 4.2 (without optimization) gives
the same result:

#include <stdio.h>

int main (void)
{
        double a;

        while (1) {
                scanf("%lg", &a);
                printf("%a\n", a);
                printf("%a\n", a + 0.999);
                printf("%a\n", a + 0.9999);
        }
}
Jun 27 '08 #23

On May 22, 1:36 am, Henrique Dante de Almeida <hda...@gmail.com>
wrote:
On May 22, 1:26 am, Henrique Dante de Almeida <hda...@gmail.com>
wrote:
On May 21, 3:38 pm, Mark Dickinson <dicki...@gmail.com> wrote:
>>> a = 1e16-2.
>>> a
9999999999999998.0
>>> a+0.999   # gives expected result
9999999999999998.0
>>> a+0.9999  # doesn't round correctly.
10000000000000000.0
Notice that 1e16-1 doesn't exist in IEEE double precision:
1e16-2 == 0x1.1c37937e07fffp+53
1e16 == 0x1.1c37937e08p+53
(that is, the hex representation ends with "7fff", then goes to
"8000").
So, it's just rounding. It could go up, to 1e16, or down, to 1e16-2.
This is not a bug, it's a feature.

I didn't answer your question. :-/

Adding a small number to 1e16-2 should round to nearest (1e16-2)
by default. So that's strange.

The following code compiled with gcc 4.2 (without optimization) gives
the same result:

#include <stdio.h>

int main (void)
{
        double a;

        while (1) {
                scanf("%lg", &a);
                printf("%a\n", a);
                printf("%a\n", a + 0.999);
                printf("%a\n", a + 0.9999);
        }
}

However, compiling it with "-mfpmath=sse -msse2" it works (it doesn't
work with -msse alone).
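
For anyone trying to reproduce this, the two builds being compared would
look roughly like this (assuming the test program above is saved as
fptest.c, a name chosen here just for illustration):

gcc -o fptest fptest.c                        # default x87 code: double rounding can show up
gcc -msse2 -mfpmath=sse -o fptest fptest.c    # SSE2 arithmetic: strictly rounded IEEE doubles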
Jun 27 '08 #24

On May 22, 1:41 am, Henrique Dante de Almeida <hda...@gmail.com>
wrote:
Notice that 1e16-1 doesn't exist in IEEE double precision:
1e16-2 == 0x1.1c37937e07fffp+53
1e16 == 0x1.1c37937e08p+53
(that is, the hex representation ends with "7fff", then goes to
"8000").
So, it's just rounding. It could go up, to 1e16, or down, to 1e16-2.
This is not a bug, it's a feature.
I didn't answer your question. :-/
Adding a small number to 1e16-2 should round to nearest (1e16-2)
by default. So that's strange.
The following code compiled with gcc 4.2 (without optimization) gives
the same result:

#include <stdio.h>

int main (void)
{
        double a;

        while (1) {
                scanf("%lg", &a);
                printf("%a\n", a);
                printf("%a\n", a + 0.999);
                printf("%a\n", a + 0.9999);
        }
}

However, compiling it with "-mfpmath=sse -msse2" it works (it doesn't
work with -msse alone).
Finally (and the answer is obvious). 387 breaks the standards and
doesn't use IEEE double precision when requested to do so.

It reads the 64-bit double and converts it to an 80-bit long double.
In this case, 1e16-2 + 0.9999 == 1e16-1. When requested by the printf
call, this 80-bit number (1e16-1) is converted to a double, which
happens to be 1e16.
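
That double rounding can be checked with exact rational arithmetic (a
sketch using the fractions module, available from Python 2.6; the printed
values are approximate):

from fractions import Fraction

a = Fraction.from_float(1e16 - 2.0)     # exact value of the double 1e16-2
s = a + Fraction.from_float(0.9999)     # mathematically exact sum

print(float(s - a))                          # ~0.9999: the sum is nearer to 1e16-2,
print(float(Fraction.from_float(1e16) - s))  # ~1.0001: so one correct rounding gives 1e16-2

# Rounding first to 80-bit extended precision lands exactly on
# 9999999999999999.0, the midpoint between the two doubles, and the
# second rounding then resolves the tie upward to 1e16.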
Jun 27 '08 #25

Henrique Dante de Almeida <hd****@gmail.com> wrote:
Finally (and the answer is obvious). 387 breaks the standards and
doesn't use IEEE double precision when requested to do so.
Actually, the 80387 and the '87 FPU in all other IA-32 processors
do use IEEE 754 double-precision arithmetic when requested to do so.
The problem is that GCC doesn't request that it do so. It's a long
standing problem with GCC that will probably never be fixed. You can
work around this problem the way the Microsoft C/C++ compiler does
by requesting that the FPU always use double-precision arithmetic.
That way your answers are only wrong when you use long double or float.

Ross Ridge

--
l/ // Ross Ridge -- The Great HTMU
[oo][oo] rr****@csclub.uwaterloo.ca
-()-/()/ http://www.csclub.uwaterloo.ca/~rridge/
db //
Jun 27 '08 #26

Dave Parker wrote:
On May 21, 7:01 pm, Carl Banks <pavlovevide...@gmail.com> wrote:
>The crucial thing is not to slow down the calculations with useless
bells and whistles.

Are you running your simulations on a system that does or does not
support the "useless bell and whistle" of correct rounding? If not,
how do you prevent regression towards 0?

For example, one of the things that caused the PS3 to be in 3rd place
behind the Wii and XBox 360 is that to save a cycle or two, the PS3
cell core does not support rounding of single precision results -- it
truncates them towards 0. That led to horrible single-pixel errors in
the early demos I saw, which in turn helped contribute to game release
delays, which has turned into a major disappointment for Sony.
First of all - calling the PS3 technologically behind the Wii (which is on
par with the PS2 with respect to its computational power) is preposterous.

And that aside, I don't see what a discussion about single- or double-
precision floats that SHARE THE SAME ROUNDING BEHAVIOR - just at different
scales - has to do with automatically adapting calculations to higher-
precision numbers such as decimals or any other arbitrary-precision number
format.

Diez
Jun 27 '08 #27

On May 22, 1:14 am, bukzor <workithar...@gmail.com> wrote:
On May 21, 3:28 pm, Dave Parker <davepar...@flamingthunder.com> wrote:
On May 21, 4:21 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
Which is exactly what the python decimal module does.
Thank you (and Jerry Hill) for pointing that out. If I want to check
Flaming Thunder's results against an independent program, I'll know to
use Python with the decimal module.

Utterly shameless.
You may find a more appreciative (and less antagonised) audience for
your language in comp.lang.cobol
Jun 27 '08 #28

This person who started this thread posted the calculations showing
that Python was doing the wrong thing, and filed a bug report on it.

If someone pointed out a similar problem in Flaming Thunder, I would
agree that Flaming Thunder was doing the wrong thing.

I would fix the problem a lot faster, though, within hours if
possible. Apparently this particular bug has been lurking on Bugzilla
since 2003: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323
I wonder how you would accomplish that, given that there is no fix.

http://hal.archives-ouvertes.fr/hal-00128124

Diez
Jun 27 '08 #29

On May 22, 5:09 am, Ross Ridge <rri...@caffeine.csclub.uwaterloo.ca>
wrote:
Henrique Dante de Almeida <hda...@gmail.com> wrote:
Finally (and the answer is obvious). 387 breaks the standards and
doesn't use IEEE double precision when requested to do so.

Actually, the 80387 and the '87 FPU in all other IA-32 processors
do use IEEE 754 double-precision arithmetic when requested to do so.
The problem is that GCC doesn't request that it do so. It's a long
standing problem with GCC that will probably never be fixed. You can
work around this problem the way the Microsoft C/C++ compiler does
by requesting that the FPU always use double-precision arithmetic.
Even this isn't a perfect solution, though: for one thing, you can only
change the precision used for rounding, but not the exponent range,
which remains the same as for extended precision. Which means you
still don't get strict IEEE 754 compliance when working with very
large or very small numbers. In practice, I guess it's fairly
easy to avoid the extremes of the exponent range, so this seems like
a workable fix.

More seriously, it looks as though libm (and hence the Python
math module) might need the extended precision: on my machine
there's a line in /usr/include/fpu_control.h that says

#define _FPU_EXTENDED 0x300 /* libm requires double extended
precision. */

Mark
Jun 27 '08 #30

On May 22, 6:57 am, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
I wonder how you would accomplish that, given that there is no fix.

http://hal.archives-ouvertes.fr/hal-00128124

Diez
For anyone still following the discussion, I highly
recommend the above mentioned paper; I found it
extremely helpful.

http://bugs.python.org/issue2937

Mark
Jun 27 '08 #31

On May 22, 6:09 am, Ross Ridge <rri...@caffeine.csclub.uwaterloo.ca>
wrote:
Henrique Dante de Almeida <hda...@gmail.com> wrote:
Finally (and the answer is obvious). 387 breaks the standards and
doesn't use IEEE double precision when requested to do so.

Actually, the 80387 and the '87 FPU in all other IA-32 processors
do use IEEE 754 double-precision arithmetic when requested to do so.
True. :-/

It seems that it uses a flag to control the precision. So, a
conformant implementation would require saving/restoring the flag
between calls. No wonder gcc doesn't try to do this.

There are two possible options for python, in that case:

- Leave it as it is. The python language states that floating point
operations are based on the underlying C implementation. Also, the
relative error in this case is around 1e-16, which is smaller than the
expected error for IEEE doubles (~2e-16), so the result is non-
standard, but acceptable (in the general case, I believe the rounding
error could be marginally bigger than the expected error in extreme
cases, though).

- Use long doubles for architectures that don't support SSE2 and use
SSE2 IEEE doubles for architectures that support it.

A third option would be for python to set the x87 precision to double
and switch it back to extended precision when calling C code (that
would be too much work for nothing).
Jun 27 '08 #32
