
how long is double

f
I have this

double sum, a, b, c;
sum = a + b + c;
printf("%.20f = %.20f, %.20f, %.20f", sum, a, b, c);

I found that the debug version and release version of the same code
give me different results. I am using VC++ 6.0.

In the debug version, the printout is:
-12.25938471938358500000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

In the release version, the printout is:
-12.25938471938358300000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

The above sum = a + b + c is just part of my computation. I found
that my whole computation crashed in the debug version because some
number became zero and another number was divided by it. But this did
not happen in the release version.

Why?

Thanks,

ff

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
Jul 22 '05 #1
"f" <ff****@yahoo.com> wrote in message
news:8f*************************@posting.google.com...
I have this

double sum, a, b, c;
sum = a + b + c;
You used a, b, and c without initializing them. Was this your intent?
printf("%.20f = %.20f, %.20f, %.20f", sum, a, b, c);

I found that the debug version and release version of the same code
give me different results. I am using VC++ 6.0.

In the debug version, the printout is:
-12.25938471938358500000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

In the release version, the printout is:
-12.25938471938358300000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,
Looks the same to me. And where is that trailing comma coming from?

The above sum = a + b + c is just part of my computation. I found
that my whole computation crashed in the debug version because some
number became zero and another number was divided by it. But this did
not happen in the release version.
Perhaps you can coax The Great Carsoni out of retirement. Or post some code.

Why?

Thanks,

ff



Your massive crossposting is very bad usenet manners.

--
Cy
http://home.rochester.rr.com/cyhome/
Jul 22 '05 #2

"f" <ff****@yahoo.com> wrote in message
news:8f*************************@posting.google.com...
I have this

double sum, a, b, c;
sum = a + b + c;
printf("%.20f = %.20f, %.20f, %.20f", sum, a, b, c);

I found that the debug version and release version of the same code
give me different results. I am using VC++ 6.0.

In the debug version, the printout is:
-12.25938471938358500000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

In the release version, the printout is:
-12.25938471938358300000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

The above sum = a + b + c is just part of my computation. I found
that my whole computation crashed in the debug version because some
number became zero and another number was divided by it. But this did
not happen in the release version.

Why?


Apparently the discrepancy is between -12.25938471938358500000
and -12.25938471938358300000, a difference of 2 in the 17th significant
digit.

I'm guessing--and this is only a guess--that in production mode, the
compiler tells the machine to compute a+b+c by using its extended-precision
intermediate register (80 bits, if I remember correctly), and in debug mode
it actually truncates the intermediate result to 64 bits. That might well
give a discrepancy of about this degree. I'd have to look at the binary
representations of the numbers to be sure, and I'm too lazy right now :-) --
but if your computation is so brittle that such a small deviation crashes
it, you may want to rethink it.
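
A minimal sketch of that guess (assuming an x87 target; the literals
below are the thread's printed values rounded to double, and whether
the two results actually differ depends on the compiler and the FPU
precision mode):

#include <cstdio>

int main()
{
    // The values from the printout above, rounded to double precision.
    double a = -11.435963883996305;
    double b = -0.075916661136076313;
    double c = -0.74750417425120252;

    // The intermediate a+b may stay in an 80-bit x87 register here.
    double in_register = a + b + c;

    // 'volatile' forces the intermediate out to a 64-bit double in
    // memory, mimicking the extra stores of an unoptimized debug build.
    volatile double spilled = a + b;
    double via_memory = spilled + c;

    std::printf("%.20f\n%.20f\n", in_register, via_memory);
    return 0;
}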
Jul 22 '05 #3
On 3 Jan 2004 21:26:11 -0500, ff****@yahoo.com (f) wrote in
comp.lang.c++:
I have this

double sum, a, b, c;
sum = a + b + c;
printf("%.20f = %.20f, %.20f, %.20f", sum, a, b, c);

I found that the debug version and release version of the same code
give me different results. I am using VC++ 6.0.

In the debug version, the printout is:
-12.25938471938358500000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

In the release version, the printout is:
-12.25938471938358300000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

The above sum = a + b + c is just part of my computation. I found
that my whole computation crashed in the debug version because some
number became zero and another number was divided by it. But this did
not happen in the release version.

Why?

Thanks,

ff


If you value accuracy in floating point calculations, do not use
Visual C++ under any circumstances. When Microsoft changed from
16-bit to 32-bit operating systems and compilers, they changed their
floating point code to provide compatible results across all the
different processors for which they would provide Windows NT, most of
which never materialized.

In particular, they made these two changes:

1. In their 16-bit compilers, the long double type used the full 80
bit extended precision of the Intel floating point hardware. In their
32-bit compilers, long double is the same as double and uses the 64
bit double precision mode of the floating point hardware. There is no
way in Visual C++ at all to utilize the higher precision mode built
into the hardware.

2. In most cases, their math code sets the floating point hardware
control bits to limit precision to 64 bits, instead of using the 80
bit format normally used internally by the FPU. That means that you
lose precision on calculations entirely inside the FPU, and not just
when you store values to RAM.

Microsoft made the decision years ago that programmers were not
trustworthy to decide for themselves whether they were better off with
the highest precision the Intel FPU can provide, or they should
sacrifice performance and accuracy for compatible floating point
results on other processors that nobody actually bought Windows NT on.
They made the decision for you, and took away your control over the
precision of your results.

If you want the maximum precision and accuracy that the Intel FPU is
capable of providing, you have to give up on Visual C++ and switch to
another compiler, such as Borland or GNU, that gives you extended
precision long double and doesn't truncate the floating point control
bits.
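
A partial workaround on Visual C++ is to put the precision-control
bits back yourself. A minimal sketch, assuming 32-bit x86 and the
_controlfp interface from <float.h> (note that point 1 above still
stands - long double remains 64 bits - and that library calls may set
the control word back):

#include <float.h>
#include <cstdio>

int main()
{
    // Read the current FPU control word without changing anything.
    unsigned int old = _controlfp(0, 0);

    // Ask the x87 for a full 64-bit significand (extended precision).
    _controlfp(_PC_64, _MCW_PC);

    // ... floating point work at full internal precision ...

    // Restore the previous precision-control setting.
    _controlfp(old, _MCW_PC);
    return 0;
}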

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~a...FAQ-acllc.html


Jul 22 '05 #4


f wrote:
I have this

double sum, a, b, c;
sum = a + b + c;
printf("%.20f = %.20f, %.20f, %.20f", sum, a, b, c);

I found that the debug version and release version of the same code
give me different results. I am using VC++ 6.0.


Your code yields undefined behavior, because a, b and c have never
been initialized before being used for computations / printing.
So the outcome of the program is undefined, and thus anything might
happen - different output between the debug and release versions is
just one possibility.
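
For comparison, a version of the fragment with the variables
initialized (borrowing the values from the original printout) has
well-defined behaviour, apart from the rounding questions discussed
elsewhere in this thread:

#include <cstdio>

int main()
{
    // Initialized with the values from the original printout.
    double a = -11.435963883996305;
    double b = -0.075916661136076313;
    double c = -0.74750417425120252;
    double sum = a + b + c;
    std::printf("%.20f = %.20f, %.20f, %.20f\n", sum, a, b, c);
    return 0;
}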
regards,

Thomas


Jul 22 '05 #5
In message <8f*************************@posting.google.com>, f
<ff****@yahoo.com> writes
I have this

double sum, a, b, c;
sum = a + b + c;
printf("%.20f = %.20f, %.20f, %.20f", sum, a, b, c);

I found that the debug version and release version of the same code
give me different results. I am using VC++ 6.0.

In the debug version, the printout is:
-12.25938471938358500000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

In the release version, the printout is:
-12.25938471938358300000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

The above sum = a + b + c is just part of my computation. I found
that my whole computation crashed in the debug version because some
number became zero and another number was divided by it. But this did
not happen in the release version.

Why?


FP arithmetic is very sensitive to such things as rounding mode and
order of evaluation. On x86 architectures there are considerable
differences between calculations done entirely in register and ones
where the intermediate results are written back to memory. My guess is
that in debug mode more intermediate results are being written back and
thereby are being stripped of guard digits.

For example your problem with '0' can be the consequence of subtracting
two values that are almost equal and are actually 'equal' within the
limits of the precision supported by memory values (which often have
lower precision than register values). This is an interesting case
because it means that the heavily optimised (minimum of writing back)
release version works as naively expected while the debug version that
adheres strictly to the semantics of the abstract C++ machine fails.
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
or http://www.robinton.demon.co.uk
Happy Xmas, Hanukkah, Yuletide, Winter/Summer Solstice to all.
Jul 22 '05 #6
"Jack Klein" <ja*******@spamcop.net> wrote in message
news:6n********************************@4ax.com...
Microsoft made the decision years ago that programmers were not
trustworthy to decide for themselves whether they were better off with
the highest precision the Intel FPU can provide, or they should
sacrifice performance and accuracy for compatible floating point
results on other processors that nobody actually bought Windows NT on.
They made the decision for you, and took away your control over the
precision of your results.

If you want the maximum precision and accuracy that the Intel FPU is
capable of providing, you have to give up on Visual C++ and switch to
another compiler, such as Borland or GNU, that gives you extended
precision long double and doesn't truncate the floating point control
bits.


Well, yes, but... You're correct that Microsoft has settled on a
conservative floating-point model. The reasons you attribute for
the choice are doubtless uncharitable, at best. Moving to a compiler
that supports 80-bit arithmetic does not, however, ensure superior
floating-point results. We've found, for example:

-- that the freely available Borland compiler computes long double
floating-point literals only to double precision, thereby making
a hash of our long double math functions

-- that two popular packagings of GNU C++ on Windows, Mingw and
Cygwin, either fail to set the FPU mode optimally or let it flap
in the breeze, thereby making a hash of some double and all long
double calculations

The OP was surprised at a change of the least-significant bit in
a 53-bit result. Old floating-point hands know that the slightest
rearrangement of operations can yield such a difference. This does
not signal the End of Western Civilization as We Know It. Nor
will applying the kneejerk Anything But Microsoft fix stave off
that inevitable end. If you want good floating-point results,
you have to:

a) test the quality of your environment,

b) know how to fix it if it's broken, and

c) know what's good when you see it.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
Jul 22 '05 #7
"Jack Klein" <ja*******@spamcop.net> wrote in message
news:6n********************************@4ax.com...
If you want the maximum precision and accuracy that the Intel FPU is
capable of providing, you have to give up on Visual C++ and switch to
another compiler, such as Borland or GNU, that gives you extended
precision long double and doesn't truncate the floating point control
bits.


What about intel's compiler? :-)

I do think however that the code is merely an uninitialized variable
problem.

Jul 22 '05 #8

"f" <ff****@yahoo.com> wrote in message
news:8f*************************@posting.google.com...
I have this

double sum, a, b, c;
sum = a + b + c;
printf("%.20f = %.20f, %.20f, %.20f", sum, a, b, c);


Most likely the difference is that the debug mode does some fstores to
put things back into memory locations. The floating point registers on
the Pentium are really 80 bits wide.
Jul 22 '05 #9
In message <bt**********@news1.tilbu1.nb.home.nl>, Servé Lau
<la*****@home.nl> writes
What about intel's compiler? :-)

I do think however that the code is merely an uninitialized variable
problem.


I very much doubt it; were that the case, the results would have been
totally different. Much more likely that the OP snipped out too much
code. This is reinforced by the rest of his article.
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
or http://www.robinton.demon.co.uk
Happy Xmas, Hanukkah, Yuletide, Winter/Summer Solstice to all.
Jul 22 '05 #10
"Ron Natalie" <ro*@sensor.com> wrote in message
news:3f***********************@news.newshosting.com...
double sum, a, b, c;
sum = a + b + c;
printf("%.20f = %.20f, %.20f, %.20f", sum, a, b, c);

Most likely the difference is that the debug mode does some fstores to
put things back into memory locations. The floating point registers on
the Pentium are really 80 bits wide.


IMHO a more likely explanation can be that the 3 numbers in a + b + c are
added in a different order. And that is a possible source of difference at
the last bit of the precision.

We have the builtin op + working on builtin types -- the compiler is allowed
to rearrange the operations it knows to be associative and commutative, or
even precalculate the result if able, isn't it?

I recall an old discussion where the compile time environment was different
from the runtime environment, including the fp precision, and the results
differed far more.

Paul

Jul 22 '05 #11
In message <3f******@andromeda.datanet.hu>, Balog Pal <pa**@lib.hu>
writes
IMHO a more likely explanation can be that the 3 numbers in a + b + c are
added in a different order. And that is a possible source of difference at
the last bit of the precision.


I am not sure that the compiler has that much freedom when the order
produces different results. This is not the same as the requirements re
order of evaluation of sub-expressions (i.e. that there is no
requirement)
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
or http://www.robinton.demon.co.uk
Happy Xmas, Hanukkah, Yuletide, Winter/Summer Solstice to all.
Jul 22 '05 #12
On Mon, 05 Jan 2004 11:18:40 -0500, Francis Glassborow wrote:

[ f'up comp.lang.c++ ]
In message <3f******@andromeda.datanet.hu>, Balog Pal <pa**@lib.hu>
writes
IMHO a more likely explanation can be that the 3 numbers in a + b + c are
added in a different order. And that is a possible source of difference at
the last bit of the precision.


I am not sure that the compiler has that much freedom when the order
produces different results. This is not the same as the requirements re
order of evaluation of sub-expressions (i.e. that there is no
requirement)


I think so. If b is almost -c and a is much smaller, more precision is
lost if a is added to b first than when b is added to c first. There is
nothing the compiler can do about this. Whether this order changes due
to optimization switches may be seen as a QOI issue, but even that is a
bridge too far for me.
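
A tiny worked example of that case (assuming strict IEEE double
evaluation; 1e-16 is below half an ulp of 1.0, so it is absorbed when
added to 1.0 first - keeping intermediates in an 80-bit x87 register
can change the left-hand result, which is exactly the effect under
discussion):

#include <cstdio>

int main()
{
    double a = 1e-16, b = 1.0, c = -1.0; // b is almost -c, a much smaller

    double left  = (a + b) + c;  // a is absorbed into b, then b and c cancel
    double right = a + (b + c);  // b and c cancel first, so a survives

    std::printf("%g\n%g\n", left, right); // 0 and 1e-16 under strict doubles
    return 0;
}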

M4
Jul 22 '05 #13
In message <pa****************************@remove.this.part.rtij.nl>,
Martijn Lievaart <m@remove.this.part.rtij.nl> writes
On Mon, 05 Jan 2004 11:18:40 -0500, Francis Glassborow wrote:

[ f'up comp.lang.c++ ]
In message <3f******@andromeda.datanet.hu>, Balog Pal <pa**@lib.hu>
writes
IMHO a more likely explanation can be that the 3 numbers in a + b + c are
added in a different order. And that is a possible source of difference at
the last bit of the precision.


I am not sure that the compiler has that much freedom when the order
produces different results. This is not the same as the requirements re
order of evaluation of sub-expressions (i.e. that there is no
requirement)


I think so. If b is almost -c and a is much smaller, more precision is
lost if a is added to b first than when b is added to c first. There is
nothing the compiler can do about this. Whether this order changes due
to optimization switches may be seen as a QOI issue, but even that is a
bridge too far for me.


The Standard requires that additions are evaluated on a left to right
basis. There are no alternatives. In the case of integer operations the
compiler can usually get away with the 'as if' rule and reorder them but
in the case of floating point calculations where re-ordering produces a
different result it manifestly cannot do that and conform.

Want to quote the Standard to support a contention that re-ordering is
allowed (other than by the as-if rule)?
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
or http://www.robinton.demon.co.uk
Happy Xmas, Hanukkah, Yuletide, Winter/Summer Solstice to all.
Jul 22 '05 #14

"Francis Glassborow" <fr*****@robinton.demon.co.uk> wrote in message news:Ej**************@robinton.demon.co.uk...
The Standard requires that additions are evaluated on a left to right
basis. There are no alternatives. In the case of integer operations the
compiler can usually get away with the 'as if' rule and reorder them but
in the case of floating point calculations where re-ordering produces a
different result it manifestly cannot do that and conform.


The standard says NO SUCH THING. With little exception
the order of operations in C++ is unspecified. Do not confuse
parsing association/precedence with evaluation.

Jul 22 '05 #15
On Mon, 05 Jan 2004 18:00:17 -0500, Ron Natalie wrote:

"Francis Glassborow" <fr*****@robinton.demon.co.uk> wrote in message
news:Ej**************@robinton.demon.co.uk...
The Standard requires that additions are evaluated on a left to right
basis. There are no alternatives. In the case of integer operations the
compiler can usually get away with the 'as if' rule and reorder them
but in the case of floating point calculations where re-ordering
produces a different result it manifestly cannot do that and conform.


The standard says NO SUCH THING. With little exception the order of
operations in C++ is unspecified. Do not confuse parsing
association/precedence with evaluation.


I couldn't find it either. Both you and Francis normally know what you are
talking about, so I'll wait for Francis' answer before deciding for sure.

M4
Jul 22 '05 #16
"Francis Glassborow" <fr*****@robinton.demon.co.uk> wrote in message
news:qP**************@robinton.demon.co.uk...
IMHO a more likely explanation can be that the 3 numbers in a + b + c are
added in a different order. And that is a possible source of difference at
the last bit of the precision.


I am not sure that the compiler has that much freedom when the order
produces different results. This is not the same as the requirements re
order of evaluation of sub-expressions (i.e. that there is no
requirement)


Suppose I have an expression like

double d = 1.1 + x + 2.2 + 3.3; // we have double x in scope

here the compiler must generate code that will do 3 separate additions in
that order? And emitting code equivalent to the expression

double d = x + 6.6;

is not really allowed, but in practice we let the compiler do it
anyway with some switch?

IIRC the question for this thread used MSVC, which has the /Op [Improve Float
Consistency] option - which the original poster likely didn't use
consistently.

Paul


Jul 22 '05 #17
According to the Note in the Standard (1.9.15, pg 7, PDF Pg 33):
"operators can be regrouped according to the usual mathematical
rules....[caveat about machines in which overflows produce an
exception]...the above expression statement can be rewritten by the
implementation in any of the above ways because the same result will occur."

I guess the key point is the meaning of "same result". I was told
(pre-C89) that a C compiler could assume infinite precision when reordering
floating point expressions, something not ruled out by that statement, and
not addressed (as far as I could see) elsewhere in the Standard.

--
Truth,
James Curran
Home: www.noveltheory.com Work: www.njtheater.com
Blog: www.honestillusion.com Day Job: www.partsearch.com
(note new day job!)

"Francis Glassborow" <fr*****@robinton.demon.co.uk> wrote in message
news:qP**************@robinton.demon.co.uk...

I am not sure that the compiler has that much freedom when the order
produces different results. This is not the same as the requirements re
order of evaluation of sub-expressions (i.e. that there is no
requirement)


Jul 22 '05 #18
"P.J. Plauger" <pj*@dinkumware.com> writes:
If you want good floating-point results, you have to:

a) test the quality of your environment,


Is there a good test suite that will work for this? (Both for
detecting hardware failures/bugs and C++ compiler stupidities.)

Not that a test program would eliminate the need to know what you're
doing, of course.

--
Ed Avis <ed@membled.com>

Jul 22 '05 #19

"

I couldn't find it either. Both you and Francis normally know what you are
talking about, so I'll wait for Francis' answer before deciding for sure.

The fourth paragraph of section 5 of the C++ standard:

Except where noted, the order of evaluation of operands of individual
operators and subexpressions of individual expressions, and the order
in which side effects take place, is unspecified.

Jul 22 '05 #20
On Mon, 05 Jan 2004 23:15:15 -0500, Ron Natalie wrote:

"

I couldn't find it either. Both you and Francis normally know what you
are talking about, so I'll wait for Francis' answer before deciding for
sure.

The fourth paragraph of section 5 of the C++ standard:

Except where noted, the order of evaluation of operands of individual
operators and subexpressions of individual expressions, and the order
in which side effects take place, is unspecified.


Got it. Thx.

M4
Jul 22 '05 #21

Suppose I have an expression like

double d = 1.1 + x + 2.2 + 3.3; // we have double x in scope

here the compiler must generate code that will do 3 separate additions in
that order? And emitting code equivalent to the expression

double d = x + 6.6;


The compiler is free to reorder the expression. If you want to enforce ordering
you have to introduce sequence points in the calculation.
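
For instance, splitting the expression across statements pins down the
(a+b)+c grouping (a sketch; on x87 hardware the stored intermediate may
still carry extended precision unless it is also forced to memory, e.g.
with volatile):

// Sum three doubles with the grouping fixed by statement boundaries.
double sum3(double a, double b, double c)
{
    double partial = a + b;  // the first addition completes here
    return partial + c;      // then c is added to the stored partial
}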
Jul 22 '05 #22

"James Curran" <Ja*********@mvps.org> wrote in message news:bt********@netlab.cs.rpi.edu...
According to the Note in the Standard (1.9.15, pg 7, PDF Pg 33):
"operators can be regrouped according to the usual mathematical
rules....[caveat about machines in which overflows produce an
exception]...the above expression statement can be rewritten by the
implementation in any of the above ways because the same result will occur."

First, Notes are non-normative.
Second, the regrouping it's talking about isn't just the reordering of the
order of evaluation. The operative description is in the beginning of
Section 5 (4th paragraph).
Jul 22 '05 #23
> IMHO a more likely explanation can be that the 3 numbers in a + b + c are
added in a different order. And that is a possible source of difference at
the last bit of the precision.

Good catch. This is exactly what happens in the case of a test app I
compiled using VC 6.0 in Debug & Release.

Cumulative results (of summation) in both cases are as follows (note
that the debug build starts from a and accumulates left to right, while
the release build starts from c):

debug:
ST0 = -1.14359638839963047e+0001
ST0 = -1.15118805451323815e+0001
ST0 = -1.22593847193835845e+0001 <= end result

release:
ST0 = -7.47504174251202524e-0001
ST0 = -8.23420835387278838e-0001
ST0 = -1.22593847193835827e+0001 <= end result
MK
Jul 22 '05 #24
In article <l1************@budvar.future-i.net>,
Ed Avis <ed@membled.com> wrote:
"P.J. Plauger" <pj*@dinkumware.com> writes:
If you want good floating-point results, you have to:

a) test the quality of your environment,


Is there a good test suite that will work for this? (Both for
detecting hardware failures/bugs and C++ compiler stupidities.)


William Kahan's well-known Paranoia program attempts to deduce,
in a portable way, the basic properties of floating-point arithmetic
on a given implementation. Versions are available for K&R C, Fortran 66,
and others at www.netlib.org/paranoia/. Though archaic, the Fortran
version should be compatible with modern Fortran compilers; the C
version may require modification to be compatible with ANSI C or C++.
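
Short of running Paranoia, a quick first probe of what an implementation
claims about its floating point types can be printed from <cfloat>; this
reports compile-time claims only, not whether the runtime FPU mode
actually matches them:

#include <cfloat>
#include <cstdio>

int main()
{
    std::printf("sizeof(long double) = %u bytes\n",
                (unsigned)sizeof(long double));
    std::printf("DBL_DIG = %d, DBL_MANT_DIG = %d\n",
                DBL_DIG, DBL_MANT_DIG);
    std::printf("LDBL_DIG = %d, LDBL_MANT_DIG = %d\n",
                LDBL_DIG, LDBL_MANT_DIG);
    return 0;
}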

--Eric

Jul 22 '05 #25
Martijn Lievaart <m@remove.this.part.rtij.nl> wrote in message
news:<pa****************************@remove.this.part.rtij.nl>...
On Mon, 05 Jan 2004 11:18:40 -0500, Francis Glassborow wrote: [ f'up comp.lang.c++ ]
In message <3f******@andromeda.datanet.hu>, Balog Pal <pa**@lib.hu>
writes
IMHO a more likely explanation can be that the 3 numbers in a + b +
c are added in a different order. And that is a possible source of
difference at the last bit of the precision.
I am not sure that the compiler has that much freedom when the order
produces different results. This is not the same as the requirements
re order of evaluation of sub-expressions (i.e. that there is no
requirement)
I think so. If b is almost -c and a is much smaller, more precision is
lost if a is added to b first than when b is added to c first. There
is nothing the compiler can do about this. Whether this order changes
due to optimization switches may be seen as a QOI issue, but even that
is a bridge too far for me.


I think that this was Francis' point. According to the standard, a+b+c
is (a+b)+c. The compiler is free to rearrange this any way it pleases,
as long as the results are the same as if it had done (a+b)+c. On most
machines, with integer arithmetic, there is no problem. On no machine
that I know of, however, can the compiler legally rearrange floating
point, unless it absolutely knows the values involved.

There was quite a lot of discussion about this when the C standard was
first being written. K&R explicitly allowed rearrangement, even when it
would result in different results. In the end, the C standard decided
not to allow this.

--
James Kanze GABI Software mailto:ka***@gabi-soft.fr
Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
11 rue de Rambouillet, 78460 Chevreuse, France, +33 (0)1 30 23 45 16

Jul 22 '05 #26
"James Curran" <Ja*********@mvps.org> wrote in message
news:<bt********@netlab.cs.rpi.edu>...
According to the Note in the Standard (1.9.15, pg 7, PDF Pg 33):
"operators can be regrouped according to the usual mathematical
rules....[caveat about machines in which overflows produce an
exception]...the above expression statement can be rewritten by the
implementation in any of the above ways because the same result will
occur."

I guess the key point is the meaning of "same result". I was told
(pre-C89) that a C compiler could assume infinite precision when
reordering floating point expressions, something not ruled out by that
statement, and not addressed (as far as I could see) elsewhere in the
Standard.


In K&R 1, there was an explicit license for the compiler to rearrange
expressions according to the usual laws of algebra. Thus, without
considering possible overflow or rounding errors. The authors of the C
standard removed this liberty, intentionally.

I suppose that a compiler writer could wriggle out on the grounds that
the standard doesn't require a minimum precision for floating point
arithmetic. On the other hand, the considerations of overflow would
still probably hold -- while most hardware will give the correct results
for integer arithmetic, provided they are representable, even if there
was an intermediate overflow, this is not generally the case for
floating point.

--
James Kanze GABI Software mailto:ka***@gabi-soft.fr
Conseils en informatique orientée objet/ http://www.gabi-soft.fr
Beratung in objektorientierter Datenverarbeitung
11 rue de Rambouillet, 78460 Chevreuse, France, +33 (0)1 30 23 45 16

Jul 22 '05 #27

"Francis Glassborow" <fr*****@robinton.demon.co.uk> wrote in message
news:5D**************@robinton.demon.co.uk...
In message <8f*************************@posting.google.com> , f
<ff****@yahoo.com> writes
I have this

double sum, a, b, c;
sum = a + b + c;
printf("%.20f = %.20f, %.20f, %.20f", sum, a, b, c);

I found that the debug version and release version of the same code
give me different results. I am using VC++ 6.0.

In the debug version, the printout is:
-12.25938471938358500000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

In the release version, the printout is:
-12.25938471938358300000 = -11.43596388399630500000,
-0.07591666113607631300, -0.74750417425120252000,

The above sum = a + b + c is just part of my computation. I found
that my whole computation crashed in the debug version because some
number became zero and another number was divided by it. But this did
not happen in the release version.

Why?
FP arithmetic is very sensitive to such things as rounding mode and
order of evaluation. On x86 architectures there are considerable
differences between calculations done entirely in register and ones
where the intermediate results are written back to memory. My guess is
that in debug mode more intermediate results are being written back and
thereby are being stripped of guard digits.


That is correct.
For example your problem with '0' can be the consequence of subtracting
two values that are almost equal and are actually 'equal' within the
limits of the precision supported by memory values (which often have
lower precision than register values). This is an interesting case
because it means that the heavily optimised (minimum of writing back)
release version works as naively expected while the debug version that
adheres strictly to the semantics of the abstract C++ machine fails.


I ran into this problem in a project I did several years ago; the release
build produced slightly different results than the debug build. The
Microsoft compiler has an 'Improve Float Consistency' option (/Op) which
fixes this problem. Unfortunately enabling this option slows down floating
point intensive code quite a bit.

--
Peter van Merkerk
peter.van.merkerk(at)dse.nl

Jul 22 '05 #28
On Tue, 06 Jan 2004 06:35:26 -0500, kanze wrote:
I think that this was Francis' point. According to the standard, a+b+c
is (a+b)+c. The compiler is free to rearrange this any way it pleases,
as long as the results are the same as if it had done (a+b)+c. On most
machines, with integer arithmetic, there is no problem. On no machine
that I know of, however, can the compiler legally rearrange floating
point, unless it absolutely knows the values involved.

There was quite a lot of discussion about this when the C standard was
first being written. K&R explicitly allowed rearrangement, even when it
would result in different results. In the end, the C standard decided
not to allow this.


Then this seems to be a place where C and C++ differ; see the answer and
the quote from the C++ standard posted by Ron Natalie.

Anyone who can confirm this?

M4
Jul 22 '05 #29
"Ed Avis" <ed@membled.com> wrote in message
news:l1************@budvar.future-i.net...
"P.J. Plauger" <pj*@dinkumware.com> writes:
If you want good floating-point results, you have to:

a) test the quality of your environment,


Is there a good test suite that will work for this? (Both for
detecting hardware failures/bugs and C++ compiler stupidities.)

Not that a test program would eliminate the need to know what you're
doing, of course.


Fred Tydeman has an incredibly thorough test suite for floating-point
support. That's what we've used to hunt down the most subtle problems,
both in our own libraries and in the environments we build it on.
We have a product called a Quick Proofer which is way less thorough,
but still does a remarkably good job of highlighting FPP lapses.
And we're developing a very powerful set of math function tests in
house that we're not in a hurry to sell.

As for free stuff, there's bugger all out there that's worth the
bother.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com

Jul 22 '05 #30
In message <3f***********************@news.newshosting.com>, Ron Natalie
<ro*@sensor.com> writes
Suppose I have an expression like

double d = 1.1 + x + 2.2 + 3.3; // we have double x in scope

here the compiler must generate code that will do 3 separate additions in
that order? And emitting code equivalent to the expression

double d = x + 6.6;


The compiler is free to reorder the expression. If you want to enforce ordering
you have to introduce sequence points in the calculation.


There are two places in the Standard that might be considered relevant:

1) 1.9 para 15 (which is a note and so non-normative but can give a clue
as to intent)

There it requires that the operators are really associative and
commutative (and a non-normative footnote to the non-normative note
adds that this is never considered to be the case for overloaded
operators).

It then proceeds to give source code examples re int values and fails to
give any guidance in the case of floating point.

When I combine the above with C's rules (which explicitly forbid
re-ordering) I come to the conclusion that fp arithmetic operators are
not 'really' associative and commutative and so the limited licence to
regroup (note not re-order) does not apply. But even if it did the best
that could be achieved with the above is:

double d = 1.1 + x + (2.2 + 3.3);
which the compiler could transform to:

double d = 1.1 + x + 5.5;

Had the writers of that section meant re-order they could have said so.

2) Section 5 para 4 is actually no more helpful. Historically this
formulation has not been taken as a licence for re-ordering successive
applications of the same operator other than where explicit licence is
granted (or can be deduced from the as-if rule) if op is a left-to-right
associative operator:

a op b op c;

must be evaluated as:
(a op b) op c;
and not as:
a op (b op c);

However:

a op1 b op2 c op1 d;

with op1 having a higher precedence than op2 allows for (a op1 b) and (c
op1 d) to be evaluated in either order. That has been the normal
interpretation of the rule with regard to order of evaluation of
sub-expressions.

It is probably safer with modern high optimisation implementations to
write code with sequence points enforcing intent but I can see nothing
in the C++ Standard that allows C++ to treat floating point expressions
differently to the way that a C compiler is required to.
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
or http://www.robinton.demon.co.uk
Happy Xmas, Hanukkah, Yuletide, Winter/Summer Solstice to all.
Jul 22 '05 #31
In message <pa****************************@remove.this.part.rtij.nl>,
Martijn Lievaart <m@remove.this.part.rtij.nl> writes
On Tue, 06 Jan 2004 06:35:26 -0500, kanze wrote:
I think that this was Francis' point. According to the standard, a+b+c
is (a+b)+c. The compiler is free to rearrange this any way it pleases,
as long as the results are the same as if it had done (a+b)+c. On most
machines, with integer arithmetic, there is no problem. On no machine
that I know of, however, can the compiler legally rearrange floating
point, unless it absolutely knows the values involved.

There was quite a lot of discussion about this when the C standard was
first being written. K&R explicitly allowed rearrangement, even when it
would result in different results. In the end, the C standard decided
not to allow this.


Then this seems a place where C and C++ differ, see the answer and quote
from the C++ standard from Ron Natalie.

Anyone who can confirm this?


I do not think so, I think C++ in the non-normative note [1.9 para 15]
was attempting to make current practice explicit -- i.e. regrouping.
Clause 5 para 4 largely paraphrases 6.5 paras 1-3 of the current C
Standard (or 6.3 in the old C Standard)

I do not think we intended any extra licence in C++ other than that
granted in C.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
or http://www.robinton.demon.co.uk
Happy Xmas, Hanukkah, Yuletide, Winter/Summer Solstice to all.
Jul 22 '05 #32
On 7 Jan 2004 03:50:05 -0500, Martijn Lievaart
<m@remove.this.part.rtij.nl> wrote:
On Tue, 06 Jan 2004 06:35:26 -0500, kanze wrote:
> I think that this was Francis' point. According to the standard, a+b+c
> is (a+b)+c. The compiler is free to rearrange this any way it pleases,
> as long as the results are the same as if it had done (a+b)+c. On most
> machines, with integer arithmetic, there is no problem. On no machine
> that I know of, however, can the compiler legally rearrange floating
> point, unless it absolutely knows the values involved.
>
> There was quite a lot of discussion about this when the C standard was
> first being written. K&R explicitly allowed rearrangement, even when it
> would result in different results. In the end, the C standard decided
> not to allow this.

Then this seems to be a place where C and C++ differ; see the answer and
the quote from the C++ standard posted by Ron Natalie.

Anyone who can confirm this?


His quote is about order of evaluation. It relates to a*b+c*d, where
either of a*b or c*d may be evaluated first. In the code under
discussion, a+b+c, it is (a+b)+c. c may be evaluated before a+b, but
changing it to a+(b+c) or (a+c)+b is only allowed by the as-if rule.
The non-normative note in 1.9/15 amplifies this. It seems that the
C++ rules are the same as the C rules.

I have no idea on what Ron based his other post claiming that an
expression could be reordered giving different results. There is no
support in either of the cited passages.

John

Jul 22 '05 #33
