Bytes | Software Development & Data Engineering Community

Trouble with integer floating point conversion

Hi all!

The following piece of code has (for me) completely unexpected behaviour
(I compile it with gcc version 4.0.3).
Something goes wrong with the integer-to-float conversion.
Maybe somebody out there understands what happens.
Essentially, when I subtract the (double) function value GRID_POINT(2) from
a variable which has been assigned the same value before, this gives a
non-zero result, and I really do not understand why.

The program prints
5.000000000000000000e-01; -2.775557561562891351e-17; 0.000000000000000000e+00

And the comparison
if (temp1 == GRID_POINT(2))
evaluates to false.

When I replace
((double)(i)) with 2.0
everything behaves as expected, and the output is
5.000000000000000000e-01; 0.000000000000000000e+00; 0.000000000000000000e+00

But: even if the integer-to-float conversion were inexact (which, I think,
should not be the case), something like
temp1 = GRID_POINT(2);
temp3 = temp1 - GRID_POINT(2);
should still result in temp3 == 0.0, whatever the function GRID_POINT does.

What do you think?

Thank you!

---------------------------------------------------
#include <stdio.h>

double GRID_POINT(int i);

double GRID_POINT(int i)
{
return ( 0.1 + ( (80.1-0.1)/(400.0) )*((double)(i)) );
}

int main (void) {

double temp1, temp2, temp3;

temp1 = GRID_POINT(2);
temp2 = GRID_POINT(2);
temp3 = temp1-GRID_POINT(2);

printf("%.18e; %.18e; %.18e\n", temp1, temp3, temp1-temp2 );

if(temp1==GRID_POINT(2)){
printf("these two are equal\n");
}

return 0;
}
---------------------------------------------------
Dec 12 '07 #1
I should add that the problem described below does not occur when the
Intel compiler is used.
When compiling with gcc I don't use any optimisation options.
This does not solve my problem, however.

rembremading wrote:
.... snip ...
Dec 12 '07 #2
rembremading wrote:
I should add, that the problem, described below, does not occur, when the
Intel compiler is used.
When compiling with gcc I don't use any optimisation options.
This does not solve my problem, however.
The Intel compiler probably sets the precision of the machine
to 64 bits.

The gcc compiler and other compilers like lcc-win set the precision
of the machine at 80 bits.

This means that the calculations are done using MORE precision
than double precision, which can lead to different results.

You can set the precision of the machine yourself to 64 bits
within the gcc run time.
--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Dec 12 '07 #3
On Dec 12, 9:05 pm, jacob navia <ja...@nospam.com> wrote:
.... snip ...
You can set the precision of the machine yourself to 64 bits
within the gcc run time.
Much better to assume only what the Standard guarantees about floating-
point operations, and use third-party libraries like GMP if you need
extra precision.
Dec 12 '07 #4
Fr************@googlemail.com wrote:
On Dec 12, 9:05 pm, jacob navia <ja...@nospam.com> wrote:
.... snip ...

Much better to assume only what the Standard guarantees about floating-
point operations, and use third-party libraries like GMP if you need
extra precision.
The standard doesn't guarantee *anything* at all.

Myself I *thought* that the standard implied IEEE 754 but people
here pointed me to an obscure sentence that allows an implementation to
get away with that too, and implement some other kind of weird
floating point.

Since the standard doesn't force even IEEE754, there is *nothing*
the C language by itself can guarantee the user.

To come back to the original poster problem.

The machine has two modes of floating point operation:
1) 80 bits internal precision
2) 64 bits internal precision

Microsoft and Intel use (2); gcc and lcc-win use (1).

This means that when values are stored only in floating point
registers and NOT stored into memory, the same computation can
yield *different* results as the user has noticed.

The final fix is to set the machine to use 64-bit
precision ONLY. This can be done with a few assembler instructions.
lcc-win provides a function to do this; maybe gcc does too.

Another possibility is to use the floating point environment functions
to read the environment, figure out where the precision bit
is, and then modify that and write it back to the floating
point unit.

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32
Dec 12 '07 #5
rembremading <rembremad...@gmx.net> wrote:
Hi all!

The following piece of code has (for me) completely
unexpected behaviour.
(I compile it with gcc-Version 4.0.3)
Something goes wrong with the integer to float conversion.
Did you apply -ffloat-store?

Without that (and others), gcc is not a conforming compiler
on many architectures.

--
Peter
Dec 13 '07 #6
Flash Gordon wrote:
jacob navia wrote, On 12/12/07 21:50:
.... snip ...
>
>The final fix is to set the machine to use 64-bit
precision ONLY. This can be done with a few assembler instructions.
lcc-win provides a function to do this; maybe gcc does too.

Of course, changing the mode in which the processor operates behind
the back of the compiler can cause interesting effects. Interesting
as in the ancient Chinese curse, that is.
>Another possibility is to use the floating point environment
functions to read the environment, figure out where the precision
bit is, and then modify that and write it back to the floating
point unit.

Or better yet assume only what is guaranteed or use a library
designed to provide better guarantees if you need them. Or limit
yourself to compilers that guarantee the behaviour you require if
it is acceptable to so limit yourself.
All this silly arguing, and nobody bothers mentioning the
applicable section of the standard. From N869, about float.h:

[#10] The values given in the following list shall be
replaced by implementation-defined constant expressions with
(positive) values that are less than or equal to those
shown:

-- the difference between 1 and the least value greater
than 1 that is representable in the given floating
point type, b^(1-p)

FLT_EPSILON 1E-5
DBL_EPSILON 1E-9
LDBL_EPSILON 1E-9

-- minimum normalized positive floating-point number,
b^(emin-1)

FLT_MIN 1E-37
DBL_MIN 1E-37
LDBL_MIN 1E-37
--
Merry Christmas, Happy Hanukah, Happy New Year
Joyeux Noel, Bonne Annee.
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com

Dec 13 '07 #7
jacob navia <ja***@nospam.com> writes:
Flash Gordon wrote:
[...]
>>Since the standard doesn't force even IEEE754, there is *nothing*
the C language by itself can guarantee the user.
(I think jacob wrote the above; the attribution was snipped.)
>>
Are you seriously trying to claim that the only way to provide any
form of guarantee on floating point is to enforce IEEE754?

If there isn't even an accepted standard that is enforced, how
can you guarantee anything.
[...]
>
How could the standard guarantee ANYTHING about the precision of
floating point calculations when it doesn't even guarantee a common
standard?

Yes I am seriously saying that the absence of an enforced standard
makes any guarantee IMPOSSIBLE.
First of all, the C standard says that accuracy of floating-point
operations is implementation-defined, though the implementation is
allowed to say that the accuracy is unknown. This doesn't refute
jacob's claim, but it does mean that a particular implementation is
allowed to make guarantees that the standard doesn't make.

It's entirely possible for a language standard to make firm guarantees
about floating-point accuracy without mandating adherence to one
specific floating-point representation. For example, the Ada standard
has a detailed parameterized floating-point model with specific
precision requirements; Ada's requirements are satisfied by an IEEE
754 implementation, but can also be (and have been) satisfied by a
number of other floating-point representations. For example, in Ada
1.0 + 1.0 is guaranteed to be exactly equal to 2.0.

The C standard could have followed the same course (and since it was
written several years after the first Ada standard, I'm not entirely
sure why it didn't). But instead, the authors of the C standard chose
to leave floating-point precision issues up to each implementation.

In practice, each C implementation and each Ada implementation usually
just uses whatever floating-point representation and operations are
provided by the underlying hardware. (Exception: some systems
probably use software floating-point, and some others might need to do
some tweaking on top of what the hardware provides.) In most cases,
the precision provided by the hardware is as good as what the language
standard *could* have guaranteed.

Yes, jacob, it's true that the C standard makes no guarantees about
floating-point precision. It does make some guarantees about
floating-point range, which seems to contradict your claim above that
"there is *nothing* the C language by itself can guarantee the user".
(Perhaps in context it was sufficiently clear that you were talking
only about precision; I'm not going to bother to go back up the thread
to check.)

As for *why* the C standard doesn't require IEEE 754, there have been
plenty of computers that support other representations, and it would
have been absurd for the C standard to require IEEE 754 on systems
that don't provide it. (Examples: VAX, older Cray vector systems, and
IBM mainframes each have their own floating-point formats; there are
undoubtedly more examples.) The intent of IEEE 754 is to define a
single universal floating-point standard, and we're headed in that
direction, but we're not there yet -- and we certainly weren't there
when the C89 standard was being written.

[...]
This is an example of how the "regulars" spread nonsense with the
sole objective of "contradicting jacob", as they announced here
yesterday.
Tedious rant ignored.

--
Keith Thompson (The_Other_Keith) <ks***@mib.org>
Looking for software development work in the San Diego area.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Dec 13 '07 #8
What would the other options be that make gcc a conforming compiler?

-Andreas

Peter Nilsson wrote:
rembremading <rembremad...@gmx.net> wrote:
.... snip ...

Did you apply -ffloat-store?

Without that (and others), gcc is not a conforming compiler
on many architectures.
Dec 13 '07 #9
On Dec 13, 4:31 am, Keith Thompson <ks...@mib.org> wrote:
jacob navia <ja...@nospam.com> writes:
.... snip ...

Yes, jacob, it's true that the C standard makes no guarantees about
floating-point precision. It does make some guarantees about
floating-point range, which seems to contradict your claim above that
"there is *nothing* the C language by itself can guarantee the user".
(Perhaps in context it was sufficiently clear that you were talking
only about precision; I'm not going to bother to go back up the thread
to check.)
You quoted it: "How could the standard guarantee ANYTHING about
the precision of floating point calculations..."
[...]
This is an example of how the "regulars" spread nonsense with the
sole objective of "contradicting jacob", as they announced here
yesterday.

Tedious rant ignored.
So you just demonstrated that JN's paranoia is not *quite* paranoia.
Your post is exactly what he refers to (using strong words like
"nonsense", since JN gets pretty emotional in this grand war).

The world will stop, and all newbies will get confused and die, if you
stop arguing with Jacob, sure. You simply have to do it again and
again. Even when you don't have a point! Oh well.
Dec 13 '07 #10
