
How to show accuracy differences?

Hi again!

I am trying to figure out a simple example of a program that shows the
accuracy differences between the float and double types. In particular I'm
searching for an operation that works on doubles but fails on floats,
and my first guess was an eigenvector problem (using the power method).

Unfortunately I found that they work ok in both cases, and I'm
concerned about optimizations and "smart promotions" by the compiler
I'm using (powerpc-apple-darwin9-gcc-4.0.1).

Do you have any hints about this? (code follows, probably full of errors)

Thanks!
#include <stdio.h>
#include <math.h>

int main (int argc, const char *argv[])
{
    float f11, f12, f21, f22, f1, f2, f1new, f2new, fnorm, ferr;
    double d11, d12, d21, d22, d1, d2, d1new, d2new, dnorm, derr;

    printf("EIG = { 1, 0.0000313743 } (%e , %e)\n", 1.0, 0.0000313743);

    f11 = 1001.321;
    f12 = -3.0;
    f21 = 3.14159 / 100.0;
    f22 = -6.21e-3;

    /* starting vector, normalized; note that sqrt() works in double
       even here, so the float path is not purely single precision */
    f1 = f11;
    f2 = f22;
    fnorm = sqrt(f1 * f1 + f2 * f2);
    f1 /= fnorm;
    f2 /= fnorm;

    printf("MAT %e , %e ; %e , %e / VEC %e , %e \n",
           f11, f12, f21, f22, f1, f2);

    /* power iteration: {a11 b1 + a12 b2, a21 b1 + a22 b2} */
    do
    {
        f1new = f11 * f1 + f12 * f2;
        f2new = f21 * f1 + f22 * f2;
        fnorm = sqrt(f1new * f1new + f2new * f2new);
        f1new /= fnorm;
        f2new /= fnorm;
        ferr = sqrt((f1new - f1) * (f1new - f1) +
                    (f2new - f2) * (f2new - f2));
        printf("OLD = %e , %e ; NEW = %e , %e ; ERR = %e \n",
               f1, f2, f1new, f2new, ferr);
        f1 = f1new;
        f2 = f2new;
    } while (ferr > 0.001);
    printf("FLT = %e , %e (ERR=%e) \n", f1, f2, ferr);

    printf("\n\nDOUBLE\n\n");
    d11 = 1001.321;
    d12 = -3.0;
    d21 = 3.14159 / 100.0;
    d22 = -6.21e-3;

    d1 = d11;
    d2 = d22;
    dnorm = sqrt(d1 * d1 + d2 * d2);
    d1 /= dnorm;
    d2 /= dnorm;

    printf("MAT %e , %e ; %e , %e / VEC %e , %e \n",
           d11, d12, d21, d22, d1, d2);

    /* power iteration: {a11 b1 + a12 b2, a21 b1 + a22 b2} */
    do
    {
        d1new = d11 * d1 + d12 * d2;
        d2new = d21 * d1 + d22 * d2;
        dnorm = sqrt(d1new * d1new + d2new * d2new);
        d1new /= dnorm;
        d2new /= dnorm;
        derr = sqrt((d1new - d1) * (d1new - d1) +
                    (d2new - d2) * (d2new - d2));
        printf("OLD = %e , %e ; NEW = %e , %e ; ERR = %e \n",
               d1, d2, d1new, d2new, derr);
        d1 = d1new;
        d2 = d2new;
    } while (derr > 0.001);
    printf("DBL = %e , %e (ERR=%e) \n", d1, d2, derr);

    printf("\n\nDIFF FLT-DBL=%e \n",
           sqrt((f1 - d1) * (f1 - d1) + (f2 - d2) * (f2 - d2)));
    return 0;
}
--

Sensei*<Sensei's e-mail is at Mac-dot-com>

Beware of bugs in the above code; I have only proved it correct, not tried it.
(Donald Knuth)

Apr 5 '08 #1
"Sensei" wrote:
>I am trying to figure out a simple example of a program that shows the
>accuracy differences between the float and double types. In particular I'm
>searching for an operation that works on doubles but fails on floats,
>and my first guess was an eigenvector problem (using the power method).
>
>Unfortunately I found that they work ok in both cases, and I'm
>concerned about optimizations and "smart promotions" by the compiler
>I'm using (powerpc-apple-darwin9-gcc-4.0.1).
>
>Do you have any hints about this?
>(code follows, probably full of errors)
(snip code)

Before you go to that trouble, are you sure there *IS* any
difference between float and double on your machine and
compiler?

Try this simple program to compare the number of bits
used by floating-point data types on your compiler:

#include <stdlib.h>
#include <stdio.h>
int main(void)
{
    printf("size of float = %2ld bits\n", (long) (8 * sizeof(float)));
    printf("size of double = %2ld bits\n", (long) (8 * sizeof(double)));
    printf("size of long double = %2ld bits\n", (long) (8 * sizeof(long double)));
    return 0;
}

on my machine/OS/compiler (AMD-Athlon/Win2K/djgpp)
I get the following results, but yours may vary:

size of float = 32 bits
size of double = 64 bits
size of long double = 96 bits

Now, I just looked at your code, and I don't see how that's going
to work. If you're trying to stress the precision of double to its
limits, you'll need to use a lot more than 6 or 7 significant digits.
Try working with a number with a known digit pattern, such as pi:

3.14159265358979323846264338327950288419716939937510

Then try different levels of truncation:

3.14159
3.141592
3.1415926
3.14159265
3.141592653
3.1415926535
3.14159265358
3.141592653589
3.1415926535897
3.14159265358979
.... etc ...

until such point as float and double start showing differences.
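
Something along these lines, untested; everything printed past each
type's precision is just rounding noise:

#include <stdio.h>

int main(void)
{
    /* The same long pi literal stored in both types: a float keeps
       roughly 6-7 significant decimal digits, a double roughly 15-16. */
    float  fpi = 3.14159265358979323846f;
    double dpi = 3.14159265358979323846;

    printf("float  pi = %.20f\n", fpi);
    printf("double pi = %.20f\n", dpi);
    return 0;
}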

--
Cheers,
Robbie Hatley
lonewolf aatt well dott com
www dott well dott com slant user slant lonewolf slant
Apr 6 '08 #2
On 2008-04-06 05:00:42 +0200, "Robbie Hatley" <lo******@well.com> said:
"Sensei" wrote:
>I am trying to figure a simple example of a program that shows the
accuracy differences between float and double types. In particular I'm
searching for an operation that works on doubles, but fails on floats,
and my first guess was an eigenvector problem (using the power method).

Unfortunately I found that they work ok in both cases, and I'm
concerned about optimizations and "smart promotions" by the compiler
I'm using (powerpc-apple-darwin9-gcc-4.0.1).

Do you have any hints about this?
(code follows, probably full of errors)
(snip code)

>Before you go to that trouble, are you sure there *IS* any
>difference between float and double on your machine and
>compiler?

Yes, of course it is different: 32 and 64 bits for float and double,
respectively. A long double also exists in my environment and is 128
bits long.
>Try this simple program to compare the number of bits
>used by floating-point data types on your compiler: [...]
>
>Now, I just looked at your code, and I don't see how that's going
>to work. If you're trying to stress the precision of double to its
>limits, you'll need to use a lot more than 6 or 7 significant digits.
>Try working with a number with a known digit pattern, such as pi:
>
>3.14159265358979323846264338327950288419716939937510
>
>Then try different levels of truncation:
>
>3.14159
>3.141592
>3.1415926
>3.14159265
>3.141592653
>3.1415926535
>3.14159265358
>3.141592653589
>3.1415926535897
>3.14159265358979
>... etc ...
>
>until such point as float and double start showing differences.

That was an obvious choice, and it correctly shows how doubles increase
the number of correct digits. But that is not what I was trying to do;
maybe I expressed my problem poorly.

I'd like to show an example that gives completely erroneous output,
without resorting to the obvious answer of using the (N+1)-th digit,
where N is the last accurate digit a float allows. That answer only
shows that doubles give more digits of accuracy, but it fails to show
that this has any impact on real code.

I am trying to get a "float gives random noise while double nails it"
program, but I have no clue where I should focus, and I don't even know
if it is possible. In your example, that would be "using float, Pi gets
approximated to 4.73429 while double gives 3.14159": this kind of output
gives an immediate idea of how much we should care about accuracy.

Thanks for any hints!

--

Sensei*<Sensei's e-mail is at Mac-dot-com>

Basic research is what I am doing when I don't know what I am doing.
(Wernher von Braun)

Apr 6 '08 #3
>I'd like to show an example that gives completely erroneous output,
>without resorting to the obvious answer of using the (N+1)-th digit,
>where N is the last accurate digit a float allows. That answer only
>shows that doubles give more digits of accuracy, but it fails to show
>that this has any impact on real code.
One of the real killers of accuracy is subtracting two nearly-equal
numbers. Another problem that results from this is the solution of
simultaneous linear equations where the matrix is near-singular.
Throw in a little calculation inaccuracy, and the problem reduces
itself to almost zero divided by almost zero, or some constant divided
by almost zero.
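
Something like the following, untested; the constants are arbitrary,
just chosen (assuming ordinary IEEE float and double) so that the true
difference sits below float's spacing near 1.0:

#include <stdio.h>

int main(void)
{
    /* Two values that agree in their first seven significant digits.
       The true difference is 1e-7, which is smaller than the float
       spacing near 1.0 (about 1.2e-7), so rounding the inputs already
       makes the float result noticeably wrong, while the double result
       is essentially exact. */
    float  fa = 1.0000001f, fb = 1.0000000f;
    double da = 1.0000001,  db = 1.0000000;

    printf("float  difference: %e\n", (double) (fa - fb));
    printf("double difference: %e\n", da - db);
    return 0;
}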

>I am trying to get a "float gives random noise while double nails it"
>program, but I have no clue where I should focus, and I don't even know
>if it is possible. In your example, that would be "using float, Pi gets
>approximated to 4.73429 while double gives 3.14159": this kind of output
>gives an immediate idea of how much we should care about accuracy.
Here's one problem where you need lots of accuracy:

You want to figure out where point C is on a map. You have two points,
A, and B, a known distance apart. You go to point A and measure angle
BAC. You go to point B and measure angle ABC. You now have two angles
and a side of a triangle, and you can calculate all the angles and sides
of the triangle. It gets really tricky to get good results when your
baseline AB is very short compared to the distance AC or BC.

Now, suppose that C is some not-very-distant star, A is Earth in
the spring, and B is Earth in the fall. Your baseline is thus 2 *
93 million miles or so wide. Your measured angles are 90.00000 and
90.00001 degrees, respectively (I'll make it a right triangle to keep
the math easy). How far away is the star? (The difference between AC
and BC won't be significant.) If you're off by one count in the 7th
place of the angle, you've gone from the correct answer to infinity.
If you're off by two counts in the 7th place, you've got the star on
the wrong side of the Earth.

CA * sin(angle ABC - angle BAC) = AB
CA = AB / sin(angle ABC - angle BAC)
CA = 2 * 93 million miles / sin(0.00001 degrees)

Through some experimenting, I observe that if you:

- take 90.00000, convert to radians, put result in a variable (float or double).
- take 90.00001, convert to radians, put result in a variable (float or double).
- Subtract the above two, put result in a variable (float or double).

The double and the float results are about 46% apart, giving a 46%
difference in the final result:

5.328e+13 miles for double
7.801e+13 miles for float

That's about 9 (or 13.2) light years, or 2-3 times the distance
to the nearest star (not counting the Sun). You'll
get even tinier angles for more distant stars.

Note that converting to radians, THEN subtracting, is a bad move.
Sometimes you just can't avoid this.
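
Something along these lines, untested; the constants follow the text
above, and the absolute distances printed will depend on how the float
path happens to round, but the gap between the float and double results
is the point:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double deg2rad  = 3.14159265358979323846 / 180.0;
    const double baseline = 2.0 * 93.0e6;   /* miles, as in the text above */

    /* Convert each angle to radians first, THEN subtract -- the step
       warned about above, since the conversion rounds away digits before
       the cancellation happens. */
    float  fa = (float) (90.00001 * deg2rad);
    float  fb = (float) (90.00000 * deg2rad);
    double da = 90.00001 * deg2rad;
    double db = 90.00000 * deg2rad;

    float  fdiff = fa - fb;
    double ddiff = da - db;

    printf("distance with float angles : %e miles\n", baseline / sin(fdiff));
    printf("distance with double angles: %e miles\n", baseline / sin(ddiff));
    return 0;
}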

Apr 6 '08 #4
In article <47***********************@reader1.news.tin.it> Sensei
<Sensei's e-mail is at Mac-dot-com> writes:
...
>I'd like to show an example that gives completely erroneous output,
>without resorting to the obvious answer of using the (N+1)-th digit,
>where N is the last accurate digit a float allows.
There are such problems. One of them is numerical differentiation of
a function. Typically that is done by evaluating the function at sample
points with some specific spacing and combining them in a difference
formula. When the spacing becomes too small for a particular type, the
results will be completely wrong. But it is difficult to find a specific
function and a specific point where that occurs.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Apr 7 '08 #5
In article <Jy********@cwi.nl> "Dik T. Winter" <Di********@cwi.nl> writes:
>In article <47***********************@reader1.news.tin.it> Sensei
><Sensei's e-mail is at Mac-dot-com> writes:
>...
>>I'd like to show an example that gives completely erroneous output,
>>without resorting to the obvious answer of using the (N+1)-th digit,
>>where N is the last accurate digit a float allows.
>
>There are such problems. One of them is numerical differentiation of
>a function. Typically that is done by evaluating the function at sample
>points with some specific spacing and combining them in a difference
>formula. When the spacing becomes too small for a particular type, the
>results will be completely wrong. But it is difficult to find a specific
>function and a specific point where that occurs.

As an afterthought, you can try to do it with the function
    f(x) = x * 1.0001
with the formula
    f'(1) = (f(1 + step) - f(1 - step)) / (2 * step)
where step starts at 0.5 and is halved each time. With float
the precision starts decreasing already after the third step,
with double after the fifth step.
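
Something like this, untested; exactly when each type starts to drift
will depend on the platform:

#include <stdio.h>

/* f(x) = x * 1.0001; the exact derivative is 1.0001 everywhere, so any
   drift away from 1.0001 below is pure rounding error. */
static float  ff(float x)  { return x * 1.0001f; }
static double fd(double x) { return x * 1.0001; }

int main(void)
{
    float  fstep = 0.5f;
    double dstep = 0.5;
    int i;

    for (i = 0; i < 25; i++) {
        float  fder = (ff(1.0f + fstep) - ff(1.0f - fstep)) / (2.0f * fstep);
        double dder = (fd(1.0 + dstep) - fd(1.0 - dstep)) / (2.0 * dstep);
        printf("step %2d: float f'(1) = %.7f   double f'(1) = %.15f\n",
               i + 1, fder, dder);
        fstep /= 2.0f;
        dstep /= 2.0;
    }
    return 0;
}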
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Apr 7 '08 #6

This thread has been closed and replies have been disabled. Please start a new discussion.
