problem with double's precision

Hello,

I've got a bit of experience in C++, but I'm writing my first app that
is dependent on relatively precise math functions. The app requires
that I get a time stamp based on a sample number from a time series.

This seems like an easy thing to do:

long lSample = 500;  // for example
double dSampleRate = 1000.0;
double dTimestamp = (double)lSample / dSampleRate;

The problem I am having is that the division by 1000 is not precise,
presumably because 1/(10^n) cannot be represented as a standard
floating point number. So, with each subsequent sample, the calculated
time stamp is further from the actual time of the sample.

The same problem occurs if I simply add the sample period to the
previous timestamp, and start at dTimestamp = 0.0. It also occurs when
trying to get the sample period, e.g.

double dSamplePeriod = 1.0 / dSampleRate;

then dSamplePeriod = 0.00100000004749745
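
To make this concrete, here is a minimal standalone sketch (not my actual app code; the names just mirror the snippet above) that computes the timestamps both ways, by accumulating the period and by dividing the sample number by the rate:

#include <cstdio>

int main()
{
    const double dSampleRate   = 1000.0;
    const double dSamplePeriod = 1.0 / dSampleRate;

    double dAccumulated = 0.0;
    for (long lSample = 1; lSample <= 1000000; ++lSample)
    {
        dAccumulated += dSamplePeriod;                        // running sum of the period
        if (lSample % 250000 == 0)
        {
            double dDirect = (double)lSample / dSampleRate;   // computed from the index
            std::printf("%7ld  accumulated=%.15f  direct=%.15f  diff=%g\n",
                        lSample, dAccumulated, dDirect, dAccumulated - dDirect);
        }
    }
    return 0;
}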

Currently, I am using a class off the web, called CLargeDouble, to get
around this, but this class uses string conversion, and as such, slows
the program immensely.

This app is being compiled using MS Visual C++ 7, and run on XP
machines. However, when I ran it on our terminal server (Win 2000
server), the 1/1000 operation did _not_ include the garbage starting at
the 6th or 7th significant digit.

My question is, is there a standard, fast workaround to this problem?
Or does it seem that this involves some problem in my code, which only
occurs when running on an XP platform? My hope is to avoid truncating
at some arbitrary number of significant digits, since I will be using
different sample rates (e.g. 1024 Hz, the inverse of which can be
represented in floating point).

Thanks in advance for any help.
-D.T.

Oct 10 '05 #1
* to****@email.arizona.edu:

> I've got a bit of experience in C++, but I'm writing my first app that
> is dependent on relatively precise math functions. The app requires
> that I get a time stamp based on a sample number from a time series.
>
> This seems like an easy thing to do:
>
> long lSample = 500;  // for example
> double dSampleRate = 1000.0;
> double dTimestamp = (double)lSample / dSampleRate;

> The problem I am having is that the division by 1000 is not precise,
> presumably because 1/(10^n) cannot be represented as a standard
> floating point number. So, with each subsequent sample, the calculated
> time stamp is further from the actual time of the sample.

Nope. The calculated time stamp is an approximation of the hypothetical value
associated with the sample number; the error in that approximation is
insignificant, and in particular it does not get worse for larger values.
Rather, the assumption that the sample number can be used to compute a time
stamp is probably invalid, i.e., your samples do not actually come at a fixed
rate.

> The same problem occurs if I simply add the sample period to the
> previous timestamp, and start at dTimestamp = 0.0.


That is a different (and additional) problem, of accumulated error.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Oct 10 '05 #2
to****@email.arizona.edu wrote:
> I've got a bit of experience in C++, but I'm writing my first app that
> is dependent on relatively precise math functions. The app requires
> that I get a time stamp based on a sample number from a time series.
>
> This seems like an easy thing to do:
>
> long lSample = 500;  // for example
> double dSampleRate = 1000.0;
> double dTimestamp = (double)lSample / dSampleRate;

So, supposedly your 'dTimestamp' should just be 0.5, right? Is it? The
value 0.5 is exactly representable in binary, BTW.
> The problem I am having is that the division by 1000 is not precise,
> presumably because 1/(10^n) cannot be represented as a standard
> floating point number. So, with each subsequent sample, the calculated
> time stamp is further from the actual time of the sample.

That's why you should probably simply define 'dTimestamp' as 0.5. Also,
instead of accumulating your time value (and the error it carries),
calculate it: (step_index * 500.) / 1000
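
In other words, something along these lines (just a sketch using your
variable names, not necessarily your final code):

// Compute every timestamp from the sample number itself, so no error
// carries over from one sample to the next.
double timestampFor(long lSample, double dSampleRate)
{
    return (double)lSample / dSampleRate;
}

// timestampFor(500, 1000.0) yields exactly 0.5, since 0.5 is representable.
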
> The same problem occurs if I simply add the sample period to the
> previous timestamp, and start at dTimestamp = 0.0. It also occurs when
> trying to get the sample period, e.g.
>
> double dSamplePeriod = 1.0 / dSampleRate;
>
> then dSamplePeriod = 0.00100000004749745

That looks awfully imprecise. Are you sure you're using 'double'?
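
For what it's worth, 0.00100000004749745 is what a single-precision float
typically gives for 1/1000 on an IEEE-754 system; a quick sketch to compare
the two types:

#include <cstdio>

int main()
{
    float  fPeriod = 1.0f / 1000.0f;   // single precision: ~7 significant digits
    double dPeriod = 1.0  / 1000.0;    // double precision: ~16 significant digits

    std::printf("float : %.17f\n", fPeriod);   // prints ~0.00100000004749745
    std::printf("double: %.17f\n", dPeriod);   // prints ~0.00100000000000000
    return 0;
}
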
> Currently, I am using a class off the web, called CLargeDouble, to get
> around this, but this class uses string conversion, and as such, slows
> the program immensely.

As soon as I see a class whose name begins with 'C', I shudder. It's
apparently written by a Microsoft-centric programmer, and that's
frightening. You should be able to find a better representation of
a larger precision FP number on the web than one that uses string as
its internal storage...
> This app is being compiled using MS Visual C++ 7, and run on XP
> machines. However, when I ran it on our terminal server (Win 2000
> server), the 1/1000 operation did _not_ include the garbage starting at
> the 6th or 7th significant digit.

<shrug> Unless you post your code (without the CLargeDouble class), it
is impossible to comment on all of that.
> My question is, is there a standard, fast workaround to this problem?

Yes, more than likely. Without seeing your code, however, we cannot
advise on using one technique or another.
> Or does it seem that this involves some problem in my code, which only
> occurs when running on an XP platform?

Nothing can be said about that. 'comp.lang.c++' is platform-indifferent.
> My hope is to avoid truncating
> at some arbitrary number of significant digits, since I will be using
> different sample rates (e.g. 1024 Hz, the inverse of which can be
> represented in floating point).


Well, you're correct about this: whenever division is used, you can gauge the
possible rounding error by thinking of it as multiplication by the inverse.
Dividing by 1000 carries potential rounding errors, while dividing by 1024
shouldn't, at least on a system where floating point numbers are
IEEE-standard.
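
A small sketch that shows the difference, assuming IEEE-754 doubles:

#include <cstdio>

int main()
{
    double d1000 = 1.0 / 1000.0;   // 1000 is not a power of two -> result is rounded
    double d1024 = 1.0 / 1024.0;   // 1024 == 2^10 -> exactly representable

    std::printf("1/1000 = %.20f\n", d1000);                    // ~0.00100000000000000002
    std::printf("1/1024 = %.20f\n", d1024);                    // 0.00097656250000000000
    std::printf("1/1024 exact? %d\n", d1024 == 0.0009765625);  // 1
    return 0;
}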

V
Oct 10 '05 #3
Thanks for the input. I've fiddled with the code a bit more, and now
have something reasonable - no iterative addition of the sample period,
just direct assignment (timestamp = period * sample number). For some
reason, this didn't work in an earlier version of the program - it gave
the same results as the iterative solution (i.e. multiplying the period
by N gave the same value as adding the period N times). Now it seems to
work fine, putting aside the variability in how precise each of the
products turns out to be, e.g.

dTimestamp = dSamplePeriod * (double)lIndex;

(sample output)
0.001000000000 * 4 = 0.004000000190
0.001000000000 * 5 = 0.004999999888

If this does seem imprecise for a double, I'm guessing that it's a
platform/compiler issue.

Anyhow, though the more subtle points of floating point math still
escape me, your suggestions got my code running a couple orders of
magnitude faster - so, many, many thanks!

-D.T.

Oct 11 '05 #4
to****@email.arizona.edu wrote:
Why not use fixed point instead of floating point values? An easy way
to represent fixed point is simply to use integer values scaled
proportionately. If 0.001 is the precision needed, then the integer value
1 represents 1/1000. Values will remain accurate within that margin:

const long kScaling = 1000;            // 1 unit == 1/1000 second

long lSample = 500 * kScaling;         // sample number, pre-scaled
long lSampleRate = 1000;               // samples per second
long lTimestamp = lSample / lSampleRate;        // 500 units, i.e. 0.5 s

long lSamplePeriod = kScaling / lSampleRate;    // 1 unit

then lSamplePeriod will equal 1 (i.e. 1/1000th of a second)

1 * 4 = 4 (i.e. 0.004)
1 * 5 = 5 (i.e. 0.005)

The values need be divided by 1000 only when they are to be displayed.
Internally, the program simply performs integer arithmetic.
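
Here is a complete little sketch of that idea (the constant names are just
illustrative, and it assumes the sample rate divides the scaling factor
evenly):

#include <cstdio>

int main()
{
    const long kScaling      = 1000;                    // 1 unit == 1/1000 second
    const long kSampleRate   = 1000;                    // samples per second
    const long kSamplePeriod = kScaling / kSampleRate;  // 1 unit per sample

    for (long lSample = 0; lSample <= 5; ++lSample)
    {
        long lTicks = lSample * kSamplePeriod;          // exact integer arithmetic
        // Convert to seconds only when displaying the value.
        std::printf("sample %ld -> %ld.%03ld s\n",
                    lSample, lTicks / kScaling, lTicks % kScaling);
    }
    return 0;
}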

Greg

Oct 11 '05 #5
Doubles are not that accurate (though normally a lot better than float).
If you need accuracy (and not speed) you might want to look at MAPM:

http://www.tc.umn.edu/~ringx004/mapm-main.html

Oct 11 '05 #6
