Bytes | Software Development & Data Engineering Community

loss of precision in doing file I/O

Hi all,

I have a question; I'll try to explain it as clearly as I can. It may
not be completely on topic, sorry about that.

I have a few vectors of doubles (the output of some calculations), and I
check the norm and the dot product between pairs of them before
storing them in a text file, obtaining the expected results.

The vectors are orthonormal, so the norm is 1 and the dot product gives
something like 1.0e-18, which I suppose is quite close to zero...

When I read the vectors back from the text file, I check the same things
and find that the dot product is now around 1.0e-8.

Is this normal? Is there a way to avoid this loss of precision?

I am programming in C++ using Visual Studio 6 on Windows 2000.

Thanks for your help.

--
there are no numbers in my email address
Jun 2 '06 #1
"giff" writes:
[snip]

The vectors are orthonormal, so the norm is 1 and the dot product gives
something like 1.0e-18, which I suppose is quite close to zero...

When I read the vectors back from the text file, I check the same things
and find that the dot product is now around 1.0e-8.

Is this normal? Is there a way to avoid this loss of precision?

I am programming in C++ using Visual Studio 6 on Windows 2000.


If you read and write the file using >> and <<, the data is converted
to a character representation, and the stream's default precision is
used. 1.0e-8 is about what I would expect. You could increase the
precision [see ios::precision()] or save the file in binary form [see
read() and write()].
Jun 2 '06 #2
giff wrote:
Hi all,

[snip]

When I read the vectors back from the text file, I check the same things
and find that the dot product is now around 1.0e-8.

Is this normal? Is there a way to avoid this loss of precision?
See this FAQ and those following:

http://www.parashift.com/c++-faq-lit...html#faq-29.16

Can you show us a *complete* but *minimal* code snippet that we can cut
and paste into our editors to try it? It may be user error.
I am programming in C++ using Visual Studio 6 on Windows 2000.


Irrelevant. (If it is relevant, you're in the wrong newsgroup.)

Cheers! --M

Jun 2 '06 #3
giff wrote:
[snip]

When I read the vectors back from the text file, I check the same
things and find that the dot product is now around 1.0e-8.

Is this normal? Is there a way to avoid this loss of precision?
Write more digits to the output.
I am programming in C++ using Visual Studio 6 on Windows 2000.


That shouldn't matter.

V
--
Please remove capital 'A's when replying by e-mail
I do not respond to top-posted replies, please don't ask
Jun 2 '06 #4
osmium wrote:

If you read and write the file using >> and <<, the data is converted
to a character representation, and the stream's default precision is used.
that's probably what happens
1.0e-8 is about what I would expect. You could increase the
precision [see ios::precision()] or save the file in binary form [see
read() and write()].


thanks for the hint
--
there are no numbers in my email address
Jun 2 '06 #5
mlimber wrote:
Is this normal? Is there a way to avoid this loss of precision?
See this FAQ and those following:

http://www.parashift.com/c++-faq-lit...html#faq-29.16


thanks

Can you show us a *complete* but *minimal* code snippet that we can cut
and paste into our editors to try it? It may be user error.


std::ofstream os("filename.txt", std::ios_base::trunc | std::ios_base::out);

for (int n = 0; n < nP; n++) {
    for (int i = 0; i < nV; i++) {
        os << ((double*)(shapeV2d[n]->data.ptr))[i * 2] << " "
           << ((double*)(shapeV2d[n]->data.ptr))[i * 2 + 1] << std::endl;
    }
}

I write nP vectors stored as CvMat arrays (I am using the OpenCV libraries).

here is the reading routine:

std::ifstream is("filename.txt");

for (int n = 0; n < nP; n++) {
    for (int v = 0; v < nV; v++) {
        is >> x >> y;
        ((double*)(shapeV2d[n]->data.ptr))[2 * v] = x;
        ((double*)(shapeV2d[n]->data.ptr))[2 * v + 1] = y;
    }
}
Anyway, I don't think this is a huge problem for my project; 1.0e-8 is
still quite small and comparable to zero...

thanks

--
there are no numbers in my email address
Jun 2 '06 #6
Victor Bazarov wrote:

Write more digits to the output.


I am using <<; do you mean I should use printf instead?
--
there are no numbers in my email address
Jun 2 '06 #7
giff <gi*********@444gmail.com> wrote:
[snip]

The vectors are orthonormal, so the norm is 1 and the dot product gives
something like 1.0e-18, which I suppose is quite close to zero...

When I read the vectors back from the text file, I check the same things
and find that the dot product is now around 1.0e-8.
I ran into something similar before, where I would take the dot product
of a couple of different (mathematical) vectors and then normalize them.
When the dot product was 1e-18 or so, the algorithm normalized it
anyway, which threw all my calculations out the window. I had to
explicitly set the result to 0 for things to work correctly.

For example,

const double Epsilon = 1e-10; // or whatever is appropriate

// ...

double result = /* compute dot product */;

// need #include <cmath> for std::abs
if (std::abs(result) < Epsilon) {
    result = 0;
}
Is this normal? Is there a way to avoid this loss of precision?


The above are symptoms of the limitations of floating-point numbers
in general. You can search the FAQ for more info. In particular, the
FAQ has a link to David Goldberg's paper "What Every Computer Scientist
Should Know About Floating-Point Arithmetic".

--
Marcus Kwok
Replace 'invalid' with 'net' to reply
Jun 2 '06 #8
giff wrote:
Victor Bazarov wrote:

Write more digits to the output.


I am using <<; do you mean I should use printf instead?


No, I don't mean that. RTFM about stream manipulators and
specifically about 'setprecision'.

V
--
Please remove capital 'A's when replying by e-mail
I do not respond to top-posted replies, please don't ask
Jun 2 '06 #9
Victor Bazarov wrote:
giff wrote:
[snip]

When I read the vectors back from the text file, I check the same
things and find that the dot product is now around 1.0e-8.

Is this normal? Is there a way to avoid this loss of precision?


Write more digits to the output.
I am programming in C++ using Visual Studio 6 on Windows 2000.


That shouldn't matter.


Aah, but it does. There have been recent discussions on the boost mailing
list about round-tripping doubles/floats to/from text files, with many
VC versions losing 1-2 bits of precision for certain numeric values.
These have been reported on the MS website as bugs, but MS has
apparently dismissed them as a design feature.

Jeff Flinn
Jun 2 '06 #10
"Jeff Flinn" writes:
[snip]

Aah, but it does. There have been recent discussions on the boost mailing
list about round-tripping doubles/floats to/from text files, with many
VC versions losing 1-2 bits of precision for certain numeric values.
These have been reported on the MS website as bugs, but MS has
apparently dismissed them as a design feature.


Look at the numbers the OP gave! This thread is not about losing a few bits
between friends.
Jun 2 '06 #11
osmium wrote:
"Jeff Flinn" writes:
[snip]


Look at the numbers the OP gave! This thread is not about losing a
few bits between friends.


Which is why I responded only to Victor's second point. Victor
addressed the large-scale disparity in his first response. Once that
is corrected, the OP, and possibly others Googling this thread, can be
saved some time in tracking down these finer-grained discrepancies.

Jeff Flinn
Jun 2 '06 #12
Jeff Flinn wrote:
osmium wrote:
[snip]
Look at the numbers the OP gave! This thread is not about losing a
few bits between friends.


Which is why I responded only to Victor's second point. Victor
addressed the large-scale disparity in his first response. Once that
is corrected, the OP, and possibly others Googling this thread, can be
saved some time in tracking down these finer-grained discrepancies.


I don't understand this. You "addressed" my second point by plucking
it out of context altogether. WTF?

V
--
Please remove capital 'A's when replying by e-mail
I do not respond to top-posted replies, please don't ask
Jun 2 '06 #13
Victor Bazarov wrote:
[snip]


I don't understand this. You "addressed" my second point by plucking
it out of context altogether. WTF?


I didn't do the plucking. The context disappeared in osmium's response.

Jeff
Jun 2 '06 #14
giff wrote:
mlimber wrote:
Can you show us a *complete* but *minimal* code snippet that we can cut
and paste into our editors to try it? It may be user error.


[snip code]

Not quite what was requested, since we couldn't cut, paste, and run that
without manufacturing a lot of other stuff (e.g., the shapeV2d
structure type, main(), header files, etc.), but on inspection it
certainly seems likely that, as others suggested, the problem is the
number of digits of precision you are writing to the file.

Cheers! --M

Jun 2 '06 #15
mlimber wrote:

Not quite what was requested, since we couldn't cut, paste, and run that
without manufacturing a lot of other stuff


mmm right, sorry about that...

Jun 2 '06 #16


