On Apr 2, 6:47 am, Jack Klein <jackkl...@spamcop.net> wrote:
On Tue, 1 Apr 2008 01:10:24 -0700 (PDT), James Kanze
<james.ka...@gmail.com> wrote in comp.lang.c++:
On Apr 1, 4:44 am, Jack Klein <jackkl...@spamcop.net> wrote:
On Mon, 31 Mar 2008 19:04:01 -0700 (PDT), Markus Dehmann
<markus.dehm...@gmail.com> wrote in comp.lang.c++:
double encoded = (double)i1 + (double)i2 / (double)100;
The obvious question is: why? If you really do receive a value
in this format (i.e. integer part and hundredths as two separate
values), and want to treat it as a single value, fine, but then
I don't understand why you want to go the other direction later.
And I can't think of any other reason why one would want to do
this.
I rather think I covered this in my next sentence:
Which is:
This is not a very good idea, as you have found. The
floating point data types in C, and just about every other
computer language, use a fixed number of bits, and that
limits their precision.
I can see reasons why one might want to do this on input. Some
external source is providing an integral value, followed by an
integral number of 100ths, and you want to do various
calculations on those values. Since the input has an accuracy of
at most a 100th, you'll normally only output with this accuracy
as well, and for most trivial computations, you can pretty much
ignore the rounding errors (which will be far smaller).
I can't see a reason why one would want to go back, however.
(Maybe outputting to the same device?)
[...]
Can anyone tell me what's wrong and how to do this
right? (my code see below)
Your basic idea is wrong.
Hard to say without really knowing what his basic idea
is:-). Why does he want to do this? Anyway, two "obvious"
solutions come to mind:
Without knowing his reasoning, I gave him the benefit of the doubt,
and still decided that he was wrong. If he worked for my company, he
wouldn't write code that way a second time after the first code
review.
Even if it was what the requirements specification demanded?
There are no "non-icky" ways to do this. If it's a space issue
of some type, I will bet there are very few platforms where
sizeof(std::div_t) is greater than sizeof(double).
I can't really believe that it's a space issue, since a double
generally is the size of two int, and his input is two ints.
Putting the two values into a std::div_t would retain all the
integer bits with no loss, and still allow easy conversion to
a double if actually needed for some arcane purpose.
I wouldn't call computing a new value an "arcane purpose". And
if some external device is providing input in this format, then
you have to deal with it. The question is why the round trip.
Why does he want to go back to the original format?
--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34