Hi
Does anybody know how to round a double value to a specific number
of digits after the decimal point?
A function like this:
RoundMyDouble (double &value, short numberOfPrecisions)
It would then update the value to numberOfPrecisions digits after the
decimal point.
Any help is appreciated.
Thanks.
md
Nov 22 '07
Richard Heathfield wrote:
*value = (int)(*value * p + 0.5) / (double)p;
Using "int(value+.5)" is the wrong way to round because it works
incorrectly with negative values.
The correct way is "std::floor(value+.5)".
Gordon Burditt wrote:
>pgp@medusas2:~/tmp$ gcc -std=c99 -pedantic -W -Wall -lm jn.c -o jn
>jn.c:16: warning: unused parameter 'argc'
>pgp@medusas2:~/tmp$ ./jn 0.33
>0.3300000000000000155: 0 decimals 0.0000000000000000
>0.3300000000000000155: 1 decimals 0.3000000000000000
>0.3300000000000000155: 2 decimals 0.3300000000000000
>0.3300000000000000155: 3 decimals 0.3300000000000000
>0.3300000000000000155: 4 decimals 0.3300000000000000
>0.3300000000000000155: 5 decimals 0.3300000000000000
>0.3300000000000000155: 6 decimals 0.3300000000000000
>0.3300000000000000155: 7 decimals 0.3300000000000000
>0.3300000000000000155: 8 decimals 0.3300000000000000
>0.3300000000000000155: 9 decimals 0.3300000000000000
>0.3300000000000000155: 10 decimals 0.3300000000000000
>0.3300000000000000155: 11 decimals 0.3300000000000000
>0.3300000000000000155: 12 decimals 0.3300000000000000
>0.3300000000000000155: 13 decimals 0.3300000000000000
>0.3300000000000000155: 14 decimals 0.3300000000000000
I'm disappointed with the misleading results here which suggest that the LHS has trailing cruft which the RHS doesn't, when both have such cruft.
None of the floating-point numbers printed here can be represented
exactly in binary floating point, except zero. For debugging
purposes, I recommend a better output format, say %300.200f,
enough to ensure that you have enough digits to exactly represent
the number you get as a result (this is quite a bit more than
FLT_DIG, DBL_DIG, or LDBL_DIG, as appropriate to the type being
used). Or perhaps someone is using base-10 floating point.
I think you're asking too much here. Note..
00111111 11010101 00011110 10111000 01010001 11101011 10000101 00011111
Exp = 1021 (-1)
011 11111101
Man = .10101 00011110 10111000 01010001 11101011 10000101 00011111
3.3000000000000002e-01
The first line is the 64-bit double, then I split it into exponent and
mantissa. The last line is the *printf format "%.16e", which will print
in decimal all the precision that a 64-bit double has.
Representing this value (or any double) wider than 17 decimal digits can
only yield nonsense.

Joe Wright
"Everything should be made as simple as possible, but not simpler."
-- Albert Einstein
Richard Heathfield wrote:
James Kuyper said:
....
>The convention I'm familiar with accepts that most floating point operations are inexact,
The convention I'm familiar with is that we believe what people say unless
or until we have reason to believe they are lying or mistaken. I believed
what the OP said. If you choose to believe that he was lying or mistaken,
that's entirely up to you.
I believe that he was using English as it is normally used, with
infinitely many implicit assumptions. Failing to state assumptions that
are conventionally left unstated would make him neither mistaken nor a liar.
Juha Nieminen said:
Richard Heathfield wrote:
> *value = (int)(*value * p + 0.5) / (double)p;
Using "int(value+.5)" is the wrong way to round because it works
incorrectly with negative values.
Please bear in mind that the above code was not mine. Earlier in that
article, I wrote:
"Elsethread, you were given this suggestion (suitably modified so that it
will actually compile, and with a driver added):"
and further on in that same article, I wrote:
"As you can see, it doesn't really round at all."
The correct way is "std::floor(value+.5)".
No, that doesn't work in C. So we adjust it to:
floor(value+.5)
which has the merit of compiling in C, and the disadvantage of being poor
style in C++.
But it still fails to round the value of a double to a specified number of
decimal places. On my system, floor(0.33 * 10.0 + 0.5) / 10.0 yields
0.299999999999999988898, which is wrong in the first decimal place.

Richard Heathfield <http://www.cpax.org.uk>
Email: http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" -- dmr 29 July 1999
James Kuyper said:
<snip>
You've made it clear that you're aware that the code depends upon C99
features, but as far as I can tell you've not yet attempted to compile
it with a compiler in a mode where that compiler supports those features
of C99.
Right. I don't have such a compiler. So I did the best I could with the
features provided by gcc extensions. It is, of course, now clear from the
results I got that those gcc extensions were not compatible with the C99
features used by the code.
<snip>

Richard Heathfield <http://www.cpax.org.uk>
Email: http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" -- dmr 29 July 1999
On Nov 22, 11:10 pm, Flash Gordon <s...@flashgordon.me.uk> wrote:
Which is precisely the problem. If it is wanted for display then there
are better ways, if it is wanted for further calculations then it is
very important that the OP understand why it is not possible in general.
I see the argument is still going on.
If taken to its logical conclusion then:
double x = 0.1;
is not possible to do in general. In that case we can all give up.
The rounding problem has a solution which works within the limitations
of binary floating point, like many things.
Bart
#include <stdio.h>
#include <math.h>
int main(void)
{
double x=0.1;
double y;
y = (x * 10.0 - 1.0);
printf("0.1*10 - 1.0 = %e\n", y);
return 0;
}
Bart said:
On Nov 22, 11:10 pm, Flash Gordon <s...@flashgordon.me.uk> wrote:
>Which is precisely the problem. If it is wanted for display then there are better ways, if it is wanted for further calculations then it is very important that the OP understand why it is not possible in general.
I see the argument is still going on.
If taken to its logical conclusion then:
double x = 0.1;
is not possible to do in general.
It's possible and legal to initialise x in this way. What we *can't* do
(and this is the trap that many programmers fall into) is now assume that
x stores a value that is exactly one-tenth.
In that case we can all give up.
No, there's no need to give up; we just have to be aware that we can't
always do with floating point representation the things we might like to
do, things that we *can* do with a textual representation. That doesn't
mean that floating point representation is useless. It just means that we
shouldn't expect it to do things that, by its very nature, it can't do.
Mathematicians have a similar problem, in that no sufficiently powerful
formal system can be both complete and consistent (both highly desirable
qualities), but that doesn't stop mathematicians from using mathematics.
It just means they have to be careful *how* they use it. Similarly,
computer programmers need to be careful how they use floating point
representation.
<snip>

Richard Heathfield <http://www.cpax.org.uk>
Email: http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" -- dmr 29 July 1999
>>pgp@medusas2:~/tmp$ gcc -std=c99 -pedantic -W -Wall -lm jn.c -o jn
>>jn.c:16: warning: unused parameter 'argc'
>>pgp@medusas2:~/tmp$ ./jn 0.33
>>0.3300000000000000155: 0 decimals 0.0000000000000000
>>0.3300000000000000155: 1 decimals 0.3000000000000000
>><snip>
I'm disappointed with the misleading results here which suggest that the LHS has trailing cruft which the RHS doesn't, when both have such cruft.
None of the floating-point numbers printed here can be represented exactly in binary floating point, except zero. For debugging purposes, I recommend a better output format, say %300.200f, enough to ensure that you have enough digits to exactly represent the number you get as a result (this is quite a bit more than FLT_DIG, DBL_DIG, or LDBL_DIG, as appropriate to the type being used). Or perhaps someone is using base-10 floating point.
I think you're asking too much here. Note..
No, I'm not. I want you to print out enough digits to get the
*EXACT* value of the result you actually got. This doesn't make
sense when you are interested in the value you are calculating, but
it does make sense when you are debugging floating-point rounding
problems. (You will never get an infinite repeating decimal taking
binary floating point values with finite mantissa bits and converting
them to decimal.)
>00111111 11010101 00011110 10111000 01010001 11101011 10000101 00011111
>Exp = 1021 (-1)
>011 11111101
>Man = .10101 00011110 10111000 01010001 11101011 10000101 00011111
>3.3000000000000002e-01
The first line is the 64-bit double, then I split it into exponent and mantissa. The last line is the *printf format "%.16e", which will print in decimal all the precision that a 64-bit double has.
But it's not enough to print the exact value you are getting. When
you are debugging rounding problems, why introduce *more* rounding
error that may obscure the problem you are trying to debug?
>Representing this value (or any double) wider than 17 decimal digits can only yield nonsense.
No, it's not nonsense. The value you *actually got* can be
represented exactly if you use enough digits. The value you should
have gotten in infinite-precision math, taking into account the
accuracy of the inputs, cannot be; there you have a point, outside the
context of debugging rounding issues.
On Sat, 24 Nov 2007 20:34:46 -0000, Gordon Burditt wrote:
>>>pgp@medusas2:~/tmp$ gcc -std=c99 -pedantic -W -Wall -lm jn.c -o jn
>>>jn.c:16: warning: unused parameter 'argc'
>>>pgp@medusas2:~/tmp$ ./jn 0.33
>>>0.3300000000000000155: 0 decimals 0.0000000000000000
>>>0.3300000000000000155: 1 decimals 0.3000000000000000
>>><snip>
I'm disappointed with the misleading results here which suggest that the LHS has trailing cruft which the RHS doesn't, when both have such cruft.
None of the floating-point numbers printed here can be represented exactly in binary floating point, except zero. For debugging purposes, I recommend a better output format, say %300.200f, enough to ensure that you have enough digits to exactly represent the number you get as a result (this is quite a bit more than FLT_DIG, DBL_DIG, or LDBL_DIG, as appropriate to the type being used). Or perhaps someone is using base-10 floating point.
I think you're asking too much here. Note..
No, I'm not. I want you to print out enough digits to get the *EXACT* value of the result you actually got. This doesn't make sense when you are interested in the value you are calculating, but it does make sense when you are debugging floating-point rounding problems. (You will never get an infinite repeating decimal taking binary floating point values with finite mantissa bits and converting them to decimal.)
>>00111111 11010101 00011110 10111000 01010001 11101011 10000101 00011111 Exp = 1021 (-1) 011 11111101 Man = .10101 00011110 10111000 01010001 11101011 10000101 00011111 3.3000000000000002e-01
The first line is the 64-bit double, then I split it into exponent and mantissa. The last line is the *printf format "%.16e", which will print in decimal all the precision that a 64-bit double has.
But it's not enough to print the exact value you are getting. When you are debugging rounding problems, why introduce *more* rounding error that may obscure the problem you are trying to debug?
>>Representing this value (or any double) wider than 17 decimal digits can only yield nonsense.
No, it's not nonsense. The value you *actually got* can be represented exactly if you use enough digits. The value you should have gotten in infinite-precision math, taking into account the accuracy of the inputs, cannot be; there you have a point, outside the context of debugging rounding issues.
I agree with the above.
The only place where there should be some approximation is
in the translation "float to string" (or "string to float"),
e.g.
1234567
fun(7, 12.9999999555556) = "13.0000000"
or when printing to stdout or to a file,
and this could [should?] be done in the "string", not in the float.
So I could say: "no approximation for float"!
On Nov 22, 10:12 am, Richard Heathfield <r...@see.sig.invalid> wrote:
Jim Langston said:
"Richard Heathfield" <r...@see.sig.invalid> wrote in message
news:v9******************************@bt.com...
Well, you have a syntax error right there: double &value isn't legal.
You presumably meant double *value.
Actually, double& value is legal in C++ but not C.
Right  but of course a crossposted article should "work" in all the
groups into which it's posted.
Certainly. But here, the original question is actually relevant
to both groups: for the most part, the C++ standard handles
floating point by saying: see the C standard (or alternatively,
by duplicating the wording of the C standard). I'd guess that
whoever posted the answer didn't notice that he was responding
to a crossposted article.
Here, the post worked in c.l.c++ but not in c.l.c, so either it
should not have been posted to c.l.c or the pointer syntax
should have been used rather than the reference syntax (at
which point, of course, there would have been howls of protest
from the c.l.c++ crowd, and perhaps rightly so).
Shit happens. Since the pointer syntax is legal in C++ as
well, you shouldn't hear too many complaints from that side; it's
what I use when I'm aware of a crossposting. (In this case, I
didn't happen to notice the thread until it was fairly long.
And just seeing the names of the posters signaled that it was a
crossposting: I've never seen you respond to a posting in
c.l.c++ which wasn't crossposted, and you're not the only one in
this case.)
This was cross posted to two newsgroups (c.l.c++ and c.l.c)
which is usually a bad idea just for this problem.
Indeed.
The languages aren't unrelated, and in this case, the original
question was quite appropriate for a crossposting. IMHO, it
doesn't seem to be asking too much to be somewhat tolerant with
regards to the actual syntax used in the answers, on the
grounds that whoever is responding may not be aware of the
crossposting. (Not to criticize your original response.
Nothing wrong with mentioning that the syntax isn't legal in
the language of the group in which you're posting, as long as
you go on to address the real issues, as you did. Let's just
not get hung up with it.)

James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.Cyrl'École, France, +33 (0)1 30 23 00 34
On Nov 23, 12:13 pm, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
Richard Heathfield wrote:
jacob navia said:
<snip>
2: As you can see, my solution works in your machine.
Either RH is not telling the truth (unlikely) or he is
using some minor error in the code to trip about.
If your code has an error, however minor, then it is broken,
in which case I suggest you post a fixed version. The
version you posted *does not work* on my system. Quite apart
from the fact that you are trying to solve an impossible
problem (the problem, remember, is that of rounding the
value of a double to a specified number of decimal places,
which simply can't be done), you are trying to solve it in a
way that produces bizarrely incorrect results on at least
one system.
Hm. I wonder if this might be a matter of interpreting the
problem.
The C standard says about sqrt() that it computes the
non-negative square root. C++ inherits this requirement. If
your interpretation of the rounding problem is correct and if
we transfer it to the other arithmetic operations, then there
can be no conforming implementations. In fact, even elementary
school arithmetic (+,-,*,/) cannot be done correctly under
that interpretation. However, that interpretation of the specs
is not the only possible.
A different interpretation of floating point computations is
that an operation (say multiplication, addition, sqrt, or
rounding to a given number of decimal places) should yield a
double (or float or whatever types are topical in the
corresponding newsgroup) that is closest to the exact
mathematical result. If I recall correctly, this is by and
large the position taken by IEEE754.
The problem is that neither the C nor the C++ standard require
such. And that both standards also allow greater precision in
the intermediate results. A liberty that is, in fact, used by
most compilers for at least one very common architecture today.
And which makes "rounding" using just floating point arithmetic
very, very difficult. (I remember seeing an implementation of
modf, a very long time ago, which used some trick involving
multiplication and division in a way to end up with the integral
part as a result of loss of precision. The person who was
porting the library to the 8086 was astonished to find that the
value written through the iptr argument wasn't an integer.)
When this (reasonable) interpretation is adopted,
I'm not too sure about the "reasonable" part. On an Intel
machine, a * b, where a and b are both double, does NOT give the
exact result, rounded to the nearest representable double. And
although Intel processors are pretty scarce in the milieu where
I work (the last Intel I actively programmed on was an 80386),
I've heard that they are still in use. (And other processors,
such as the AMD 64 bit processor on my home PC, also behave this
way.)
the problem of rounding to a fixed number of decimals is
solvable (and it is indeed not different from any other
computational problem). And if you don't adopt an
interpretation like that, floating point arithmetic in general
is "impossible".
I'd still leave it up to the implementation to handle the tricky
parts. Multiply by a power of 10, floor(), ceil() or round(),
and then divide by the same power, and you should get something
fairly close to a correct result (and if you're talking about
"rounding in base 10", close is all you're going to get).

James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.Cyrl'École, France, +33 (0)1 30 23 00 34
James Kanze wrote:
I'd still leave it up to the implementation to handle the tricky
parts. Multiply by a power of 10, floor(), ceil() or round(),
and then divide by the same power, and you should get something
fairly close to a correct result (and if you're talking about
"rounding in base 10", close is all you're going to get).
This is exactly what my function does. It is a close implementation of
rounding; I never claimed it was *exact*, since that is impossible!


jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique http://www.cs.virginia.edu/~lccwin32
On Sun, 25 Nov 2007 18:29:02 +0100, in comp.lang.c, jacob navia
<ja***@nospam.com> wrote:
>Richard wrote:
(trollish stuff)
>Of course I agree with your short description of these people.
Two trolls agreeing with each other is hardly interesting.

Mark McIntyre
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
Brian Kernighan
Mark McIntyre <ma**********@spamcop.net> writes:
On Sun, 25 Nov 2007 18:29:02 +0100, in comp.lang.c, jacob navia
<ja***@nospam.com> wrote:
>>Richard wrote:
(trollish stuff)
Facts of life with regard to C.L.C you mean.
>
>>Of course I agree with your short description of these people.
Two trolls agreeing with each other is hardly interesting.
Interesting enough for you to reply to, I notice. Of course, being in the
"small clique" which causes so much repugnance, you are indeed honour
bound to reply. In fairness, at least you don't tell people to RTFM as
frequently as your namesake Blumel.
jacob navia wrote:
James Kanze wrote:
>I'd still leave it up to the implementation to handle the tricky parts. Multiply by a power of 10, floor(), ceil() or round(), and then divide by the same power, and you should get something fairly close to a correct result (and if you're talking about "rounding in base 10", close is all you're going to get).
This is exactly what my function does. It is a close implementation of
rounding; I never claimed it was *exact*, since that is impossible!
No, your function does not use floor(), ceil(), or round() to calculate
floating point representations of the intermediate integral value. It
uses conversion to long long for that purpose. As a result it relies
upon nonportable assumptions about the relationship between the
precision of long long and the precision of long double. On
implementations where that assumption is invalid, your algorithm
unnecessarily produces results for some argument values that are less
accurate than they could be with a more appropriate algorithm. Note:
such an algorithm should actually use floorl(), ceill() or roundl(),
rather than floor(), ceil() and round(), for precisely the same reason
that your algorithm uses powl() rather than pow().
On Nov 21, 9:39 pm, md <mojtaba.da...@gmail.com> wrote:
Hi
Does anybody know how to round a double value to a specific number
of digits after the decimal point?
A double probably doesn't have a decimal point. Floating point values
are commonly represented in binary, not decimal. So what you are
asking for is generally impossible.
A function like this:
RoundMyDouble (double &value, short numberOfPrecisions)
It then updates the value with numberOfPrecisions after the decimal
point.
The real number which this function is intended to compute might not
be representable in the floating point format used by your double
type.
At best, you can only write this function such that it produces a
close approximation of that number. But still, this function is
silly, and serves no purpose.
Usually this type of rounding is done in two situations.
One situation is that you are doing some kind of scientific or
mathematic computing, and want to display results to a given decimal
precision. In that case, the rounding and truncation is handled in the
conversion of floating point values to text in the output routine. You
do not actually massage your data to do the rounding. The
number-to-string routine you use, whether it be within printf or C++
ostreams or whatever, will do the job of rendering a printed
representation of the number in decimal to the specified precision.
You never adjust the internal representation to achieve this.
Internally, you always keep the maximum precision afforded to you by
the machine. A rounding function like the above is of little use to you.
The second situation is that you are doing financial computing, and
need internally to have exact decimal-based arithmetic that follows
certain prescribed rounding rules. All intermediate results in
financial calculations must obey these rules. Whatever is printed in a
financial statement matches the internal representation. If your bank
book says that an account had 1234.53 dollars after a certain
transaction, it means exactly that. It doesn't mean there were
actually 1234.5321 dollars, which were printed to the nearest cent. In
this situation, it is simply inappropriate to be using floating-point
numbers, and so a routine which simulates decimal rounding over the
double type is also of no use.
On Nov 23, 3:26 pm, jacob navia <ja...@nospam.com> wrote:
I got tired of #including math.h and finding out that atof
wasn't there but in stdlib.h. So I put it in math.h in lcc-win32.
This is of course not the case with gcc, which has it in stdlib.h.
GCC has it in stdlib.h, because, like, this thing called the ISO C
standard wants it in stdlib.h.
James Kanze wrote:
On Nov 23, 12:13 pm, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
>Richard Heathfield wrote:
jacob navia said:
<snip>
>2: As you can see, my solution works in your machine. Either RH is not telling the truth (unlikely) or he is using some minor error in the code to trip about.
If your code has an error, however minor, then it is broken,
in which case I suggest you post a fixed version. The
version you posted *does not work* on my system. Quite apart
from the fact that you are trying to solve an impossible
problem (the problem, remember, is that of rounding the
value of a double to a specified number of decimal places,
which simply can't be done), you are trying to solve it in a
way that produces bizarrely incorrect results on at least
one system.
>Hm. I wonder if this might be a matter of interpreting the problem.
>The C standard says about sqrt() that it computes the non-negative square root. C++ inherits this requirement. If your interpretation of the rounding problem is correct and if we transfer it to the other arithmetic operations, then there can be no conforming implementations. In fact, even elementary school arithmetic (+,-,*,/) cannot be done correctly under that interpretation. However, that interpretation of the specs is not the only possible.
>A different interpretation of floating point computations is that an operation (say multiplication, addition, sqrt, or rounding to a given number of decimal places) should yield a double (or float or whatever types are topical in the corresponding newsgroup) that is closest to the exact mathematical result. If I recall correctly, this is by and large the position taken by IEEE754.
The problem is that neither the C nor the C++ standard require
such. And that both standards also allow greater precision in
the intermediate results. A liberty that is, in fact, used by
most compilers for at least one very common architecture today.
And which makes "rounding" using just floating point arithmetic
very, very difficult. (I remember seeing an implementation of
modf, a very long time ago, which used some trick involving
multiplication and division in a way to end up with the integral
part as a result of loss of precision. The person who was
porting the library to the 8086 was astonished to find that the
value written through the iptr argument wasn't an integer.)
>When this (reasonable) interpretation is adopted,
I'm not too sure about the "reasonable" part.
a) This interpretation is not the only reasonable alternative
interpretation. Unfortunately, the OP has not given us much to go on. I
just wanted to demonstrate that the assumption that the OP wanted the
rounding to yield _exact_ results (as opposed to all other arithmetic
operations) is unfounded.
b) [comp.lang.c++ only] Although C++ does not require floating point
arithmetic to conform to IEEE754, it recognizes the relevance of IEEE754 in
that the header <limits> provides compile-time means to check for
conformance: std::numeric_limits<T>::is_iec559 for a floating point type T
is supposed to be true if and only if T conforms to IEEE754.
On an Intel
machine, a * b, where a and b are both double, does NOT give the
exact result, rounded to the nearest representable double. And
although Intel processors are pretty scarce in the milieu where
I work (the last Intel I actively programmed on was an 80386),
I've heard that they are still in use. (And other processors,
such as the AMD 64 bit processor on my home PC, also behave this
way.)
Interesting. My laptop uses an Intel processor and my C++ implementation
claims that my floating point types conform to IEEE754. I guess, the
compiler does something funny and corrects for the insufficient machine
instructions.
>the problem of rounding to a fixed number of decimals is solvable (and it is indeed not different from any other computational problem). And if you don't adopt an interpretation like that, floating point arithmetic in general is "impossible".
I'd still leave it up to the implementation to handle the tricky
parts. Multiply by a power of 10, floor(), ceil() or round(),
and then divide by the same power, and you should get something
fairly close to a correct result (and if you're talking about
"rounding in base 10", close is all you're going to get).
Actually, without guarantees about the precision of the results, it can be
very difficult to judge whether approximate rounding is useful for a given
purpose. Suppose you are computing letter grades and the rules require you
to take a weighted average, round to one digit after the decimal point, and
then check whether the result is within, say, [0.7,1.3]. If you do not have
the guarantee that rounding and compiler interpretation of floating point
literals is done according to the same precision and floating point
rounding, you cannot really use floating point arithmetic (unless you
engage in numerical analysis). In those cases, I would be inclined to use a
decimal type. (And even if you have the necessary guarantees about
rounding, you still have to make sure that the excess precision allowed for
internal computations does not get in the way.)
The real problem is the OP who fails to provide enough information to get
meaningful help. As of now, all proposed rounding methods (and also all
suggestions saying that rounding should be put off until output) in this
thread are based on guesses as to what the OP needs the rounding for.
Best
Kai-Uwe Bux
jacob navia <ja***@nospam.com> wrote:
I got tired of #including math.h and finding out that atof
wasn't there but in stdlib.h. So I put it in math.h in lccwin.
Ok, so you've found yet another way for your toy to break ISO C
compatibility. Well done.
Richard
Richard Bos wrote:
jacob navia <ja***@nospam.com> wrote:
>I got tired of #including math.h and finding out that atof wasn't there but in stdlib.h. So I put it in math.h in lcc-win32.
Ok, so you've found yet another way for your toy to break ISO C
compatibility. Well done.
Richard
It is also in stdlib.h. It is in BOTH, and nothing
is written about not putting it in math.h.

jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique http://www.cs.virginia.edu/~lccwin32
James Kuyper wrote:
jacob navia wrote:
>James Kanze wrote:
>>I'd still leave it up to the implementation to handle the tricky parts. Multiply by a power of 10, floor(), ceil() or round(), and then divide by the same power, and you should get something fairly close to a correct result (and if you're talking about "rounding in base 10", close is all you're going to get).
>This is exactly what my function does. It is a close implementation of rounding; I never claimed it was *exact*, since that is impossible!
No, your function does not use floor(), ceil(), or round() to calculate
floating point representations of the intermediate integral value. It
uses conversion to long long for that purpose. As a result it relies
upon non-portable assumptions about the relationship between the
precision of long long and the precision of long double. On
implementations where that assumption is invalid, your algorithm
unnecessarily produces results for some argument values that are less
accurate than they could be with a more appropriate algorithm. Note:
such an algorithm should actually use floorl(), ceill() or roundl(),
rather than floor(), ceil() and round(), for precisely the same reason
that your algorithm uses powl() rather than pow().
That is the case. I changed the function to clean it up, and
its last version was:
#include <stdio.h>
#include <float.h>
#include <math.h>
double roundto(double value, unsigned digits)
{
long double v = value;
long double fv = fabs(value),p = powl(10.0L,digits);
if (fv > powl(10.0L,DBL_DIG) || digits > DBL_DIG)
return value;
return roundl(p*value)/p;
}

jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique http://www.cs.virginia.edu/~lccwin32
jacob navia said:
<snip>
I claim that this will deliver the best approximation to the
rounding to n decimal places, and I have never claimed otherwise.
If you do not agree, produce a better function.
As you ought to know already, I don't see any point in trying to solve a
problem that is inherently impossible for reasons that I have already
explained.
We need to know why we're rounding. If we're dealing with, say, currency
(or some analogous system), the proper solution is to do calculations in
an integer unit of which all other currency units are a multiple (e.g. for
Sterling, use pennies; in the USA, use cents; in Europe, use Euros), and
to establish a protocol for dealing with calculations that don't fit into
this process (e.g. interest calculations). If we're dealing with
calculations that simply require a neatening off for display purposes, on
the other hand, then the proper solution is to round the text
representation, not the value itself.
#include <stdio.h>
#include <float.h>
#include <math.h>
double roundto(double value, unsigned digits)
{
long double v = value;
long double fv = fabs(value),p = powl(10.0L,digits);
if (fv > powl(10.0L,DBL_DIG) || digits > DBL_DIG)
return value;
return roundl(p*value)/p;
}
That code is perfectly topical, of course - but it doesn't solve the
problem. Given this fact, the fact that it doesn't cater for those without
C99 compilers is of little consequence.
Data point: on my system, it gives very very very incorrect results (even
within the context that you've adopted: e.g. if I ask it to round 0.33 to
1dp I get 1.3543851449227162220), but then I don't have a C99 compiler,
merely a gcc implementation that provides non-C99-conforming extensions -
but clearly this is a separate issue, and one on which the opinions of
reasonable people are divided.
(For those who may well be thinking - and indeed have already expressed the
thought - that I should "get a C99 compiler then - or at least a compiler
that supports many C99 features", my position is this: Many professional C
programmers do not get to choose the implementation they are using. In
many software environments, the decision about which compiler to use was
made long ago for reasons that do not count "having the latest C99 stuff"
as being particularly important when measured against more important
stability criteria. It is therefore unwise to assume that other
programmers have access to C99 features. But the C99-ness of Mr Navia's
code is not the reason I think it fails to solve the problem. The reason I
think Mr Navia's code doesn't solve the problem is that I think the
problem as stated is insoluble.)

Richard Heathfield <http://www.cpax.org.uk>
Email: http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place"  dmr 29 July 1999
jacob navia wrote:
Richard Bos wrote:
>jacob navia <ja***@nospam.com> wrote:
>>I got tired of #including math.h and finding out that atof wasn't there but in stdlib.h. So I put it in math.h in lcc-win.
Ok, so you've found yet another way for your toy to break ISO C compatibility. Well done.
It is also in stdlib. It is in BOTH, and nothing
is written about not putting it in math.h
It depends on the cleverness of such an implementation.
If such implementation does not also have the magic to forget the
extra declaration of atof when stdlib.h was not included, it
violates 7.1.3. The following program is strictly conforming.
#include <math.h>
static int atof = 4;
int main()
{
return 0;
}
Ralf
Ralf Damaschke wrote:
jacob navia wrote:
>Richard Bos wrote:
>>jacob navia <ja***@nospam.com> wrote:
I got tired of #including math.h and finding out that atof wasn't there but in stdlib.h. So I put it in math.h in lcc-win. Ok, so you've found yet another way for your toy to break ISO C compatibility. Well done.
It is also in stdlib. It is in BOTH, and nothing is written about not putting it in math.h
It depends on the cleverness of such an implementation.
If such implementation does not also have the magic to forget the
extra declaration of atof when stdlib.h was not included, it
violates 7.1.3. The following program is strictly conforming.
#include <math.h>
static int atof = 4;
int main()
{
return 0;
}
Ralf
Use lcc -ansic. In that case the declaration in math.h disappears

jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique http://www.cs.virginia.edu/~lccwin32
jacob navia wrote:
Richard Bos wrote:
>jacob navia <ja***@nospam.com> wrote:
>>I got tired of #including math.h and finding out that atof wasn't there but in stdlib.h. So I put it in math.h in lcc-win.
Ok, so you've found yet another way for your toy to break ISO C compatibility. Well done.
Richard
It is also in stdlib. It is in BOTH, and nothing is written about not
putting it in math.h
So, if I write the following strictly conforming code:
#include <stdlib.h>
int atof = 3;
int *pAtof(void) { return &atof;}
Will it compile correctly under your implementation?
7.1.3p1:
— Each identifier with file scope listed in any of the following
subclauses (including the future library directions) is reserved for
use as a macro name and as an identifier with file scope in the same
name space if any of its associated headers is included.
James Kuyper wrote:
jacob navia wrote:
>Richard Bos wrote:
>>jacob navia <ja***@nospam.com> wrote:
I got tired of #including math.h and finding out that atof wasn't there but in stdlib.h. So I put it in math.h in lcc-win.
Ok, so you've found yet another way for your toy to break ISO C compatibility. Well done.
Richard
It is also in stdlib. It is in BOTH, and nothing is written about not putting it in math.h
So, if I write the following strictly conforming code:
#include <stdlib.h>
int atof = 3;
int *pAtof(void) { return &atof;}
Will it compile correctly under your implementation?
7.1.3p1:
>— Each identifier with file scope listed in any of the following subclauses (including the future library directions) is reserved for use as a macro name and as an identifier with file scope in the same name space if any of its associated headers is included.
yes, use
lcc -ansic

jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique http://www.cs.virginia.edu/~lccwin32
jacob navia wrote:
James Kuyper wrote:
....
>No, your function does not use floor(), ceil(), or round() to calculate floating point representations of the intermediate integral value. It uses conversion to long long for that purpose. As a result it relies upon non-portable assumptions about the relationship between the precision of long long and the precision of long double. On implementations where that assumption is invalid, your algorithm unnecessarily produces results for some argument values that are less accurate than they could be with a more appropriate algorithm. Note: such an algorithm should actually use floorl(), ceill() or roundl(), rather than floor(), ceil() and round(), for precisely the same reason that your algorithm uses powl() rather than pow().
That is the case. I changed the function to clean it up, and
its last version was:
#include <stdio.h>
#include <float.h>
#include <math.h>
double roundto(double value, unsigned digits)
{
long double v = value;
long double fv = fabs(value),p = powl(10.0L,digits);
if (fv > powl(10.0L,DBL_DIG) || digits > DBL_DIG)
return value;
return roundl(p*value)/p;
}
That's better. You've still got potential for overflow on the
multiplication, but to be honest that's true of a lot of my code too.
However, I write scientific software where the range of possible values
is known and validated. Therefore, I only need to provide overflow
protection for those operations where overflow is an actual possibility.
As this is a utility program, it should prevent overflow for any pair of
arguments for which overflow is possible and preventable - it doesn't.
Also, providing support for negative values of 'digits' would be
trivial, and would significantly improve what little usefulness this
routine has (while creating a need to prevent denormalization at the
multiplication).
Richard Heathfield wrote:
jacob navia said:
<snip>
>I claim that this will deliver the best approximation to the rounding to n decimal places, and I have never claimed otherwise.
If you do not agree, produce a better function.
As you ought to know already, I don't see any point in trying to solve a
problem that is inherently impossible for reasons that I have already
explained.
Are you asserting that the specification Jacob just described is
inherently impossible to implement? As far as I can see, the reasons
you've already explained do not apply to this specification.
I was under the impression that the only thing you could say against
this specification is that it doesn't match your own unconventionally
strict interpretation of the OP's specification.
"Richard Heathfield" <rj*@see.sig.invalid> wrote in message
news:RI******************************@bt.com...
We need to know why we're rounding. If we're dealing with, say, currency
(or some analogous system), the proper solution is to do calculations in
an integer unit of which all other currency units are a multiple (e.g. for
Sterling, use pennies; in the USA, use cents; in Europe, use Euros), and
to establish a protocol for dealing with calculations that don't fit into
this process (e.g. interest calculations). If we're dealing with
calculations that simply require a neatening off for display purposes, on
the other hand, then the proper solution is to round the text
representation, not the value itself.
Not many posting here seem to believe there are real and practical
reasons for rounding values to so many decimals (or rounding to a
nearest fraction, a related problem).
Currency seems the most understood, and storing values in floating
point as dollars/pounds/euros, and rounding intermediate calculations
to the nearest cent/penny (0.01) works perfectly well for a typical
shop or business invoice. For large banks adding up accounts for
millions of customers, government and so on, I'm sure they have their
specialist developers.
Another example is CAD (drawing tools) where input is inherently noisy
(23.423618182 mm) and the usual practice is to round ('snap') to the
nearest aesthetic value, depending on zoom factor and scale and so on,
so 23., 23.5, 23.42 and so on. Otherwise you would get all sorts of
skewy lines.
There is still a noise factor present (the errors we've been
discussing) but you would need to zoom in by factor of a billion to
see them. In typical printouts things look perfect. In fact it's
interesting to zoom in and see these errors come to life on the
screen.
Also often everything is stored as, say, millimetres, while the user
might be using inches, and rounding would need to be as inches (say
hundredths of an inch, which would be the nearest multiple of 0.254),
again an approximation but works well enough (this allows designs
created with different units to be combined).
Actually rounding for printing purposes is probably not done much
outside printf() and such functions. In fact perhaps it's because
printf() does round floating point numbers, and therefore shows a
value that is only an approximation, that gives rise to much
misunderstanding. Maybe it should indicate (with a trailing ? perhaps)
that the value printed is not quite right unless explicitly told to
round.
Bart
James Kuyper wrote:
jacob navia wrote:
>James Kuyper wrote:
...
>>No, your function does not use floor(), ceil(), or round() to calculate floating point representations of the intermediate integral value. It uses conversion to long long for that purpose. As a result it relies upon non-portable assumptions about the relationship between the precision of long long and the precision of long double. On implementations where that assumption is invalid, your algorithm unnecessarily produces results for some argument values that are less accurate than they could be with a more appropriate algorithm. Note: such an algorithm should actually use floorl(), ceill() or roundl(), rather than floor(), ceil() and round(), for precisely the same reason that your algorithm uses powl() rather than pow().
That is the case. I changed the function to clean it up, and its last version was:
#include <stdio.h>
#include <float.h>
#include <math.h>
double roundto(double value, unsigned digits)
{
long double v = value;
long double fv = fabs(value),p = powl(10.0L,digits);
if (fv > powl(10.0L,DBL_DIG) || digits > DBL_DIG)
return value;
return roundl(p*value)/p;
}
That's better. You've still got potential for overflow on the
multiplication, but to be honest that's true of a lot of my code too.
However, I write scientific software where the range of possible values
is known and validated. Therefore, I only need to provide overflow
protection for those operations where overflow is an actual possibility.
As this is a utility program, it should prevent overflow for any pair of
arguments for which overflow is possible and preventable - it doesn't.
I disagree.
The test fv > powl(10.0L, DBL_DIG) ensures that the absolute value
of "value" is less than 10 ^ 16. If "digits" is 15, the maximum
value of the multiplication can be 10 ^ 15 * 10 ^ 15 == 10 ^ 30,
a LOT less than the maximum value of double precision that is
DBL_MAX 1.7976931348623157e+308. Note that the calculations are done
in long double precision and LDBL_MAX 1.18973149535723176505e+4932L
so the overflow argument is even less valid in long double precision.
Other systems LDBL_MAX are even higher since they use 128 bits and not
80 bits as the 80x86. For systems where long double is equal to double
precision, the value is well within range anyway.
Hence, the multiplication can't overflow.
If we accepted negative values, the division of 10 ^ 30 by 10 ^ -15 ->
10 ^ 45, still LESS than the value of DBL_MAX. Hence the division can't
overflow, and in my function I do not accept negative values so the
division result will be always less than 10 ^ 30.
The rounding of long double to double can't overflow either since it is
always less than 10 ^ 308.
Also, providing support for negative values of 'digits' would be
trivial, and would significantly improve what little usefulness this
routine has (while creating a need to prevent denormalization at the
multiplication).

jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique http://www.cs.virginia.edu/~lccwin32
James Kuyper wrote:
jacob navia wrote:
....
>It is also in stdlib. It is in BOTH, and nothing is written about not putting it in math.h
So, if I write the following strictly conforming code:
#include <stdlib.h>
That should, of course, have been <math.h>. I must not have been fully
awake yet.
jacob navia wrote:
James Kuyper wrote:
....
>That's better. You've still got potential for overflow on the multiplication, but to be honest that's true of a lot of my code too. However, I write scientific software where the range of possible values is known and validated. Therefore, I only need to provide overflow protection for those operations where overflow is an actual possibility. As this is a utility program, it should prevent overflow for any pair of arguments for which overflow is possible and preventable - it doesn't.
I disagree.
The test fv > powl(10.0L, DBL_DIG) ensures that the absolute value
of "value" is less than 10 ^ 16. If "digits" is 15, the maximum
value of the multiplication can be 10 ^ 15 * 10 ^ 15 == 10 ^ 30,
Sorry, for some reason I was confusing DBL_DIG with DBL_MAX_10_EXP. I
use neither macro frequently enough to have memorized which one is
which; I should have checked before I said anything.
Bart wrote:
....
Not many posting here seem to believe there are real and practical
reasons for rounding values to so many decimals (or rounding to a
nearest fraction, a related problem).
Incorrect. What I believe is that the real and practical reasons tend to
fall into two categories:
a) Conversion of floating point numbers to digit strings, usually for
output.
b) Calculations that should, properly, be carried out in fixed-point
arithmetic. In the absence of direct language support for fixed-point,
it should be emulated by the programmer using, for instance, an integer
to represent 1000 times the actual value, if that value is to be stored
with 3 digits after the decimal place. All of the examples you gave
should fall into this second category.
There's probably at least one additional category, but I can't think of
any right now.
On Nov 26, 1:16 pm, James Kuyper <jameskuy...@verizon.net> wrote:
Bart wrote:
...
Not many posting here seem to believe there are real and practical
reasons for rounding values to so many decimals (or rounding to a
nearest fraction, a related problem).
Incorrect.
As I said..
>What I believe is that the real and practical reasons tend to
fall into two categories:
a) Conversion of floating point numbers to digit strings, usually for
output.
b) Calculations that should, properly, be carried out in fixed-point
arithmetic.
....
>All of the example you gave should fall into this second category.
But it isn't necessary. The examples were from actual code that worked
well.
If I invest $1000 at 5.75% for 5 years I will get
$1322.51887874443359375 at the end. If my interest calculating
function rounds that to 1322.51 (rounding down in this case) so that
the user of my function will see 1322.510000.. at most precision
settings he prints at, that seems perfectly acceptable.
But it seems this thread is less concerned about the 0.008878.. cents
he's not seeing, than about the million billionth of a cent that the
51 cents differs from exactly 51 cents.
Bart
James Kuyper <ja*********@verizon.net> writes:
Richard Heathfield wrote:
>jacob navia said:
<snip>
>>I claim that this will deliver the best approximation to the rounding to n decimal places, and I have never claimed otherwise.
If you do not agree, produce a better function.
As you ought to know already, I don't see any point in trying to solve a problem that is inherently impossible for reasons that I have already explained.
Are you asserting that the specification Jacob just described is
inherently impossible to implement? As far as I can see, the reasons
you've already explained do not apply to this specification.
I was under the impression that the only thing you could say against
this specification is that it doesn't match your own unconventionally
strict interpretation of the OP's specification.
"unconventionally strict". Nice. Euphemism prize of the year for
Heathfield's almost surreal hatred of everything Jacob proposes.
In article <qq************@news.individual.net>,
Richard <rg****@gmail.com> wrote:
....
>Are you asserting that the specification Jacob just described is inherently impossible to implement? As far as I can see, the reasons you've already explained do not apply to this specification.
I was under the impression that the only thing you could say against this specification is that it doesn't match your own unconventionally strict interpretation of the OP's specification.
"unconventionally strict". Nice. Euphemism prize of the year for Heathfield's almost surreal hatred of everything Jacob proposes.
Yes, very good! And keep in mind that Heathfield claims to never attack
anyone (and never to be attacked, which this post disproves, twice).
James Kuyper said:
Richard Heathfield wrote:
>jacob navia said:
<snip>
>>> If you do not agree, produce a better function.
As you ought to know already, I don't see any point in trying to solve a problem that is inherently impossible for reasons that I have already explained.
Are you asserting that the specification Jacob just described is
inherently impossible to implement?
No, I'm only asserting that the original problem, as stated, is impossible
to solve.
<snip>

Richard Heathfield <http://www.cpax.org.uk>
Email: http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place"  dmr 29 July 1999
James Kuyper wrote:
>#include <stdlib.h>
That should, of course, have been <math.h>. I must not have
been fully awake yet.
Yes, and the atof variable should not have external linkage.
Just above the text you quoted the standard says:
— All identifiers with external linkage in any of the following
subclauses (including the future library directions) are always
reserved for use as identifiers with external linkage.
Ralf
jacob navia wrote:
Ralf Damaschke wrote:
>jacob navia wrote:
[About atof declaration put in math.h in lcc-win]
>>It is also in stdlib. It is in BOTH, and nothing is written about not putting it in math.h
It depends on the cleverness of such an implementation. If such implementation does not also have the magic to forget the extra declaration of atof when stdlib.h was not included, it violates 7.1.3.
[...]
Use lcc -ansic.
Thanks, I won't. The point was that there actually is something
"written about not putting it in math.h".
Ralf
Richard Heathfield wrote:
James Kuyper said:
>Richard Heathfield wrote:
>>jacob navia said:
<snip>
[Reinstating relevant snipped text]
>>>I claim that this will deliver the best approximation to the rounding to n decimal places, and I have never claimed otherwise.
>>>If you do not agree, produce a better function. As you ought to know already, I don't see any point in trying to solve a problem that is inherently impossible for reasons that I have already explained.
Are you asserting that the specification Jacob just described is inherently impossible to implement?
No, I'm only asserting that the original problem, as stated, is impossible
to solve.
The original problem as stated was, IMO, probably not intended to be
read with the unconventionally strict interpretation you're using. I
believe that Jacob is correctly describing the problem as clearly
expressed by the OP.
Your interpretation of that request is quite different from the one that
Jacob was providing, and when he challenged anyone to suggest a better
function, he clearly was asking for a better solution to his specified
problem, not for a better solution to your overly strict interpretation
of the OP's request.
Therefore, your comment that you "don't see any point in trying to solve
a problem that was inherently impossible" was irrelevant in context. You
could have wasted time explaining yet again that you don't consider his
specification to be a correct interpretation of the OP's request, but
that's not what you chose to do.
Someone who was expecting relevance would reasonably but incorrectly
interpret your comment as asserting that the problem as Jacob specified
it is impossible to solve. If you insist on throwing in irrelevant
comments, you should be more careful about choosing your words to
prevent them from being misunderstood as if they were relevant.
In article <NQB2j.14278$Mg1.2787@trndny03>,
James Kuyper <ja*********@verizon.net> wrote:
>The original problem as stated was, IMO, probably not intended to be read with the unconventionally strict interpretation you're using. I believe that Jacob is correctly describing the problem as clearly expressed by the OP.
Hard to say, since the OP has not responded to requests for
clarifications. But for whatever it's worth, in my opinion,
Richard's interpretation is more likely to be the correct one.
I base this partly on the two responses that the OP did make,
which did not acknowledge the impossibility of exact rounding
and instead appeared to repeat the request for exact rounding.

"There are some ideas so wrong that only a very intelligent person
could believe in them."  George Orwell
On Nov 21, 9:39 pm, md <mojtaba.da...@gmail.com> wrote:
Hi
Does any body know, how to round a double value with a specific number
of digits after the decimal points?
A function like this:
RoundMyDouble (double &value, short numberOfPrecisions)
It then updates the value with numberOfPrecisions after the decimal
point.
This sort of works most of the time:
#include <math.h>
double fround(const double nn, const unsigned d)
{
const long double n = nn;
const long double ld = (long double) d;
return (double) (floorl(n * powl(10.0L, ld) + 0.5L) / powl(10.0L, ld));
}
#ifdef UNIT_TEST
#include <stdio.h>
#include <stdlib.h>
#include <float.h>
int main(void)
{
double pi = 3.1415926535897932384626433832795;
unsigned digits;
for (digits = 0; digits <= DBL_DIG; digits++) {
printf("Rounding by printf gives: %20.*f\n",
digits, pi);
printf("Rounding approximation by function gives: %20.*f\n",
DBL_DIG, fround(pi, digits));
}
return 0;
}
#endif
/*
C:\tmp>cl /W4 /Ox /DUNIT_TEST round.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762
for 80x86
Copyright (C) Microsoft Corporation. All rights reserved.
round.c
Microsoft (R) Incremental Linker Version 8.00.50727.762
Copyright (C) Microsoft Corporation. All rights reserved.
/out:round.exe
round.obj
C:\tmp>round
Rounding by printf gives: 3
Rounding approximation by function gives: 3.000000000000000
Rounding by printf gives: 3.1
Rounding approximation by function gives: 3.100000000000000
Rounding by printf gives: 3.14
Rounding approximation by function gives: 3.140000000000000
Rounding by printf gives: 3.142
Rounding approximation by function gives: 3.142000000000000
Rounding by printf gives: 3.1416
Rounding approximation by function gives: 3.141600000000000
Rounding by printf gives: 3.14159
Rounding approximation by function gives: 3.141590000000000
Rounding by printf gives: 3.141593
Rounding approximation by function gives: 3.141593000000000
Rounding by printf gives: 3.1415927
Rounding approximation by function gives: 3.141592700000000
Rounding by printf gives: 3.14159265
Rounding approximation by function gives: 3.141592650000000
Rounding by printf gives: 3.141592654
Rounding approximation by function gives: 3.141592654000000
Rounding by printf gives: 3.1415926536
Rounding approximation by function gives: 3.141592653600000
Rounding by printf gives: 3.14159265359
Rounding approximation by function gives: 3.141592653590000
Rounding by printf gives: 3.141592653590
Rounding approximation by function gives: 3.141592653590000
Rounding by printf gives: 3.1415926535898
Rounding approximation by function gives: 3.141592653589800
Rounding by printf gives: 3.14159265358979
Rounding approximation by function gives: 3.141592653589790
Rounding by printf gives: 3.141592653589793
Rounding approximation by function gives: 3.141592653589793
*/
On Nov 26, 1:12 am, jacob navia <ja...@nospam.com> wrote:
Richard Bos wrote:
jacob navia <ja...@nospam.com> wrote:
I got tired of #including math.h and finding out that atof
wasn't there but in stdlib.h. So I put it in math.h in lcc-win.
Ok, so you've found yet another way for your toy to break ISO C
compatibility. Well done.
Richard
It is also in stdlib. It is in BOTH, and nothing
is written about not putting it in math.h
Does this text mean anything to you?
``Each identifier with file scope listed in any of the following
subclauses (including the future library directions) is reserved for
use as a macro name and as an identifier with file scope in the same name
space if any of its associated headers is included.''
The atof name is not reserved for use as a macro if only <math.h> is
included, because <math.h> is not associated with atof. So this is the
start of a valid translation unit:
#define atof (
#include <math.h>
Oops!
On Nov 26, 4:06 am, James Kuyper <jameskuy...@verizon.net> wrote:
jacob navia wrote:
Richard Bos wrote:
jacob navia <ja...@nospam.com> wrote:
>I got tired of #including math.h and finding out that atof wasn't there but in stdlib.h. So I put it in math.h in lcc-win.
Ok, so you've found yet another way for your toy to break ISO C
compatibility. Well done.
Richard
It is also in stdlib. It is in BOTH, and nothing is written about not
putting it in math.h
So, if I write the following strictly conforming code:
#include <stdlib.h>
int atof = 3;
How is this strictly conforming? Because atof is the name of a
standard library function, it's reserved as an identifier with
external linkage (regardless of what header is included).
Kaz Kylheku wrote:
On Nov 26, 4:06 am, James Kuyper <jameskuy...@verizon.net> wrote:
>jacob navia wrote:
>>Richard Bos wrote: jacob navia <ja...@nospam.com> wrote: I got tired of #including math.h and finding out that atof wasn't there but in stdlib.h. So I put it in math.h in lcc-win. Ok, so you've found yet another way for your toy to break ISO C compatibility. Well done. Richard It is also in stdlib. It is in BOTH, and nothing is written about not putting it in math.h
So, if I write the following strictly conforming code:
#include <stdlib.h> int atof = 3;
How is this strictly conforming? Because atof is the name of a
standard library function, it's reserved as an identifier with
external linkage (regardless of what header is included).
I was apparently less than fully awake when I wrote that. It should have
said:
#include <math.h>
static int atof=3;
Ralf Damaschke pointed out the same problem more than 5 hours ago. His
own response contained neither mistake.
What makes this mistake even more annoying is that I actually thought
about the external linkage issue, and at least at one point my draft
response contained an identifier with internal linkage. However, I
accidentally dropped that feature during a rewrite.
In article <NQB2j.14278$Mg1.2787@trndny03> James Kuyper <ja*********@verizon.net> writes:
Richard Heathfield wrote:
....
No, I'm only asserting that the original problem, as stated, is impossible
to solve.
The original problem as stated was, IMO, probably not intended to be
read with the unconventionally strict interpretation you're using. I
believe that Jacob is correctly describing the problem as clearly
expressed by the OP.
I do not think so. I have seen too many articles posted in this newsgroup
asking why (when 0.33333333 is rounded to two decimals), the result is
printed as 0.3299999999, and not as 0.33000000 (or something similar) ...

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
jacob navia wrote:
Richard Heathfield wrote:
>jacob navia said:
[...]
>>Of course I agree with your short description of these people. And one of their "word games" was precisely to say that it is impossible to write:
double roundto(double x, unsigned places);
No, of course you can *write* it (duh). It just won't do what you claim it does, that's all.
I claim that this will deliver the best approximation to the
rounding to n decimal places, and I have never claimed otherwise.
If you do not agree, produce a better function.
[code snipped]
Provide a better function to do what?
jacob, you persist in missing the point, even as you implicitly
acknowledge it.
The OP did *not* ask for a function to provide *the best approximation
of* a double value rounded to n decimal places. The OP asked for a
function to provide *a double rounded to n decimal places*. You have
simply assumed that, since what he literally asked for is impossible, he
must really be asking for something that's similar but possible.
How do you know what the OP really wants?
When someone asks a question with incompletely specified requirements,
as has happened here, we can ask for clarification (as we've done, but
the OP has so far failed to offer it), or we can make reasonable
assumptions. You've assumed that a close approximation is good enough.
You *might* be correct, but your solution, though it may meet the OP's
requirements when coupled with your assumption, is IMHO not particularly
useful. (Think about it for a moment. Given a function such that
roundto(3.14159, 2) yields *approximately* 3.14, most likely something
like 3.140000000000000124344978758017532527446746826171875, would *you*
have any real use for such a thing? You might provide it in your
library, but would you really use it in your own code?)
What I and most others here have assumed instead is that what the OP
really wants is something *useful*. Given this assumption (which I
acknowledge is just an assumption), the OP needs to step back a bit and
rethink the problem. He probably really wants a *textual*
representation of "3.14", which is exact, rather than a floating-point
representation that can only be approximate, and whose only likely
purpose is to produce the exact textual representation anyway. The
point I suspect he's missing is that the original floating-point value
(an approximation of 3.14159) is just as useful as an approximation of
3.14 for the purpose of producing the string or output "3.14".
If you choose to make an assumption about what the OP really wants,
that's fine. You might even be correct. What I find annoying is your
stubborn insistence that no other assumption is possible.
There's another possibility that I don't recall anyone mentioning.
Perhaps this is a homework assignment, and perhaps it's the instructor
who fails to realize that decimal rounding of binary floating-point
values is not useful.

Keith Thompson (The_Other_Keith) <ks***@mib.org>
Looking for software development work in the San Diego area.
"We must do something. This is something. Therefore, we must do this."
 Antony Jay and Jonathan Lynn, "Yes Minister"
>jacob, you persist in missing the point, even as you implicitly
>acknowledge it.
>The OP did *not* ask for a function to provide *the best approximation of* a double value rounded to n decimal places. The OP asked for a function to provide *a double rounded to n decimal places*. You have simply assumed that, since what he literally asked for is impossible, he must really be asking for something that's similar but possible.
It is possible to provide a rounded double that will have N decimal
places and be rounded exactly (when represented in binary floating
point). Round to the nearest multiple of 2**(-N). (** is an
exponentiation operator, which C doesn't have and mathematics uses
superscript for, which is a bit difficult in ASCII text.) When
converted to decimal, it will have N decimal places. It will also
be representable exactly provided you've got enough mantissa bits.
However, I seriously doubt that anyone would actually ask for this,
except as a puzzle or to win a bet.
>There's another possibility that I don't recall anyone mentioning. Perhaps this is a homework assignment, and perhaps it's the instructor who fails to realize that decimal rounding of binary floatingpoint values is not useful.
Unfortunately, I think fixing this problem would exhaust the world
supply of cluebats.
On Nov 27, 1:11 am, Keith Thompson <ks...@mib.org> wrote:
>...is IMHO not particularly
useful. (Think about it for a moment. Given a function such that
roundto(3.14159, 2) yields *approximately* 3.14, most likely something
like 3.140000000000000124344978758017532527446746826171875, would *you*
have any real use for such a thing? You might provide it in your
library, but would you really use it in your own code?)
Definitely. The error is not significant, I can live with it.
There are many reasons why this kind of rounding is useful in the real
world, often to deal with noise reduction. There seems to be a
remarkable lack of imagination from many contributors here who think
rounding is only useful in printing numbers.
Having a round(x) that was magically exact wouldn't change anything
because nobody would know what to do with it!
But round(x) isn't exact, although neither is, say, reciprocal(x), so
that would be banned too.
>who fails to realize that decimal rounding of binary floating-point values is not useful.
By rounding, I can compare my noisy data with yours; if the difference
is less than some tolerance, it can be considered the same!
And it can look cleaner when printed without having to depend on
highly specific printf formats. If I know my data doesn't contain more
than 3 decimals of info, but might be printed with 6 decimals,
rounding will clean it up. Maybe I don't even know who will print it
and in what format.
Bart
Bart said:
On Nov 27, 1:11 am, Keith Thompson <ks...@mib.org> wrote:
>>...is IMHO not particularly useful. (Think about it for a moment. Given a function such that roundto(3.14159, 2) yields *approximately* 3.14, most likely something like 3.140000000000000124344978758017532527446746826171875, would *you* have any real use for such a thing? You might provide it in your library, but would you really use it in your own code?)
Definitely. The error is not significant, I can live with it.
There are many reasons why this kind of rounding is useful in the real
world, often to deal with noise reduction.
We don't call that "rounding", though. We call it "approximating". Nobody
denies its utility. But rounding it ain't.
<snip>

Richard Heathfield <http://www.cpax.org.uk>
Email: http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place"  dmr 29 July 1999 This discussion thread is closed Replies have been disabled for this discussion. Similar topics