Bytes IT Community

casting decimal to float to double error

Hi all,
The following C# example baffles me. My understanding of managed code in Visual .NET is that casts like this are safe, but this example seems to contradict that notion. If you look at the debugger info, the double dou is half filled with garbage bits instead of those bits being zeroed out appropriately.

using System;

namespace decimaltodouble {
    class Class1 {
        [STAThread]
        static void Main(string[] args) {
            decimal dec = 0.4695M;
            float fl = (float) dec;
            double dou = (double) fl;
            if (dou != 0.4695)
                Console.Write("bad cast, garbage bits");
            Console.Read();
        }
    }
}
Nov 15 '05 #1
8 Replies


> Hi all,
> the following c# example baffles me. my understanding of
> the managed code of visual .net is that all casts like
> this would be safe <snip>


This happens because you are casting a number between types of different sizes. If you were to read about how numbers are represented in IEEE 32-bit floating-point format, IEEE 64-bit floating-point format, and the 96-bit decimal format, you won't find it strange at all.

Basically, you have a finite number of combinations (because there is a finite number of bits) and an infinite number of, well, real numbers. So you have to "approximate" the number you really want with the closest number you can represent. And you have to admit, 0.469500005245209 is pretty damn close to 0.4695 (it's only about 0.000001% larger).
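That 0.469500005245209 figure is easy to reproduce. A quick sketch in Java (hypothetical class name; float and double use the same IEEE 754 formats as in C#):

```java
import java.util.Locale;

public class NearestFloat {
    public static void main(String[] args) {
        float f = 0.4695f;  // the nearest 32-bit float to 0.4695
        // Widen to double and print enough digits to expose the rounding.
        System.out.printf(Locale.US, "%.15f%n", (double) f);  // 0.469500005245209
        // The relative error is on the order of 1e-8, i.e. roughly 0.000001%.
        System.out.println(Math.abs((double) f - 0.4695) / 0.4695 < 1e-7);  // true
    }
}
```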

If you need any more detail, feel free to ask, or read it up on the web.
-JG
Nov 15 '05 #2

Quinn Kirsch wrote:
> Hi all,
> the following c# example baffles me. my understanding of
> the managed code of visual .net is that all casts like
> this would be safe <snip>


The bits actually are zeroed out, but that's the basis of your problem.

The problem boils down to the fact that:

((double) (float) 0.4695) != ((double) 0.4695)

The decimal type in your sample program is unnecessary to the problem.

Take a look at the output of this program:

====================================================
using System;

namespace ConsoleApplication1 {
    public class MainClass {

        static void Main(string[] args) {
            Console.WriteLine("         0.4695f is {0:x}",
                BitConverter.ToUInt32(BitConverter.GetBytes(0.4695f), 0));
            Console.WriteLine("(double) 0.4695f is {0:x}",
                BitConverter.ToUInt64(BitConverter.GetBytes((double) 0.4695f), 0));
            Console.WriteLine("         0.4695d is {0:x}",
                BitConverter.ToUInt64(BitConverter.GetBytes(0.4695d), 0));
        }
    }
}
====================================================

The output I get is:

====================================================
         0.4695f is 3ef0624e
(double) 0.4695f is 3fde0c49c0000000
         0.4695d is 3fde0c49ba5e353f
====================================================
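The same bit patterns can be cross-checked from Java with Float.floatToIntBits/Double.doubleToLongBits; a small sketch (hypothetical class name):

```java
public class BitPatterns {
    public static void main(String[] args) {
        // Raw IEEE 754 bit patterns, matching the BitConverter output above.
        System.out.println(Integer.toHexString(Float.floatToIntBits(0.4695f)));          // 3ef0624e
        System.out.println(Long.toHexString(Double.doubleToLongBits((double) 0.4695f))); // 3fde0c49c0000000
        System.out.println(Long.toHexString(Double.doubleToLongBits(0.4695)));           // 3fde0c49ba5e353f
    }
}
```

The widened float keeps the float's significand and pads the extra 29 significand bits with zeros, which is where the trailing zeros in the middle line come from.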

--
mikeb

Nov 15 '05 #3

Quinn Kirsch <an*******@discussions.microsoft.com> wrote:
> the following c# example baffles me. my understanding of
> the managed code of visual .net is that all casts like
> this would be safe <snip>


Basically you're casting the decimal to a float, which gives the
nearest float to 0.4695. You're then casting the float to double, which
will always preserve the value (as every float is exactly representable
in double) - but that's not necessarily the nearest *double* to 0.4695,
which is the thing you're next testing.
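The two halves of that explanation can be sketched in a few lines; Java shown here, with the same IEEE 754 semantics as the C# original (hypothetical class name):

```java
public class WideningDemo {
    public static void main(String[] args) {
        float fl = (float) 0.4695;  // nearest float to 0.4695
        double dou = fl;            // widening conversion: always value-preserving
        System.out.println(dou == fl);      // true  - nothing lost going float -> double
        System.out.println(dou == 0.4695);  // false - not the nearest *double* to 0.4695
    }
}
```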

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 15 '05 #4

I understand why these problems have all occurred in the past with our friends C and C++, but I thought that these types of problems were supposed to be corrected in Java or C#. I thought these languages were made to get rid of mistakes like mispointed pointers and uninitialized memory, but I guess they did not get it all.
-----Original Message-----
> the following c# example baffles me. my understanding of
> the managed code of visual .net is that all casts like
> this would be safe <snip>

Nov 15 '05 #5

Quinn Kirsch <an*******@discussions.microsoft.com> wrote:
> I understand why these problems have all occurred in the
> past with our friends C and C++, but I thought that these
> type of problems were supposed to be corrected Java or
> C# <snip>


I think you misunderstand either the aims of the language and casting
in particular, or you misunderstand floating point.

The reason you *need* a cast from double or decimal to float is
precisely *because* some data will almost certainly be lost. It's not a
mistake, it's the way it's meant to be.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 15 '05 #6

Oh, trust me, Java doesn't fix that problem either. What do you think
they added java.math.BigDecimal for? ;) In fact, C#'s solution to this
(decimal) is much nicer IMO than Java's since you don't have to say

(a.add(b)).subtract(c);

to say a + b - c. Not to mention the annoyingly verbose line:

BigDecimal dec = new BigDecimal("1.122132131235346");

Quinn Kirsch wrote:

| I understand why these problems have all occurred in the
| past with our friends C and C++, but I thought that these
| type of problems were supposed to be corrected Java or
| C#. I thought these languages were made to get rid
| mistakes like mispointed pointers and uninitialized
| memory, but I guess that they did not get it all.

--
Ray Hsieh (Ray Djajadinata) [SCJP, SCWCD]
ray underscore usenet at yahoo dot com

Nov 15 '05 #7

Ray Hsieh (Ray Djajadinata) <ch***@my.signature.com> wrote:
> Oh, trust me, Java doesn't fix that problem either. What do you think
> they added java.math.BigDecimal for? ;) <snip>


BigDecimal and decimal are somewhat different though - BigDecimal has
*arbitrary* precision, rather than the fixed precision of decimal. It
would be nice if both languages had the decimal type and both libraries
had the BigDecimal type (and BigInteger), but there we go...
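To make the comparison concrete, a minimal BigDecimal sketch (the values and class name are made up for illustration):

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // The verbose Java spelling of a + b - c, with exact decimal arithmetic.
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        BigDecimal c = new BigDecimal("0.05");
        System.out.println(a.add(b).subtract(c));  // 0.25, exactly
        // The same arithmetic in double picks up binary rounding error.
        System.out.println(0.1 + 0.2 - 0.05 == 0.25);  // false
    }
}
```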

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 15 '05 #8

Yeah, true that... although personally I have never needed the arbitrary
precision of BigDecimal -- I just need something that will compute right
(unlike float or double), so C#'s decimal is a good enough solution for me.

| It would be nice if both languages had the decimal type and both
| libraries had the BigDecimal type (and BigInteger), but there we go...

Yes indeed. It would be the correct thing to do too, since the Big Ones are meant to take care of the arbitrary-precision part of the deal (the fact that BigDecimal computes correctly is, IMHO, a "bonus"). I totally agree with you that Java should have its own decimal type.

This, plus the fact that it is a class located in java.math instead of java.lang, makes it way less obvious than it should be, and every once in a while you'll see otherwise experienced Java programmers doing stuff like funds transfers/withdrawals using double :)

--
Ray Hsieh (Ray Djajadinata) [SCJP, SCWCD]
ray underscore usenet at yahoo dot com

Nov 15 '05 #9
