
(decimal) 1.1 versus 1.1M


decimal d;

d = 1.1M;

OR

d = (decimal) 1.1;

Discussion

That 'M' suffix looks like something left over from C. It is not
self-evident what it is. I heard one lecturer say it means money.
WRONG.

d = (decimal) 1.1 is self-documenting.

Is there some standard that indicates which is the best way to code?

Paul S.
Nov 17 '05 #1
1.1M says that 1.1 is a decimal literal, similar to 1.1f (a float
literal), so no cast is required to store it in a decimal variable.

Plain 1.1 is a double literal, which has to be cast to decimal.
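
For illustration, the suffixes line up like this (a minimal sketch):

float f = 1.1f;       // 'f'/'F' marks a float literal
double d = 1.1;       // no suffix (or 'd'/'D') means double
decimal m = 1.1M;     // 'm'/'M' marks a decimal literal
// decimal bad = 1.1; // compile error: no implicit double-to-decimal conversion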

--
Adam Clauss
ca*****@tamu.edu

"Paul Sullivan" <pa*********************@worldnet.att.net> wrote in message
news:8c********************************@4ax.com...

decimal d;

d = 1.1M

OR

d= (decimal) 1.1

Discussioon

That 'M' suffix looks like something ledt over from C. It is not
self-evident what it is. I heard one lecturer say it means money.
WRONG.

d = (decimal) 1.1 is self documenting.

Is there some standard that indicates which is the best way to code??

Paul S.

Nov 17 '05 #2
I haven't checked, but I would hope that the optimizer would produce the
same code for both. If it doesn't, then the expression
d = 1.1M;
will produce more efficient code, since the constant will be stored as a
decimal.

Using
d = (decimal) 1.1;
will cause the constant to be stored as a double and converted to decimal
when it is used.

Like I said: I haven't checked if the optimizer takes care of this or not.
Hope so. If it does, then there is no effective difference.
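
If anyone wants to check, this is roughly how (assuming csc.exe and
ildasm.exe are on the path):

csc /optimize+ Test.cs
ildasm /text Test.exe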

--
--- Nick Malik [Microsoft]
MCSD, CFPS, Certified Scrummaster
http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not
representative of my employer.
I do not answer questions on behalf of my employer. I'm just a
programmer helping programmers.
--
"Paul Sullivan" <pa*********************@worldnet.att.net> wrote in message
news:8c********************************@4ax.com...

decimal d;

d = 1.1M

OR

d= (decimal) 1.1

Discussioon

That 'M' suffix looks like something ledt over from C. It is not
self-evident what it is. I heard one lecturer say it means money.
WRONG.

d = (decimal) 1.1 is self documenting.

Is there some standard that indicates which is the best way to code??

Paul S.

Nov 17 '05 #3
I would think that the code would be different, depending upon whether
1.1 can be exactly represented in a double. Remember that

double d = 1.1;

does not guarantee that d will equal exactly 1.1. It will equal the
closest approximation available in the floating point representation
for the value 1.1. I would think that this would then mean that

decimal e = (decimal)1.1;

sets e to the best decimal approximation of the double value, which is
itself only the best double approximation of 1.1. In other words

decimal e = (decimal)1.1;
if (e == 1.1M) ...

does not, in my opinion, guarantee that the target of the "if"
statement will execute.
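
A quick way to settle it (a sketch - I haven't run this):

decimal e = (decimal)1.1;
decimal m = 1.1M;
// If the round trip through double loses anything, this prints False
Console.WriteLine(e == m);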

Of course, I'm open to being shown wrong. :)

Nov 17 '05 #4
Okay, I have understood (or perhaps have been laboring under a
misconception) that decimal was, at the least, far superior to double at
representing values without the sort of precision errors you describe -
if not in fact perfect. I assumed decimal was essentially a BCD
implementation, sort of like a bigint with an implied decimal point:
takes up more memory, calculations are slower, but you can count on it
being exact.
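
For instance, I'd expect something like this to expose the raw
representation (an untested sketch using Decimal.GetBits):

int[] bits = decimal.GetBits(1.1M);
// bits[0..2] hold a 96-bit integer (low/mid/high words);
// bits[3] packs the sign bit and the power-of-ten scale factor
Console.WriteLine("{0:X8} {1:X8} {2:X8} {3:X8}",
    bits[0], bits[1], bits[2], bits[3]);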

Can someone definitively point me to info that says I'm right or wrong about
this? I don't remember where I picked it up to be honest.

--Bob

"Bruce Wood" <br*******@canada.com> wrote in message
news:11**********************@z14g2000cwz.googlegr oups.com...
I would think that the code would be different, depending upon whether
1.1 can be exactly represented in a double. Remember that

double d = 1.1;

does not guarantee that d will equal exactly 1.1. It will equal the
closest approximation available in the floating point representation
for the value 1.1. I would think that this would then mean that

decimal e = (decimal)1.1;

would mean that e should be set to the best approximation of the double
value, which is the best approximation of 1.1 in double format. In
other words

decimal e = (decimal)1.1;
if (e == 1.1M) ...

does not, in my opinion, guarantee that the target of the "if"
statement will execute.

Of course, I'm open to being shown wrong. :)

Nov 17 '05 #5
Nick Malik [Microsoft] <ni*******@hotmail.nospam.com> wrote:
> I haven't checked but I would hope that the optimizer would produce the
> same code for both. If it doesn't, then the expression
> d = 1.1M;
> will produce more efficient code, since the constant will be stored as a
> decimal.
>
> Using
> d = (decimal) 1.1;
> will cause the constant to be stored as a double and converted to decimal
> when it is used.

I'd have thought that too, but using ildasm shows that the C# compiler
(rather than the JITter) converts it to a decimal at compile-time.

Compile the following code:

class Test
{
    static void Main()
    {
        decimal d1 = (decimal)1.1;
        decimal d2 = 1.1m;
    }
}

and then run ildasm on it. Both decimals are loaded using the following
code:

IL_0000:  ldc.i4.s   11    // lo word of the 96-bit integer
IL_0002:  ldc.i4.0         // mid word
IL_0003:  ldc.i4.0         // hi word
IL_0004:  ldc.i4.0         // isNegative = false
IL_0005:  ldc.i4.1         // scale = 1
IL_0006:  newobj instance void [mscorlib]System.Decimal::.ctor
          (int32, int32, int32, bool, unsigned int8)
// i.e. the integer 11 scaled by 10^-1 = 1.1

This surprises me somewhat - I haven't gone into whether or not it can
make any semantic difference, but I wouldn't be surprised if it did.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #6
I don't think that you can talk about decimal or double being more or
less accurate one than the other. They're very different
representations. My point was not that decimal is less accurate than
double, or vice versa, but that they represent values differently and
so conversions between the two may lose information.

To answer your question, the C# language spec states that decimal is a
128-bit value, so its precision is limited. In particular, with
decimal, the larger the value you try to store the fewer digits you
have after the decimal place. This is not the case with double.

Double, however, has less precision overall than decimal. A double is
only a 64-bit value, so it runs out of digits of precision much more
quickly. However, you can represent huge values and still have the same
number of digits of precision as with values near 1.

For example, I concocted a little sample program that demonstrates some
double-to-decimal loss. I had to use a value with lots of decimal
places: my worry about 1.1 in the previous post turns out not to apply
to that value, since both decimal and double can represent 1.1 closely
enough that nothing is lost in the conversion. Instead, I used (what I
remember as) PI. My apologies to mathematicians if my memory has faded
over the years. (Yes, I was too lazy to look it up. :)

public static void Main(string[] argc)
{
    decimal pi = 3.141592653589793238462643383279M;
    double d = 3.141592653589793238462643383279;
    decimal de = (decimal)d;
    Console.WriteLine(String.Format(
        "The decimal value is {0}, PI is {1}", de, pi));
}

The output from this is:

The decimal value is 3.14159265358979, PI is
3.1415926535897932384626433833

As you can see the double converted to decimal lost a bunch of
precision off the end of the value. So, I ran another test:

public static void Main(string[] argc)
{
    decimal pi = 3.141592653589793238462643383279M;
    decimal de = (decimal)3.141592653589793238462643383279;
    Console.WriteLine(String.Format(
        "The decimal value is {0}, PI is {1}", de, pi));
}

The results were, predictably, identical. On the line that starts
"decimal de =", the compiler first converts the 3.14159... value to a
double format, since it has no "M" suffix. The cast then converts the
double value to a decimal format and stores it in de. However, in
converting the literal to a double, a lot of precision was lost.

Again, this won't matter except for values with a lot of precision, or
very large values.

Nov 17 '05 #7
I think I can explain this. I ran ildasm on my second sample program.
The results look like this:

IL_0000:  ldc.i4     0x41b65f29   // lo word
IL_0005:  ldc.i4     0xb143885    // mid word
IL_000a:  ldc.i4     0x6582a536   // hi word
IL_000f:  ldc.i4.0                // isNegative = false
IL_0010:  ldc.i4.s   28           // scale = 28 digits after the point
IL_0012:  newobj instance void [mscorlib]System.Decimal::.ctor
          (int32, int32, int32, bool, unsigned int8)
IL_0017:  stloc.0
IL_0018:  ldc.i4     0xe76a2483   // lo word
IL_001d:  ldc.i4     0x11db9      // mid word
IL_0022:  ldc.i4.0                // hi word
IL_0023:  ldc.i4.0                // isNegative = false
IL_0024:  ldc.i4.s   14           // scale = 14: only the 15 significant
                                  // digits that survived the double
IL_0026:  newobj instance void [mscorlib]System.Decimal::.ctor
          (int32, int32, int32, bool, unsigned int8)

Notice that in both cases the compiler uses the decimal constructor,
but it passes two different values into the constructor. The second
value is obviously truncated.

This means that it must be the compiler that converts the literal to a
double and then converts that double to a decimal, in order to get the
initial value for the decimal "de" in my code. This is a logical
optimization, since the compiler is quite capable of doing those
conversions itself rather than leaving them to the runtime.

In your case, Jon, all that happened was that 1.1M and 1.1 converted
from double to decimal yielded the same bit patterns.
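
A sketch that would confirm this, comparing the raw bits:

int[] a = decimal.GetBits(1.1M);
int[] b = decimal.GetBits((decimal)1.1);
for (int i = 0; i < 4; i++)
    Console.WriteLine("{0:X8} {1:X8}", a[i], b[i]);  // rows should match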

Nov 17 '05 #8
Bruce Wood <br*******@canada.com> wrote:
> I don't think that you can talk about decimal or double being more or
> less accurate one than the other. They're very different
> representations. My point was not that decimal is less accurate than
> double, or vice versa, but that they represent values differently and
> so conversions between the two may lose information.

I agree that they're different, but not quite as different as you seem
to think.

Thinking about it a bit, I *suspect* that all doubles within the
decimal range can be exactly represented with a decimal (due to the
base of decimal including the base of double as a factor), but I'd need
to go through the maths to check.

> To answer your question, the C# language spec states that decimal is a
> 128-bit value, so its precision is limited. In particular, with
> decimal, the larger the value you try to store the fewer digits you
> have after the decimal place. This is not the case with double.

Yes it is - if you store a very large number in a double, you'll get
very little precision in absolute terms. When you get to *really* large
numbers, you don't even get *integer* precision.
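
For instance (a quick sketch - 2^53, around 9.0e15, is where double stops
resolving consecutive integers):

double big = 1e17;                    // well past 2^53
Console.WriteLine(big + 1.0 == big);  // True: adjacent integers collapse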

> Double, however, has less precision overall than decimal. A double is
> only a 64-bit value, so it runs out of digits of precision much more
> quickly. However, you can represent huge values and still have the same
> number of digits of precision as with values near 1.

Same number of digits of precision, but not the same number of digits
*after the decimal place*. Big difference. (Decimal still has the same
number of digits of precision with large numbers as with small numbers
too.)

The same is true for decimal though - it always has 28/29 digits of
precision, however large the number is. Double will always have 15/16
digits of precision (IIRC - around that, anyway).
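
A one-liner that makes the difference visible (sketch):

Console.WriteLine(1.0 / 3.0);    // ~15/16 significant digits
Console.WriteLine(1.0M / 3.0M);  // 0.3333333333333333333333333333 (28 digits)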

This shouldn't be surprising, as the size of the mantissa stays the
same throughout the range - 52 bits for double, 96 bits for decimal.
(Normalisation gives double an extra implicit bit of precision for most
double values, but that's a bit of a side issue.)

The big difference between the two types is the range of exponents
which are available - decimal keeps the decimal point within the
integer represented by the mantissa, or *just* to one end of it. Double
allows it to be miles away (in either direction), letting you represent
much bigger and much smaller numbers - but with that smaller mantissa.
(There's no particular reason why decimal couldn't represent exponents
with 7 full bits, rather than just most of 5 bits, but it's probably
not appropriate for most uses of decimal.)
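
A glance at the extremes shows the range difference (sketch):

Console.WriteLine(double.MaxValue);   // ~1.7977e308
Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335 (~7.9e28)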

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #9
> Yes it is - if you store a very large number in a double, you'll get
> very little precision in absolute terms. When you get to *really* large
> numbers, you don't even get *integer* precision.


Yes, of course you're right. Monday sluggishness in the grey cells. The
problem is more in the range of values than in the precision of said
values: double has a greater range but less precision; decimal has
greater precision but a smaller range, as you pointed out.

Nov 17 '05 #10
Jon and Bruce,

Thanks for the skinny on this issue. I appreciate it.

--Bob
Nov 17 '05 #11
