
Why does Math.Sqrt not take a decimal?

Hi,
Why does Math.Sqrt() only accept a double as a parameter? I would think
it would be just as happy with a decimal (or int, or float, or ....). I can
easily convert back and forth, but I am interested in what is going on behind
the scenes and if there is some aspect of decimals that keeps them from being
used in this calculation.
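By "convert back and forth" I mean something like this (just a sketch, with a made-up variable name):

    decimal value = 2.25m;
    decimal root = (decimal)Math.Sqrt((double)value);   // out to double and back again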
Thanks!
Ethan

Ethan Strauss Ph.D.
Bioinformatics Scientist
Promega Corporation
2800 Woods Hollow Rd.
Madison, WI 53711
608-274-4330
800-356-9526
et***********@promega.com
Jun 27 '08 #1
13 Replies


Ethan Strauss <Et**********@discussions.microsoft.com> wrote:
> Why does Math.Sqrt() only accept a double as a parameter? I would think
> it would be just as happy with a decimal (or int, or float, or ....). I can
> easily convert back and forth, but I am interested in what is going on behind
> the scenes and if there is some aspect of decimals that keeps them from being
> used in this calculation.
Well, decimals are typically used when you're dealing with numbers
which are naturally and *exactly* represented as decimals - currency
being the most common example.
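The classic demonstration of the difference - just a quick sketch:

    Console.WriteLine(0.1 + 0.2 == 0.3);     // False - 0.1 has no exact binary (double) representation
    Console.WriteLine(0.1m + 0.2m == 0.3m);  // True  - decimal stores base-10 digits exactly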

The kind of number you're likely to take the square root of is the kind
of number you should probably be using double for instead.

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Jun 27 '08 #2

Ethan Strauss wrote:
> Why does Math.Sqrt() only accept a double as a parameter? I would think
> it would be just as happy with a decimal (or int, or float, or ....). I can
> easily convert back and forth, but I am interested in what is going on behind
> the scenes and if there is some aspect of decimals that keeps them from being
> used in this calculation.
Usually that type of function returns a value of the same type as its
argument.

Sqrt of a decimal will not return a true decimal (decimal is
supposed to be exact).

Sqrt of an int will definitely not return an int.

double Sqrt(decimal) and double Sqrt(int) would not follow the usual
practice for such functions.

decimal Sqrt(decimal) and int Sqrt(int) would give an impression of
exactness that is not correct.
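For example (a quick sketch):

    decimal two = 2m;
    decimal root = (decimal)Math.Sqrt((double)two);
    Console.WriteLine(root * root == two);   // False - the decimal result only looks exact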

Besides: when do you need to take the square root of 87 dollars ??

:-)

Arne
Jun 27 '08 #3

All of the variables used by the Math class are primitive types.

The CLR does not consider a Decimal to be a primitive type (the CLR does not
contain special IL instructions to handle decimal types). I would imagine
that is one of the reasons... maybe?

You can check out the definition of Decimal and you will see that it
implements/overrides just about all the math operators on it, including but
not limited to +, -, *, /, Round, Ceiling, Floor, etc. Many of these are also
included on the Math class, but the Decimal type provides its own
implementation.
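You can see that with ildasm - decimal arithmetic compiles to method calls
rather than IL arithmetic instructions (a sketch):

    decimal x = 10m, y = 3m;
    decimal q = x / y;               // a call to Decimal.op_Division, not an IL 'div' instruction
    decimal r = Decimal.Round(q, 2); // Decimal provides Round itself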

"Ethan Strauss" <Et**********@discussions.microsoft.comwrote in message
news:77**********************************@microsof t.com...
> Hi,
> Why does Math.Sqrt() only accept a double as a parameter? I would think
> it would be just as happy with a decimal (or int, or float, or ....). I can
> easily convert back and forth, but I am interested in what is going on behind
> the scenes and if there is some aspect of decimals that keeps them from being
> used in this calculation.
> Thanks!
> Ethan

Jun 27 '08 #4

Ethan,

Why do you think they ever created the double (or value representations like
that with other names)?

(In the beginning, all data was stored only as binary bytes, and a while
later as so-called binary-coded decimals.)

In my view, it was to do things like Math.Sqrt().

Cor

"Ethan Strauss" <Et**********@discussions.microsoft.comschreef in bericht
news:77**********************************@microsof t.com...
> Hi,
> Why does Math.Sqrt() only accept a double as a parameter? I would think
> it would be just as happy with a decimal (or int, or float, or ....). I can
> easily convert back and forth, but I am interested in what is going on behind
> the scenes and if there is some aspect of decimals that keeps them from being
> used in this calculation.
> Thanks!
> Ethan

Jun 27 '08 #5

On Jun 5, 1:44 am, Arne Vajhøj <a...@vajhoej.dk> wrote:
> Ethan Strauss wrote:
>> Why does Math.Sqrt() only accept a double as a parameter? I would think
>> it would be just as happy with a decimal (or int, or float, or ....). I can
>> easily convert back and forth, but I am interested in what is going on behind
>> the scenes and if there is some aspect of decimals that keeps them from being
>> used in this calculation.

> Usually that type of function returns a value of the same type as its
> argument.
>
> Sqrt of a decimal will not return a true decimal (decimal is
> supposed to be exact).
Decimal operations are exact/accurate in certain well-defined circumstances.
There are plenty of operations on decimal which won't give exact results:
1m / 3 for example. Likewise even addition - both 1e25 and 1e-25 are exactly
representable as decimals, but their sum isn't.
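A quick sketch of both cases:

    Console.WriteLine(1m / 3);            // 0.3333333333333333333333333333 - rounded
    decimal big = 1e25m, tiny = 1e-25m;
    Console.WriteLine(big + tiny == big); // True - the exact sum needs ~51 digits; decimal has 28-29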
> Sqrt of an int will definitely not return an int.
Indeed, but then you wouldn't really want to.
> double Sqrt(decimal) and double Sqrt(int) would not follow the usual
> practice for such functions.
decimal Sqrt(decimal) and double Sqrt(int) would be fine in my view, but
the first is useless for typical encouraged uses of decimal, and the second
is already available through an implicit conversion from int to double.
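For example, this compiles today precisely because of that implicit conversion:

    int n = 16;
    double root = Math.Sqrt(n);   // n is implicitly widened to double; root == 4.0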
> decimal Sqrt(decimal) and int Sqrt(int) would give an impression of
> exactness that is not correct.
Does decimal division give you that impression as well? How about
integer division?
> Besides: when do you need to take the square root of 87 dollars ??
And *that* is the real reason, IMO. The encouraged uses of decimal are
for the kinds of quantity one just doesn't take square roots of (like
money, as per your example).

Jon
Jun 27 '08 #6

On Jun 5, 4:08 am, "Rene" <a...@b.com> wrote:
> All of the variables used by the Math class are primitive types.
Round, Floor, Ceiling, Max, Min, Truncate, Sign and Abs all have
overloads which take decimals.
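For example:

    decimal rounded = Math.Round(2.5m, MidpointRounding.AwayFromZero); // 3 - returns decimal
    decimal size = Math.Abs(-1.5m);                                    // 1.5 - also decimal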
> The CLR does not consider a Decimal to be a primitive type (the CLR does not
> contain special IL instructions to handle decimal types). I would imagine
> that is one of the reasons... maybe?
Well, doing a square root properly (as opposed to converting to
double, taking the square root and then converting back, which would
be a horrible way to go) would certainly be rather slower than when
using double due to the lack of hardware support. More importantly
though, it just wouldn't be useful for the intended uses of decimal.
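If you did want it done properly in decimal, it would have to be iterative -
something like this Newton-Raphson sketch (DecimalSqrt is a made-up name, not
a framework method; the double seed is only a starting point, and the decimal
iterations restore full precision):

    static decimal DecimalSqrt(decimal x)
    {
        if (x < 0m) throw new ArgumentOutOfRangeException("x");
        if (x == 0m) return 0m;
        decimal guess = (decimal)Math.Sqrt((double)x); // cheap 15-16 digit seed
        for (int i = 0; i < 6; i++)                    // each step roughly doubles the correct digits
            guess = (guess + x / guess) / 2m;          // software decimal divide + add every time
        return guess;
    }

Every iteration is a full software decimal division, versus a single hardware
instruction behind Math.Sqrt(double).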

Jon
Jun 27 '08 #7

> Round, Floor, Ceiling, Max, Min, Truncate, Sign and Abs all have
> overloads which take decimals.
Yes, but if you look at the implementation of these overloads, they are
nothing more than wrappers over the Decimal class. For example, Math.Round
will internally call Decimal.Round.
> Well, doing a square root properly (as opposed to converting to
> double, taking the square root and then converting back, which would
> be a horrible way to go) would certainly be rather slower than when
> using double due to the lack of hardware support. More importantly
> though, it just wouldn't be useful for the intended uses of decimal.
Exactly. My point was that since there are no special IL instructions for
decimals, getting the square root of a decimal in a decimal context wouldn't
be very efficient.

Jun 27 '08 #8

Rene <a@b.com> wrote:
>> Round, Floor, Ceiling, Max, Min, Truncate, Sign and Abs all have
>> overloads which take decimals.
>
> Yes, but if you look at the implementation of these overloads, they are
> nothing more than wrappers over the Decimal class. For example, Math.Round
> will internally call Decimal.Round.
Sure, but that doesn't change my point at all. You claimed that all the
methods in Math took primitive types (a parameter is a variable, after
all), and the methods above are counterexamples.

The implementation should be irrelevant to the discussion - it could
just as easily have been the other way round, with Decimal.Round
calling Math.Round.
>> Well, doing a square root properly (as opposed to converting to
>> double, taking the square root and then converting back, which would
>> be a horrible way to go) would certainly be rather slower than when
>> using double due to the lack of hardware support. More importantly
>> though, it just wouldn't be useful for the intended uses of decimal.
>
> Exactly. My point was that since there are no special IL instructions for
> decimals, getting the square root of a decimal in a decimal context wouldn't
> be very efficient.
And *my* point was that efficiency isn't the main issue here. Just
because it wouldn't be efficient to take the square root doesn't make
it undesirable per se. It's the uses of decimal which make it
undesirable.

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Jun 27 '08 #9

Jon Skeet [C# MVP] wrote:
> On Jun 5, 1:44 am, Arne Vajhøj <a...@vajhoej.dk> wrote:
>> Ethan Strauss wrote:
>>> Why does Math.Sqrt() only accept a double as a parameter? I would think
>>> it would be just as happy with a decimal (or int, or float, or ....). I can
>>> easily convert back and forth, but I am interested in what is going on behind
>>> the scenes and if there is some aspect of decimals that keeps them from being
>>> used in this calculation.
>> Usually that type of function returns a value of the same type as its
>> argument.
>>
>> Sqrt of a decimal will not return a true decimal (decimal is
>> supposed to be exact).

> Decimal operations are exact/accurate in certain well-defined circumstances.
> There are plenty of operations on decimal which won't give exact results:
> 1m / 3 for example.
True, but division would be missed if it was not there.

I would not have a problem with decimal division throwing an exception
if the result was not exact, but it would probably be too inefficient
to test for that.
> Likewise even addition - both 1e25 and 1e-25 are exactly
> representable as decimals, but their sum isn't.
Which is not good either.

But I see your point.

Decimal is not exact in other contexts either.

I love the concept of decimal, but I hate the implementation chosen.
>> Sqrt of an int will definitely not return an int.

> Indeed, but then you wouldn't really want to.
>> double Sqrt(decimal) and double Sqrt(int) would not follow the usual
>> practice for such functions.

> decimal Sqrt(decimal) and double Sqrt(int) would be fine in my view, but
> the first is useless for typical encouraged uses of decimal, and the second
> is already available through an implicit conversion from int to double.
>> decimal Sqrt(decimal) and int Sqrt(int) would give an impression of
>> exactness that is not correct.

> Does decimal division give you that impression as well?
It can.
> How about integer division?
No, integer division works just as expected. I prefer the Pascal style,
with one operator for floating-point division and another for integer
division, to emphasize that they are two different operations.

Arne
Jun 27 '08 #10

On Jun 6, 3:33 am, Arne Vajhøj <a...@vajhoej.dk> wrote:
>> Decimal operations are exact/accurate in certain well-defined circumstances.
>> There are plenty of operations on decimal which won't give exact results:
>> 1m / 3 for example.

> True, but division would be missed if it was not there.
And that's *exactly* my point. Division is a natural thing to want to
do on a decimal - there are times when you need to divide amounts that
are best represented as decimals, even if you might lose some
information.
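For example (a sketch with made-up figures):

    decimal total = 100.00m;
    decimal share = Math.Round(total / 3, 2);   // 33.33 - the information loss is fine here
    decimal last  = total - 2 * share;          // 33.34 - the last share absorbs the rounding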

Taking the square root of a decimal is *not* a natural thing to want
to do with the type of information represented as decimals, hence its
absence.
> I would not have a problem with decimal division throwing an exception
> if the result was not exact, but it would probably be too inefficient
> to test for that.
I'd have a massive problem with that, to be honest. I suspect that
there are many, many times when you don't mind decimal losing some
information, because you're going to round anyway. However, other
operations *do* need to be precise, and the input will typically
ensure that's the case anyway.
>> Likewise even addition - both 1e25 and 1e-25 are exactly
>> representable as decimals, but their sum isn't.
>
> Which is not good either.
>
> But I see your point.
>
> Decimal is not exact in other contexts either.
>
> I love the concept of decimal, but I hate the implementation chosen.
I'd like to see BigDecimal in the framework at some point, but decimal
has its advantages too (bounded space being the most obvious one).
>>> decimal Sqrt(decimal) and int Sqrt(int) would give an impression of
>>> exactness that is not correct.
>> Does decimal division give you that impression as well?
>
> It can.
Why? Surely it's a matter of common sense that decimal can't
accurately represent all rational numbers exactly.
>> How about integer division?
>
> No, integer division works just as expected.
So what's the difference? It's information loss either way, and should
be expected. Division is an inherently lossy operation in computing
unless you actually keep both operands. I don't see why losing
information in decimal is a problem, but losing information with
integers isn't.
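Concretely:

    Console.WriteLine(7 / 2);       // 3 - the remainder is simply discarded
    Console.WriteLine(7 / 2 * 2);   // 6 - the original 7 is not recoverable
    Console.WriteLine(1m / 3 * 3);  // 0.9999999999999999999999999999 - same kind of loss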
> I prefer the Pascal style, with one operator for floating-point division
> and another for integer division, to emphasize that they are two
> different operations.
Occasionally that would be useful, but mostly I prefer the
consistency.

Jon
Jun 27 '08 #11

Jon Skeet [C# MVP] wrote:
> On Jun 6, 3:33 am, Arne Vajhøj <a...@vajhoej.dk> wrote:
>> I would not have a problem with decimal division throwing an exception
>> if the result was not exact, but it would probably be too inefficient
>> to test for that.

> I'd have a massive problem with that, to be honest. I suspect that
> there are many, many times when you don't mind decimal losing some
> information, because you're going to round anyway. However, other
> operations *do* need to be precise, and the input will typically
> ensure that's the case anyway.
I guess I prefer either purely approximate or purely precise.
>> But I see your point.
>>
>> Decimal is not exact in other contexts either.
>>
>> I love the concept of decimal, but I hate the implementation chosen.

> I'd like to see BigDecimal in the framework at some point, but decimal
> has its advantages too (bounded space being the most obvious one).
There are usually pros and cons.
>>>> decimal Sqrt(decimal) and int Sqrt(int) would give an impression of
>>>> exactness that is not correct.
>>> Does decimal division give you that impression as well?
>> It can.
>
> Why? Surely it's a matter of common sense that decimal can't
> accurately represent all rational numbers exactly.
True, but it can give results that do not follow accounting practices.
>>> How about integer division?
>> No, integer division works just as expected.

> So what's the difference? It's information loss either way, and should
> be expected. Division is an inherently lossy operation in computing
> unless you actually keep both operands. I don't see why losing
> information in decimal is a problem, but losing information with
> integers isn't.
Integer division is not losing information.

One just needs to remember that integer division is not
a "normal" division.

Decimal division tries to do a normal division.
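Put differently (sketch):

    int q = 7 / 2;                 // 3
    int r = 7 % 2;                 // 1
    Console.WriteLine(q * 2 + r);  // 7 - quotient and remainder together reconstruct the input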
>> I prefer the Pascal style, with one operator for floating-point division
>> and another for integer division, to emphasize that they are two
>> different operations.

> Occasionally that would be useful, but mostly I prefer the
> consistency.
I look at it differently - I find it inconsistent to use the
same operator for two fundamentally different operations.

Arne
Jun 27 '08 #12

Arne Vajhøj <ar**@vajhoej.dk> wrote:
>> More seriously, I'm sure we both understand each other and agree on the
>> technicalities - but we treat information loss slightly differently.
> And integer math.

Fair enough :)

>> Now, although there *is* an intuitive inverse of division
>> (multiplication) it's not a "full" inverse (not trying to claim that as
>> a technical term!) in that given (x, x / y) you can't get back to y (or
>> given (x, y / x) you can't get back to y). I think of that as
>> information loss.
> I see it as integer division and integer multiplication not
> being inverses.
>
> Modulus/remainder exist due to that.

Right - so integer division has no inverse operation, which makes it a
lossy operation in my view. But it's fine for us to disagree on that.

> But again this is not a difference in understanding of how
> it works, just a difference in the English terms we use to
> label it.

Yup, fair enough.

> Integer division is a fundamental operation, not just a
> lossy division.

It's a fundamental operation which loses information.

>> Of course, now that you mention the range issue, addition is also lossy
>> in one way: if I start with x and add 1 a number of times (y), you can't
>> tell me afterwards the size of y - only the size of y mod 2^32 (or
>> whatever, depending on the type) :)
> Depends on checked switch.
>
> :-)

Nice, had forgotten that.

> But yes this is a problem for all fixed size data types trying
> to represent an infinite math set.
>
> BTW, I believe that C# will be the last major language to use
> fixed size integers. I am expecting the languages invented in the
> next decade to hide that type of implementation detail from the
> programmer.

I wouldn't go that far. I think C# *may* be the last major language not
to have an "arbitrary length" integer type (such as BigInteger in Java)
with built-in language support. I think we'll be using fixed size
integers for many things for a long time though.

>> I think we may have drifted a little from the original topic by now,
>> mind you...
> A lot. But it is not the first time that has happened on usenet.

I'm shocked! ;)

--
Jon Skeet - <sk***@pobox.com>
Web site: http://www.pobox.com/~skeet
Blog: http://www.msmvps.com/jon.skeet
C# in Depth: http://csharpindepth.com
Jun 27 '08 #13

Jon Skeet [C# MVP] wrote:
> Arne Vajhøj <ar**@vajhoej.dk> wrote:
>> Depends on checked switch.
>>
>> :-)
>
> Nice, had forgotten that.

I forget about that all the time.

But System.OverflowException keeps reminding me!
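For anyone following along, a quick sketch of the switch in question:

    int x = int.MaxValue;
    unchecked { x++; }                  // wraps silently to int.MinValue
    checked { x = int.MaxValue; x++; }  // throws System.OverflowException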

Arne
Jun 27 '08 #14
