Did I miss something along the way, or is there no function in Standard C++
to raise an (unsigned) int to a power of an (unsigned) int, returning an
(unsigned) int? I never gave this a second thought until today. I tried to
do it, and discovered that <cmath> std::pow() only takes floating-point
types for the first argument. Sure, I could write one. I could have written
at least three fundamentally different versions in the time it took to
discover there isn't such a function.
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell
* Steven T. Hatton: Did I miss something along the way, or is there no function in Standard C++ to raise an (unsigned) int to a power of an (unsigned) int, returning an (unsigned) int? I never gave this a second thought until today. I tried to do it, and discovered that <cmath> std::pow() only takes floating-point types for the first argument. Sure, I could write one. I could have written at least three fundamentally different versions in the time it took to discover there isn't such a function.
There isn't such a function. There is the C library pow, the C++ library
valarray::pow and the C++ library complex::pow. Writing an unsigned integer
version should, as you state, be fairly trivial; why not just wrap the C pow?
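A minimal wrapper along those lines might look like this (a sketch, untested; the name upow is invented here, overflow is not checked, and the + 0.5 rounds the floating-point result back to the nearest integer):

#include <cmath>

// Sketch: wraps the standard pow() for unsigned arguments.
// upow is an invented name; overflow is not checked.
unsigned int upow(unsigned int base, unsigned int exp)
{
    return static_cast<unsigned int>(
        std::pow(static_cast<double>(base), static_cast<int>(exp)) + 0.5);
}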
--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Steven T. Hatton wrote: Did I miss something along the way, or is there no function in Standard C++ to raise an (unsigned) int to a power of an (unsigned) int, returning an (unsigned) int? I never gave this a second thought until today. I tried to do it, and discovered that <cmath> std::pow() only takes floating-point types for the first argument. Sure, I could write one. I could have written at least three fundamentally different versions in the time it took to discover there isn't such a function.
Correct, there isn't one. There just isn't that much call for it,
I would suspect. You don't have to raise a number to a very high
power before it starts overflowing.
You either code fast, or you need to learn how to read the docs faster.
It only took me about 20 seconds to load up the PDF of the Standard and
find in 26.5 where it says that these are the overloads for pow:
pow(float, float)
pow(float, int)
pow(double, double)
pow(double, int)
pow(long double, long double)
pow(long double, int)
I suspect your "integer pow" just iterates to do the exponentiation.
pow() typically uses a logarithm (which is the only real way to do a
fractional exponent anyhow), which gives more nearly constant time.
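For reference, the log-based approach amounts to computing x^y as e^(y ln x). A bare-bones illustration (purely illustrative, valid for x > 0 only; real library implementations are far more careful about accuracy and edge cases):

#include <cmath>

// Illustrative only: computes x^y as e^(y * ln x), valid for x > 0.
double pow_via_log(double x, double y)
{
    return std::exp(y * std::log(x));
}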
Ron Natalie wrote: Steven T. Hatton wrote: Did I miss something along the way, or is there no function in Standard C++ to raise an (unsigned) int to a power of an (unsigned) int, returning an (unsigned) int? I never gave this a second thought until today. I tried to do it, and discovered that <cmath> std::pow() only takes floating-point types for the first argument. Sure, I could write one. I could have written at least three fundamentally different versions in the time it took to discover there isn't such a function. Correct, there isn't one. There just isn't that much call for it, I would suspect. You don't have to raise a number to a very high power before it starts overflowing.
What I'm doing is strictly integer math. I seriously doubt I will get into
tensors of sufficient rank and order to overflow long int with the index
range.
You either code fast, or you need to learn how to read the docs faster. It only took me about 20 seconds to load up the PDF of the Standard and find in 26.5 where it says that these are the overloads for pow: pow(float, float), pow(float, int), pow(double, double), pow(double, int), pow(long double, long double), pow(long double, int)
I read that from the error output. I didn't need to grep the Standard, but
I did look it up there as well. It tells me what /is/ there, but not what
isn't. Sometimes the Standard is a useful reference, and other times I run
into the parts where it redefines 'definition' and proceeds with the
redefined meaning in a less than consistent manner.
I suspect your "integer pow" just iterates to do the exponentiation.
No, I actually used compile time recursion.
pow() typically uses a logarithm (which is the only real way to do a fractional exponent anyhow), which gives more nearly constant time.
That's because you can add exponents in one operation rather than performing
the n + m individual operations needed to do the brute-force calculation.
Floating point numbers are represented in hardware in terms of significand
(base) and exponent. Addition and subtraction are actually more expensive
than multiplication and division.
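For what it's worth, that significand/exponent decomposition can be observed directly with the standard frexp (a small illustration, nothing more):

#include <cmath>
#include <cstdio>

int main()
{
    int e = 0;
    // frexp splits x into significand * 2^exponent, significand in [0.5, 1).
    double sig = std::frexp(6.5, &e);
    std::printf("6.5 = %g * 2^%d\n", sig, e); // prints: 6.5 = 0.8125 * 2^3
    return 0;
}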
I'm not sure what the cost of converting between floating point and 2's
complement is, but if I use pow() to do my integer math, I suspect it can
add up. I really have to test it before I can make a determination. This
is probably more hardware-sensitive than compiler- or OS-sensitive.
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell
> > I suspect your "integer pow" just iterates to do the exponentiation. No, I actually used compile time recursion.
Really? How interesting, can you please post the code for
the function?
Keith
Keith H Duggar wrote: > I suspect your "integer pow" just iterates to do the exponentiation.
No, I actually used compile time recursion.
Really? How interesting, can you please post the code for the function?
Keith
This is based on ideas from _C++_Templates_The_Complete_Guide_ http://www.josuttis.com/tmplbook/
I really don't know if this has any advantages, nor do I know if it is even
correct. I ran it through a few simple tests, but haven't revisited it
since, as I am working on my testing infrastructure.
/***************************************************************************
* Copyright (C) 2004 by Steven T. Hatton *
* ha*****@globalsymmetry.com *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
* This program is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
* GNU General Public License for more details. *
* *
* You should have received a copy of the GNU General Public License *
* along with this program; if not, write to the *
* Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
***************************************************************************/
#ifndef STH_TMATHPOWER_H
#define STH_TMATHPOWER_H
#include <cstddef>   // size_t
#include <stdexcept> // std::logic_error
namespace sth {
namespace tmath {
/**
@author Steven T. Hatton
*/
// Computes base^Exponent_S by compile-time recursion: each
// instantiation peels off one factor of base.
template <size_t Exponent_S, typename T>
class PowerOf
{
public:
    static T eval(const T& base)
    {
        return base * PowerOf<Exponent_S - 1, T>::eval(base);
    }
};
// Base case: base^1 == base.
template <typename T>
class PowerOf<1, T>
{
public:
    static T eval(const T& base)
    {
        return base;
    }
};
// base^0 == 1, except that 0^0 is an indeterminate form.
template <typename T>
class PowerOf<0, T>
{
public:
    static T eval(const T& base)
    {
        if(!base)
        {
            throw std::logic_error(
                "sth::tmath::PowerOf<0>(0) is an indeterminate form.");
        }
        return 1;
    }
};
// Convenience wrapper: the type T is deduced from the argument.
template <size_t Exponent_S, typename T>
T power(const T& base)
{
    return PowerOf<Exponent_S, T>::eval(base);
}
} // namespace tmath
} // namespace sth
#endif
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell
Steven T. Hatton wrote:
This is for the user to call instead of the member functions:
template <size_t Exponent_S, typename T>
T power(const T& base)
{
return PowerOf<Exponent_S, T>::eval(base);
}
It's used like this:
size_t thirtytwo = power<5>(2);
It has the advantage of deducing template parameters.
NOTE: I should probably force the return type to be size_t. It may not be
as general, but for my purposes, it's more reasonable.
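A sketch of that size_t-only variant (hypothetical; power_z is an invented name, and it simply fixes T to size_t in the PowerOf template posted above):

// Hypothetical variant: argument and result are both forced to size_t.
template <size_t Exponent_S>
size_t power_z(size_t base)
{
    return PowerOf<Exponent_S, size_t>::eval(base);
}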
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell
> Floating point numbers are represented in hardware in terms of significand (base) and exponent.
I'm sure he appreciated that most elementary lesson. I'm
sure anyone who knows exponentiation is done using
logarithms knows how floating point numbers are stored.
Also, I forgot to point out earlier that
Addition and subtraction are actually more expensive than multiplication and division.
is not correct. I know of no platform on which floating
point addition is slower than multiplication. The fastest
multiplications still take about 1.3 times the time of
addition for operations in isolation.
Now, under some caching conditions multiplication can
approach and even equal the speed of addition. That is why
many programmers simply consider floating addition and
multiplication to be about the same speed.
However, it is simply false to claim that addition is flatly
more expensive.
If however I'm wrong, and you have evidence to the contrary
please pass it along (along with your code for a compile
time recursive integer power function).
Keith H Duggar wrote: Floating point numbers are represented in hardware in terms of significand (base) and exponent. I'm sure he appreciated that most elementary lesson. I'm sure anyone who knows exponentiation is done using logarithms knows how floating point numbers are stored.
I was making the distinction clear.
Also, I forgot to point out earlier that
Addition and subtraction are actually more expensive than multiplication and division. is not correct. I know of no platform on which floating point addition is slower than multiplication. The fastest multiplications still take about 1.3 times the time of addition for operations in isolation.
Now, under some caching conditions multiplication can approach and even equal the speed of addition. That is why many programmers simply consider floating addition and multiplication to be about the same speed.
However, it is simply false to claim that addition is flatly more expensive.
My reason for saying that was based on what I learned a decade ago. Perhaps
I should have worded my statement more clearly. The way Stallings put it
in his _Computer_Organization_and_Architecture_3rd_Ed_ is "Floating-point
multiplication and division are much simpler processes than addition and
subtraction, as the following discussion indicates. ..."
If however I'm wrong, and you have evidence to the contrary please pass it along (along with your code for a compile time recursive integer power function).
I already posted that.
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell
"Steven T. Hatton" wrote:
My reason for saying that was based on what I learned a decade ago.
???
A decade ago?
A decade ago on most (if not all) CPUs the situation was:
multiplication was much more expensive than addition.
Perhaps I should have worded my statement more clearly. The way Stallings put it in his _Computer_Organization_and_Architecture_3rd_Ed_ is "Floating-point multiplication and division are much simpler processes than addition and subtraction, as the following discussion indicates. ..."
That doesn't sound right. It has the smell of some nasty and weird
argumentation.
For one: All schemes I know for multiplication require some additions.
So in a sense addition is a building block for multiplication. How can
addition then be slower than multiplication?
On the other hand: I might not know all possible schemes for multiplication.
Hmm. Is there a way to study the entire section?
If not, what is the main point the above is based on?
The only thing I can think of is:
'simpler' is not used to mean: in terms of runtime efficiency,
but is meant in terms of: creates fewer problems.
And the reason for this is: while in add/subtract one has to
first bring the exponents to the same number (which creates a
problem if there is a large difference), no such thing is
necessary in multiplication.
--
Karl Heinz Buchegger kb******@gascad.at
> > Really? How interesting, can you please post the code for the function?
Keith
This is based on ideas from _C++_Templates_The_Complete_Guide_
http://www.josuttis.com/tmplbook/
....
Not to quibble, but this isn't really a "function", is it? It is
a class implementation that happens to have a function you
call. That's why I asked you to post the code, because had you
built a compile-time recursive function, that would have been
truly unique. However, had you said you had a compile-time
recursive class that implements pow, I would have pointed you
to search the Google Groups archives for
"pow() with integer exponents"
which would take you to http://groups.google.com/groups?hl=e...5252B%25252B.*
where they discuss a few solutions and the tradeoffs. In
addition, Cy Edmunds posted a generic solution which is
actually faster (in terms of multiplication count) than the
one you posted. Your solution does not use repeated
squaring (an often-used technique) to reduce the number of
multiplications from linear (as yours is) to something closer
to logarithmic. Cy's solution doesn't handle the general
case of repeated squaring, but he does optimize for some of
the small powers.
The iterative solution posted by Nejat Aydin implements
repeated squaring and will be far faster than either generic
solution for large powers (and probably even small powers).
You could implement a similar scheme using compile time
recursion if you wish but for large powers this will lead to
serious code bloat and of course will be limited to compile
time powers only.
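For reference, the iterative repeated-squaring idea runs roughly like this (a sketch of the general technique, not Nejat Aydin's actual posted code; overflow is not checked, and 0^0 comes out as 1):

#include <cstddef>

// Exponentiation by repeated squaring: O(log exp) multiplications
// instead of the exp - 1 multiplications of the naive loop.
size_t ipow(size_t base, size_t exp)
{
    size_t result = 1;
    while (exp > 0)
    {
        if (exp & 1)   // odd exponent: fold in one factor of base
            result *= base;
        base *= base;  // square the base
        exp >>= 1;     // halve the exponent
    }
    return result;
}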
Karl Heinz Buchegger wrote: "Steven T. Hatton" wrote: Perhaps I should have worded my statement more clearly. The way Stallings put it in his _Computer_Organization_and_Architecture_3rd_Ed_ is "Floating-point multiplication and division are much simpler processes than addition and subtraction, as the following discussion indicates. ..."
That doesn't sound right. It has the smell of some nasty and weird argumentation. For one: All schemes I know for multiplication require some additions. So in a sense addition is a building block for multiplication. How can addition then be slower than multiplication?
With fp multiplication you're adding exponents with 2's complement (or
similar) integer addition. With fp addition, you have to play around with
both the exponents and the significands (more).
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell
"Steven T. Hatton" <su******@setidava.kushan.aa> wrote in message
news:u8********************@speakeasy.net... Karl Heinz Buchegger wrote:
"Steven T. Hatton" wrote:
Perhaps I should have worded my statement more clearly. The way Stallings put it in his _Computer_Organization_and_Architecture_3rd_Ed_ is "Floating-point multiplication and division are much simpler processes than addition and subtraction, as the following discussion indicates. ..."
That doesn't sound right. It has the smell of some nasty and weird argumentation. For one: All schemes I know for multiplication require some additions. So in a sense addition is a building block for multiplication. How can addition then be slower than multiplication?
With fp multiplication you're adding exponents with 2's complement (or similar) addition.
But that's not *all* you're doing! That's just how the adjustment for
differing exponents is made. You still have to multiply the mantissas,
which in general involves a series of additions. Assuming for simplicity
that the operations were done in decimal instead of binary, consider
multiplying 3.0 (3e0) and 5.0 (5e0) versus 3.0 (3e0) and 0.5 (5e-1). The
exponent addition trick is simply how you determine the placement of the
final decimal point (resulting in 15.0 (15e0) versus 1.5 (15e-1)). The FPU
still has to multiply the 3 and the 5, regardless.
Addition only requires the one addition for the mantissa, although it may
have to first adjust the exponents.
Even though the multiplication is likely done in the FPU hardware, it
probably still involves looping the output back to the input of the
registers and repeating additions of the contents. And that repeated
addition is what will eat your clock cycles.
Granted, for some specific values, multiplying two values may (possibly) end
up faster than adding those same two values. But in the general case,
that's not going to be true.
-Howard
Keith H Duggar wrote: > Really? How interesting, can you please post the code for > the function? > > Keith
This is based on ideas from _C++_Templates_The_Complete_Guide_
http://www.josuttis.com/tmplbook/
...
Not to quibble, but this isn't really a "function", is it? It is a class implementation that happens to have a function you call. That's why I asked you to post the code, because had you built a compile-time recursive function, that would have been truly unique.
I never said it was a function. But I believe what you end up with is
pretty much a function built at compile time using recursion. It only uses
static members, and I suspect it's pretty much loaded and executed in one
chunk. I haven't read that discussion yet.
In this version I basically used a binary tree, but reused the calculations
that would have been identical. If you compile and run the program, you
will have done as much testing as I have.
/***************************************************************************
* Copyright (C) 2004 by Steven T. Hatton *
* ha*****@globalsymmetry.com *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
* This program is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
* GNU General Public License for more details. *
* *
* You should have received a copy of the GNU General Public License *
* along with this program; if not, write to the *
* Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
***************************************************************************/
#include <iostream>
#include <iomanip>
#include <stdexcept>
#include <cstddef> // size_t
using std::cout;
using std::logic_error;
// Computes b^S by repeated squaring: b^S = (b^(S/2))^2, times an
// extra factor of b when S is odd. The recursion unrolls at compile time.
template<size_t S>
struct Pwr{
    inline static size_t pwr(const size_t& b)
    {
        size_t p = Pwr<S/2>::pwr(b);
        return (S & 1) ? p * p * b : p * p;
    }
};
template<>
struct Pwr<1>{
    inline static size_t pwr(const size_t& b) { return b; }
};
// NOTE: I'm not handling Pwr<0>, but it's trivial.
// Prints b^S, b^(S-1), ..., b^1 across one line, recursing down S.
template<size_t S>
struct Powers{
    inline static size_t power(const size_t& b)
    {
        cout << b << "^" << S << " = " << std::setw(8)
             << Pwr<S>::pwr(b) << " ";
        return Powers<S-1>::power(b);
    }
};
template<>
struct Powers<0>{
    inline static size_t power(const size_t& b)
    {
        if(!b){ throw logic_error("ERROR: indeterminate form 0^0 "); }
        cout << "\n";
        return 0;
    }
};
int main()
{
    for(size_t s = 1; s < 10; s++)
    {
        Powers<5>::power(s);
    }
    return 0;
}
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell
On Wed, 13 Oct 2004 11:38:45 -0400, "Steven T. Hatton"
<su******@setidava.kushan.aa> wrote: Did I miss something along the way, or is there no function in Standard C++ to raise an (unsigned) int to a power of an (unsigned) int, returning an (unsigned) int? I never gave this a second thought until today. I tried to do it, and discovered that <cmath> std::pow() only takes floating-point types for the first argument. Sure, I could write one. I could have written at least three fundamentally different versions in the time it took to discover there isn't such a function.
Are you pressed for execution speed of this function?
J.
JXStern wrote: On Wed, 13 Oct 2004 11:38:45 -0400, "Steven T. Hatton" <su******@setidava.kushan.aa> wrote: Did I miss something along the way, or is there no function in Standard C++ to raise an (unsigned) int to a power of an (unsigned) int, returning an (unsigned) int? I never gave this a second thought until today. I tried to do it, and discovered that <cmath> std::pow() only takes floating-point types for the first argument. Sure, I could write one. I could have written at least three fundamentally different versions in the time it took to discover there isn't such a function.
Are you pressed for execution speed of this function?
J.
I've already far surpassed what my immediate needs are. But the exercise of
trying to optimize an unsigned int exponentiation function has paid off in
a big way. I really had no intention of getting into this problem so
deeply. But since it was interesting, and it allowed me to explore many
aspects of template metaprogramming, I pursued it.
As for why I originally wanted the function, I simply don't like using type
conversions that are potentially lossy, nor do I like casting with anything
but a dynamic cast. I was working with unsigned int (size_t, to be more
specific). I didn't like the idea of converting back and forth between
integer and floating-point representations.
I've discovered that I can store all my calculations in a single static base
class, and reuse the results, so speed of an int power function has become
nothing more than an academic exercise at this point.
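A sketch of that store-and-reuse idea (my own illustration, not Steven's actual class; PowerCache is an invented name):

#include <cstddef>
#include <map>
#include <utility>

// Hypothetical illustration: cache computed integer powers so that
// repeated queries for the same (base, exp) pair cost one lookup.
class PowerCache
{
public:
    static size_t get(size_t base, size_t exp)
    {
        static std::map<std::pair<size_t, size_t>, size_t> cache;
        std::map<std::pair<size_t, size_t>, size_t>::const_iterator it =
            cache.find(std::make_pair(base, exp));
        if (it != cache.end())
            return it->second;
        size_t result = 1;
        for (size_t i = 0; i < exp; ++i)
            result *= base; // naive multiply; any pow scheme would do
        cache[std::make_pair(base, exp)] = result;
        return result;
    }
};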
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell