Bytes IT Community

An exponentiation function for int?

Did I miss something along the way, or is there no function in Standard C++
to raise an (unsigned) int to a power of (unsigned) int returning
(unsigned) int? I never gave this a second thought until today. I tried to
do it, and discovered <cmath> std::pow() only takes floating point types
for the first argument. Sure I could write one. I could have written at
least 3 fundamentally different versions in the time it took to discover
there isn't such a function.
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell

Jul 22 '05 #1
15 Replies


* Steven T. Hatton:
Did I miss something along the way, or is there no function in Standard C++
to raise an (unsigned) int to a power of (unsigned) int returning
(unsigned) int? I never gave this a second thought until today. I tried to
do it, and discovered <cmath> std::pow() only takes floating point types
for the first argument. Sure I could write one. I could have written at
least 3 fundamentally different versions in the time it took to discover
there isn't such a function.


There isn't such a function. There is the C library pow, the C++ library
valarray::pow and the C++ library complex::pow. Writing an unsigned integer
version should, as you state, be fairly trivial; why not just wrap the C pow?

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Jul 22 '05 #2

Steven T. Hatton wrote:
Did I miss something along the way, or is there no function in Standard C++
to raise an (unsigned) int to a power of (unsigned) int returning
(unsigned) int? I never gave this a second thought until today. I tried to
do it, and discovered <cmath> std::pow() only takes floating point types
for the first argument. Sure I could write one. I could have written at
least 3 fundamentally different versions in the time it took to discover
there isn't such a function.


Correct, there isn't one. There just isn't that much call for it,
I would suspect. You don't have to raise a number to a very high
power before it starts overflowing.

You either code fast, or you need to learn how to read the docs faster.
It only took me about 20 seconds to load up the PDF of the Standard and
find in 26.5 where it says that these are the overloads for pow:
pow(float, float);
pow(float, int);
pow(double, double);
pow(double, int);
pow(long double, long double);
pow(long double, int);

I suspect your "integer pow" just iterates to do the exponentiation.
pow() typically uses logarithms (the only real way to do a fractional
exponent anyhow), which gives more nearly constant time.
Jul 22 '05 #3

Ron Natalie wrote:
Steven T. Hatton wrote:
Did I miss something along the way, or is there no function in Standard
C++ to raise an (unsigned) int to a power of (unsigned) int returning
(unsigned) int? I never gave this a second thought until today. I tried
to do it, and discovered <cmath> std::pow() only takes floating point
types for the first argument. Sure I could write one. I could have
written at least 3 fundamentally different versions in the time it took
to discover there isn't such a function.
Correct, there isn't one. There just isn't that much call for it
I would suspect. You don't have to raise a number to a very high
power before it starts overflowing.


What I'm doing is strictly integer math. I seriously doubt I will get into
tensors of sufficient rank and order to overflow long int with the index
range.
You either code fast, or you need to learn how to read the docs faster.
It only took me about 20 seconds to load up the PDF of the Standard and
find in 26.5 where it says that these are the overloads for pow:
pow(float, float);
pow(float, int);
pow(double, double);
pow(double, int);
pow(long double, long double);
pow(long double, int);
I read that from the error output. I didn't need to grep the standard, but
I did look it up there as well. It tells me what /is/ there, but not what
isn't. Sometimes the Standard is a useful reference, and other times I run
into the parts where it redefines 'definition' and proceeds with the
redefined meaning in a less than consistent manner.
I suspect your "integer pow" just iterates to do the exponentiation.
No, I actually used compile time recursion.
pow() typically uses logarithms (the only real way to do a fractional
exponent anyhow), which gives more nearly constant time.


That's because you can add exponents in one operation rather than performing
n + m individual operations needed to do the brute force calculation.
Floating point numbers are represented in hardware in terms of significand
(base) and exponent. Addition and subtraction are actually more expensive
than multiplication and division.

I'm not sure what the cost of converting between floating point and 2's
complement is, but if I use pow() to do my integer math, I suspect it can
add up. I really have to test it before I can make a determination. This
is probably hardware sensitive more so than compiler or OS sensitive.
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell

Jul 22 '05 #4

> > I suspect your "integer pow" just iterates to do the exponentiation.

No, I actually used compile time recursion.


Really? How interesting, can you please post the code for
the function?

Keith
Jul 22 '05 #5

Keith H Duggar wrote:
> I suspect your "integer pow" just iterates to do the exponentiation.


No, I actually used compile time recursion.


Really? How interesting, can you please post the code for
the function?

Keith


This is based on ideas from _C++_Templates_The_Complete_Guide_

http://www.josuttis.com/tmplbook/

I really don't know if this has any advantages, nor do I know if it is even
correct. I ran it through a few simple tests, but haven't revisited it
since, as I am working on my testing infrastructure.
/***************************************************************************
* Copyright (C) 2004 by Steven T. Hatton *
* ha*****@globalsymmetry.com *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
* This program is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
* GNU General Public License for more details. *
* *
* You should have received a copy of the GNU General Public License *
* along with this program; if not, write to the *
* Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
***************************************************************************/
#ifndef STH_TMATHPOWER_H
#define STH_TMATHPOWER_H
#include <cstddef>
#include <stdexcept>

namespace sth {
namespace tmath {

/**
@author Steven T. Hatton
*/
template <size_t Exponent_S, typename T>
class PowerOf
{
public:
    static T eval(const T& base)
    {
        return base * PowerOf<Exponent_S - 1, T>::eval(base);
    }
};

template <typename T>
class PowerOf<1, T>
{
public:
    static T eval(const T& base)
    {
        return base;
    }
};

template <typename T>
class PowerOf<0, T>
{
public:
    static T eval(const T& base)
    {
        if(!base)
        {
            throw std::logic_error(
                "sth::tmath::PowerOf<0>(0) is an indeterminate form.");
        }
        return 1;
    }
};

template <size_t Exponent_S, typename T>
T power(const T& base)
{
    return PowerOf<Exponent_S, T>::eval(base);
}

} // namespace tmath
} // namespace sth

#endif

--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell

Jul 22 '05 #6

Steven T. Hatton wrote:

This is for the user to call instead of the member functions:
template <size_t Exponent_S, typename T>
T power(const T& base)
{
return PowerOf<Exponent_S, T>::eval(base);
}

It's used like this:

size_t thirtytwo = power<5>(2);

It has the advantage of deducing template parameters.

NOTE: I should probably force the return type to be size_t. It may not be
as general, but for my purposes, it's more reasonable.
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell

Jul 22 '05 #7

> Floating point numbers are represented in hardware in terms of significand
(base) and exponent.
I'm sure he appreciated that most elementary lesson. I'm
sure anyone who knows exponentiation is done using
logarithms knows how floating point numbers are stored.

Also, I forgot to point out earlier that
Addition and subtraction are actually more expensive
than multiplication and division.


is not correct. I know of no platform on which floating
point addition is slower than multiplication. The fastest
multiplications still take about 1.3 times the time of
addition for operations in isolation.

Now, under some caching conditions multiplication can
approach and even equal the speed of addition. That is why
many programmers simply consider floating addition and
multiplication to be about the same speed.

However, it is simply false to claim that addition is flatly
more expensive.

If however I'm wrong, and you have evidence to the contrary
please pass it along (along with your code for a compile
time recursive integer power function).
Jul 22 '05 #8

Keith H Duggar wrote:
Floating point numbers are represented in hardware in terms of
significand (base) and exponent.
I'm sure he appreciated that most elementary lesson. I'm
sure anyone who knows exponentiation is done using
logarithms knows how floating point numbers are stored.


I was making the distinction clear.
Also, I forgot to point out earlier that
Addition and subtraction are actually more expensive
than multiplication and division.
is not correct. I know of no platform on which floating
point addition is slower than multiplication. The fastest
multiplications still take about 1.3 times the time of
addition for operations in isolation.

Now, under some caching conditions multiplication can
approach and even equal the speed of addition. That is why
many programmers simply consider floating addition and
multiplication to be about the same speed.

However, it is simply false to claim that addition is flatly
more expensive.


My reason for saying that was based on what I learned a decade ago. Perhaps
I should have worded my statement more clearly. The way Stallings put it
in his _Computer_Organization_and_Architecture_3rd_Ed_ is "Floating-point
multiplication and division are much simpler processes than addition and
subtraction, as the following discussion indicates. ..."
If however I'm wrong, and you have evidence to the contrary
please pass it along (along with your code for a compile
time recursive integer power function).


I already posted that.
--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell

Jul 22 '05 #9

"Steven T. Hatton" wrote:


My reason for saying that was based on what I learned a decade ago.
???
A decade ago?

A decade ago on most (if not all) CPUs the situation was:
multiplication was much more expensive than addition.
Perhaps
I should have worded my statement more clearly. The way Stallings put it
in his _Computer_Organization_and_Architecture_3rd_Ed_ is "Floating-point
multiplication and division are much simpler processes than addition and
subtraction, as the following discussion indicates. ..."


That doesn't sound right. It has the smell of some nasty and weird
argumentation.
For one: All schemes I know for multiplication require some additions.
So in a sense addition is a building block for multiplication. How can
addition then be slower than multiplication?
On the other hand: I might not know all possible schemes for multiplication.

Hmm. Is there a way to study the entire section?
If not, what is the main point the above is based on?

The only thing I can think of is:
'simpler' is not used to mean: in terms of runtime efficiency,
but is meant in terms of: creates fewer problems.
And the reason for this is: while in add/subtract one has to
first bring the exponents to the same number (which creates a
problem if there is a large difference), no such thing is
necessary in multiplication.
--
Karl Heinz Buchegger
kb******@gascad.at
Jul 22 '05 #10

> > Really? How interesting, can you please post the code for
the function?

Keith


This is based on ideas from _C++_Templates_The_Complete_Guide_

http://www.josuttis.com/tmplbook/

...

Not to quibble, but this isn't really a "function", is it? It is
a class implementation that happens to have a function you
call. That's why I asked you to post the code, because had you
built a compile time recursive function that would have been
most unique. However, had you said you had a compile time
recursive class that implements pow I would have pointed you
to search the google group archives for

"pow() with integer exponents"

which would take you to

http://groups.google.com/groups?hl=e...5252B%25252B.*

where they discuss a few solutions and the tradeoffs. In
addition, Cy Edmunds posted a generic solution which is
actually faster (in terms of multiplication count) than the
one you posted. Your solution does not utilize repeated
squaring (an often used concept) to reduce the number of
multiplications from linear (as yours is) to something closer
to logarithmic. Cy's solution doesn't handle the general
case of repeated squaring but he does optimize for some of
the small powers.

The iterative solution posted by Nejat Aydin implements
repeated squaring and will be far faster than either generic
solution for large powers (and probably even small powers).
You could implement a similar scheme using compile time
recursion if you wish but for large powers this will lead to
serious code bloat and of course will be limited to compile
time powers only.
Jul 22 '05 #11

Karl Heinz Buchegger wrote:
"Steven T. Hatton" wrote:

Perhaps
I should have worded my statement more clearly. The way Stallings put it
in his _Computer_Organization_and_Architecture_3rd_Ed_ is "Floating-point
multiplication and division are much simpler processes than addition and
subtraction, as the following discussion indicates. ..."


That doesn't sound right. It has the smell of some nasty and weird
argumentation.
For one: All schemes I know for multiplication require some additions.
So in a sense addition is a building block for multiplication. How can
addition then be slower than multiplication?


With fp multiplication you're adding exponents with 2's complement (or
similar) addition. With fp addition, you have to play around with the
exponents and the significands (more).

--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell

Jul 22 '05 #12


"Steven T. Hatton" <su******@setidava.kushan.aa> wrote in message
news:u8********************@speakeasy.net...
Karl Heinz Buchegger wrote:
"Steven T. Hatton" wrote:

Perhaps
I should have worded my statement more clearly. The way Stallings put
it
in his _Computer_Organization_and_Architecture_3rd_Ed_ is
"Floating-point
multiplication and division are much simpler processes than addition and
subtraction, as the following discussion indicates. ..."


That doesn't sound right. It has the smell of some nasty and weird
argumentation.
For one: All schemes I know for multiplication require some additions.
So in a sense addition is a building block for multiplication. How can
addition then be slower than multiplication?


With fp multiplication you're adding exponents with 2's complement (or
similar) addition.


But that's not *all* you're doing! That's just how the adjustment for
differing exponents is made. You still have to multiply the mantissas,
which in general involves a series of additions. Assuming for simplicity
that the operations were done in decimal instead of binary, consider
multiplying 3.0 (3e0) and 5.0 (5e0) versus 3.0 (3e0) and 0.5 (5e-1). The
exponent addition trick is simply how you determine the placement of the
final decimal point (resulting in 15.0 (15e0) versus 1.5 (15e-1)). The FPU
still has to multiply the 3 and the 5, regardless.

Addition only requires the one addition for the mantissa, although it may
have to first adjust the exponents.

Even though the multiplication is likely done in the FPU hardware, it
probably still involves a loop of the output back to the input of the
registers, and repeating of additions of the contents. And that repeated
addition is what will eat your clock cycles.

Granted, for some specific values, multiplying two values may (possibly) end
up faster than adding those same two values. But in the general case,
that's not going to be true.
-Howard
Jul 22 '05 #13

Keith H Duggar wrote:
> Really? How interesting, can you please post the code for
> the function?
>
> Keith


This is based on ideas from _C++_Templates_The_Complete_Guide_

http://www.josuttis.com/tmplbook/

...

Not to quibble, but this isn't really a "function", is it? It is
a class implementation that happens to have a function you
call. That's why I asked you to post the code because had you
built a compile time recursive function that would have been
most unique.


I never said it was a function. But I believe what you end up with is
pretty much a function built at compile-time using recursion. It only uses
static members, and I suspect it's pretty much loaded and executed in one
chunk. I haven't read that discussion yet.

In this version I basically used a binary tree, but reused the calculations
which would have been identical. If you compile and run the program, you
will have done as much testing as I have.

/***************************************************************************
* Copyright (C) 2004 by Steven T. Hatton *
* ha*****@globalsymmetry.com *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
* This program is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
* GNU General Public License for more details. *
* *
* You should have received a copy of the GNU General Public License *
* along with this program; if not, write to the *
* Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
***************************************************************************/
#include <iostream>
#include <iomanip>
#include <cstddef>
#include <stdexcept>

using std::cout;
using std::logic_error;
using std::size_t;

template<size_t S>
struct Pwr
{
    inline static size_t pwr(const size_t& b)
    {
        size_t p = Pwr<S/2>::pwr(b);
        return (S & 1) ? p * p * b : p * p;
    }
};

template<>
struct Pwr<1>
{
    inline static size_t pwr(const size_t& b) { return b; }
};

// NOTE: I'm not handling Pwr<0>, but it's trivial.

template<size_t S>
struct Powers
{
    inline static size_t power(const size_t& b)
    {
        cout << b << "^" << S << " = " << std::setw(8)
             << Pwr<S>::pwr(b) << " ";
        return Powers<S-1>::power(b);
    }
};

template<>
struct Powers<0>
{
    inline static size_t power(const size_t& b)
    {
        if(!b){ throw logic_error("ERROR: indeterminate form 0^0 "); }
        cout << "\n";
        return 0;
    }
};

int main()
{
    for(size_t s = 1; s < 10; s++)
    {
        Powers<5>::power(s);
    }

    return 0;
}

--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell

Jul 22 '05 #14

On Wed, 13 Oct 2004 11:38:45 -0400, "Steven T. Hatton"
<su******@setidava.kushan.aa> wrote:
Did I miss something along the way, or is there no function in Standard C++
to raise an (unsigned) int to a power of (unsigned) int returning
(unsigned) int? I never gave this a second thought until today. I tried to
do it, and discovered <cmath> std::pow() only takes floating point types
for the first argument. Sure I could write one. I could have written at
least 3 fundamentally different versions in the time it took to discover
there isn't such a function.


Are you pressed for execution speed of this function?

J.

Jul 22 '05 #15

JXStern wrote:
On Wed, 13 Oct 2004 11:38:45 -0400, "Steven T. Hatton"
<su******@setidava.kushan.aa> wrote:
Did I miss something along the way, or is there no function in Standard
C++ to raise an (unsigned) int to a power of (unsigned) int returning
(unsigned) int? I never gave this a second thought until today. I tried
to do it, and discovered <cmath> std::pow() only takes floating point
types for the first argument. Sure I could write one. I could have
written at least 3 fundamentally different versions in the time it took
to discover there isn't such a function.


Are you pressed for execution speed of this function?

J.

I've already far surpassed what my immediate needs are. But the exercise of
trying to optimize an unsigned int exponentiation function has paid off in
a big way. I really had no intention of getting into this problem so
deeply. But since it was interesting, and it allowed me to explore many
aspects of template metaprogramming, I pursued it.

As for why I originally wanted the function, I simply don't like using type
conversions that are potentially lossy, nor do I like casting with anything
but a dynamic cast. I was working with unsigned int - size_t to be more
specific. I didn't like the idea of converting back and forth between
integer and floating point representations.

I've discovered that I can store all my calculations in a single static base
class, and reuse the results, so speed of an int power function has become
nothing more than an academic exercise at this point.

--
"If our hypothesis is about anything and not about some one or more
particular things, then our deductions constitute mathematics. Thus
mathematics may be defined as the subject in which we never know what we
are talking about, nor whether what we are saying is true." - Bertrand
Russell

Jul 22 '05 #16
