Alan Johnson wrote:

> Using map to store the history bothers me a little, just because it
> makes the runtime different (worse) than what many would expect for
> some algorithms. Using Fibonacci as an example, you should be able to
> calculate each successive member of the sequence in constant time
> (assuming constant time operations on integers), so computing the first
> N members of the sequence should require O(N) time.

While a tail-recursive or iterative Fibonacci can be written to run in O(n) time, that is not the point of this.
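For reference, such an iterative version might look like this (a sketch; the function name is mine) — each new member comes from the previous two in constant time, so the first n members take O(n) time overall:

```cpp
#include <vector>

// Compute the first n Fibonacci numbers iteratively in O(n) time,
// assuming constant-time operations on the integers involved.
std::vector<long long> firstFibs(int n)
{
    std::vector<long long> f;
    if (n >= 1) f.push_back(0);
    if (n >= 2) f.push_back(1);
    for (int i = 2; i < n; ++i)
        f.push_back(f[i - 1] + f[i - 2]);
    return f;
}
```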

The point is that there are some algorithms which are much more easily expressed in a recursive fashion, and this technique helps to express the solution in a recursive paradigm and still get solutions which do not have exponential space and time requirements.

In such cases, the invocation of the function with argument x would depend on N recursive invocations with arguments x1 to xN. In such situations, memoization is the only practical way to beat the clock. Haskell comes close to memoizing automatically: its lazy evaluation computes each named expression's value at most once, although arbitrary function calls are not memoized.

Take for example this function to calculate the number of possible k-digit numbers which consist of the digits 1 to n and where the digits are all in non-ascending order.

E.g. for k = 2, n = 3:

11
21
22
31
32
33

are all non-ascending.
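As a sanity check, the definition can be verified by brute force for small k and n — for k = 2, n = 3 it yields exactly the six numbers listed above. (This helper is mine, not from the post.)

```cpp
#include <vector>

// Brute-force count of the k-digit strings over digits 1..n in which
// no digit is larger than the digit before it (non-ascending).
int countNonAscending(int k, int n)
{
    int count = 0;
    std::vector<int> digits(k, 1);      // base-n odometer over 1..n
    for (;;)
    {
        bool ok = true;
        for (int i = 1; i < k; ++i)
            if (digits[i] > digits[i - 1]) { ok = false; break; }
        if (ok) ++count;

        int pos = k - 1;                // advance the odometer
        while (pos >= 0 && digits[pos] == n) { digits[pos] = 1; --pos; }
        if (pos < 0) break;             // wrapped around: all done
        ++digits[pos];
    }
    return count;
}
```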

Here's the raw recursive implementation:

int getNonDescCombos(int k, int n)
{
    if (k == 1)
        return n;

    int total = 0;
    for (int i = 1; i <= n; ++i)
    {
        total += getNonDescCombos(k - 1, i);
    }
    return total;
}

While its working is clear, no prizes for guessing that this will take exponential time for even modest values of n and k, since the same subproblems are recomputed over and over.

Now look at the memoized version:

map<pair<int, int>, int> nonDescCombos;

int getNonDescCombos(pair<int, int> arg)
{
    int k = arg.first;
    int n = arg.second;
    if (k == 1)
        return n;

    map<pair<int, int>, int>::iterator mi = nonDescCombos.find(arg);
    if (mi == nonDescCombos.end())
    {
        int total = 0;
        for (int i = 1; i <= n; ++i)
        {
            total += getNonDescCombos(make_pair(k - 1, i));
        }
        return (nonDescCombos[arg] = total);
    }
    else
    {
        return mi->second;
    }
}

getNonDescCombos(make_pair(100, 9));

runs with good space and time characteristics, but is much more complicated, and the logic is more opaque.

Now generalized with a memoizer template...

class GetDescCombosCalc
{
public:
    // Base case: the argument pair here is (n, k); a one-digit number
    // (k == 1) has n possibilities.
    void apply(pair<int, int>& p, pair<bool, int>& ret)
    {
        ret.first = p.second == 1;
        ret.second = p.first;
    }
};

class GetDescCombosGetNextIterArgs
{
public:
    // The subproblems f(arg) depends on: (i, k - 1) for i = 1..n.
    void apply(pair<int, int>& arg, vector<pair<int, int> >& rets)
    {
        for (int i = 1; i <= arg.first; ++i)
        {
            rets.push_back(make_pair(i, arg.second - 1));
        }
    }
};

class GetDescCombosCalcNextIter
{
public:
    // Combine the subproblem results: f(arg) is their sum.
    int apply(vector<int>& rets)
    {
        int total = 0;
        for (size_t i = 0; i < rets.size(); ++i)
        {
            total += rets[i];
        }
        return total;
    }
};

Definitely more verbose, but the logic of the recursive version is preserved:

- the first class does the job of calculating the simplest (base) cases;
- the second gives the collection of values (x1 to xN) for which the function has to be computed before computing f(x);
- the last one computes f(x) given f(x1) to f(xN) from the previous results.

A large number of algorithms can readily be generalized to this pattern with clear logic.
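The Memoize template itself is not shown in the post; a minimal sketch consistent with the three-functor interface above (base-case test, argument expansion, combination) might look like this — the member names and the map-based cache are assumptions on my part:

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Hypothetical sketch of the Memoize template. Calc tests for the base
// case, GetNextIterArgs lists the subproblem arguments, CalcNextIter
// combines their results; intermediate results are cached in a std::map.
template <typename Ret, typename Arg,
          typename Calc, typename GetNextIterArgs, typename CalcNextIter>
class Memoize
{
public:
    Ret apply(Arg arg)
    {
        std::pair<bool, Ret> base;
        calc.apply(arg, base);
        if (base.first)                  // base case: Calc gave the answer
            return base.second;

        typename std::map<Arg, Ret>::iterator mi = cache.find(arg);
        if (mi != cache.end())           // already computed earlier
            return mi->second;

        std::vector<Arg> args;           // the arguments f(arg) depends on
        getNext.apply(arg, args);
        std::vector<Ret> rets;           // their (memoized) results
        for (std::size_t i = 0; i < args.size(); ++i)
            rets.push_back(apply(args[i]));
        return (cache[arg] = combine.apply(rets));
    }

private:
    std::map<Arg, Ret> cache;
    Calc calc;
    GetNextIterArgs getNext;
    CalcNextIter combine;
};
```

Any problem fitting the pattern plugs in the same way; the caller sees only apply().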

Finally the invocation becomes as simple as:

Memoize<int, pair<int, int>, GetDescCombosCalc,
        GetDescCombosGetNextIterArgs, GetDescCombosCalcNextIter> m;

int n = 100;
int k = 29;
m.apply(make_pair(n, k));   // note the (n, k) argument order used by the functors

> Storing the results in a map, though, only gives you a guarantee that
> you'll be able to look up elements in O(lg N) time, so computing the
> first N members of the sequence has time in O(N lg N).
>
> I assume you chose a map to conserve space in algorithms where not
> every index is used. I would suggest somehow expanding your template
> to allow the client to decide the space/speed trade-off, perhaps by
> supplying a container type as a template parameter.

One option would be to use a hash_map, and in any case the Memoize class is opaque, so changing the implementation would make no difference to the code for the computation: as long as the logic stayed the same, the intermediate results could be stored any which way.
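A sketch of that suggestion (all names here are mine, with std::unordered_map standing in for hash_map): the cache container becomes a template parameter defaulting to std::map, so a client can trade O(lg N) ordered lookups for expected O(1) hashed ones without touching the computation code:

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <unordered_map>
#include <utility>

// A hash for pair<int, int> keys, needed only for the unordered cache.
struct PairHash
{
    std::size_t operator()(const std::pair<int, int>& p) const
    {
        return std::hash<int>()(p.first) * 31u + std::hash<int>()(p.second);
    }
};

// The client chooses the space/speed trade-off via CacheType:
// std::map gives O(lg N) lookup; std::unordered_map gives expected O(1).
template <typename Ret, typename Arg, typename CacheType = std::map<Arg, Ret> >
class MemoCache
{
public:
    bool lookup(const Arg& a, Ret& out) const
    {
        typename CacheType::const_iterator it = cache.find(a);
        if (it == cache.end())
            return false;
        out = it->second;
        return true;
    }
    void store(const Arg& a, const Ret& r) { cache[a] = r; }
private:
    CacheType cache;
};

// e.g. a hashed cache for the (k, n) problem above:
typedef MemoCache<int, std::pair<int, int>,
                  std::unordered_map<std::pair<int, int>, int, PairHash> >
        HashedCombosCache;
```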

Regards

Vivek