
# Iteration vs. Recursion

Can every problem that has an iterative solution also be expressed in
terms of a recursive solution?

I tried one example, and am in the process of trying out more examples,
increasing their complexity as I go. Here's a simple one I tried out:

#include <stdio.h>
/* To compare the time and space cost of iteration against
   recursion */

const int SOME_CONST = 10;
void ifoo(void);
void rfoo(int, int, int);

int main(void)
{
    ifoo();
    rfoo(0, 0, 5);
    printf("\n");
    return 0;
}

void ifoo(void)
{
    int i = 0, x = 0, y = 5;
    for (; i < SOME_CONST; i++)
    {
        x += y;
        printf("%d\t%d\t%d\n", i, x, y);
    }
}

void rfoo(int i, int x, int y)
{
    x += y;
    printf("\n%d\t%d\t%d", i, x, y);

    if (i < SOME_CONST - 1)
        rfoo(i + 1, x, y);
}
I noted the following:

a. Both versions are linear in time, but while iteration is constant
in space, this recursion uses stack space proportional to the number
of calls (linear here, not exponential).

b. Iteration keeps its state in local variables; the recursive version
carries the equivalent state in the parameters of each call.

May 8 '06
From: Keith Thompson <k...@mib.org>
Indeed. I would rather criticise someone who calls himself a
programmer and who isn't able to program a sub-quadratic sorting
algorithm without having to consult a textbook (or use a library
routine).
I would criticize any programmer who *bothers* to implement a
sub-quadratic sorting algorithm without consulting a textbook or
using a library routine.
There's a difference between proving you're competent at writing
algorithms, by programming a famous old task without looking in a
book, vs. proving you're a wise manager of software who knows when
to use libraries and when to write your own code. Both should be
taught to any prospective software programmer.
I seem to recall that there was a gap of a decade or so between
the first implementation of a binary search algorithm, and the
first *correct* implementation of a binary search algorithm.
Assuming that's really true, not an "urban legend", I don't
understand why they had so much trouble getting the algorithm
correct. I did my first binary search so long ago I don't remember
which language I used, but I remember the algorithm was rather
obvious:
You have an array, as a global or passed as an argument.
Elements in that array are in ascending sequence already.
You have two indexes, for the first and last elements in the
array where you want to search. I called them LO and HI.
If LO > HI, the interval is empty: the search has
bottomed out without finding the target, so you immediately
return a not-found indication, for example -1.
Otherwise you compute MID = (LO+HI)/2 (doesn't matter which way
you round, you can even be inconsistent from moment to moment,
the algorithm works in any case).
Test target against element at MID.
- If target smaller, tail-recurse with LO, MID-1, or if you're
doing it iteratively then do assignment HI := MID-1.
- If target larger, tail-recurse with MID+1, HI, or if you're
doing it iteratively then do assignment LO := MID+1.
- Else (target exactly matches) you've found it, return MID.
The nice thing about my algorithm is it's symmetric, MID-1 or
MID+1, so it's easy at a glance to check you got it right.
Dec 14 '06 #71
From: Flash Gordon <s...@flash-gordon.me.uk>
The best way to make your sort Quick is to use the standard library's
sorting routine. That cuts down on writing time, debugging time, and run
time.
I agree completely. In addition, if after doing it you find your
library's quicksort implementation isn't fast enough, the best
thing to do is to see if you can find a better one that someone else
has implemented and use that.
Nowadays you have such choice, with a whole InterNet and Web for
efficiently locating such alternates to the library that was
bundled with your programming system. But long ago it would have
been prohibitive to search for alternatives that you didn't code
yourself.

When I started programming in 1964 (IBM 1620, with Fortran with
Format, later Fortran IId, and with SPS = assembler), and still in
1969 (IBM 1130 with Fortran IV), the bundled FORTRAN library was
cruddy, at least the one routine I needed, random number generator,
was horrible.

In 1969 I became suspicious, and decided to test it. I didn't trust
numerical tests such as chi-squared where you're supposed to
compute this meaningless statistic and then look in a book to see
if it's good enough. So I devised a graphic presentation of
randomness. If the data was sufficiently random, I'd see a
statistically uniform filling of a rectangle. If not, I'd see some
apparent pattern to the filling.

I tested the built-in FORTRAN random-number generator. What I got
out was horribly non-random: Images like chess pieces (rook or
bishop), or like that two-horned electric monster on the Mexican
beach in the sci-fi movie (or equivalently that ugly black
skyscraper in Chicago).

Then I wrote my own crock of a random number generator. I didn't
know anything about linear congruential or CRC generators, so I
invented my own messy algorithm for essentially hashing the
contents of memory on the computer and reading out partial hash
results. When I tested it, I got statistically uniform-density
rectangles, no apparent pattern, like snow on a TV screen. Success!

The punchcards containing my software burned up in a warehouse fire
in 1972, but I think I still have the CalComp 565 plotter output in
storage, if anybody wants to see the comparison.
Dec 14 '06 #72
In article <RE************ ***@Yahoo.Com>,
Robert Maas, see http://tinyurl.com/uh3t <re*****@Yahoo.Com> wrote:
>From: Keith Thompson <k...@mib.org>
I seem to recall that there was a gap of a decade or so between
the first implementation of a binary search algorithm, and the
first *correct* implementation of a binary search algorithm.
Assuming that's really true, not an "urban legend",
It's certainly an urban legend. Scarcely any of the code
in those days was ever published, hundreds of people must have
written that algorithm during that decade, many of them were
quite bright .... Basically, there's *no way* of knowing when
the first "correct" implementation was written, only when the
first correct implementation was published.

In any case, correctness is somewhat in the eye of the
beholder. If in a particular application it is known that the
array is not empty, or that the sought-for element is present,
then you can write slightly simpler code that would fail some
of the stress testing that a more general code needs. People
in those days got jobs done, they weren't concerned with formal
correctness according to some abstract spec. [I don't claim
that as good, bad or indifferent, it was just the way things
were.]
> I don't
understand why they had so much trouble getting the algorithm
correct.
That was long before the first undergraduate CS courses, and long before
anyone but the odd fanatic worried about "correctness", as
opposed to "working". Even the notion of an algorithm was
little discussed. You learned programming by attending a
two-day course on the Autocode of a particular computer, and
then just doing it. There were no books about computing, no
published code, no standards; and professors got very uppity
if some spotty oik from the Computing Service tried to tell
them how to write programs.

Nowadays, that particular algorithm is one of the
common early tasks in an algorithms module. You tell the
class what you want their code to do, and give them a few
minutes to write it. Then you find out whether their code
works if the array is empty, if there is only one element,
if the target is larger/smaller than the last/first element,
and so on. It doesn't, so that gives you the chance to talk
about invariants, pre- and post-conditions, program structure
in general, blah-de-blah, *after* which:
[...] I remember the algorithm was rather
obvious: [...]
Quite so! Lots of science in general, and maths/CS in
particular, is rather obvious. The job of education is to make
it so. *Before* you've been taught how to make things obvious,
even the simplest concepts are obscure.
If LO > HI the search has bottomed out without finding the target,
so you immediately return a not-found indication, for example -1.
You've already hit a design problem! What will you do if
-1 is a permitted index? Do you want the complexity of returning
a structure consisting of a Boolean and an integer [if your chosen
language permits that]? Or do you want the caller to have to test
whether the returned index is in range?
Otherwise you compute MID = (LO+HI)/2 (doesn't matter which way
you round, you can even be inconsistent from moment to moment,
the algorithm works in any case).
[The algorithm works, of course, as long as LO <= MID <= HI,
it's just more efficient if MID is somewhere around the middle. But
if we're picking nits, then you're in trouble if LO+HI > Maxint, or
in various other similar potential overflow cases.]
Test target against element at MID.
- If target smaller, tail-recurse with LO, MID-1, or if you're
doing it iteratively then do assignment HI := MID-1. [...]
In the old days, this wasn't necessarily as easy as you
make it seem. For integer elements, OK. For reals, there was the
vexed question of exact equality; and strings were scarcely a data
type before the mid-'60s, so if you had an application like trying to
see if a word was in a list, there was a lot of encoding and
decoding to do here. Many languages didn't permit recursion;
and even "iteration" was discouraged -- "while (lo <= hi) {...}"
is a relatively late invention -- so you got spaghetti code using
"goto" to emulate the iteration.

--
Andy Walker, School of MathSci., Univ. of Nott'm, UK.
an*@maths.nott.ac.uk
Dec 14 '06 #73
In comp.edu Keith Thompson <ks***@mib.org> wrote:
to*****@app-4.diku.dk (Torben Ægidius Mogensen) writes:
[...]
Indeed. I would rather criticise someone who calls himself a
programmer and who isn't able to program a sub-quadratic sorting
algorithm without having to consult a textbook (or use a library
routine).

I would criticize any programmer who *bothers* to implement a
sub-quadratic sorting algorithm without consulting a textbook or using
a library routine.
In fairness to Torben, he was talking about the *ability* to do this,
not whether it was a good idea from a software development sense.

Sorting is a nice, clean, well-defined problem, and efficient
algorithms for it (either mergesort or quicksort) are very
straightforward uses of the basic concept of divide-and-conquer.

It just so happens that this *specific* problem is such a useful thing
that there are many good implementations that can be used, so a good
programmer should take advantage of that. However, if they *can't*
roll their own on such a clean problem, what are the odds that they
can come up with good solutions to other, not-so-clean, and
not-so-well-studied problems that they might encounter for which they
*can't* rely on pre-written libraries?
I don't remember where I read this, but I seem to recall that there
was a gap of a decade or so between the first implementation of a
binary search algorithm, and the first *correct* implementation of a
binary search algorithm.
I seem to remember reading that too. If I had to bet on a source, I'd
suggest Bentley's "Programming Pearls" - he has quite the obsession
with binary search... :-) Unfortunately, I'm at home, and my copy is
at work, so I can't verify this now.

--

Steve Stringer
si*********@gmail.com

Dec 18 '06 #74
<si*********@gmail.com> wrote:
Sorting is a nice, clean, well-defined problem, and efficient
algorithms for it (either mergesort or quicksort) are very
straightforward uses of the basic concept of divide-and-conquer.
I'd expect someone with a fair practical knowledge of programming to
come up with merge sort or even heap sort on their own; but quicksort is
completely non-obvious to me. In particular, it's a little non-obvious
both that partitioning can be done in linear time, and it's completely
non-obvious that choosing an appropriate pivot can be done well without
prior knowledge about the distribution of values and without negatively
impacting the asymptotic complexity of the routine.
I don't remember where I read this, but I seem to recall that there
was a gap of a decade or so between the first implementation of a
binary search algorithm, and the first *correct* implementation of a
binary search algorithm.

I seem to remember reading that too. If I had to bet on a source, I'd
suggest Bentley's "Programming Pearls" - he has quite the obsession
with binary search... :-)
I do. On page 34, Bentley quotes Knuth as saying that the first binary
search was published in 1946, but the first correct binary search was
published in 1962. If you're as pedantic as me, note the word
"published" rather than "devised" or "implemente d".

--
Chris Smith
Dec 18 '06 #75
Chris Smith wrote:
<si*********@gmail.com> wrote:
>I seem to remember reading that too. If I had to bet on a source, I'd
suggest Bentley's "Programming Pearls" - he has quite the obsession
with binary search... :-)

I do. On page 34, Bentley quotes Knuth as saying that the first binary
search was published in 1946, but the first correct binary search was
published in 1962. If you're as pedantic as me, note the word
"published" rather than "devised" or "implemente d".
Binary search has probably been used for thousands of years. I expect it was
used before calculus-based optimisation methods like Newton-Raphson. I'd be
amazed if it wasn't first published hundreds of years ago as well. It seems
like an odd thing to claim to be so modern, given how old much more
sophisticated concepts like calculus are...

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/product...ex.html?usenet
Dec 18 '06 #76
