
Why is recursion so slow?

Recursion is awesome for writing some functions, like searching trees, but wow, how can it be THAT much slower for computing Fibonacci numbers?

Is the recursive definition computing fib(1) to fib(x-1) over again for every x? Is that what lazy evaluation in functional languages avoids, thus making recursive versions much faster? Is recursive Fibonacci in Haskell as fast as an imperative solution in a procedural language?

def fibr(nbr):
    if nbr > 1:
        return fibr(nbr - 1) + fibr(nbr - 2)
    if nbr == 1:
        return 1
    if nbr == 0:
        return 0

def fibf(n):
    sum = 0
    a = 1
    b = 1
    if n <= 2: return 1
    for i in range(3, n + 1):
        sum = a + b
        a = b
        b = sum
    return sum
I found a version in Clojure that is superfast:
(def fib-seq
  (concat
   [0 1]
   ((fn rfib [a b]
      (lazy-cons (+ a b) (rfib b (+ a b)))) 0 1)))

(defn fibx [x]
  (last (take (+ x 1) fib-seq)))

(fibx 12000) is delivered instantly. Is it using lazy evaluation?
Jun 29 '08 #1
10 Replies


Quoting slix <no**********@yahoo.se>:
Recursion is awesome for writing some functions, like searching trees
etc but wow how can it be THAT much slower for computing fibonacci-
numbers?
The problem is not with 'recursion' itself, but with the algorithm:

def fibr(nbr):
    if nbr > 1:
        return fibr(nbr - 1) + fibr(nbr - 2)
    if nbr == 1:
        return 1
    if nbr == 0:
        return 0
If you trace, say, fibr(5), you'll find that your code needs to compute fibr(4)
and fibr(3), and to compute fibr(4), it needs to compute fibr(3) and fibr(2). As
you can see, fibr(3), and its whole subtree, is computed twice. That is enough to
make it an exponential algorithm, and thus intractable. Luckily, the iterative
form is pretty readable and efficient. If you must insist on recursion (say,
perhaps the problem you are solving cannot be solved iteratively with ease), I'd
suggest you take a look at 'dynamic programming', or (easier but not
necessarily better) the 'memoize' design pattern.

is the recursive definition counting fib 1 to fib x-1 for every x?
Yes - that's what the algorithm says. (Well, actually, the algorithm says to
count more than once, hence the exponential behaviour.) The memoize pattern could
help in this case.
is
that what lazy evaluation in functional languages avoids thus making
recursive versions much faster?
Not exactly... Functional languages are (or should be) optimized for recursion,
but if the algorithm you write is still exponential, it will still take a long time.
is recursive fibonacci in haskell as fast as an imperative solution in
a procedural language?
[...]
i found a version in Clojure that is superfast:
I've seen the Haskell implementation (quite impressive). I don't know Clojure
(is it a dialect of Lisp?), but that code seems similar to the Haskell one. If
you look closely, there is no recursion in that code (no function calls). The
Haskell code works by defining a list "fib" as "the list that starts with 0, 1,
and from there, each element is the sum of the element on 'fib' plus the element
on 'tail fib'". The lazy evaluation there means that you can define a list based
on itself, but there is no recursive function call.
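
If it helps to see the same idea in Python: here is a rough sketch of mine (not the Haskell or Clojure code), using a generator as the lazy sequence and islice to mimic the Clojure fibx:

import itertools

def fib_seq():
    # lazily yields 0, 1, 1, 2, 3, 5, ... one element at a time
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

def fibx(x):
    # take the first x+1 elements and return the last one, like the Clojure fibx
    return list(itertools.islice(fib_seq(), x + 1))[-1]

print(fibx(30))   # 832040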

Cheers,

(I'm sleepy... I hope I made some sense)

--
Luis Zarrabeitia
Facultad de Matemática y Computación, UH
http://profesores.matcom.uh.cu/~kyrie
Jun 29 '08 #2



slix wrote:
Recursion is awesome for writing some functions, like searching trees
etc but wow how can it be THAT much slower for computing fibonacci-
numbers?
The comparison below has nothing to do with recursion versus iteration.
(It is a common myth.) You (as have others) are comparing an
exponential, O(1.6**n), algorithm with a linear, O(n), algorithm.

def fibr(nbr):
    if nbr > 1:
        return fibr(nbr - 1) + fibr(nbr - 2)
    if nbr == 1:
        return 1
    if nbr == 0:
        return 0
This is exponential due to calculating fib(n-2) twice, fib(n-3) thrice,
fib(n-4) 5 times, fib(n-5) 8 times, etc. (If you notice an interesting
pattern, you are probably correct!) (It returns None for n < 0.)
If rewritten iteratively, with a stack, it is still exponential.
def fibf(n):
    sum = 0
    a = 1
    b = 1
    if n <= 2: return 1
    for i in range(3, n + 1):
        sum = a + b
        a = b
        b = sum
    return sum
This is a different, linear algorithm. fib(i), 0<=i<n, is calculated
just once. (It returns 1 for n < 0.) If rewritten (tail) recursively,
it is still linear.

In Python, an algorithm written with iteration is faster than the same
algorithm written with recursion because of the cost of function calls.
But the difference should be a multiplicative factor that is nearly
constant for different n. (I plan to do experiments to pin this down
better.) Consequently, algorithms that can easily be written
iteratively, especially using for loops, usually are in Python programs.
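
For instance, a rough way to see that near-constant factor (my own sketch, not the experiment mentioned above) is to time the same linear algorithm written both ways:

import timeit

def fib_rec(n, a=0, b=1):
    # linear algorithm, written recursively (the recursive call is a tail call)
    if n == 0:
        return a
    return fib_rec(n - 1, b, a + b)

def fib_iter(n):
    # the same linear algorithm, written with a loop
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(timeit.timeit('fib_rec(500)', 'from __main__ import fib_rec', number=1000))
print(timeit.timeit('fib_iter(500)', 'from __main__ import fib_iter', number=1000))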

Terry Jan Reedy

Jun 29 '08 #3

On Sun, Jun 29, 2008 at 1:27 AM, Terry Reedy <tj*****@udel.edu> wrote:
>

slix wrote:
>>
Recursion is awesome for writing some functions, like searching trees
etc but wow how can it be THAT much slower for computing fibonacci-
numbers?

The comparison below has nothing to do with recursion versus iteration. (It
is a common myth.) You (as have others) are comparing an exponential,
O(1.6**n), algorithm with a linear, O(n), algorithm.
FWIW, though, it's entirely possible for a recursive algorithm with
the same asymptotic runtime to be wall-clock slower, just because of
all the extra work involved in setting up and tearing down stack
frames and executing call/return instructions. (If the function is
tail-recursive you can get around this, though I don't know exactly
how CPython is implemented and whether it optimizes that case.)
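
For what it's worth, "optimizing the tail call" essentially means reusing the current frame instead of pushing a new one; done by hand, that is just rewriting the call as a loop. A small illustrative sketch of mine (not anything CPython actually does):

def gfib_rec(a, b, n):
    if n == 0:
        return a
    return gfib_rec(b, a + b, n - 1)   # tail call: nothing left to do afterwards

def gfib_loop(a, b, n):
    # what tail-call elimination would effectively produce
    while n != 0:
        a, b, n = b, a + b, n - 1      # reuse the same bindings instead of recursing
    return a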
Jun 29 '08 #4

In case anyone is interested...

# Retrieved from: http://en.literateprograms.org/Fibon...n)?oldid=10746

# Recursion with memoization
memo = {0: 0, 1: 1}
def fib(n):
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

# Quick exact computation of large individual Fibonacci numbers

def powLF(n):
    if n == 1: return (1, 1)
    L, F = powLF(n // 2)
    L, F = (L**2 + 5*F**2) >> 1, L * F
    if n & 1:
        return ((L + 5*F) >> 1, (L + F) >> 1)
    else:
        return (L, F)

def fib(n):
    return powLF(n)[1]
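
A quick sanity check of the snippet above (note that, as pasted, the second def fib shadows the memoized one):

print(fib(10))    # 55
print(fib(100))   # 354224848179261915075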
Jun 30 '08 #5

Dan Upton wrote:
On Sun, Jun 29, 2008 at 1:27 AM, Terry Reedy <tj*****@udel.edu> wrote:
>>
slix wrote:
>>Recursion is awesome for writing some functions, like searching trees
etc but wow how can it be THAT much slower for computing fibonacci-
numbers?
The comparison below has nothing to do with recursion versus iteration. (It
is a common myth.) You (as have others) are comparing an exponential,
O(1.6**n), algorithm with a linear, O(n), algorithm.

FWIW, though, it's entirely possible for a recursive algorithm with
the same asymptotic runtime to be wall-clock slower, just because of
all the extra work involved in setting up and tearing down stack
frames and executing call/return instructions. (If the function is
tail-recursive you can get around this, though I don't know exactly
how CPython is implemented and whether it optimizes that case.)
By decision of the BDFL, based on the argument that it makes debugging
harder, CPython doesn't optimize tail-recursive calls.
Jun 30 '08 #6

On 2008-06-29, slix <no**********@yahoo.se> wrote:
Recursion is awesome for writing some functions, like searching trees
etc but wow how can it be THAT much slower for computing fibonacci-
numbers?
Try the following recursive function:

def fib(n):

    def gfib(a, b, n):
        if n == 0:
            return a
        else:
            return gfib(b, a + b, n - 1)

    return gfib(0, 1, n)
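
One caveat (my note, not Antoon's): CPython does not eliminate that tail call, so gfib still nests n+1 frames and hits the default recursion limit somewhere around n = 1000. Raising the limit works for moderately larger n, e.g.:

import sys
sys.setrecursionlimit(20000)   # caution: very large limits risk crashing the interpreter
print(fib(5000))               # fine now; would exceed the default limit otherwise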

--
Antoon Pardon
Jul 1 '08 #7

Luis Zarrabeitia <ky***@uh.cu> wrote:
is that what lazy evaluation in functional languages avoids thus
making recursive versions much faster?

Not exactly... Functional languages are (or should be) optimized for recursion,
but if the algorithm you write is still exponential, it will still
take a long time.
Actually I think functional languages are likely to perform memoization.
always produce the same result if given the same arguments, so you can
memoize any function.

See here for a python memoize which makes the recursive algorithm run
fast...

http://aspn.activestate.com/ASPN/Coo...n/Recipe/52201
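
Something along the lines of that recipe (my own rough sketch of the idea, not the recipe verbatim):

class memoize:
    # cache results keyed by the positional arguments
    def __init__(self, fn):
        self.fn = fn
        self.cache = {}
    def __call__(self, *args):
        if args not in self.cache:
            self.cache[args] = self.fn(*args)
        return self.cache[args]

@memoize
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(300))   # instant, because each fib(i) is computed only once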

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Jul 1 '08 #8

On Jul 1, 22:46, Nick Craig-Wood <n...@craig-wood.com> wrote:
(snip)
By definition any function in a functional language will
always produce the same result if given the same arguments,
This is only true for pure functional languages.

I know you know it, but someone might think it also applies to impure
FPLs like Common Lisp.

(snip)
Jul 1 '08 #9

Nick Craig-Wood wrote:
[snip]
By definition any function in a functional language will
always produce the same result if given the same arguments, so you can
memoize any function.
Ah, so that's why time.time() seems to be stuck... ;)

Rich

Jul 1 '08 #10

Rich Harkins <rh******@nettrekker.com> wrote:
Nick Craig-Wood wrote:
[snip]
By definition any function in a functional language will
always produce the same result if given the same arguments, so you can
memoize any function.

Ah, so that's why time.time() seems to be stuck... ;)
;-)

As Bruno noted I should have said "pure functional language" above.

As for how you deal with IO in a pure functional language, well,
either you make it impure (eg Erlang or Lisp) - functions can have
side effects, or you do other complicated things like monads in
Haskell!

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Jul 2 '08 #11
