Bytes IT Community

Recursive generators and backtracking search

I've been using generators to implement backtracking search for a while
now. Unfortunately, my code is large and complex enough (doing
unification on math expressions) that it's hard to post a simple
example. So I decided to look for a simpler problem that could be used
to demonstrate the technique that I am talking about.

I noticed that PEP 255 (Simple Generators) refers to an implementation
of the "8 Queens" problem in the lib/test directory. Looking at the
code, I see that while it does use generators, it doesn't use them
recursively.

As an alternative, I'd like to present the following implementation. If
you compare this one with the one in lib/test/test_generators.py you
will agree (I hope) that by using recursive generators to implement
backtracking, the resulting code is a little more straightforward and
intuitive:

# Example of using generators to implement backtracking search.
# The example below is an implementation of the classic "N queens"
# problem (place N queens on an N x N chessboard so that none are
# threatened by the others.)
#
# Board representation: Since no two queens can be on the same
# row, the board is represented as a tuple of length N, where
# each element is the column occupied by the queen on that row.

def queens( bsize ):

    # Function to test if a queen is threatened by any previously
    # placed queen.
    def threaten( qarray, newpos ):
        # Check the column and both diagonals
        dist = len( qarray )                    # Distance between rows
        for q in qarray:
            if q == newpos: return True         # Same column
            if q + dist == newpos: return True  # Diagonal
            if q - dist == newpos: return True  # Diagonal
            dist -= 1
        return False

    def qsearch( qarray = () ):
        for q in range( 0, bsize ):             # Try each position
            if not threaten( qarray, q ):       # If not threatened
                pos = qarray + ( q, )           # Append to the pos array

                if len( pos ) >= bsize:         # Are we done?
                    yield pos                   # Yield the answer
                else:                           # Recursively call new generator
                    for pos in qsearch( pos ):
                        yield pos

    print "Queens problem for", bsize, "x", bsize, "board."
    for ans in qsearch():
        # Print out the board
        print "+" + "---+" * bsize
        for q in ans:
            print "|" + "   |" * q + " Q |" + "   |" * ( bsize - q - 1 )
            print "+" + "---+" * bsize
        print

queens( 8 )
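[Editorial note: a quick way to sanity-check a solver like this is to count its solutions; the 8x8 board is known to have exactly 92. Below is a minimal sketch of the same recursive-generator search in modern Python 3 syntax (the post above uses Python 2 `print` statements); `queens_count` is an illustrative name, not part of the original post.]

```python
# Python 3 sketch of the post's recursive-generator backtracking search,
# reduced to counting solutions rather than printing boards.

def queens_count(bsize):
    def threaten(qarray, newpos):
        # A new queen conflicts with an earlier one if they share a
        # column or a diagonal.
        dist = len(qarray)          # row distance to the earliest queen
        for q in qarray:
            if q == newpos or q + dist == newpos or q - dist == newpos:
                return True
            dist -= 1
        return False

    def qsearch(qarray=()):
        for q in range(bsize):          # try each column on this row
            if not threaten(qarray, q):
                pos = qarray + (q,)
                if len(pos) >= bsize:   # all rows filled: a solution
                    yield pos
                else:                   # recurse into the next row
                    yield from qsearch(pos)

    return sum(1 for _ in qsearch())

print(queens_count(8))  # the classic 8-queens puzzle has 92 solutions
```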

Now, you may be wondering what my point is. Well, first, I want to
encourage people to think about using Python as a language for complex
heuristic search problems. Traditionally, LISP and Prolog have been the
languages of choice for "AI"-type programming; however, Python has a
clear advantage in readability and maintainability, and is much better
integrated into modern computing environments (in terms of available
interpreters, IDEs, libraries, etc.)

Secondly, I want to lobby for additional support in the language and
standard libraries for handling such problems. There are a number of
fairly obvious language enhancements that would make the above example
even simpler - for example, the idea of being able to return the output
of one generator directly from another, instead of having to iterate
through all of the results and re-yield them, has already been
discussed in this forum.
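[Editorial note: this exact enhancement later landed as PEP 380's `yield from` in Python 3.3. A minimal before/after sketch, using a hypothetical tree-flattening generator (the names `flatten_loop` and `flatten_delegate` are illustrative, not from the thread):]

```python
# PEP 380 (Python 3.3+) lets a generator delegate to a sub-generator
# directly, replacing the explicit re-yield loop discussed above.

def flatten_loop(tree):
    # the 2005-era formulation: iterate and re-yield each result
    for node in tree:
        if isinstance(node, list):
            for leaf in flatten_loop(node):
                yield leaf
        else:
            yield node

def flatten_delegate(tree):
    # the same recursion written with delegation
    for node in tree:
        if isinstance(node, list):
            yield from flatten_delegate(node)
        else:
            yield node

tree = [1, [2, [3, 4]], 5]
print(list(flatten_delegate(tree)))  # [1, 2, 3, 4, 5]
```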

Oct 29 '05 #1
6 Replies


Talin <vi*****@gmail.com> wrote:
> even simpler - for example, the idea of being able to return the output
> of one generator directly from another instead of having to iterate
> through all of the results and then re-yield them has already been
> discussed in this forum.


I missed those discussions, having been away from the group for a while.
To me, the simplification of changing, e.g.,

for x in whatever_other_iterable: yield x

into (say)

yield from whatever_other_iterable

is minute and not worth changing the syntax (even though something like
'yield from' would mean no keywords would need to be added).

Alex
Oct 29 '05 #2

"Talin" <vi*****@gmail.com> writes:
> As an alternative, I'd like to present the following implementation. If
> you compare this one with the one in lib/test/test_generators.py you
> will agree (I hope) that by using recursive generators to implement
> backtracking, the resulting code is a little more straightforward and
> intuitive:

I'd propose one change to that...

>     def qsearch( qarray = () ):
>         for q in range( 0, bsize ):             # Try each position
>             if not threaten( qarray, q ):       # If not threatened
>                 pos = qarray + ( q, )           # Append to the pos array
>
>                 if len( pos ) >= bsize:         # Are we done?
>                     yield pos                   # Yield the answer
>                 else:                           # Recursively call new generator
>                     for pos in qsearch( pos ):
>                         yield pos


Um - do you really want to reuse the variable pos here? Yeah, it
works, but this strikes me as very confusing. I'm not sure that it
might not be implementation dependent.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Oct 29 '05 #3

"Talin" <viri...@gmail.com> wrote:
> I've been using generators to implement backtracking search for a while
> now. Unfortunately, my code is large and complex enough (doing
> unification on math expressions) that it's hard to post a simple
> example. So I decided to look for a simpler problem that could be used
> to demonstrate the technique that I am talking about.


Here are two even simpler problems that are solved elegantly (though
not necessarily efficiently) with recursive generators:

def cartesian_product(*sequences):
    '''Iterate over the elements of the cartesian product of zero or
    more sequences.

    >>> for x in cartesian_product(range(3), 'a', (3.25,-1.2)):
    ...     print x
    (0, 'a', 3.25)
    (0, 'a', -1.2)
    (1, 'a', 3.25)
    (1, 'a', -1.2)
    (2, 'a', 3.25)
    (2, 'a', -1.2)
    '''
    if not sequences:
        yield ()
    else:
        for item in sequences[0]:
            head = (item,)
            for tail in cartesian_product(*sequences[1:]):
                yield head + tail

def powerset(iterable):
    '''Iterate over all subsets of an iterable.

    >>> for s in powerset('abc'):
    ...     print s
    frozenset([])
    frozenset(['a'])
    frozenset(['b'])
    frozenset(['a', 'b'])
    frozenset(['c'])
    frozenset(['a', 'c'])
    frozenset(['c', 'b'])
    frozenset(['a', 'c', 'b'])
    '''
    yield frozenset()
    for s in _powerset(iter(iterable)):
        yield s

def _powerset(iterator):
    first = frozenset([iterator.next()])
    yield first
    for s in _powerset(iterator):
        yield s
        yield s | first

George
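[Editorial note: these translate almost directly to Python 3 (`iterator.next()` becomes `next(iterator)`), with one caveat: under PEP 479 (Python 3.7+), a StopIteration escaping a generator body is turned into a RuntimeError, so the end of the iterator in `_powerset` must be caught explicitly. A sketch, cross-checked against `itertools.product`:]

```python
from itertools import product

def cartesian_product(*sequences):
    # base case: the product of zero sequences is a single empty tuple
    if not sequences:
        yield ()
    else:
        for item in sequences[0]:
            for tail in cartesian_product(*sequences[1:]):
                yield (item,) + tail

def powerset(iterable):
    yield frozenset()
    yield from _powerset(iter(iterable))

def _powerset(iterator):
    # PEP 479: a StopIteration escaping a generator becomes a
    # RuntimeError in Python 3.7+, so catch exhaustion explicitly.
    try:
        first = frozenset([next(iterator)])
    except StopIteration:
        return
    yield first
    for s in _powerset(iterator):
        yield s
        yield s | first

print(list(cartesian_product(range(2), 'ab')) == list(product(range(2), 'ab')))  # True
print(len(set(powerset('abc'))))  # 8, i.e. 2**3 subsets
```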

Oct 30 '05 #4

>> for pos in qsearch( pos ):
>>     yield pos
>
> Um - do you really want to reuse the variable pos here? Yeah, it
> works, but this strikes me as very confusing. I'm not sure that it
> might not be implementation dependent.


Certainly not. pos is - and that is standard Python semantics - just a
name. Passing the bound _value_ of pos to some function and rebinding
the name afterwards is perfectly legal and will work in all
implementations.

The question of style remains, though. I wouldn't do that either, but
what I frequently do is something like this:

    pos = "10"
    pos = int(pos)

Thus when I'm sort of transforming a value, I think it's OK, as the name
still refers to the same conceptual entity.

Regards,

Diez
Oct 30 '05 #5


Alex Martelli wrote:
> for x in whatever_other_iterable: yield x
>
> into (say)
>
> yield from whatever_other_iterable
>
> is minute and not worth changing the syntax (even though something like
> 'yield from' would mean no keywords would need to be added).


I agree that the improvement is minor, but I'll take what I can get.
Although, I can think that perhaps there could be a potential
efficiency improvement as well - right now, each result has to crawl
its way up the stack of yields. For an 8 x 8 board, each final result
gets yielded 8 times. A 'yield from' might potentially be able to
splice the results directly into the output of the generator using some
kind of iterator chain logic. I'm not sure how feasible this is,
however.
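[Editorial note: one way to approximate that splicing without language support, offered as a sketch rather than a proposal: have each search level yield sub-generators instead of looping over them, and drive the whole search from a single "trampoline" frame with an explicit stack. `iter_splice` and this `qsearch` variant are illustrative names, not anything from the thread; Python 3 syntax.]

```python
# Trampoline-style splicing for recursive generator searches: instead
# of "for x in sub: yield x" at every level, a sub-search is yielded as
# an iterator and one driver loop drains it, so each result is yielded
# once rather than once per recursion level.

def iter_splice(root):
    stack = [root]
    while stack:
        try:
            item = next(stack[-1])
        except StopIteration:
            stack.pop()                 # this sub-search is exhausted
            continue
        if hasattr(item, '__next__'):   # a sub-generator: descend into it
            stack.append(item)
        else:                           # a finished solution: emit it
            yield item

def threaten(qarray, newpos):
    dist = len(qarray)
    for q in qarray:
        if q in (newpos, newpos - dist, newpos + dist):
            return True
        dist -= 1
    return False

def qsearch(bsize, qarray=()):
    for q in range(bsize):
        if not threaten(qarray, q):
            pos = qarray + (q,)
            if len(pos) >= bsize:
                yield pos                   # a leaf result (a tuple)
            else:
                yield qsearch(bsize, pos)   # a sub-search to be spliced

print(len(list(iter_splice(qsearch(8)))))  # 92 solutions for 8-queens
```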

A more compelling benefit would be some means of yielding a value
across multiple stack frames. Currently the generator semantics are
limited, in that they only allow co-routine behavior between a caller
and its directly called subroutine.

What I am more interested in, however, is the use of backtracking
search in Python, and its application to more complex search problems.
The main benefit of the generator approach is the elimination of
callbacks, allowing the consumer of the search results to retain its
local state in a convenient way. (Anyone who's ever struggled with the
SAX API for XML parsing knows what I am talking about.)

One difference between these more complex problems and the simple
example I posted is that it isn't just simple recursion at work. Each
level of search may invoke a variety of different search strategies for
each sub-part of the problem, which themselves will do the same with
their own portion.

Here's a thought experiment: What would it take to make Python the
premier language for AI research? (Not that I am proposing that this
should be the Python community's goal, this is more in the way of a
brainstorm session.)

Oct 30 '05 #6

[Talin]
> I've been using generators to implement backtracking search for a while
> now. Unfortunately, my code is large and complex enough (doing
> unification on math expressions) that it's hard to post a simple
> example. So I decided to look for a simpler problem that could be used
> to demonstrate the technique that I am talking about.
>
> I noticed that PEP 255 (Simple Generators) refers to an implementation
> of the "8 Queens" problem in the lib/test directory. Looking at the
> code, I see that while it does use generators, it doesn't use them
> recursively.
In context, the N-Queens and MxN Knight's Tour solvers in
test_generators.py are exercising the conjoin() generators in that
file. That's a different approach to backtracking search, with some
nice features too: (1) it uses heap space instead of stack space;
and, (2) it's easy to run entirely different code at different levels
of the search. #2 isn't well-illustrated by the N-Queens solver
because the problem is so symmetric, although it is used to give the
code for each row its own local table of the board lines used by the
squares in that row. That in turn is a major efficiency win. The
Knight's Tour solver makes more obvious use of #2, by, e.g., running
different code for "the first" square than for "the second" square
than for "the last" square than for "all the other" squares. That
doesn't require any runtime test-and-branch'ing in the search code,
it's set up once at the start in the list of M*N generators passed to
conjoin() (each square gets its own generator, which can be customized
in arbitrary ways, in advance, for that square).
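[Editorial note: for readers without test_generators.py at hand, the conjoin() interface Tim describes can be sketched roughly as follows. This is a simplified recursive rendition in modern Python 3 syntax; the real implementation also provides the iterative, heap-space variant Tim mentions.]

```python
def conjoin(gs):
    # gs: a list of no-argument callables, each returning an iterable.
    # Yields one list per full assignment of values, one value per
    # generator; since each gs[i] is a closure, it can consult the
    # values chosen by earlier levels through shared state.
    values = [None] * len(gs)

    def _gen(i):
        if i >= len(gs):
            yield values[:]            # snapshot a complete assignment
        else:
            for values[i] in gs[i]():  # bind this level's value in place
                yield from _gen(i + 1)

    yield from _gen(0)

# e.g. a 2x2 cartesian product built from two independent levels:
combos = list(conjoin([lambda: range(2), lambda: 'ab']))
print(combos)  # [[0, 'a'], [0, 'b'], [1, 'a'], [1, 'b']]
```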
> As an alternative, I'd like to present the following implementation. If
> you compare this one with the one in lib/test/test_generators.py you
> will agree (I hope) that by using recursive generators to implement
> backtracking, the resulting code is a little more straightforward and
> intuitive:

Since "straightforward and intuitive" weren't the goals of the
test_generators.py implementations, that's not too surprising ;-)
...

Oct 31 '05 #7

This discussion thread is closed