
Regular Expressions: large number of ORs


Hi!

Given a string, I want to find all occurrences of certain
predefined words in that string. The problem is, the list of
words to be detected can be on the order of thousands.

With the re module, this can be solved with something like this:

import re

r = re.compile("word1|word2|word3|.......|wordN")
r.findall(some_string)

Unfortunately, with more than about 10,000 words in the regexp,
I get a regular expression runtime error when trying to execute
the findall function (compile works fine, but is slow).

I don't know if the re module is the right solution here; any
suggestions on alternative solutions or data structures that
could be used to solve the problem?

André

Jul 18 '05 #1


[André Søreng]
Given a string, I want to find all occurrences of certain
predefined words in that string. The problem is, the list of
words to be detected can be on the order of thousands.

With the re module, this can be solved with something like this:

import re

r = re.compile("word1|word2|word3|.......|wordN")
r.findall(some_string)

Unfortunately, with more than about 10,000 words in the regexp,
I get a regular expression runtime error when trying to execute
the findall function (compile works fine, but is slow).

I don't know if the re module is the right solution here; any
suggestions on alternative solutions or data structures that
could be used to solve the problem?


Put the words you're looking for into a set (or as the keys of a dict
in older Pythons; the values in the dict are irrelevant).

I don't know what you mean by "word", so write something that breaks
your string into what you mean by words. Then:

for word in something_that_produces_words(the_string):
    if word in set_of_words:
        # found one

This takes expected-case time proportional to the number of words in
the string, + setup time proportional to the number of "interesting"
words (the time needed to create a set or dict from them). 10,000
interesting words won't even start to strain it.
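
As a minimal sketch of this approach (assuming "word" means a run of \w
characters; that definition and the sample words below are placeholders):

import re

# Placeholder words; in practice this set holds the ~10,000 real ones.
interesting = set(["word1", "word2", "word3"])

def find_interesting(text):
    # One tokenizing pass, then an O(1) expected-time set lookup per token.
    return [w for w in re.findall(r"\w+", text) if w in interesting]

print(find_interesting("word1 stuff word3"))   # prints ['word1', 'word3']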
Jul 18 '05 #2

André Søreng wrote:

Hi!

Given a string, I want to find all occurrences of certain
predefined words in that string. The problem is, the list of
words to be detected can be on the order of thousands.

With the re module, this can be solved with something like this:

import re

r = re.compile("word1|word2|word3|.......|wordN")
r.findall(some_string)

Unfortunately, with more than about 10,000 words in the regexp,
I get a regular expression runtime error when trying to execute
the findall function (compile works fine, but is slow).

I don't know if the re module is the right solution here; any
suggestions on alternative solutions or data structures that
could be used to solve the problem?
If you can split some_string into individual words, you could look them up in a set of known words:

known_words = set("word1 word2 word3 ....... wordN".split())
found_words = [ word for word in some_string.split() if word in known_words ]

Kent


Jul 18 '05 #3

This does not sound like a job for a single regex.

Using a list and a listcomp (say your words are in a list called "mywordlist")
you can make this quite terse. Of course, I have a knack for writing algorithms
with a very large exp when people tell me the O(N^exp).

Try this:

import re

myregexlist = [re.compile(aword) for aword in mywordlist]
myoccurrences = [argx.findall(some_string) for argx in myregexlist]

Now you should have a 1:1 mapping between mywordlist and myoccurrences. Of
course, you can fill mywordlist with real regular expressions instead of
just words. If you want to count the words, you may just want to use the
string count method:

myoccurrences = [some_string.count(aword) for aword in mywordlist]

This may make more sense if you are not using true regexes.

James

On Tuesday 01 March 2005 11:46 am, André Søreng wrote:
Hi!

Given a string, I want to find all occurrences of certain
predefined words in that string. The problem is, the list of
words to be detected can be on the order of thousands.

With the re module, this can be solved with something like this:

import re

r = re.compile("word1|word2|word3|.......|wordN")
r.findall(some_string)

Unfortunately, with more than about 10,000 words in the regexp,
I get a regular expression runtime error when trying to execute
the findall function (compile works fine, but is slow).

I don't know if the re module is the right solution here; any
suggestions on alternative solutions or data structures that
could be used to solve the problem?

André


--
James Stroud, Ph.D.
UCLA-DOE Institute for Genomics and Proteomics
Box 951570
Los Angeles, CA 90095
Jul 18 '05 #4

Kent Johnson wrote:
André Søreng wrote:

Hi!

Given a string, I want to find all occurrences of certain
predefined words in that string. The problem is, the list of
words to be detected can be on the order of thousands.

With the re module, this can be solved with something like this:

import re

r = re.compile("word1|word2|word3|.......|wordN")
r.findall(some_string)

Unfortunately, with more than about 10,000 words in the regexp,
I get a regular expression runtime error when trying to execute
the findall function (compile works fine, but is slow).

I don't know if the re module is the right solution here; any
suggestions on alternative solutions or data structures that
could be used to solve the problem?

If you can split some_string into individual words, you could look them
up in a set of known words:

known_words = set("word1 word2 word3 ....... wordN".split())
found_words = [ word for word in some_string.split() if word in known_words ]

Kent



That is not exactly what I want. It should discover whether any
of the predefined words appear as substrings, not only as whole
words. For instance, when scanning "word2sgjoisejfisaword1yguyg",
word2 and word1 should be detected.
Jul 18 '05 #5

On Tuesday, March 1, 2005, at 22:04, André Søreng wrote:
That is not exactly what I want. It should discover whether any
of the predefined words appear as substrings, not only as whole
words. For instance, when scanning "word2sgjoisejfisaword1yguyg",
word2 and word1 should be detected.


Hi,

A lexer producing a DFA like the one in pyggy (see
http://www.lava.net/~newsham/pyggy/) might be what you're looking for.

Regards,

Francis Girard

Jul 18 '05 #6

On Tue, 01 Mar 2005 22:04:15 +0100, André Søreng <ws******@tiscali.no> wrote:
Kent Johnson wrote:
André Søreng wrote:

Hi!

Given a string, I want to find all occurrences of certain
predefined words in that string. The problem is, the list of
words to be detected can be on the order of thousands.

With the re module, this can be solved with something like this:

import re

r = re.compile("word1|word2|word3|.......|wordN")
r.findall(some_string)

Unfortunately, with more than about 10,000 words in the regexp,
I get a regular expression runtime error when trying to execute
the findall function (compile works fine, but is slow).

I don't know if the re module is the right solution here; any
suggestions on alternative solutions or data structures that
could be used to solve the problem?

If you can split some_string into individual words, you could look them
up in a set of known words:

known_words = set("word1 word2 word3 ....... wordN".split())
found_words = [ word for word in some_string.split() if word in known_words ]

Kent


That is not exactly what I want. It should discover whether any
of the predefined words appear as substrings, not only as whole
words. For instance, when scanning "word2sgjoisejfisaword1yguyg",
word2 and word1 should be detected.


Show some initiative, man!
>>> known_words = set(["word1", "word2"])
>>> found_words = [word for word in known_words if word in "word2sgjoisejfisaword1yguyg"]
>>> found_words
['word1', 'word2']

Peace
Bill Mill
bill.mill at gmail.com
Jul 18 '05 #7

André Søreng wrote:

Hi!

Given a string, I want to find all occurrences of certain
predefined words in that string. The problem is, the list of
words to be detected can be on the order of thousands.

With the re module, this can be solved with something like this:

import re

r = re.compile("word1|word2|word3|.......|wordN")
r.findall(some_string)

Unfortunately, with more than about 10,000 words in the regexp,
I get a regular expression runtime error when trying to execute
the findall function (compile works fine, but is slow).


What error do you get? What version of Python are you using? re was changed in Python 2.4 to avoid
recursion, so if you are getting a stack overflow in Python 2.3 you should try 2.4.

Kent
Jul 18 '05 #8

André Søreng <ws******@tiscali.no> wrote:
Given a string, I want to find all occurrences of certain
predefined words in that string. The problem is, the list of
words to be detected can be on the order of thousands.

With the re module, this can be solved with something like this:

import re

r = re.compile("word1|word2|word3|.......|wordN")
r.findall(some_string)

Unfortunately, with more than about 10,000 words in the regexp,
I get a regular expression runtime error when trying to execute
the findall function (compile works fine, but is slow).


I wrote a regexp optimiser for exactly this case.

E.g. a regexp for all 5-letter words starting with "re":

$ grep -c '^re' /usr/share/dict/words
2727

$ grep '^re' /usr/share/dict/words | ./words-to-regexp.pl 5

re|re's|reac[ht]|rea(?:d|d[sy]|l|lm|m|ms|p|ps|r|r[ms])|reb(?:el|u[st])|rec(?:ap|ta|ur)|red|red's|red(?:id|o|s)|ree(?:d|ds|dy|f|fs|k|ks|l|ls|ve)|ref|ref's|refe[dr]|ref(?:it|s)|re(?:gal|hab|(?:ig|i)n|ins|lax|lay|lic|ly|mit|nal|nd|nds|new|nt|nts|p)|rep's|rep(?:ay|el|ly|s)|rer(?:an|un)|res(?:et|in|t|ts)|ret(?:ch|ry)|re(?:use|v)|rev's|rev(?:el|s|ue)

As you can see, it's not perfect.

Find it in http://www.craig-wood.com/nick/pub/words-to-regexp.pl

Yes, it's Perl and rather kludgy, but it may give you ideas!
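
A rough Python sketch of the same idea (an illustration only, not Nick's
script): fold the word list into a character trie, then emit an alternation
that shares common prefixes. It assumes plain literal words:

import re

def trie_regex(words):
    # Build a character trie; "" marks end-of-word.
    trie = {}
    for w in words:
        node = trie
        for ch in w:
            node = node.setdefault(ch, {})
        node[""] = {}

    def emit(node):
        end = "" in node
        alts = [re.escape(ch) + emit(child)
                for ch, child in sorted(node.items()) if ch != ""]
        if not alts:
            return ""                      # leaf: the word ends here
        body = "|".join(alts)
        if len(alts) > 1 or end:
            body = "(?:" + body + ")"
        # The greedy "?" makes findall prefer the longer alternative.
        return body + ("?" if end else "")

    return emit(trie)

print(trie_regex(["reach", "react", "read", "reads", "ready"]))
# rea(?:c(?:h|t)|d(?:s|y)?)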

--
Nick Craig-Wood <ni**@craig-wood.com> -- http://www.craig-wood.com/nick
Jul 18 '05 #9

Kent Johnson <ke****@tds.net> wrote:

:> Given a string, I want to find all occurrences of certain
:> predefined words in that string. The problem is, the list of
:> words to be detected can be on the order of thousands.
:>
:> With the re module, this can be solved with something like this:
:>
:> import re
:>
:> r = re.compile("word1|word2|word3|.......|wordN")
:> r.findall(some_string)

The internal data structure that encodes that set of keywords is
probably humongous. An alternative approach to this problem is to
tokenize your string into words, and then check to see if each word is
in a defined list of "keywords". This works if your keywords are
single words:

###
keywords = set(["word1", "word2", ...])
matchingWords = set(re.findall(r'\w+', some_string)).intersection(keywords)
###

Would this approach work for you?

Otherwise, you may want to look at a specialized data structure for
doing multiple keyword matching; I had an older module that wrapped
around a suffix tree:

http://hkn.eecs.berkeley.edu/~dyoo/python/suffix_trees/

It looks like other folks, thankfully, have written other
implementations of suffix trees:

http://cs.haifa.ac.il/~shlomo/suffix_tree/

Another approach is something called the Aho-Corasick algorithm:

http://portal.acm.org/citation.cfm?doid=360825.360855

though I haven't been able to find a nice Python module for this yet.
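
In the meantime, here is a compact pure-Python sketch of Aho-Corasick (my
own illustration, not a published module): build a trie over the keywords,
wire up failure links breadth-first, then scan the text once.

from collections import deque

def build_machine(words):
    # goto[s] maps a character to the next state; out[s] holds the
    # keywords recognized on arriving at state s.
    goto, fail, out = [{}], [0], [set()]
    for w in words:
        s = 0
        for ch in w:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(w)
    # Failure links, computed breadth-first from the root.
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]
    return goto, fail, out

def find_all(text, machine):
    goto, fail, out = machine
    s = 0
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for w in out[s]:
            yield (i - len(w) + 1, w)    # (start offset, keyword)

machine = build_machine(["word1", "word2"])
print(list(find_all("word2sgjoisejfisaword1yguyg", machine)))
# [(0, 'word2'), (17, 'word1')]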
Best of wishes to you!
Jul 18 '05 #10

Bill Mill wrote:
On Tue, 01 Mar 2005 22:04:15 +0100, André Søreng <ws******@tiscali.no> wrote:
Kent Johnson wrote:
André Søreng wrote:
Hi!

Given a string, I want to find all occurrences of certain
predefined words in that string. The problem is, the list of
words to be detected can be on the order of thousands.

With the re module, this can be solved with something like this:

import re

r = re.compile("word1|word2|word3|.......|wordN")
r.findall(some_string)

Unfortunately, with more than about 10,000 words in the regexp,
I get a regular expression runtime error when trying to execute
the findall function (compile works fine, but is slow).

I don't know if the re module is the right solution here; any
suggestions on alternative solutions or data structures that
could be used to solve the problem?
If you can split some_string into individual words, you could look them
up in a set of known words:

known_words = set("word1 word2 word3 ....... wordN".split())
found_words = [ word for word in some_string.split() if word in known_words ]

Kent


That is not exactly what I want. It should discover whether any
of the predefined words appear as substrings, not only as whole
words. For instance, when scanning "word2sgjoisejfisaword1yguyg",
word2 and word1 should be detected.

Show some initiative, man!

>>> known_words = set(["word1", "word2"])
>>> found_words = [word for word in known_words if word in "word2sgjoisejfisaword1yguyg"]
>>> found_words
['word1', 'word2']

Peace
Bill Mill
bill.mill at gmail.com


Yes, but I was looking for a solution that scales. Searching
through the same string 10,000+ times does not seem like a
suitable solution.

André
Jul 18 '05 #11

Daniel Yoo wrote:
Kent Johnson <ke****@tds.net> wrote:

:> Given a string, I want to find all occurrences of certain
:> predefined words in that string. The problem is, the list of
:> words to be detected can be on the order of thousands.
:>
:> With the re module, this can be solved with something like this:
:>
:> import re
:>
:> r = re.compile("word1|word2|word3|.......|wordN")
:> r.findall(some_string)

The internal data structure that encodes that set of keywords is
probably humongous. An alternative approach to this problem is to
tokenize your string into words, and then check to see if each word is
in a defined list of "keywords". This works if your keywords are
single words:

###
keywords = set(["word1", "word2", ...])
matchingWords = set(re.findall(r'\w+', some_string)).intersection(keywords)
###

Would this approach work for you?

Otherwise, you may want to look at a specialized data structure for
doing multiple keyword matching; I had an older module that wrapped
around a suffix tree:

http://hkn.eecs.berkeley.edu/~dyoo/python/suffix_trees/

It looks like other folks, thankfully, have written other
implementations of suffix trees:

http://cs.haifa.ac.il/~shlomo/suffix_tree/

Another approach is something called the Aho-Corasick algorithm:

http://portal.acm.org/citation.cfm?doid=360825.360855

though I haven't been able to find a nice Python module for this yet.
Best of wishes to you!


Thanks, it seems the Aho-Corasick algorithm is along the lines of
what I was looking for, but I have not read the article completely yet.

Also:
http://alexandria.tue.nl/extra1/wskr...tml/200407.pdf

provided several alternative algorithms.

André

Jul 18 '05 #12

André Søreng wrote:


Yes, but I was looking for a solution that scales. Searching
through the same string 10,000+ times does not seem like a
suitable solution.

André


Just out of curiosity, what would a regexp do? Perhaps there's a
clue in how regexps are executed for how you could do this.

ola

--
--------------------------------------
Ola Natvig <ol********@infosense.no>
infoSense AS / development
Jul 18 '05 #13

Ola Natvig wrote:
André Søreng wrote:


Yes, but I was looking for a solution that scales. Searching
through the same string 10,000+ times does not seem like a
suitable solution.

André

Just out of curiosity, what would a regexp do? Perhaps there's a
clue in how regexps are executed for how you could do this.

ola


I think this article provides me with what I was looking for:

http://alexandria.tue.nl/extra1/wskr...tml/200407.pdf

Enough info there to keep me going for a while.
Jul 18 '05 #14

You could divide the regex into several smaller ones, grouped by the
letter the words start with, or you could iterate over the list.

Regards,
Garry

http://garrythegambler.blogspot.com/
On Wed, 02 Mar 2005 12:50:01 +0100, André Søreng <an*****@stud.cs.uit.no> wrote:
Ola Natvig wrote:
André Søreng wrote:


Yes, but I was looking for a solution that scales. Searching
through the same string 10,000+ times does not seem like a
suitable solution.

André

Just out of curiosity, what would a regexp do? Perhaps there's a
clue in how regexps are executed for how you could do this.

ola


I think this article provides me with what I was looking for:

http://alexandria.tue.nl/extra1/wskr...tml/200407.pdf

Enough info there to keep me going for a while.

--
Thanks and Regards,
GSS
Jul 18 '05 #15

On Tue, 1 Mar 2005 15:03:50 -0500, Tim Peters <ti********@gmail.com>
wrote:
[André Søreng]
Given a string, I want to find all occurrences of certain
predefined words in that string. The problem is, the list of
words to be detected can be on the order of thousands.

With the re module, this can be solved with something like this:

import re

r = re.compile("word1|word2|word3|.......|wordN")
r.findall(some_string)

Unfortunately, with more than about 10,000 words in the regexp,
I get a regular expression runtime error when trying to execute
the findall function (compile works fine, but is slow).

I don't know if the re module is the right solution here; any
suggestions on alternative solutions or data structures that
could be used to solve the problem?


Put the words you're looking for into a set (or as the keys of a dict
in older Pythons; the values in the dict are irrelevant).

I don't know what you mean by "word", so write something that breaks
your string into what you mean by words. Then:

for word in something_that_produces_words(the_string):
    if word in set_of_words:
        # found one

I have the same problem.
Unfortunately, the meaning of a "word" depends on the word.
As an example, I would like to count the number of occurrences of
movie titles in some text.
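
Since titles are multi-word phrases, plain word-splitting won't help. As a
naive sketch (with made-up sample data), one scan per title works for modest
lists, which is exactly where an automaton-based approach starts to pay off:

# O(len(titles) * len(text)); acceptable only for small title lists.
titles = ["The Matrix", "Blade Runner"]        # placeholder sample titles
text = "I watched The Matrix, then The Matrix again.".lower()
counts = dict((title, text.count(title.lower())) for title in titles)
print(counts)    # {'The Matrix': 2, 'Blade Runner': 0}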

Maybe lex is more optimized?
Unfortunately, it seems that there are no lex versions that generate
Python (or Pyrex) code.

Thanks and regards Manlio Perillo

Jul 18 '05 #16

Hi.

Python allows subclassing builtin classes, but the Python interpreter
itself uses the builtin types. As an example, keyword arguments are
collected in a dict, but I would like to use a user-defined SortedDict.

Are there plans to add such a feature in a future version?
Thanks and regards Manlio Perillo
Jul 18 '05 #17


: Otherwise, you may want to look at a specialized data structure for
: doing multiple keyword matching; I had an older module that wrapped
: around a suffix tree:

: http://hkn.eecs.berkeley.edu/~dyoo/python/suffix_trees/

: It looks like other folks, thankfully, have written other
: implementations of suffix trees:

: http://cs.haifa.ac.il/~shlomo/suffix_tree/

: Another approach is something called the Aho-Corasick algorithm:

: http://portal.acm.org/citation.cfm?doid=360825.360855

: though I haven't been able to find a nice Python module for this yet.
Followup on this: I haven't been able to find one, so I took someone
else's implementation and adapted it. *grin*

Here you go:

http://hkn.eecs.berkeley.edu/~dyoo/python/ahocorasick/

This provides an 'ahocorasick' Python C extension module for doing
matching on a set of keywords. I'll start writing out the package
announcements tomorrow.
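
Pieced together from the calls shown later in this thread (add/make/search),
usage might look like the sketch below; the constructor name and search's
return shape are assumptions, so check the module's own documentation:

import ahocorasick                    # Daniel's C extension module

tree = ahocorasick.KeywordTree()      # constructor name is an assumption
tree.add("alpha beta")
tree.add("spam")
tree.make()                           # finalize the automaton before searching
# search is assumed to return the (start, end) span of the first hit, or None
print(tree.search("I went to alpha beta the other day to pick up some spam"))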
I hope this helps!
Jul 18 '05 #19


Daniel Yoo wrote:

Here you go:

http://hkn.eecs.berkeley.edu/~dyoo/python/ahocorasick/

This provides an 'ahocorasick' Python C extension module for doing
matching on a set of keywords. I'll start writing out the package
announcements tomorrow.


Looks good.

However:

tree.search("I went to alpha beta the other day to pick up some spam")

could use a startpos (default=0) argument for efficiently restarting
the search after finding the first match

Jul 18 '05 #20

Daniel Yoo <dy**@hkn.eecs.berkeley.edu> wrote:
: John Machin <sj******@lexicon.net> wrote:
: : tree.search("I went to alpha beta the other day to pick up some spam")

: : could use a startpos (default=0) argument for efficiently restarting
: : the search after finding the first match

: Ok, that's easy to fix. I'll do that tonight.

Done. 'startpos' and other bug fixes are in Release 0.7:

http://hkn.eecs.berkeley.edu/~dyoo/p...ick-0.7.tar.gz

But I think I'd better hold off adding the ahocorasick package to PyPI
until it stabilizes for longer than a day... *grin*
Jul 18 '05 #21

Scott David Daniels <Sc***********@acm.org> wrote:

: I have a (very high speed) modified Aho-Corasick machine that I sell.
: The calling model that I found works well is:

: def chases(self, sourcestream, ...):
:     '''A generator taking a generator of source blocks,
:     yielding (matches, position) pairs where position is an
:     offset within the "current" block.
:     '''

: You might consider taking a look at providing that form.
Hi Scott,

No problem, I'll be happy to do this.

I need some clarification on the calling model though. Would this be
an accurate test case?

######
def testChasesInterface(self):
    self.tree.add("python")
    self.tree.add("is")
    self.tree.make()
    sourceBlocks = ("python programming is fun",
                    "how much is that python in the window")
    sourceStream = iter(sourceBlocks)
    self.assertEqual([(sourceBlocks[0], (0, 6)),
                      (sourceBlocks[0], (19, 21)),
                      (sourceBlocks[1], (9, 11)),
                      (sourceBlocks[1], (17, 23))],
                     list(self.tree.chases(sourceStream)))
######

Here, I'm assuming that chases() takes in a 'sourceStream', which is
an iterator of text blocks, and that the return value is itself an
iterator.
Best of wishes!
Jul 18 '05 #22

: Done. 'startpos' and other bug fixes are in Release 0.7:

: http://hkn.eecs.berkeley.edu/~dyoo/p...ick-0.7.tar.gz

Ok, I stopped working on the Aho-Corasick module for a while, so I've
just bumped the version number to 0.8 and posted it up on PyPI.

I did add some preliminary code to use graphviz to emit DOT files, but
it's very untested. I also added an undocumented API for
inspecting the states and their transitions.

I hope that the original poster finds it useful, even though it's
probably a bit late.
Hope this helps!
Jul 18 '05 #23
