
String search vs regexp search

To search a word in a group of words, say a paragraph or a web page,
would a string search or a regexp search be faster?

The string search would of course be,

if str.find(substr) != -1:
    domything()

And the regexp search assuming no case restriction would be,

strre = re.compile(substr, re.IGNORECASE)

m = strre.search(str)
if m:
    domything()

I was about to do a test, but then I thought someone here might have
some data on this already.

Thanks folks!

-Anan
Jul 18 '05 #1
10 Replies


py*******@Hotpop.com (Anand Pillai) wrote in
news:84*************************@posting.google.com:
To search a word in a group of words, say a paragraph or a web page,
would a string search or a regexp search be faster?

The string search would of course be,

if str.find(substr) != -1:
    domything()

And the regexp search assuming no case restriction would be,

strre = re.compile(substr, re.IGNORECASE)

m = strre.search(str)
if m:
    domything()

I was about to do a test, then I thought someone here might have
some data on this already.

Yes. The answer is 'it all depends'.

Things it depends on include:

Your two bits of code do different things: one is case sensitive, the other
ignores case. Which do you need?

How long is the string you are searching? How long is the substring?

Is the substring the same every time, or are you always searching for
different strings? Can the substring contain characters with special
meanings for regular expressions?

The regular expression code has a startup penalty since it has to compile
the regular expression at least once; however, the actual searching may be
faster than the naive str.find. If the time spent doing the search is
sufficiently long compared with the time spent doing the compile, the
regular expression may win out.
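
As a rough illustration of this trade-off (a sketch with assumed placeholder
inputs, not code from the original posts): the compile cost can be paid once
in a timeit setup string and both searches timed over many repetitions.
"test.html" and "Miriam" below are hypothetical stand-ins for the real data
and search term.

import timeit

setup = (
    "import re\n"
    "text = open('test.html').read()\n"   # placeholder haystack
    "word = 'Miriam'\n"                   # placeholder needle
    "pat = re.compile(word)\n"            # compiled once, outside the timing
)

t_str = timeit.Timer("text.find(word)", setup).timeit(number=1000)
t_re = timeit.Timer("pat.search(text)", setup).timeit(number=1000)
print("str.find: %.6fs   compiled regex: %.6fs" % (t_str, t_re))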

Bottom line: write the code so it is as clean and maintainable as possible.
Only worry about optimising this if you have timed it and know that your
searches are a bottleneck.

--
Duncan Booth du****@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?
Jul 18 '05 #2

Sorry for being too brief!

I was talking about a function which 'counts' the number
of occurrences using string & regexp.

I wrote the code for the regexp search as well as the string
search and tested it on a rather large file (800 KB) for
occurrences of a certain word. I find that the string search
is at least 2 times faster than the one with the regexp, excluding
the time for the re.compile() call. This is particularly
noticeable when the file becomes quite large and the word is
spread out.

I had also thought the regexp would beat the string search hands down, and
I am surprised that the result is the other way around.

Here is the code. Note that I am using the 'count' methods that
count the number of occurrences rather than the 'find' methods.

# Test to find out whether a string search in a body of data
# is faster than a regexp search.

# Results: the string search is much faster when there are
# many occurrences of the substring.

import re
import time

def strsearch1(s, substr):
    t1 = time.time()
    print 'Count 1 =>', s.count(substr)
    t2 = time.time()
    print 'Searching using string, Time taken => ', t2 - t1

def strsearch2(s, substr):
    r = re.compile(substr, re.IGNORECASE)
    t1 = time.time()
    print 'Count 2 =>', len(r.findall(s))
    t2 = time.time()
    print 'Searching using regexp, Time taken => ', t2 - t1

data = open("test.html", "r").read()
strsearch1(data, "Miriam")
strsearch2(data, "Miriam")

# Output here...

D:\Programming\python>python strsearch.py
Count 1 => 45
Searching using string, Time taken => 0.0599999427795
Count 2 => 45
Searching using regexp, Time taken => 0.110000014305

The test was done on a Windows 98 machine running Python 2.3, with 248 MB
of RAM and a 1.7 GHz Intel processor.

I was thinking of using regexp searches in my code, but this convinces
me to stick with the good old string search.

Thanks for the replies.

-Anand

Jul 18 '05 #3

Duncan Booth <du****@NOSPAMrcp.co.uk> wrote in message news:<Xn***************************@127.0.0.1>...
The regular expression code has a startup penalty since it has to compile
the regular expression at least once, however the actual searching may be
faster than the naive str.find. If the time spent doing the search is
sufficiently long compared with the time doing the compile, the regular
expression may win out.


Both regular expression searching and string.find will do searching
one character at a time; given that, it seems impossible to me that
the hand-coded-in-C "naive" string.find could be slower than the
machine-translated-coded-in-Python regular expression search.
Compilation time only serves to further increase string.find's
advantage.

Jeremy
Jul 18 '05 #4

> I find that the string search
> is at least 2 times faster than the one with regexp
> r = re.compile(substr, re.IGNORECASE)


Off the top of my head, maybe the regex is taking twice the time because it
is doing twice the work, looking for both the lower-case and the upper-case
character. It does not seem a fair test, because the string find is case
sensitive.
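
One hedged way to level the playing field (a sketch, assuming the search
term is a plain literal; "test.html" and "Miriam" are again placeholder
inputs) is to lowercase both the data and the word for the plain-string
count, and to re.escape() the word for the regex:

import re

data = open("test.html").read()    # placeholder input file
word = "Miriam"                    # placeholder search term

# Case-insensitive plain string count: lowercase both sides first.
count_str = data.lower().count(word.lower())

# Case-insensitive regex count; re.escape() guards against metacharacters.
count_re = len(re.compile(re.escape(word), re.IGNORECASE).findall(data))

print("string count: %d   regex count: %d" % (count_str, count_re))
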
--

Dennis Reinhardt

De*****@dair.com http://www.spamai.com?ng_py
Jul 18 '05 #5

tw*********@hotmail.com (Jeremy Fincher) wrote in
news:69**************************@posting.google.com:
Duncan Booth <du****@NOSPAMrcp.co.uk> wrote in message
news:<Xn***************************@127.0.0.1>...
The regular expression code has a startup penalty since it has to
compile the regular expression at least once, however the actual
searching may be faster than the naive str.find. If the time spent
doing the search is sufficiently long compared with the time doing
the compile, the regular expression may win out.


Both regular expression searching and string.find will do searching
one character at a time; given that, it seems impossible to me that
the hand-coded-in-C "naive" string.find could be slower than the
machine-translated-coded-in-Python regular expression search.
Compilation time only serves to further increase string.find's
advantage.

I may have misremembered, but I thought there was a thread discussing this
a little while back which claimed that the regular expression library
looked for constant strings at the start of the regex and, if it found one,
used Boyer-Moore to do the search. If it does, then regular expressions
searching for a constant string certainly ought to be much faster than a
plain string.find (as the length of the searched string tends towards
infinity).

If it doesn't, then it should.

--
Duncan Booth du****@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?
Jul 18 '05 #6

Duncan Booth <du****@NOSPAMrcp.co.uk> wrote in
news:Xn***************************@127.0.0.1:
Both regular expression searching and string.find will do searching
one character at a time; given that, it seems impossible to me that
the hand-coded-in-C "naive" string.find could be slower than the
machine-translated-coded-in-Python regular expression search.
Compilation time only serves to further increase string.find's
advantage.

I may have misremembered, but I thought there was a thread discussing
this a little while back which claimed that the regular expression
library looked for constant strings at the start of the regex, and if
it found one used Boyer-Moore to do the search. If it does, then
regular expressions searching for a constant string certainly ought to
be much faster than a plain string.find (as the length of the searched
string tends towards infinity).

If it doesn't, then it should.


OK, found the code. Regular expression searches do indeed use a form of
Boyer-Moore, but not if you are ignoring case. So by specifying
re.IGNORECASE the OP got a double hit: not only does the code have to do
case-insensitive comparisons, it also has to crawl along looking at
every character in the searched string instead of skipping most of them.
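
A quick way to see this effect (a sketch with a synthetic haystack, not code
from the original post) is to time the same literal pattern with and without
re.IGNORECASE:

import timeit

setup = (
    "import re\n"
    "text = 'x' * 1000000 + 'Miriam'\n"    # long haystack, match at the end
    "plain = re.compile('Miriam')\n"       # literal prefix, can skip ahead
    "nocase = re.compile('Miriam', re.IGNORECASE)\n"
)

t_plain = timeit.Timer("plain.search(text)", setup).timeit(number=200)
t_nocase = timeit.Timer("nocase.search(text)", setup).timeit(number=200)
print("case-sensitive: %.4fs   IGNORECASE: %.4fs" % (t_plain, t_nocase))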

--
Duncan Booth du****@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?
Jul 18 '05 #7

Duncan Booth <du****@NOSPAMrcp.co.uk> wrote in message news:<Xn***************************@127.0.0.1>...
Ok, found the code. Regular expression searches do indeed use a form of
Boyer-Moore, but not if you are ignoring case. So by specifying
re.IGNORECASE the OP got a double hit, not only does the code have to do
case insensitive comparisons, but it also has to crawl along looking at
every character in the search string instead of skipping most of them.


That's cool! Where'd you find the code?

Jeremy
Jul 18 '05 #8

Jeremy Fincher wrote:
Duncan Booth <du****@NOSPAMrcp.co.uk> wrote in message
news:<Xn***************************@127.0.0.1>...
Ok, found the code. Regular expression searches do indeed use a form of
Boyer-Moore, but not if you are ignoring case. So by specifying
re.IGNORECASE the OP got a double hit, not only does the code have to do
case insensitive comparisons, but it also has to crawl along looking at
every character in the search string instead of skipping most of them.


That's cool! Where'd you find the code?


Hmmm, dist/src/Modules/_sre.c in the Python CVS tree? Or Modules/_sre.c in
a standard source distribution?
Alex

Jul 18 '05 #9

Alex Martelli <al***@aleax.it> wrote in
news:pC**********************@news2.tin.it:
Jeremy Fincher wrote:
Duncan Booth <du****@NOSPAMrcp.co.uk> wrote in message
news:<Xn***************************@127.0.0.1>...
Ok, found the code. Regular expression searches do indeed use a form
of Boyer-Moore, but not if you are ignoring case. So by specifying
re.IGNORECASE the OP got a double hit, not only does the code have
to do case insensitive comparisons, but it also has to crawl along
looking at every character in the search string instead of skipping
most of them.


That's cool! Where'd you find the code?


Hmmm, dist/src/Modules/_sre.c in the Python CVS tree? Or
Modules/_sre.c in a standard source distribution?

Close, but it is actually a little bit more complicated. _sre.c has the code
that does the search, including the skipping forward using an overlap
table, but the bit which checks for a literal prefix and builds the overlap
table is in Lib/sre_compile.py. So it's in the Python code that you have to
look to find that it ignores literal prefixes when ignoring case.

--
Duncan Booth du****@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?
Jul 18 '05 #10

A small point: if you look at the code, the compilation time is not
included in the time measured for the regexp search.

-Anand

Jul 18 '05 #11
