
Pattern match over multiple files is slow - Help Needed!

RV
Hi:

I'm having a huge performance problem in one of my scripts.

I have an array containing some reference keys (about 1000 entries
or so).
I also have a list of files (about 100 or so), and I need to locate every
occurrence of these keys in all of the files and replace it with some
value (let's say the key-value hash is also given).

My code looks something like this:

# Note: %keyval holds the key-value mapping
# @keylist is the array with the 1000 keys (i.e. keys %keyval)
# @files holds the list of files (about 100 or so)

foreach my $f (@files) {
    # open the file (validation elided here)
    open my $fh, '<', $f or die "Cannot open $f: $!";
    while (my $line = <$fh>) {            # each line
        foreach my $k (@keylist) {
            $line =~ s/$k/$keyval{$k}/ig; # replace key with value
        } # key loop
    }
    close $fh;
} # foreach

This code works - but it's too slow! Obviously I run the inner loop
1000 times for each line in each file.
A constraint is that multiple keys may occur on the same line (and
even the same key may occur multiple times on the same line).

I tried slurping the file into a scalar (unsetting $/) - no big
difference in timing.
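
(For reference, the slurp attempt looked something like this, with $f
and the rest as in the code above:)

open my $fh, '<', $f or die "Cannot open $f: $!";
# unsetting $/ locally makes the next read return the whole file
my $text = do { local $/; <$fh> };
close $fh;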

Can someone help me here? If you can give some ideas that I can
look into, I'll greatly appreciate it.
Pseudocode is fine as well.

If you can include a courtesy CC:, that would be great!

Thanks - I hope I've conveyed my problem accurately (this is among my
first posts - I'm a frequent "reader" though!).

-RV.
Jul 19 '05 #1


RV wrote:
[original question quoted in full; snipped]

You could read each file in turn into one string, then apply your
thousand replacements to the whole file (not line by line).
If the files are too big, you could apply your replacements to, say,
one hundred thousand lines at a time.
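
A minimal sketch of that whole-file approach, reusing %keyval, @keylist,
and @files from the original post (writing the result back out is left
elided, as in the original):

foreach my $f (@files) {
    open my $fh, '<', $f or die "Cannot open $f: $!";
    my $text = do { local $/; <$fh> };   # slurp the whole file at once
    close $fh;
    foreach my $k (@keylist) {
        # one substitution pass per key over the entire file,
        # instead of one pass per key per line
        $text =~ s/$k/$keyval{$k}/ig;
    }
    # ... write $text back to $f or to a new file ...
}

Each key then costs one pass over the file instead of one per line.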

Cheers, Pad.

Jul 19 '05 #2

This discussion thread is closed

Replies have been disabled for this discussion.
