Scanning a file

I want to scan a file byte for byte for occurrences of the four-byte
pattern 0x00000100. I've tried this:

# start
import sys

numChars = 0
startCode = 0
count = 0

inputFile = sys.stdin

while True:
    ch = inputFile.read(1)
    numChars += 1

    if len(ch) < 1: break

    startCode = ((startCode << 8) & 0xffffffff) | (ord(ch))
    if numChars < 4: continue

    if startCode == 0x00000100:
        count = count + 1

print count
# end

But it is very slow. What is the fastest way to do this? Using some
native call? Using a buffer? Using whatever?

/David

Oct 28 '05 #1
pi************@gmail.com wrote:
I want to scan a file byte for byte [...]
while True:
ch = inputFile.read(1)
[...] But it is very slow. What is the fastest way to do this? Using some
native call? Using a buffer? Using whatever?


Read in blocks, not byte for byte. I had good experiences with block
sizes like 4096 or 8192.

-- Gerhard

Oct 28 '05 #2
Okay, how do I do this?

Also, if you look at the code, I build a 32-bit unsigned integer from
the bytes I read. And the 32-bit pattern I am looking for can start on
_any_ byte boundary in the file. It would be nice if I could somehow
just scan for that pattern explicitly, without having to build a 32-bit
integer first -- if I could just tell Python "scan this file for the bytes 0,
0, 1, 0 in succession; how many occurrences of 0, 0, 1, 0 did you find?"

/David

Oct 28 '05 #3
pi************@gmail.com writes:
I want to scan a file byte for byte for occurrences of the four-byte
pattern 0x00000100. I've tried this:


use re.search or string.find. The simplest way is just to read the whole
file into memory first. If the file is too big, you have to read it in
chunks and include some hair to notice if the four byte pattern straddles
two adjoining chunks.
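
For the read-it-all-at-once case, a sketch of the regex route (untested;
"filename" is just a placeholder) -- the zero-width lookahead makes
overlapping occurrences count as well:

import re

data = open("filename", "rb").read()
# a zero-width lookahead matches at every position where the pattern starts,
# so overlapping occurrences are all counted
print len(re.findall("(?=\x00\x00\x01\x00)", data))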
Oct 28 '05 #4
I'm now down to:

f = open("filename", "rb")
s = f.read()
sub = "\x00\x00\x01\x00"
count = s.count(sub)
print count

Which is quite fast. The only problem is that the file might be huge.
I really have no need for reading the entire file into a string as I am
doing here. All I want is to count occurrences of this substring. Can I
somehow count occurrences in a file without reading it into a string
first?

/David

Oct 28 '05 #5
"pi************@gmail.com" <pi************@gmail.com> writes:
f = open("filename", "rb")
s = f.read()
sub = "\x00\x00\x01\x00"
count = s.count(sub)
print count


That's a lot of lines. This is a bit off topic, but I just can't stand
unnecessary local variables.

print file("filename", "rb").read().count("\x00\x00\x01\x00")

--
Björn Lindström <bk**@stp.lingfil.uu.se>
Student of computational linguistics, Uppsala University, Sweden
Oct 28 '05 #6
"pi************@gmail.com" <pi************@gmail.com> writes:
Which is quite fast. The only problem is that the file might be huge.
I really have no need for reading the entire file into a string as I am
doing here. All I want is to count occurrences of this substring. Can I
somehow count occurrences in a file without reading it into a string
first?


How about iterating through the file? You can read it line by line, two lines
at a time. Pseudocode follows:

line1 = read_line
while line2 = read_line:
    line_to_check = ''.join([line1, line2])
    check_for_desired_string
    line1 = line2

With that you always have two lines in the buffer and you can check all of
them for your desired string, no matter what the size of the file is.
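
A direct, runnable translation of that pseudocode might look like this
(untested sketch; it only detects the pattern, since counting over
overlapping two-line windows would see some matches twice, and the following
replies point out further problems for binary data):

f = open("filename", "rb")
found = False
line1 = f.readline()
line2 = f.readline()
while line2:
    line_to_check = ''.join([line1, line2])
    if "\x00\x00\x01\x00" in line_to_check:
        found = True
        break
    line1 = line2
    line2 = f.readline()
f.close()
print found
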
Be seeing you,
--
Jorge Godoy <go***@ieee.org>
Oct 28 '05 #7
Jorge Godoy <go***@ieee.org> writes:
How about iterating through the file? You can read it line by line, two lines
at a time. Pseudocode follows:

line1 = read_line
while line2 = read_line:
    line_to_check = ''.join([line1, line2])
    check_for_desired_string
    line1 = line2

With that you always have two lines in the buffer and you can check all of
them for your desired string, no matter what the size of the file is.


This will fail if the string to search for is e.g. "\n\n\n\n" and it
actually occurs in the file.

Bernhard

--
Intevation GmbH http://intevation.de/
Skencil http://skencil.org/
Thuban http://thuban.intevation.org/
Oct 28 '05 #8
First of all, this isn't a text file, it is a binary file. Secondly,
substrings can overlap. In the sequence 0010010 the substring 0010
occurs twice.

/David

Oct 28 '05 #9
On 2005-10-28, pi************@gmail.com <pi************@gmail.com> wrote:
I'm now down to:

f = open("filename", "rb")
s = f.read()
sub = "\x00\x00\x01\x00"
count = s.count(sub)
print count

Which is quite fast. The only problem is that the file might be huge.
I really have no need for reading the entire file into a string as I am
doing here. All I want is to count occurrences of this substring. Can I
somehow count occurrences in a file without reading it into a string
first?


Yes - use memory mapping (the mmap module). An mmap object is like a
cross between a file and a string, but the data is only read into RAM
when, and for as long as, necessary. An mmap object doesn't have a
count() method, but you can just use find() in a while loop instead.
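
A minimal sketch of that (untested; "filename" and the pattern are just
placeholders):

import mmap, os

f = open("filename", "rb")
m = mmap.mmap(f.fileno(), os.path.getsize("filename"), access=mmap.ACCESS_READ)
count = 0
pos = m.find("\x00\x00\x01\x00")
while pos != -1:
    count += 1
    # restart the search one byte further on, so overlapping hits count too
    pos = m.find("\x00\x00\x01\x00", pos + 1)
m.close()
f.close()
print count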

Andrew
Oct 28 '05 #10
Gerhard Häring wrote:
pi************@gmail.com wrote:
I want to scan a file byte for byte [...]
while True:
ch = inputFile.read(1)
[...] But it is very slow. What is the fastest way to do this? Using some
native call? Using a buffer? Using whatever?


Read in blocks, not byte for byte. I had good experiences with block
sizes like 4096 or 8192.


It's difficult to handle overlaps. The four byte sequence may occur at the
end of one block and beginning of the next. You'd need to check for these
special cases.

Jeremy

--
Jeremy Sanders
http://www.jeremysanders.net/
Oct 28 '05 #11
<pi************@gmail.com> wrote in message
news:11**********************@g44g2000cwa.googlegr oups.com...
I want to scan a file byte for byte for occurrences of the four-byte
pattern 0x00000100. I've tried this:

# start
import sys

numChars = 0
startCode = 0
count = 0

inputFile = sys.stdin

while True:
    ch = inputFile.read(1)
    numChars += 1

    if len(ch) < 1: break

    startCode = ((startCode << 8) & 0xffffffff) | (ord(ch))
    if numChars < 4: continue

    if startCode == 0x00000100:
        count = count + 1

print count
# end

But it is very slow. What is the fastest way to do this? Using some
native call? Using a buffer? Using whatever?

/David


How about something like:

#!/usr/bin/env python

import sys

fn = 't.dat'
ss = '\x00\x00\x01\x00'

be = len(ss) - 1 # length of overlap to check
blocksize = 4 * 1024 # need to ensure that blocksize > overlap

fp = open(fn, 'rb')
b = fp.read(blocksize)
found = 0
while len(b) > be:
    if b.find(ss) != -1:
        found = 1
        break
    b = b[-be:] + fp.read(blocksize)
fp.close()
sys.exit(found)
Oct 28 '05 #12
On Friday 28 October 2005 06:29, Björn Lindström wrote:
"pi************@gmail.com" <pi************@gmail.com> writes:
f = open("filename", "rb")
s = f.read()
sub = "\x00\x00\x01\x00"
count = s.count(sub)
print count


That's a lot of lines. This is a bit off topic, but I just can't stand
unnecessary local variables.

print file("filename", "rb").read().count("\x00\x00\x01\x00")

The "f" is not terribly unnecessary, because the part of the code you didn't
see was

f.close()

Which would be considered good practice.

James

--
James Stroud
UCLA-DOE Institute for Genomics and Proteomics
Box 951570
Los Angeles, CA 90095

http://www.jamesstroud.com/
Oct 28 '05 #13
Andrew McCarthy <a_********@hotmail.com> writes:
On 2005-10-28, pi************@gmail.com <pi************@gmail.com> wrote:
I'm now down to:

f = open("filename", "rb")
s = f.read()
sub = "\x00\x00\x01\x00"
count = s.count(sub)
print count

Which is quite fast. The only problem is that the file might be huge.
I really have no need for reading the entire file into a string as I am
doing here. All I want is to count occurrences of this substring. Can I
somehow count occurrences in a file without reading it into a string
first?


Yes - use memory mapping (the mmap module). An mmap object is like a
cross between a file and a string, but the data is only read into RAM
when, and for as long as, necessary. An mmap object doesn't have a
count() method, but you can just use find() in a while loop instead.


Except if you can't read the file into memory because it's too large,
there's a pretty good chance you won't be able to mmap it either. To
deal with huge files, the only option is to read the file in
chunks, count the occurrences in each chunk, and then do some fiddling
to deal with the pattern landing on a boundary.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Oct 28 '05 #14
pi************@gmail.com wrote:
I want to scan a file byte for byte for occurrences of the four-byte
pattern 0x00000100.


data = sys.stdin.read()
print data.count('\x00\x00\x01\x00')

Kent
Oct 28 '05 #15
<pi************@gmail.com> wrote in message
news:11**********************@g44g2000cwa.googlegr oups.com...
I want to scan a file byte for byte for occurrences of the four-byte
pattern 0x00000100. I've tried this:

# start
import sys

numChars = 0
startCode = 0
count = 0

inputFile = sys.stdin

while True:
    ch = inputFile.read(1)
    numChars += 1

    if len(ch) < 1: break

    startCode = ((startCode << 8) & 0xffffffff) | (ord(ch))
    if numChars < 4: continue

    if startCode == 0x00000100:
        count = count + 1

print count
# end

But it is very slow. What is the fastest way to do this? Using some
native call? Using a buffer? Using whatever?

/David


Here is an attempt at counting and using the mmap facility. There appear to
be some serious backward compatibility issues. I tried Python 2.1 on
Windows and AIX and had some odd results. If you are on 2.4.1 or higher, that
should not be a problem.

#!/usr/bin/env python
import sys
import os
import mmap

fn = 't.dat'
ss = '\x00\x00\x01\x00'

fp = open(fn, 'rb')
b = mmap.mmap(fp.fileno(), os.stat(fp.name).st_size,
              access=mmap.ACCESS_READ)

count = 0
foundpoint = b.find(ss, 0)
while foundpoint != -1 and (foundpoint + 1) < b.size():
    count = count + 1
    foundpoint = b.find(ss, foundpoint + 1)
b.close()

print count

fp.close()
sys.exit(0)
Oct 28 '05 #16
On Fri, 28 Oct 2005 06:22:11 -0700, pi************@gmail.com wrote:
Which is quite fast. The only problem is that the file might be huge.
What *you* call huge and what *Python* calls huge may be very different
indeed. What are you calling huge?
I really have no need for reading the entire file into a string as I am
doing here. All I want is to count occurrences of this substring. Can I
somehow count occurrences in a file without reading it into a string
first?


Magic?

You have to read the file into memory at some stage, otherwise how can you
see what value the bytes are? The only question is, can you read it all
into one big string (in which case, your solution is unlikely to be
beaten), or do you have to read the file in chunks and deal with the
boundary cases (which is harder)?

Here is another thought. What are you going to do with the count when you
are done? That sounds to me like a pretty pointless result: "Hi user, the
file XYZ has 27 occurrences of bitpattern \x00\x00\x01\x00. Would you like
to do another file?"

If you are planning to use this count to do something, perhaps there is a
more efficient way to combine the two steps into one -- especially
valuable if your files really are huge.
--
Steven.

Oct 28 '05 #17
On Fri, 28 Oct 2005 15:29:46 +0200, Björn Lindström wrote:
"pi************@gmail.com" <pi************@gmail.com> writes:
f = open("filename", "rb")
s = f.read()
sub = "\x00\x00\x01\x00"
count = s.count(sub)
print count


That's a lot of lines. This is a bit off topic, but I just can't stand
unnecessary local variables.

print file("filename", "rb").read().count("\x00\x00\x01\x00")


Funny you should say that, because I can't stand unnecessary one-liners.

In any case, you are assuming that Python will automagically close the
file when you are done. That's good enough for a script, but not best
practice.

f = open("filename", "rb")
print f.read().count("\x00\x00\x01\x00")
f.close()

is safer, has no unnecessary local variables, and is not unnecessarily
terse. Unfortunately, it doesn't solve the original poster's problem,
because his file is too big to read into memory all at once -- or so he
tells us.
--
Steven.

Oct 28 '05 #18
Mike Meyer <mw*@mired.org> wrote:
...
Except if you can't read the file into memory because it's too large,
there's a pretty good chance you won't be able to mmap it either. To
deal with huge files, the only option is to read the file in
chunks, count the occurrences in each chunk, and then do some fiddling
to deal with the pattern landing on a boundary.


That's the kind of things generators are for...:

def byblocks(f, blocksize, overlap):
    block = f.read(blocksize)
    yield block
    while block:
        block = block[-overlap:] + f.read(blocksize-overlap)
        if block: yield block

Now, to look for a substring of length N in an open binary file f:

f = open(whatever, 'b')
count = 0
for block in byblocks(f, 1024*1024, len(subst)-1):
    count += block.count(subst)
f.close()

not much "fiddling" needed, as you can see, and what little "fiddling"
is needed is entirely encompassed by the generator...
Alex
Oct 29 '05 #19
"Paul Watson" <pw*****@redlinepy.com> wrote in message
news:3s************@individual.net...
<pi************@gmail.com> wrote in message
news:11**********************@g44g2000cwa.googlegr oups.com...
I want to scan a file byte for byte for occurrences of the four-byte
pattern 0x00000100. I've tried this:

# start
import sys

numChars = 0
startCode = 0
count = 0

inputFile = sys.stdin

while True:
    ch = inputFile.read(1)
    numChars += 1

    if len(ch) < 1: break

    startCode = ((startCode << 8) & 0xffffffff) | (ord(ch))
    if numChars < 4: continue

    if startCode == 0x00000100:
        count = count + 1

print count
# end

But it is very slow. What is the fastest way to do this? Using some
native call? Using a buffer? Using whatever?

/David


Here is a better one that counts, and not just detects, the substring. This
is -much- faster than using mmap; especially for a large file that may cause
paging to start. Using mmap can be -very- slow.

#!/usr/bin/env python
import sys

fn = 't2.dat'
ss = '\x00\x00\x01\x00'

be = len(ss) - 1 # length of overlap to check
blocksize = 64 * 1024 # need to ensure that blocksize > overlap

fp = open(fn, 'rb')
b = fp.read(blocksize)
count = 0
while len(b) > be:
    count += b.count(ss)
    b = b[-be:] + fp.read(blocksize)
fp.close()

print count
sys.exit(0)
Oct 29 '05 #20
"Paul Watson" <pw*****@redlinepy.com> writes:
Here is a better one that counts, and not just detects, the substring. This
is -much- faster than using mmap; especially for a large file that may cause
paging to start. Using mmap can be -very- slow.

#!/usr/bin/env python
import sys

fn = 't2.dat'
ss = '\x00\x00\x01\x00'

be = len(ss) - 1 # length of overlap to check
blocksize = 64 * 1024 # need to ensure that blocksize > overlap

fp = open(fn, 'rb')
b = fp.read(blocksize)
count = 0
while len(b) > be:
    count += b.count(ss)
    b = b[-be:] + fp.read(blocksize)
fp.close()

print count
sys.exit(0)


Did you do timings on it vs. mmap? Having to copy the data multiple
times to deal with the overlap - thanks to strings being immutable -
would seem to be a lose, and makes me wonder how it could be faster
than mmap in general.

Thanks,
<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Oct 29 '05 #21
Mike Meyer wrote:
Did you do timings on it vs. mmap? Having to copy the data multiple
times to deal with the overlap - thanks to strings being immutable -
would seem to be a lose, and makes me wonder how it could be faster
than mmap in general.


if you use "mmap" to read large files sequentially, without calling "madvise",
the system may try to keep old pages around just in case, and won't do as
much read-ahead as it can do. if you're low on memory, that means that
the system may waste some time swapping out data for other applications,
rather than throwing away data that you know that you will never look at
again.

if you have reasonably large files and you're not running on an overcrowded
machine, this is usually not a problem.

(does Python's mmap module support madvise, btw? doesn't look like it does...)

</F>

Oct 29 '05 #22
On 28 Oct 2005 06:51:36 -0700, "pi************@gmail.com" <pi************@gmail.com> wrote:
First of all, this isn't a text file, it is a binary file. Secondly,
substrings can overlap. In the sequence 0010010 the substring 0010
occurs twice.

ISTM you better let others know exactly what you mean by this, before
you use the various solutions suggested or your own ;-)

a) Are you talking about bit strings or byte strings?
b) Do you want to _count_ overlapping substrings??!!
Note result of s.count on your example:
>>> s = '0010010'
>>> s.count('0010')
1

vs. brute force counting overlapped substrings (not tested beyond what you see ;-)

>>> def ovcount(s, sub):
...     start = count = 0
...     while True:
...         start = s.find(sub, start) + 1
...         if start==0: break
...         count += 1
...     return count
...
>>> ovcount(s, '0010')
2

Regards,
Bengt Richter
Oct 29 '05 #23
On Fri, 28 Oct 2005 20:03:17 -0700, al*****@yahoo.com (Alex Martelli) wrote:
Mike Meyer <mw*@mired.org> wrote:
...
Except if you can't read the file into memory because it's too large,
there's a pretty good chance you won't be able to mmap it either. To
deal with huge files, the only option is to read the file in
chunks, count the occurrences in each chunk, and then do some fiddling
to deal with the pattern landing on a boundary.


That's the kind of things generators are for...:

def byblocks(f, blocksize, overlap):
    block = f.read(blocksize)
    yield block
    while block:
        block = block[-overlap:] + f.read(blocksize-overlap)
        if block: yield block

Now, to look for a substring of length N in an open binary file f:

f = open(whatever, 'b')
count = 0
for block in byblocks(f, 1024*1024, len(subst)-1):
    count += block.count(subst)
f.close()

not much "fiddling" needed, as you can see, and what little "fiddling"
is needed is entirely encompassed by the generator...

Do I get a job at google if I find something wrong with the above? ;-)

Regards,
Bengt Richter
Oct 29 '05 #24
Bengt Richter wrote:
On Fri, 28 Oct 2005 20:03:17 -0700, al*****@yahoo.com (Alex Martelli)
wrote:
Mike Meyer <mw*@mired.org> wrote:
...
Except if you can't read the file into memory because it's too large,
there's a pretty good chance you won't be able to mmap it either. To
deal with huge files, the only option is to read the file in
chunks, count the occurrences in each chunk, and then do some fiddling
to deal with the pattern landing on a boundary.


That's the kind of things generators are for...:

def byblocks(f, blocksize, overlap):
    block = f.read(blocksize)
    yield block
    while block:
        block = block[-overlap:] + f.read(blocksize-overlap)
        if block: yield block

Now, to look for a substring of length N in an open binary file f:

f = open(whatever, 'b')
count = 0
for block in byblocks(f, 1024*1024, len(subst)-1):
    count += block.count(subst)
f.close()

not much "fiddling" needed, as you can see, and what little "fiddling"
is needed is entirely encompassed by the generator...

Do I get a job at google if I find something wrong with the above? ;-)


Try it with a subst of length 1. Seems like you missed an opportunity :-)

Peter

Oct 29 '05 #25
On Sat, 29 Oct 2005 10:34:24 +0200, Peter Otten <__*******@web.de> wrote:
Bengt Richter wrote:
On Fri, 28 Oct 2005 20:03:17 -0700, al*****@yahoo.com (Alex Martelli)
wrote:
Mike Meyer <mw*@mired.org> wrote:
...
Except if you can't read the file into memory because it's too large,
there's a pretty good chance you won't be able to mmap it either. To
deal with huge files, the only option is to read the file in
chunks, count the occurrences in each chunk, and then do some fiddling
to deal with the pattern landing on a boundary.

That's the kind of things generators are for...:

def byblocks(f, blocksize, overlap):
    block = f.read(blocksize)
    yield block
    while block:
        block = block[-overlap:] + f.read(blocksize-overlap)
        if block: yield block

Now, to look for a substring of length N in an open binary file f:

f = open(whatever, 'b')
count = 0
for block in byblocks(f, 1024*1024, len(subst)-1):
    count += block.count(subst)
f.close()

not much "fiddling" needed, as you can see, and what little "fiddling"
is needed is entirely encompassed by the generator...

Do I get a job at google if I find something wrong with the above? ;-)


Try it with a subst of length 1. Seems like you missed an opportunity :-)

I was thinking this was an example a la Alex's previous discussion
of interviewee code challenges ;-)

What struck me was
>>> gen = byblocks(StringIO.StringIO('no'),1024,len('end?')-1)
>>> [gen.next() for i in xrange(10)]
['no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no']

Regards,
Bengt Richter
Oct 29 '05 #26
Bengt Richter <bo**@oz.net> wrote:
...
while block:
    block = block[-overlap:] + f.read(blocksize-overlap)
    if block: yield block
...
I was thinking this was an example a la Alex's previous discussion
of interviewee code challenges ;-)

What struck me was
>>> gen = byblocks(StringIO.StringIO('no'),1024,len('end?')-1)
>>> [gen.next() for i in xrange(10)]

['no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no']


Heh, OK, I should get back into the habit of adding a "warning: untested
code" when I post code (particularly when it's late and I'm
jetlagged;-). The code I posted will never exit, since block always
keeps the last overlap bytes; it needs to be changed into something like
(warning -- untested code!-)

if overlap>0:
    while True:
        next = f.read(blocksize-overlap)
        if not next: break
        block = block[-overlap:] + next
        yield block
else:
    while True:
        next = f.read(blocksize)
        if not next: break
        yield next

(the if/else is needed to handle requests for overlaps <= 0, if desired;
I think it's clearer to split the cases rather than to test inside the
loop's body).
Alex

Oct 29 '05 #27
Bengt Richter wrote:
What struck me was
>>> gen = byblocks(StringIO.StringIO('no'),1024,len('end?')-1)
>>> [gen.next() for i in xrange(10)]
['no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no']


Ouch. Seems like I spotted the subtle cornercase error and missed the big
one.

Peter

Oct 29 '05 #28
"Mike Meyer" <mw*@mired.org> wrote in message
news:86************@bhuda.mired.org...
"Paul Watson" <pw*****@redlinepy.com> writes: .... Did you do timings on it vs. mmap? Having to copy the data multiple
times to deal with the overlap - thanks to strings being immutable -
would seem to be a lose, and makes me wonder how it could be faster
than mmap in general.


The only thing copied is a string one byte less than the search string for
each block.

I did not do due diligence with respect to timings. Here is a small
dataset read sequentially and using mmap.

$ ls -lgG t.dat
-rw-r--r-- 1 16777216 Oct 28 16:32 t.dat
$ time ./scanfile.py
1048576
0.80s real 0.64s user 0.15s system
$ time ./scanfilemmap.py
1048576
20.33s real 6.09s user 14.24s system

With a larger file, the system time skyrockets. I assume that to be the
paging mechanism in the OS. This is Cygwin on Windows XP.

$ ls -lgG t2.dat
-rw-r--r-- 1 268435456 Oct 28 16:33 t2.dat
$ time ./scanfile.py
16777216
28.85s real 16.37s user 0.93s system
$ time ./scanfilemmap.py
16777216
323.45s real 94.45s user 227.74s system
Oct 29 '05 #29
I think implementing a finite state automaton would be a good (best?)
solution. I have drawn a FSM for you (try viewing the following in
fixed width font). Just increment the count when you reach state 5.

<---------------|
| |
0 0 | 1 0 |0
-->[1]--->[2]--->[3]--->[4]--->[5]-|
^ | | ^ | | |
1| |<---| | | |1 |1
|_| 1 |_| | |
^ 0 | |
|---------------------|<-----|

If you don't understand FSM's, try getting a book on computational
theory (the book by Hopcroft & Ullman is great.)

Here you don't have special cases whether reading in blocks or reading
whole at once (as you only need one byte at a time).
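
For illustration, a byte-at-a-time matcher along these lines might look like
the following (untested sketch; it builds a small KMP-style failure table so
it works for any pattern and counts overlapping matches, and it still reads
the input in blocks purely to keep the I/O cheap):

def count_fsm(f, pattern, blocksize=64*1024):
    # failure table: which state to fall back to when the next byte mismatches
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    state = count = 0
    while True:
        block = f.read(blocksize)
        if not block:
            break
        for ch in block:
            while state and ch != pattern[state]:
                state = fail[state - 1]
            if ch == pattern[state]:
                state += 1
            if state == len(pattern):
                count += 1                 # reached the accepting state
                state = fail[state - 1]    # fall back so overlaps are counted
    return count

f = open("filename", "rb")
print count_fsm(f, "\x00\x00\x01\x00")
f.close()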

Vaibhav

Oct 29 '05 #30
Peter Otten wrote:
Bengt Richter wrote:

What struck me was

> gen = byblocks(StringIO.StringIO('no'),1024,len('end?')-1)
> [gen.next() for i in xrange(10)]


['no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no']

Ouch. Seems like I spotted the subtle cornercase error and missed the big
one.


No, you just realised subconsciously that we'd all spot the obvious one
and decided to point out the bug that would remain after the obvious one
had been fixed.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Oct 29 '05 #31
Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:

On Fri, 28 Oct 2005 15:29:46 +0200, Björn Lindström wrote:
"pi************@gmail.com" <pi************@gmail.com> writes:
f = open("filename", "rb")
s = f.read()
sub = "\x00\x00\x01\x00"
count = s.count(sub)
print count


That's a lot of lines. This is a bit off topic, but I just can't stand
unnecessary local variables.

print file("filename", "rb").read().count("\x00\x00\x01\x00")


Funny you should say that, because I can't stand unnecessary one-liners.

In any case, you are assuming that Python will automagically close the
file when you are done.


Nonsense. This behavior is deterministic. At the end of that line, the
anonymous file object goes out of scope, the object is deleted, and the file
is closed.
--
- Tim Roberts, ti**@probo.com
Providenza & Boekelheide, Inc.
Oct 29 '05 #32
Paul Watson wrote:
Here is a better one that counts, and not just detects, the substring. This
is -much- faster than using mmap; especially for a large file that may cause
paging to start. Using mmap can be -very- slow.

<ss = pattern, be = len(ss) - 1>
...
b = fp.read(blocksize)
count = 0
while len(b) > be:
    count += b.count(ss)
    b = b[-be:] + fp.read(blocksize)
...

In cases where that one wins and blocksize is big,
this should do even better:
...
block = fp.read(blocksize)
count = 0
while len(block) > be:
    count += block.count(ss)
    lead = block[-be :]
    block = fp.read(blocksize)
    count += (lead + block[: be]).count(ss)
...
--
-Scott David Daniels
sc***********@acm.org
Oct 29 '05 #33
Tim Roberts <ti**@probo.com> wrote:
...
print file("filename", "rb").read().count("\x00\x00\x01\x00")


Funny you should say that, because I can't stand unnecessary one-liners.

In any case, you are assuming that Python will automagically close the
file when you are done.


Nonsense. This behavior is deterministic. At the end of that line, the
anonymous file object goes out of scope, the object is deleted, and the file is
closed.


In today's implementations of Classic Python, yes. In other equally
valid implementations of the language, such as Jython, IronPython, or,
for all we know, some future implementation of Classic, that may well
not be the case. Many, quite reasonably, dislike relying on a specific
implementation's peculiarities, and prefer to write code that relies
only on what the _language_ specs guarantee.
Alex
Oct 29 '05 #34
ne********@gmail.com wrote:
I think implementing a finite state automaton would be a good (best?)
solution. I have drawn a FSM for you (try viewing the following in
fixed width font). Just increment the count when you reach state 5.

<---------------|
| |
0 0 | 1 0 |0
-->[1]--->[2]--->[3]--->[4]--->[5]-|
^ | | ^ | | |
1| |<---| | | |1 |1
|_| 1 |_| | |
^ 0 | |
|---------------------|<-----|

If you don't understand FSM's, try getting a book on computational
theory (the book by Hopcroft & Ullman is great.)

Here you don't have special cases whether reading in blocks or reading
whole at once (as you only need one byte at a time).

Indeed, but reading one byte at a time is about the slowest way to
process a file, in Python or any other language, because it fails to
amortize the overhead cost of function calls over many characters.

Buffering wasn't invented because early programmers had nothing better
to occupy their minds, remember :-)

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Oct 29 '05 #35
"Alex Martelli" <al*****@yahoo.com> wrote in message
news:1h5760l.1e2eatkurdeo7N%al*****@yahoo.com...
In today's implementations of Classic Python, yes. In other equally
valid implementations of the language, such as Jython, IronPython, or,
for all we know, some future implementation of Classic, that may well
not be the case. Many, quite reasonably, dislike relying on a specific
implementation's peculiarities, and prefer to write code that relies
only on what the _language_ specs guarantee.


How could I identify when Python code does not close files and depends on
the runtime to take care of this? I want to know that the code will work
well under other Python implementations and future implementations which may
not have this provided.
Oct 29 '05 #36
"Paul Watson" <pw*****@redlinepy.com> writes:
How could I identify when Python code does not close files and depends on
the runtime to take care of this? I want to know that the code will work
well under other Python implementations and future implementations which may
not have this provided.


There is nothing in the Python language reference that guarantees the
files will be closed when the references go out of scope. That
CPython does it is simply an implementation artifact. If you want to
make sure they get closed, you have to close them explicitly. There
are some Python language extensions in the works to make this more
convenient (PEP 343) but for now you have to do it by hand.
Oct 29 '05 #37
ne********@gmail.com wrote:
I think implementing a finite state automaton would be a good (best?)
solution. I have drawn a FSM for you (try viewing the following in
fixed width font). Just increment the count when you reach state 5.

<---------------|
| |
0 0 | 1 0 |0
-->[1]--->[2]--->[3]--->[4]--->[5]-|
^ | | ^ | | |
1| |<---| | | |1 |1
|_| 1 |_| | |
^ 0 | |
|---------------------|<-----|

If you don't understand FSM's, try getting a book on computational
theory (the book by Hopcroft & Ullman is great.)


I already have that book. The above solution is very slow in practice. None
of the solutions presented in this thread is nearly as fast as the

print file("filename", "rb").read().count("\x00\x00\x01\x00")

/David
Oct 29 '05 #38
On Sat, 29 Oct 2005 21:08:09 +0000, Tim Roberts wrote:
In any case, you are assuming that Python will automagically close the
file when you are done.


Nonsense. This behavior is deterministic. At the end of that line, the
anonymous file object goes out of scope, the object is deleted, and the file is
closed.


That is an implementation detail. CPython may do that, but JPython does
not -- or at least it did not last time I looked. JPython doesn't
guarantee that the file will be closed at any particular time, just that
it will be closed eventually.

If all goes well. What if you have a circular dependence and the file
reference never gets garbage-collected? What happens if the JPython
process gets killed before the file is closed? You might not care about
one file not being released, but what if it is hundreds of files?

In general, it is best practice to release external resources as soon as
you're done with them, and not rely on a garbage collector which may or
may not release them in a timely manner.

There are circumstances where things do not go well and the file never
gets closed cleanly -- for example, when your disk is full, and the
buffer is only written to disk when you close the file. Would you
prefer that error to raise an exception, or to pass silently? If you want
close exceptions to pass silently, then by all means rely on the garbage
collector to close the file.

You might not care about these details in a short script -- when I'm
writing a use-once-and-throw-away script, that's what I do. But it isn't
best practice: explicit is better than implicit.

I should also point out that for really serious work, the idiom:

f = file("parrot")
handle(f)
f.close()

is insufficiently robust for production level code. That was a detail I
didn't think I needed to drop on the original newbie poster, but depending
on how paranoid you are, or how many exceptions you want to insulate the
user from, something like this might be needed:

try:
    f = file("parrot")
    try:
        handle(f)
    finally:
        try:
            f.close()
        except:
            print "The file could not be closed; see your sys admin."
except:
    print "The file could not be opened."
--
Steven.

Oct 29 '05 #39
"Paul Watson" <pw*****@redlinepy.com> writes:
"Mike Meyer" <mw*@mired.org> wrote in message
news:86************@bhuda.mired.org...
"Paul Watson" <pw*****@redlinepy.com> writes:

...
Did you do timings on it vs. mmap? Having to copy the data multiple
times to deal with the overlap - thanks to strings being immutable -
would seem to be a lose, and makes me wonder how it could be faster
than mmap in general.


The only thing copied is a string one byte less than the search string for
each block.


Um - you removed the code, but I could have *sworn* that it did
something like:

buf = buf[testlen:] + f.read(bufsize - testlen)

which should cause the creation of three strings: the last few
bytes of the old buffer, a new bufferful from the read, then the sum
of those two - created by copying the first two into a new string. So
you wind up copying all the data.

Which, as you showed, doesn't take nearly as much time as using mmap.

Thanks,
<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Oct 29 '05 #40
Steven D'Aprano wrote:
On Fri, 28 Oct 2005 06:22:11 -0700, pi************@gmail.com wrote:
Which is quite fast. The only problem is that the file might be huge.
What *you* call huge and what *Python* calls huge may be very different
indeed. What are you calling huge?


I'm not saying that it is too big for Python. I am saying that it is too
big for the systems it is going to run on. These files can be 22 MB or 5
GB or ..., depending on the situation. It might not be okay to run a
tool that claims that much memory, even if it is available.
I really have no need for reading the entire file into a string as I am
doing here. All I want is to count occurrences of this substring. Can I
somehow count occurrences in a file without reading it into a string
first?
Magic?


That would be nice :)

But you misunderstand me...
You have to read the file into memory at some stage, otherwise how can you
see what value the bytes are?
I haven't said that I would like to scan the file without reading it. I
am just saying that the .count() functionality implemented for strings
could just as well be applied to some abstraction such as a stream (I
come from C++). In C++, the count() functionality would be separated as
much as possible from any concrete datatype (such as a string),
precisely because it is a concept that is applicable at a more abstract
level. I should be able to say "count the substring occurrences of this
stream" or "using this iterator" or something to that effect. If I could say

print file("filename", "rb").count("\x00\x00\x01\x00")

(or something like that)

instead of the original

print file("filename", "rb").read().count("\x00\x00\x01\x00")

it would be exactly what I am after. What is the conceptual difference?
The first solution should be at least as fast as the second. I have to
read and compare the characters anyway. I just don't need to store them
in a string. In essence, I should be able to use the "count occurrences"
functionality on more things, such as a file, or even better, a file
read through a buffer with a size specified by me.
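
For what it's worth, such a helper is easy to build on top of the
block-reading approach already shown in this thread. A sketch (untested; the
name count_overlapping, the block size and "filename" are arbitrary) that
counts occurrences even when they overlap, which a plain .count() does not:

def count_overlapping(f, sub, blocksize=1024*1024):
    overlap = len(sub) - 1
    tail = ''
    count = 0
    while True:
        block = f.read(blocksize)
        if not block:
            break
        chunk = tail + block
        pos = chunk.find(sub)
        while pos != -1:
            count += 1
            pos = chunk.find(sub, pos + 1)   # step by one: overlaps count too
        if overlap:
            # a whole match can never fit inside the tail alone,
            # so nothing is ever counted twice across block boundaries
            tail = chunk[-overlap:]
    return count

f = open("filename", "rb")
print count_overlapping(f, "\x00\x00\x01\x00")
f.close()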

Here is another thought. What are you going to do with the count when you
are done? That sounds to me like a pretty pointless result: "Hi user, the
file XYZ has 27 occurrences of bitpattern \x00\x00\x01\x00. Would you like
to do another file?"

It might sound pointless to you, but it is not pointless for my purposes :)

If you must know, the above one-liner actually counts the number of
frames in an MPEG2 file. I want to know this number for a number of
files for various reasons. I don't want it to take forever.
If you are planning to use this count to do something, perhaps there is a
more efficient way to combine the two steps into one -- especially
valuable if your files really are huge.


Of course, but I don't need to do anything else in this case.

/David
Oct 29 '05 #41
Paul Watson <pw*****@redlinepy.com> wrote:
"Alex Martelli" <al*****@yahoo.com> wrote in message
news:1h5760l.1e2eatkurdeo7N%al*****@yahoo.com...
In today's implementations of Classic Python, yes. In other equally
valid implementations of the language, such as Jython, IronPython, or,
for all we know, some future implementation of Classic, that may well
not be the case. Many, quite reasonably, dislike relying on a specific
implementation's peculiarities, and prefer to write code that relies
only on what the _language_ specs guarantee.


How could I identify when Python code does not close files and depends on
the runtime to take care of this? I want to know that the code will work
well under other Python implementations and future implementations which may
not have this provided.


Then you should use try/finally (to have your code run correctly in all
of today's implementations; Python 2.5 will have a 'with' statement to
offer nicer syntax sugar for that, but it will be a while before all the
implementations get around to adding it).

If you're trying to test your code to ensure it explicitly closes all
files, you could (from within your tests) rebind built-ins 'file' and
'open' to be a class wrapping the real thing, and adding a flag to
remember if the file is open; at __del__ time it would warn if the file
had not been explicitly closed. E.g. (untested code):

import __builtin__
import warnings

_f = __builtin__.file
class testing_file(_f):
    def __init__(self, *a, **k):
        _f.__init__(self, *a, **k)
        self._opened = True
    def close(self):
        _f.close(self)
        self._opened = False
    def __del__(self):
        if self._opened:
            warnings.warn(...)
            self.close()

__builtin__.file = __builtin__.open = testing_file

Alex
Oct 29 '05 #42
Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
...
I should also point out that for really serious work, the idiom:

f = file("parrot")
handle(f)
f.close()

is insufficiently robust for production level code. That was a detail I
didn't think I needed to drop on the original newbie poster, but depending
on how paranoid you are, or how many exceptions you want to insulate the
user from, something like this might be needed:

try:
    f = file("parrot")
    try:
        handle(f)
    finally:
        try:
            f.close()
        except:
            print "The file could not be closed; see your sys admin."
except:
    print "The file could not be opened."


The inner try/finally is fine, but both the try/except are total, utter,
unmitigated disasters: they will hide a lot of information about
problems, let the program continue in a totally erroneous state, give
mistaken messages if handle(f) causes any kind of error totally
unrelated to opening the file (or if the user hits control-C during a
lengthy run of handle(f)), emit messages that can erroneously end up in
the redirected stdout of your program... VERY, VERY bad things.

Don't ever catch and ``handle'' exceptions in such ways. In particular,
each time you're thinking of writing a bare 'except:' clause, think
again, and you'll most likely find a much better approach.
Alex
Oct 29 '05 #43

Steve Holden wrote:
Indeed, but reading one byte at a time is about the slowest way to
process a file, in Python or any other language, because it fails to
amortize the overhead cost of function calls over many characters.

Buffering wasn't invented because early programmers had nothing better
to occupy their minds, remember :-)


Buffer, and then read one byte at a time from the buffer.

Vaibhav

Oct 30 '05 #44
David Rasmussen wrote:
None of the solutions presented in this thread is nearly as fast as the

print file("filename", "rb").read().count("\x00\x00\x01\x00")


Have you already timed Scott David Daniel's approach with a /large/
blocksize? It looks promising.

Peter
Oct 30 '05 #45
David Rasmussen wrote:
None of the solutions presented in this thread is nearly as fast as the

print file("filename", "rb").read().count("\x00\x00\x01\x00")


Have you already timed Scott David Daniels' approach with a /large/
blocksize? It looks promising.

Peter
Oct 30 '05 #46
Paul Watson wrote:
This is Cygwin on Windows XP.


using cygwin to analyze performance characteristics of portable API:s
is a really lousy idea.

here are corresponding figures from a real operating system:

using a 16 MB file:

$ time python2.4 scanmap.py
real 0m0.080s
user 0m0.070s
sys 0m0.010s

$ time python2.4 scanpaul.py
real 0m0.458s
user 0m0.450s
sys 0m0.010s

using a 256 MB file (50% of available memory):

$ time python2.4 scanmap.py
real 0m0.913s
user 0m0.810s
sys 0m0.100s

$ time python2.4 scanpaul.py
real 0m7.149s
user 0m6.950s
sys 0m0.200s

using a 1024 MB file (200% of available memory):

$ time python2.4 scanpaul.py
real 0m34.274s
user 0m28.030s
sys 0m1.350s

$ time python2.4 scanmap.py
real 0m20.221s
user 0m3.120s
sys 0m1.520s

(Intel(R) Pentium(R) 4 CPU 2.80GHz, 512 MB RAM, relatively slow ATA
disks, relatively recent Linux, best result of multiple mixed runs shown.
scanmap performance would probably improve if Python supported the
"madvise" API, but I don't have time to test that today...)

</F>

Oct 30 '05 #47
On Sat, 29 Oct 2005 16:41:42 -0700, Alex Martelli wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
...
I should also point out that for really serious work, the idiom:

f = file("parrot")
handle(f)
f.close()

is insufficiently robust for production level code. That was a detail I
didn't think I needed to drop on the original newbie poster, but depending
on how paranoid you are, or how many exceptions you want to insulate the
user from, something like this might be needed:

try:
    f = file("parrot")
    try:
        handle(f)
    finally:
        try:
            f.close()
        except:
            print "The file could not be closed; see your sys admin."
except:
    print "The file could not be opened."
The inner try/finally is fine, but both the try/except are total, utter,
unmitigated disasters: they will hide a lot of information about
problems, let the program continue in a totally erroneous state, give
mistaken messages if handle(f) causes any kind of error totally
unrelated to opening the file (or if the user hits control-C during a
lengthy run of handle(f)), emit messages that can erroneously end up in
the redirected stdout of your program... VERY, VERY bad things.


Of course. That's why I said "something like this" and not "do this" :-) I
don't expect people to "handle" exceptions with a print statement in
anything more serious than a throw-away script.

For serious, production-level code, more often than not you will end up
spending more time and effort handling errors than you spend on handling
the task your application is actually meant to do. But I suspect I'm not
telling Alex anything he doesn't already know.

Don't ever catch and ``handle'' exceptions in such ways. In particular,
each time you're thinking of writing a bare 'except:' clause, think
again, and you'll most likely find a much better approach.


What would you -- or anyone else -- recommend as a better approach?

Is there a canonical list somewhere that states every possible exception
from a file open or close?

--
Steven.

Oct 30 '05 #48
Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
...
Don't ever catch and ``handle'' exceptions in such ways. In particular,
each time you're thinking of writing a bare 'except:' clause, think
again, and you'll most likely find a much better approach.
What would you -- or anyone else -- recommend as a better approach?


That depends on your application, and what you're trying to accomplish
at this point.

Is there a canonical list somewhere that states every possible exception
from a file open or close?


No. But if you get a totally unexpected exception, something that shows
the world has gone crazy and most likely any further action you perform
would run the risk of damaging the user's persistent data since the
machine appears to be careening wildly out of control... WHY would you
want to perform any further action? Crashing and burning (ideally
leaving as detailed a core-dump as feasible for later post-mortem)
appears to be preferable. (Detailed information for post-mortem
purposes is best dumped in a sys.excepthook handler, since wild
unexpected exceptions may occur anywhere and it's impractical to pepper
your application code with bare except clauses for such purposes).
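
A minimal sketch of such a hook, purely for illustration (the log file name
is arbitrary):

import sys, traceback

def dumping_excepthook(exc_type, exc_value, tb):
    # record as much detail as feasible for later post-mortem analysis
    log = open("crash.log", "a")
    traceback.print_exception(exc_type, exc_value, tb, None, log)
    log.close()

sys.excepthook = dumping_excepthook

A real system would of course dump far more state than just the traceback.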

Obviously, if your program is so life-crucial that it cannot be missing
for a long period of time, you will have separately set up a "hot spare"
system, ready to take over at the behest of a separate monitor program
as soon as your program develops problems of such magnitude (a
"heartbeat" system helps with monitoring). You do need redundant
hardware for that, since the root cause of unexpected problems may well
be in a hardware fault -- the disk has crashed, a memory chip just
melted, the CPU's on strike, locusts...! Not stuff any program can do
much about in the short term, except by switching to a different
machine.
Alex

Oct 30 '05 #49
al*****@yahoo.com (Alex Martelli) writes:
[...]
If you're trying to test your code to ensure it explicitly closes all
files, you could (from within your tests) rebind built-ins 'file' and
'open' to be a class wrapping the real thing, and adding a flag to
remember if the file is open; at __del__ time it would warn if the file
had not been explicitly closed. E.g. (untested code):

[...]

In general __del__ methods interfere with garbage collection, don't
they? I guess in the case of file objects this is unlikely to be
problematic (because unlikely to be any reference cycles), but I
thought it might be worth warning people that in general this
debugging strategy might get rather confusing since the __del__ could
actually change the operation of the program by preventing garbage
collection of the objects whose lifetime you're trying to
investigate...
John

Oct 30 '05 #50
