Scanning a file

I want to scan a file byte by byte for occurrences of the four-byte
pattern 0x00000100. I've tried this:

# start
import sys

numChars = 0
startCode = 0
count = 0

inputFile = sys.stdin

while True:
    ch = inputFile.read(1)
    numChars += 1

    if len(ch) < 1: break

    startCode = ((startCode << 8) & 0xffffffff) | (ord(ch))
    if numChars < 4: continue

    if startCode == 0x00000100:
        count = count + 1

print count
# end

But it is very slow. What is the fastest way to do this? Using some
native call? Using a buffer? Using whatever?

/David

Oct 28 '05
On Sun, 30 Oct 2005 08:35:12 -0700, Alex Martelli wrote:
> Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
> ...
>>> Don't ever catch and ``handle'' exceptions in such ways. In particular,
>>> each time you're thinking of writing a bare 'except:' clause, think
>>> again, and you'll most likely find a much better approach.
>>
>> What would you -- or anyone else -- recommend as a better approach?
>
> That depends on your application, and what you're trying to accomplish
> at this point.
>
>> Is there a canonical list somewhere that states every possible exception
>> from a file open or close?
>
> No. But if you get a totally unexpected exception,

I'm more concerned about getting an expected exception -- or more
accurately, *missing* an expected exception. Matching on Exception is too
high. EOFError will probably need to be handled separately, since it often
isn't an error at all, just a change of state. IOError is the bad one.
What else can go wrong?

> something that shows
> the world has gone crazy and most likely any further action you perform
> would run the risk of damaging the user's persistent data since the
> machine appears to be careening wildly out of control... WHY would you
> want to perform any further action?

In circumstances where, as you put it, the hard disk has crashed, the CPU
is on strike, or the memory has melted down, not only can you not recover
gracefully, but you probably can't even fail gracefully -- at least not
without a special purpose fail-safe operating system.

I'm not concerned with mylist.append(None) unexpectedly -- and
impossibly? -- raising an ImportError. I can't predict every way things
can explode, and even if they do, I can't safely recover from them. But I
can fail gracefully from *expected* errors: rather than the application
just crashing, I can at least try to put up a somewhat informative dialog
box, or at least print something to the console. If opening a preferences
file fails, I can fall back on default settings. If writing the
preferences file fails, I can tell the user that their settings won't be
saved, and continue. Just because the disk is full or their disk quota is
exceeded, doesn't necessarily mean the app can't continue with the job at
hand.
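
(As a minimal sketch of that preferences fallback, assuming Python 2 -- all
names and the file format here are illustrative, not from any post:)

import os

DEFAULT_PREFS = {'colour': 'blue', 'font': 'monospace'}
PREFS_PATH = os.path.expanduser('~/.myapp.prefs')

def load_prefs():
    try:
        f = open(PREFS_PATH)
    except IOError:
        return dict(DEFAULT_PREFS)   # expected failure: fall back on defaults
    try:
        pairs = [line.split('=', 1) for line in f if '=' in line]
        return dict((k.strip(), v.strip()) for k, v in pairs)
    finally:
        f.close()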
--
Steven.

Oct 30 '05 #51
John J. Lee <jj*@pobox.com> wrote:
> al*****@yahoo.com (Alex Martelli) writes:
> [...]
>> If you're trying to test your code to ensure it explicitly closes all
>> files, you could (from within your tests) rebind built-ins 'file' and
>> 'open' to be a class wrapping the real thing, and adding a flag to
>> remember if the file is open; at __del__ time it would warn if the file
>> had not been explicitly closed. E.g. (untested code): [...]
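
(The example code was elided in the quote; a minimal sketch of such a wrapper
might look like this -- untested, Python 2, and the class name and message
are illustrative:)

import __builtin__
import warnings

class chkfile(file):
    _explicitly_closed = False
    def close(self):
        self._explicitly_closed = True
        file.close(self)
    def __del__(self):
        if not self._explicitly_closed:
            warnings.warn("file %r was never explicitly closed" % self.name)

__builtin__.file = __builtin__.open = chkfile   # rebind built-ins for the tests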

> In general __del__ methods interfere with garbage collection, don't

Yes, cyclic gc only.

> they? I guess in the case of file objects this is unlikely to be
> problematic (because unlikely to be any reference cycles), but I
> thought it might be worth warning people that in general this
> debugging strategy might get rather confusing since the __del__ could
> actually change the operation of the program by preventing garbage
> collection of the objects whose lifetime you're trying to
> investigate...

Yeah, but you'll find them all listed in gc.garbage for your perusal --
they DO get collected, but they get put in gc.garbage rather than FREED,
that's all. E.g.:

>>> class a(object):
...     def __del__(self): print 'del', self.__class__.__name__
...
>>> class b(a): pass
...
>>> x=a(); y=b(); x.y=y; y.x=x
>>> del x, y
>>> gc.collect()
4
>>> gc.garbage
[<__main__.a object at 0x64cf0>, <__main__.b object at 0x58510>]

So, no big deal -- run a gc.collect() and parse through gc.garbage for
any instances of your "wrapper of file" class, and you'll find ones that
were forgotten as part of a cyclic garbage loop and you can check
whether they were explicitly closed or not.
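
(Continuing the sketch above: the post-collect scan Alex describes might look
like this, with chkfile being the illustrative wrapper class from earlier:)

import gc

def find_unclosed_wrapped_files():
    gc.collect()                      # cyclic trash is parked in gc.garbage
    for obj in gc.garbage:
        if isinstance(obj, chkfile) and not obj._explicitly_closed:
            print 'file %r was in a cycle and never closed' % obj.name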
Alex
Oct 31 '05 #52
Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
...
>> No. But if you get a totally unexpected exception,
>
> I'm more concerned about getting an expected exception -- or more
> accurately, *missing* an expected exception. Matching on Exception is too
> high. EOFError will probably need to be handled separately, since it often
> isn't an error at all, just a change of state. IOError is the bad one.
> What else can go wrong?

Lots of things, but not ones you should WISH your application to
survive.

>> something that shows
>> the world has gone crazy and most likely any further action you perform
>> would run the risk of damaging the user's persistent data since the
>> machine appears to be careening wildly out of control... WHY would you
>> want to perform any further action?
>
> In circumstances where, as you put it, the hard disk has crashed, the CPU
> is on strike, or the memory has melted down, not only can you not recover
> gracefully, but you probably can't even fail gracefully -- at least not
> without a special purpose fail-safe operating system.

Right: as I explained, all you can do is switch operation over to a "hot
spare" server (a different machine), through the good services of a
"monitor" machine - and hope the redundancy you've built into your
storage system (presumably a database with good solid mirroring running
on other machines yet, if your app is as critical as that) has survived.

> I'm not concerned with mylist.append(None) unexpectedly -- and
> impossibly? -- raising an ImportError. I can't predict every way things
> can explode, and even if they do, I can't safely recover from them. But I

Exactly my point -- and yet if you use "except:", that's exactly what
you're TRYING to do, rather than let your app die.

> can fail gracefully from *expected* errors: rather than the application
> just crashing, I can at least try to put up a somewhat informative dialog
> box, or at least print something to the console.

You can do that from your sys.excepthook routine, if it's OK for your
app to die afterwards. What we're discussing here are only, and
strictly, cases in which you wish your app to NOT die, but rather keep
processing (not just give nice error diagnostics and then terminate,
that's what sys.excepthook is for). And what I'm saying is that you
should keep processing only for errors you *DO* expect, and should not
even try if the error is NOT expected.

> If opening a preferences
> file fails, I can fall back on default settings. If writing the
> preferences file fails, I can tell the user that their settings won't be
> saved, and continue. Just because the disk is full or their disk quota is
> exceeded, doesn't necessarily mean the app can't continue with the job at
> hand.

Sure, that's why you catch IOError, which covers these _expected_ cases
(or, if you want to be a bit wider, maybe OSError) AND check its errno
attribute to ensure you haven't mistakenly caught something you did NOT
in fact expect (and thus can't really handle), using a bare raise
statement to re-raise the "accidentally caught" exception if needed.
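
(A minimal sketch of that idiom; the file name and the errno values treated
as "expected" are illustrative:)

import errno

def read_settings():
    try:
        f = open('settings.ini')
    except IOError, e:
        if e.errno in (errno.ENOENT, errno.EACCES):
            return ''        # the expected cases: treat as empty settings
        raise                # accidentally caught: re-raise untouched
    try:
        return f.read()
    finally:
        f.close()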

But if something goes wrong that you had NOT anticipated, just log as
much info as you can for the postmortem, give nice diagnostics to the
user if you wish, and do NOT keep processing -- and for these
diagnostic-only purposes, use sys.excepthook, not a slew of try/except
all over your application making it crufty and unmaintainable.
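
(A minimal sys.excepthook sketch, assuming it is acceptable for the app to
die afterwards; the log file name is illustrative:)

import sys, traceback

def postmortem_hook(exc_type, exc_value, tb):
    log = open('crash.log', 'a')
    try:
        traceback.print_exception(exc_type, exc_value, tb, None, log)
    finally:
        log.close()
    print >> sys.stderr, 'Internal error; details were written to crash.log'

sys.excepthook = postmortem_hook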
Alex
Oct 31 '05 #53
On Sat, 29 Oct 2005 21:10:11 +0100, Steve Holden <st***@holdenweb.com> wrote:
> Peter Otten wrote:
>> Bengt Richter wrote:
>>
>> What struck me was
>>
>> >>> gen = byblocks(StringIO.StringIO('no'),1024,len('end?')-1)
>> >>> [gen.next() for i in xrange(10)]
>> ['no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no']
>>
>> Ouch. Seems like I spotted the subtle cornercase error and missed the big
>> one.
>
> No, you just realised subconsciously that we'd all spot the obvious one
> and decided to point out the bug that would remain after the obvious one
> had been fixed.

I still smelled a bug in the counting of substring in the overlap region,
and you motivated me to find it (obvious in hindsight, but aren't most ;-)

A substring can get over-counted if the "overlap" region joins infelicitously
with the next input. E.g., try counting 'xx' in 10*'xx' with a read chunk of 4
instead of 1024*1024:

Assuming corrections so far posted as I understand them:

>>> def byblocks(f, blocksize, overlap):
...     block = f.read(blocksize)
...     yield block
...     if overlap>0:
...         while True:
...             next = f.read(blocksize-overlap)
...             if not next: break
...             block = block[-overlap:] + next
...             yield block
...     else:
...         while True:
...             next = f.read(blocksize)
...             if not next: break
...             yield next
...
>>> def countsubst(f, subst, blksize=1024*1024):
...     count = 0
...     for block in byblocks(f, blksize, len(subst)-1):
...         count += block.count(subst)
...     f.close()
...     return count
...
>>> from StringIO import StringIO as S
>>> countsubst(S('xx'*10), 'xx', 4)
13
>>> ('xx'*10).count('xx')
10
>>> list(byblocks(S('xx'*10), 4, len('xx')-1))
['xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx', 'xx']

Of course, a large read chunk will make the problem either go away

>>> countsubst(S('xx'*10), 'xx', 1024)
10

or might make it low probability depending on the data.

Regards,
Bengt Richter
Oct 31 '05 #54
In article <11*********************@g43g2000cwa.googlegroups.com>,
ne********@gmail.com wrote:
> Steve Holden wrote:
>> Indeed, but reading one byte at a time is about the slowest way to
>> process a file, in Python or any other language, because it fails to
>> amortize the overhead cost of function calls over many characters.
>>
>> Buffering wasn't invented because early programmers had nothing better
>> to occupy their minds, remember :-)
>
> Buffer, and then read one byte at a time from the buffer.

Have you measured it?

#!/usr/bin/python
'''Time some file scanning.
'''

import sys, time

f = open(sys.argv[1])
t = time.time()
while True:
    b = f.read(256*1024)
    if not b:
        break
print 'initial read', time.time() - t
f.close()

f = open(sys.argv[1])
t = time.time()
while True:
    b = f.read(256*1024)
    if not b:
        break
print 'second read', time.time() - t
f.close()

if 1:
    f = open(sys.argv[1])
    t = time.time()
    while True:
        b = f.read(256*1024)
        if not b:
            break
        for c in b:
            pass
    print 'third chars', time.time() - t
    f.close()

f = open(sys.argv[1])
t = time.time()
n = 0
srch = '\x00\x00\x01\x00'
laplen = len(srch)-1
lap = ''
while True:
    b = f.read(256*1024)
    if not b:
        break
    n += (lap+b[:laplen]).count(srch)
    n += b.count(srch)
    lap = b[-laplen:]
print 'fourth scan', time.time() - t, n
f.close()
On my (old) system, with a 512 MB file so it won't all buffer, the
second time I get:

initial read 14.513395071
second read 14.8771388531
third chars 178.250257969
fourth scan 26.1602909565 1
________________________________________________________________________
TonyN.:' *firstname*nlsnews@georgea*lastname*.com
' <http://www.georgeanelson.com/>
Oct 31 '05 #55
Fredrik Lundh wrote:
> Paul Watson wrote:
>> This is Cygwin on Windows XP.
>
> using cygwin to analyze performance characteristics of portable API:s
> is a really lousy idea.

Ok. So, I agree. That is just what I had at hand. Here are some other
numbers to which due diligence has also not been applied. Source code
is at the bottom for both the file and mmap versions. I would welcome
suggestions on what I could improve.

$ python -V
Python 2.4.1

$ uname -a
Linux ruth 2.6.13-1.1532_FC4 #1 Thu Oct 20 01:30:08 EDT 2005 i686

$ cat /proc/meminfo|head -2
MemTotal: 514232 kB
MemFree: 47080 kB

$ time ./scanfile.py
16384

real 0m0.06s
user 0m0.03s
sys 0m0.01s

$ time ./scanfilemmap.py
16384

real 0m0.10s
user 0m0.06s
sys 0m0.00s

Using a ~ 250 MB file, not even half of physical memory.

$ time ./scanfile.py
16777216

real 0m11.19s
user 0m10.98s
sys 0m0.17s

$ time ./scanfilemmap.py
16777216

real 0m55.09s
user 0m43.12s
sys 0m11.92s

==============================

$ cat scanfile.py
#!/usr/bin/env python

import sys

fn = 't.dat'
ss = '\x00\x00\x01\x00'
ss = 'time'

be = len(ss) - 1       # length of overlap to check
blocksize = 64 * 1024  # need to ensure that blocksize > overlap

fp = open(fn, 'rb')
b = fp.read(blocksize)
count = 0
while len(b) > be:
    count += b.count(ss)
    b = b[-be:] + fp.read(blocksize)
fp.close()

print count
sys.exit(0)

===================================

$ cat scanfilemmap.py
#!/usr/bin/env python

import sys
import os
import mmap

fn = 't.dat'
ss = '\x00\x00\x01\x00'
ss = 'time'

fp = open(fn, 'rb')
b = mmap.mmap(fp.fileno(), os.stat(fp.name).st_size,
              access=mmap.ACCESS_READ)

count = 0
foundpoint = b.find(ss, 0)
while foundpoint != -1 and (foundpoint + 1) < b.size():
    #print foundpoint
    count = count + 1
    foundpoint = b.find(ss, foundpoint + 1)
b.close()

print count

fp.close()
sys.exit(0)
Oct 31 '05 #56
Alex Martelli wrote:
> Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
> ...
>>> No. But if you get a totally unexpected exception,
>>
>> I'm more concerned about getting an expected exception -- or more
>> accurately, *missing* an expected exception. Matching on Exception is too
>> high. EOFError will probably need to be handled separately, since it often
>> isn't an error at all, just a change of state. IOError is the bad one.
>> What else can go wrong?
>
> Lots of things, but not ones you should WISH your application to
> survive.

[snip]

> Sure, that's why you catch IOError, which covers these _expected_ cases
> (or, if you want to be a bit wider, maybe OSError) AND check its errno
> attribute to ensure you haven't mistakenly caught something you did NOT
> in fact expect (and thus can't really handle), using a bare raise
> statement to re-raise the "accidentally caught" exception if needed.

Ah, that's precisely the answer I was looking for:
IOError and OSError, and then check the errno.

Excellent: thanks for that.

> But if something goes wrong that you had NOT anticipated, just log as
> much info as you can for the postmortem, give nice diagnostics to the
> user if you wish, and do NOT keep processing -- and for these
> diagnostic-only purposes, use sys.excepthook, not a slew of try/except
> all over your application making it crufty and unmaintainable.

Agreed -- I never intended people to draw the
conclusion that every line, or even every logical block
of lines, should be wrapped in try/except.
--
Steven.

Oct 31 '05 #58
Bengt Richter wrote:
> I still smelled a bug in the counting of substring in the overlap region,
> and you motivated me to find it (obvious in hindsight, but aren't most ;-)
>
> A substring can get over-counted if the "overlap" region joins
> infelicitously with the next input. E.g., try counting 'xx' in 10*'xx'
> with a read chunk of 4 instead of 1024*1024:
>
> Assuming corrections so far posted as I understand them:
>
> >>> def byblocks(f, blocksize, overlap):
> ...     block = f.read(blocksize)
> ...     yield block
> ...     if overlap>0:
> ...         while True:
> ...             next = f.read(blocksize-overlap)
> ...             if not next: break
> ...             block = block[-overlap:] + next
> ...             yield block
> ...     else:
> ...         while True:
> ...             next = f.read(blocksize)
> ...             if not next: break
> ...             yield next
> ...
> >>> def countsubst(f, subst, blksize=1024*1024):
> ...     count = 0
> ...     for block in byblocks(f, blksize, len(subst)-1):
> ...         count += block.count(subst)
> ...     f.close()
> ...     return count
> ...
> >>> from StringIO import StringIO as S
> >>> countsubst(S('xx'*10), 'xx', 4)
> 13
> >>> ('xx'*10).count('xx')
> 10
> >>> list(byblocks(S('xx'*10), 4, len('xx')-1))
> ['xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx', 'xx']
>
> Of course, a large read chunk will make the problem either go away
>
> >>> countsubst(S('xx'*10), 'xx', 1024)
> 10
>
> or might make it low probability depending on the data.

[David Rasmussen]
> First of all, this isn't a text file, it is a binary file. Secondly,
> substrings can overlap. In the sequence 0010010 the substring 0010
> occurs twice.

Coincidentally the "always overlap" case seems the easiest to fix. It
suffices to replace the count() method with

def count_overlap(s, token):
    pos = -1
    n = 0
    while 1:
        try:
            pos = s.index(token, pos+1)
        except ValueError:
            break
        n += 1
    return n

Or so I hope, without the thorough tests that are indispensable as we should
have learned by now...
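
(A quick check of that fix, reusing byblocks and countsubst from upthread with
count() swapped for count_overlap -- countsubst_overlap is an illustrative
name, not from the original posts:)

>>> def countsubst_overlap(f, subst, blksize=1024*1024):
...     count = 0
...     for block in byblocks(f, blksize, len(subst)-1):
...         count += count_overlap(block, subst)
...     f.close()
...     return count
...
>>> from StringIO import StringIO as S
>>> countsubst_overlap(S('xx'*10), 'xx', 4)
19
>>> count_overlap('xx'*10, 'xx')
19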

Peter
Oct 31 '05 #59
No comments to this post?

/David
Oct 31 '05 #60
David Rasmussen wrote:
> <snip>
> If you must know, the above one-liner actually counts the number of
> frames in an MPEG2 file. I want to know this number for a number of
> files for various reasons. I don't want it to take forever.
> <snip>

Don't you risk getting more "frames" than the file actually has? What
if the encoded data happens to have the magic byte values for something
else?

--
Lasse Vågsæther Karlsen
http://usinglvkblog.blogspot.com/
mailto:la***@vkarlsen.no
PGP KeyID: 0x2A42A1C2
Oct 31 '05 #61
David Rasmussen wrote:
> Steven D'Aprano wrote:
>> On Fri, 28 Oct 2005 06:22:11 -0700, pi************@gmail.com wrote:
>>> Which is quite fast. The only problem is that the file might be huge.
>>
>> What *you* call huge and what *Python* calls huge may be very different
>> indeed. What are you calling huge?
>
> I'm not saying that it is too big for Python. I am saying that it is too
> big for the systems it is going to run on. These files can be 22 MB or 5
> GB or ..., depending on the situation. It might not be okay to run a
> tool that claims that much memory, even if it is available.

If your files can reach multiple gigabytes, you will
definitely need an algorithm that avoids reading the
entire file into memory at once.

[snip]
> print file("filename", "rb").count("\x00\x00\x01\x00")
>
> (or something like that)
>
> instead of the original
>
> print file("filename", "rb").read().count("\x00\x00\x01\x00")
>
> it would be exactly what I am after.

I think I can say, without risk of contradiction, that
there is no built-in method to do that.

> What is the conceptual difference?
> The first solution should be at least as fast as the second. I have to
> read and compare the characters anyway. I just don't need to store them
> in a string. In essence, I should be able to use the "count occurrences"
> functionality on more things, such as a file, or even better, a file
> read through a buffer with a size specified by me.

Of course, if you feel like coding the algorithm and
submitting it to be included in the next release of
Python... :-)

I can't help feeling that a generator with a buffer is
the way to go, but I just can't *quite* deal with the
case where the pattern overlaps the boundary... it is
very annoying.

But not half as annoying as it must be to you :-)

However, there may be a simpler solution *fingers
crossed* -- you are searching for a sub-string
"\x00\x00\x01\x00", which is hex 0x100. Surely you
don't want any old substring of "\x00\x00\x01\x00", but
only the ones which align on word boundaries?

So "ABCD\x00\x00\x01\x00" would match (in hex, it is
0x41424344 0x100), but "AB\x00\x00\x01\x00CD" should
not, because that is 0x41420000 0x1004344 in hex.

If that is the case, your problem is simpler: you don't
have to worry about the pattern crossing a boundary, so
long as your buffer is a multiple of four bytes.
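
(If the pattern were word-aligned, a minimal sketch might look like this --
the function name and block size are illustrative:)

def count_aligned(f, pattern, blksize=1024*1024):
    # assumes matches can only start on 4-byte boundaries
    assert len(pattern) == 4 and blksize % 4 == 0
    count = 0
    while True:
        block = f.read(blksize)
        if not block:
            break
        for i in xrange(0, len(block) - 3, 4):
            if block[i:i+4] == pattern:
                count += 1
    return count

(As David notes in a later post, the markers in his files are not in fact
word-aligned, so this shortcut doesn't apply there.)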

--
Steven.

Oct 31 '05 #62
Alex Martelli wrote:
...
> >>> gc.garbage
> [<__main__.a object at 0x64cf0>, <__main__.b object at 0x58510>]
>
> So, no big deal -- run a gc.collect() and parse through gc.garbage for
> any instances of your "wrapper of file" class, and you'll find ones that
> were forgotten as part of a cyclic garbage loop and you can check
> whether they were explicitly closed or not.
> Alex

Since everyone needs this, how about building it in such that files
which are closed by the runtime, and not user code, are reported or
queryable? Perhaps a command line switch to either invoke or suppress
reporting them on exit.

Is there any facility for another program to peer into the state of a
Python program? Would this be a security problem?
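
(There is no such switch built in, but a rough user-level approximation is
possible; everything below is an illustrative sketch, and note that keeping
strong references means tracked files are only reclaimed at exit:)

import __builtin__, atexit

_opened = []
_real_open = __builtin__.open

def _tracking_open(*args, **kwds):
    f = _real_open(*args, **kwds)
    _opened.append(f)
    return f

def _report():
    for f in _opened:
        if not f.closed:
            print 'left for the runtime/OS to close:', f.name

__builtin__.open = _tracking_open
atexit.register(_report)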
Oct 31 '05 #63
Paul Watson wrote:
> Alex Martelli wrote:
> ...
>> >>> gc.garbage
>> [<__main__.a object at 0x64cf0>, <__main__.b object at 0x58510>]
>>
>> So, no big deal -- run a gc.collect() and parse through gc.garbage for
>> any instances of your "wrapper of file" class, and you'll find ones that
>> were forgotten as part of a cyclic garbage loop and you can check
>> whether they were explicitly closed or not.
>
> Since everyone needs this, how about building it in such that files
> which are closed by the runtime, and not user code, are reported or
> queryable? Perhaps a command line switch to either invoke or suppress
> reporting them on exit.

This is a rather poor substitute for correct program design and
implementation. It also begs the question of exactly what constitutes a
"file". What about a network socket that the user has run makefile() on?
What about a pipe to another process? This suggestion is rather ill-defined.

> Is there any facility for another program to peer into the state of a
> Python program? Would this be a security problem?

It would indeed be a security problem, and there are enough of those
already without adding more.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Oct 31 '05 #64
On Mon, 31 Oct 2005 09:41:02 +0100, Lasse Vågsæther Karlsen <la***@vkarlsen.no> wrote:
> David Rasmussen wrote:
>> <snip>
>> If you must know, the above one-liner actually counts the number of
>> frames in an MPEG2 file. I want to know this number for a number of
>> files for various reasons. I don't want it to take forever.
>> <snip>
>
> Don't you risk getting more "frames" than the file actually has? What
> if the encoded data happens to have the magic byte values for something
> else?

Good point, but perhaps the bit pattern the OP is looking for is guaranteed
(e.g. by some kind of HDLC-like bit or byte stuffing or escaping) not to occur
except as frame marker (which might make sense re the problem of re-synching
to frames in a glitched video stream).

The OP probably knows. I imagine this thread would have gone differently if the
title had been "How to count frames in an MPEG2 file?" and the OP had supplied
the info about what marks a frame and whether it is guaranteed not to occur
in the data ;-)

Regards,
Bengt Richter
Oct 31 '05 #65
On Mon, 31 Oct 2005 09:19:10 +0100, Peter Otten <__*******@web.de> wrote:
> Bengt Richter wrote:
>> I still smelled a bug in the counting of substring in the overlap region,
>> and you motivated me to find it (obvious in hindsight, but aren't most ;-)
>>
>> A substring can get over-counted if the "overlap" region joins
>> infelicitously with the next input. E.g., try counting 'xx' in 10*'xx'
>> with a read chunk of 4 instead of 1024*1024:
>>
>> Assuming corrections so far posted as I understand them:
>>
>> >>> def byblocks(f, blocksize, overlap):
>> ...     block = f.read(blocksize)
>> ...     yield block
>> ...     if overlap>0:
>> ...         while True:
>> ...             next = f.read(blocksize-overlap)
>> ...             if not next: break
>> ...             block = block[-overlap:] + next
>> ...             yield block
>> ...     else:
>> ...         while True:
>> ...             next = f.read(blocksize)
>> ...             if not next: break
>> ...             yield next
>> ...
>> >>> def countsubst(f, subst, blksize=1024*1024):
>> ...     count = 0
>> ...     for block in byblocks(f, blksize, len(subst)-1):
>> ...         count += block.count(subst)
>> ...     f.close()
>> ...     return count
>> ...
>> >>> from StringIO import StringIO as S
>> >>> countsubst(S('xx'*10), 'xx', 4)
>> 13
>> >>> ('xx'*10).count('xx')
>> 10
>> >>> list(byblocks(S('xx'*10), 4, len('xx')-1))
>> ['xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx', 'xx']
>>
>> Of course, a large read chunk will make the problem either go away
>>
>> >>> countsubst(S('xx'*10), 'xx', 1024)
>> 10
>>
>> or might make it low probability depending on the data.
>
> [David Rasmussen]
>> First of all, this isn't a text file, it is a binary file. Secondly,
>> substrings can overlap. In the sequence 0010010 the substring 0010
>> occurs twice.

The OP didn't reply to my post re the above for some reason
http://groups.google.com/group/comp....8a54b7c?hl=en&

> Coincidentally the "always overlap" case seems the easiest to fix. It
> suffices to replace the count() method with
>
> def count_overlap(s, token):
>     pos = -1
>     n = 0
>     while 1:
>         try:
>             pos = s.index(token, pos+1)
>         except ValueError:
>             break
>         n += 1
>     return n
>
> Or so I hope, without the thorough tests that are indispensable as we should
> have learned by now...

Unfortunately, there is such a thing as a correct implementation of an
incorrect spec ;-)
I have some doubts about the OP's really wanting to count overlapping
patterns as above, which is what I asked about in the above referenced
post. Elsewhere he later reveals:

[David Rasmussen]
> If you must know, the above one-liner actually counts the number of
> frames in an MPEG2 file. I want to know this number for a number of
> files for various reasons. I don't want it to take forever.

In which case I doubt whether he wants to count as above. Scanning for the
particular 4 bytes would assume that non-frame-marker data is escaped
one way or another so it can't contain the marker byte sequence.
(If it did, you'd want to skip it, not count it, I presume). Robust streaming
video format would presumably be designed for unambiguous re-synching, meaning
the data stream can't contain the sync mark. But I don't know if that is
guaranteed in conversion from file to stream a la HDLC or some link packet
protocol, or whether it is actually encoded with escaping in the file. If
framing in the file is with length-specifying packet headers and no data
escaping, then the filebytes.count(pattern) approach is not going to do the
job reliably, as Lasse was pointing out.

Requirements, requirements ;-)

Regards,
Bengt Richter
Oct 31 '05 #66
Steve Holden wrote:
>> Since everyone needs this, how about building it in such that files
>> which are closed by the runtime, and not user code, are reported or
>> queryable? Perhaps a command line switch to either invoke or suppress
>> reporting them on exit.
>
> This is a rather poor substitute for correct program design and
> implementation. It also begs the question of exactly what constitutes a
> "file". What about a network socket that the user has run makefile() on?
> What about a pipe to another process? This suggestion is rather
> ill-defined.
>
>> Is there any facility for another program to peer into the state of a
>> Python program? Would this be a security problem?
>
> It would indeed be a security problem, and there are enough of those
> already without adding more.
>
> regards
> Steve

All I am looking for is the runtime to tell me when it is doing things
that are outside the language specification and that the developer
should have coded.

How "ill" will things be when large bodies of code cannot run
successfully on a future version of Python or a non-CPython
implementation which does not close files? Might as well put file
closing on exit into the specification.

The runtime knows it is doing it. Please allow the runtime to tell me
what it knows it is doing. Thanks.
Oct 31 '05 #67
Paul Watson wrote:
> Steve Holden wrote:
>>> Since everyone needs this, how about building it in such that files
>>> which are closed by the runtime, and not user code, are reported or
>>> queryable? Perhaps a command line switch to either invoke or suppress
>>> reporting them on exit.
>>
>> This is a rather poor substitute for correct program design and
>> implementation. It also begs the question of exactly what constitutes a
>> "file". What about a network socket that the user has run makefile() on?
>> What about a pipe to another process? This suggestion is rather
>> ill-defined.
>>
>>> Is there any facility for another program to peer into the state of a
>>> Python program? Would this be a security problem?
>>
>> It would indeed be a security problem, and there are enough of those
>> already without adding more.
>
> All I am looking for is the runtime to tell me when it is doing things
> that are outside the language specification and that the developer
> should have coded.
>
> How "ill" will things be when large bodies of code cannot run
> successfully on a future version of Python or a non-CPython
> implementation which does not close files? Might as well put file
> closing on exit into the specification.
>
> The runtime knows it is doing it. Please allow the runtime to tell me
> what it knows it is doing. Thanks.

In point of fact I don't believe the runtime does any such thing
(though I must admit I haven't checked the source, so you may prove me
wrong).

As far as I know, Python simply relies on the operating system to close
files left open at the end of the program.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Nov 1 '05 #68
Paul Watson <pw*****@redlinepy.com> writes:
[...]
> How "ill" will things be when large bodies of code cannot run
> successfully on a future version of Python or a non-CPython
> implementation which does not close files? Might as well put file
> closing on exit into the specification.
[...]

There are many, many ways of making a large body of code "ill".

Closing off this particular one would make it harder to get the benefit
of non-C implementations of Python, so it has been judged "not worth it".
I think I agree with that judgement.

John

Nov 1 '05 #69
jj*@pobox.com (John J. Lee) writes:
> Closing off this particular one would make it harder to get the benefit
> of non-C implementations of Python, so it has been judged "not worth it".
> I think I agree with that judgement.

The right fix is PEP 343.
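
(For context, a sketch of the PEP 343 idiom as it eventually landed in
Python 2.5, shortly after this thread:)

from __future__ import with_statement   # needed in 2.5 itself

with open('data.bin', 'rb') as f:
    data = f.read()
# f.close() has been called here, even if the read raised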
Nov 1 '05 #70
Steve Holden <st***@holdenweb.com> wrote:
...
>> The runtime knows it is doing it. Please allow the runtime to tell me
>> what it knows it is doing. Thanks.
>
> In point of fact I don't believe the runtime does any such thing
> (though I must admit I haven't checked the source, so you may prove me
> wrong).
>
> As far as I know, Python simply relies on the operating system to close
> files left open at the end of the program.

Nope, see
<http://cvs.sourceforge.net/viewcvs.p...src/Objects/fileobject.c?rev=2.164.2.3&view=markup> :

"""
static void
file_dealloc(PyFileObject *f)
{
    int sts = 0;
    if (f->weakreflist != NULL)
        PyObject_ClearWeakRefs((PyObject *) f);
    if (f->f_fp != NULL && f->f_close != NULL) {
        Py_BEGIN_ALLOW_THREADS
        sts = (*f->f_close)(f->f_fp);
"""
etc.

Exactly how the OP wants to "allow the runtime to tell [him] what it
knows it is doing", in a way that is not equivalent to reading the freely
available sources of that runtime, is totally opaque to me, though.

"The runtime" (implementation of built-in object type `file`) could be
doing or not doing a bazillion things (in its ..._dealloc function as
well as many other functions), up to and including emailing the OP's
cousin if it detects the OP is up later than his or her bedtime -- the
language specs neither mandate nor forbid such behavior. How, exactly,
does the OP believe the language specs should "allow" (presumably,
REQUIRE) ``the runtime'' to communicate the sum total of all that it's
doing or not doing (beyond whatever the language specs themselves may
require or forbid it to do) on any particular occasion...?!

Alex
Nov 1 '05 #71
Paul Rubin wrote:
> jj*@pobox.com (John J. Lee) writes:
>> Closing off this particular one would make it harder to get the benefit
>> of non-C implementations of Python, so it has been judged "not worth it".
>> I think I agree with that judgement.
>
> The right fix is PEP 343.

I am sure you are right. However, PEP 343 will not change the existing
body of Python source code. Nor will it, alone, change the existing
body of Python programmers who are writing code which does not close files.
Nov 1 '05 #72
Alex Martelli wrote:
> Steve Holden <st***@holdenweb.com> wrote:
> ...
>>> The runtime knows it is doing it. Please allow the runtime to tell me
>>> what it knows it is doing. Thanks.
>>
>> In point of fact I don't believe the runtime does any such thing
>> (though I must admit I haven't checked the source, so you may prove me
>> wrong).
>>
>> As far as I know, Python simply relies on the operating system to close
>> files left open at the end of the program.
>
> Nope, see
> <http://cvs.sourceforge.net/viewcvs.p...src/Objects/fileobject.c?rev=2.164.2.3&view=markup> :
>
> """
> static void
> file_dealloc(PyFileObject *f)
> {
>     int sts = 0;
>     if (f->weakreflist != NULL)
>         PyObject_ClearWeakRefs((PyObject *) f);
>     if (f->f_fp != NULL && f->f_close != NULL) {
>         Py_BEGIN_ALLOW_THREADS
>         sts = (*f->f_close)(f->f_fp);
> """
> etc.
>
> Exactly how the OP wants to "allow the runtime to tell [him] what it
> knows it is doing", in a way that is not equivalent to reading the freely
> available sources of that runtime, is totally opaque to me, though.
>
> "The runtime" (implementation of built-in object type `file`) could be
> doing or not doing a bazillion things (in its ..._dealloc function as
> well as many other functions), up to and including emailing the OP's
> cousin if it detects the OP is up later than his or her bedtime -- the
> language specs neither mandate nor forbid such behavior. How, exactly,
> does the OP believe the language specs should "allow" (presumably,
> REQUIRE) ``the runtime'' to communicate the sum total of all that it's
> doing or not doing (beyond whatever the language specs themselves may
> require or forbid it to do) on any particular occasion...?!

The OP wants to know which files the runtime is closing automatically.
This may or may not occur on other or future Python implementations.
Identifying this condition will accelerate remediation efforts to avoid
the deleterious impact of failure to close().

The mechanism to implement such a capability might be similar to the -v
switch which traces imports, reporting to stdout. It might be a
callback function.
Nov 1 '05 #73
Alex Martelli wrote:
>> As far as I know, Python simply relies on the operating system to close
>> files left open at the end of the program.
>
> Nope, see
> <http://cvs.sourceforge.net/viewcvs.p...src/Objects/fileobject.c?rev=2.164.2.3&view=markup>

that's slightly misleading: CPython will make a decent attempt (via the module cleanup
mechanism: http://www.python.org/doc/essays/cleanup.html ), but any files that are not
closed by that process will be handled by the OS.

CPython is not designed to run on an OS that doesn't reclaim memory and other
resources upon program exit.

</F>

Nov 1 '05 #74
Lasse Vågsæther Karlsen wrote:
> David Rasmussen wrote:
>> <snip>
>> If you must know, the above one-liner actually counts the number of
>> frames in an MPEG2 file. I want to know this number for a number of
>> files for various reasons. I don't want it to take forever.
>
> Don't you risk getting more "frames" than the file actually has? What
> if the encoded data happens to have the magic byte values for something
> else?

I am not too sure about the details, but I've been told by a reliable
source that 0x00000100 only occurs as a "begin frame" marker, and not
anywhere else. So far, it has been true on the files I have tried it on.

/David
Nov 1 '05 #75
Bengt Richter wrote:
> Good point, but perhaps the bit pattern the OP is looking for is guaranteed
> (e.g. by some kind of HDLC-like bit or byte stuffing or escaping) not to occur
> except as frame marker (which might make sense re the problem of re-synching
> to frames in a glitched video stream).

Exactly.

> The OP probably knows. I imagine this thread would have gone differently if the
> title had been "How to count frames in an MPEG2 file?" and the OP had supplied
> the info about what marks a frame and whether it is guaranteed not to occur
> in the data ;-)

Sure, but I wanted to ask the general question :) I am new to Python and
I want to learn about the language.

/David
Nov 1 '05 #76
Steven D'Aprano wrote:
> However, there may be a simpler solution *fingers crossed* -- you are
> searching for a sub-string "\x00\x00\x01\x00", which is hex 0x100.
> Surely you don't want any old substring of "\x00\x00\x01\x00", but only
> the ones which align on word boundaries?

Nope, sorry. On the files I have tried this on, the pattern could occur
on any byte boundary.

/David
Nov 1 '05 #77
David Rasmussen wrote:
> Lasse Vågsæther Karlsen wrote:
>> David Rasmussen wrote:
>>> <snip>
>>> If you must know, the above one-liner actually counts the number of
>>> frames in an MPEG2 file. I want to know this number for a number of
>>> files for various reasons. I don't want it to take forever.
>>
>> Don't you risk getting more "frames" than the file actually has? What
>> if the encoded data happens to have the magic byte values for
>> something else?
>
> I am not too sure about the details, but I've been told by a reliable
> source that 0x00000100 only occurs as a "begin frame" marker, and not
> anywhere else. So far, it has been true on the files I have tried it on.

Not too reliable then.

0x00000100 is one of a number of unique start codes in
the MPEG2 standard. It is guaranteed to be unique in
the video stream, however when searching for codes
within the video stream, make sure you're in the video
stream!

See, for example,
http://forum.doom9.org/archive/index.php/t-29262.html

"Actually, one easy way (DVD specific) is to look for
00 00 01 e0 at byte offset 00e of the pack. Then look
at byte 016, it contains the size of the extension.
Resume your scan at 017 + contents of 016."

Right. Glad that's the easy way.

I really suspect that you need a proper MPEG2 parser,
and not just blindly counting bytes -- at least if you
want reliable, accurate counts and not just "number of
frames, plus some file-specific random number". And
heaven help you if you want to support MPEGs that are
slightly broken...

(It has to be said, depending on your ultimate needs,
"close enough" may very well be, um, close enough.)

Good luck!

--
Steven.

Nov 2 '05 #78
On Tue, 01 Nov 2005 07:14:57 -0600, Paul Watson <pw*****@redlinepy.com> wrote:
> Paul Rubin wrote:
>> jj*@pobox.com (John J. Lee) writes:
>>> Closing off this particular one would make it harder to get the benefit
>>> of non-C implementations of Python, so it has been judged "not worth it".
>>> I think I agree with that judgement.
>>
>> The right fix is PEP 343.
>
> I am sure you are right. However, PEP 343 will not change the existing
> body of Python source code. Nor will it, alone, change the existing
> body of Python programmers who are writing code which does not close files.


It might be possible to recompile existing code (unchanged) to capture most
typical cpython use cases, I think...

E.g., I can imagine a family of command line options based on hooking import on
startup and passing option info to the selected and hooked import module,
which module would do extra things at the AST stage of compiling and executing modules
during import, to accomplish various things.

(I did a little proof of concept a while back, see

http://mail.python.org/pipermail/pyt...st/296594.html

that gives me the feeling I could do this kind of thing).

E.g., for the purposes of guaranteeing close() on files opened in typical cpython
one-liners (or single-suiters), like e.g.

for i, line in enumerate(open(fpath)):
    print '%04d: %s' % (i, line.rstrip())

I think a custom import could recognize the open call
in the AST and extract it and wrap it up in a try/finally AST structure implementing
something like the following in place of the above:

__f = open(fpath) # (suitable algorithm for non-colliding __f names is required)
try:
    for i, line in enumerate(__f):
        print '%04d: %s' % (i, line.rstrip())
finally:
    __f.close()

In this case, the command line info passed to the special import might look like
python -with open script.py

meaning calls of open in a statement/suite should be recognized and extracted like
__f = open(fpath) above, and the try/finally be wrapped around the use of it.

I think this would capture a lot of typical usage, but of course I haven't bumped into
the gotchas yet, since I haven't implemented it ;-)

On a related note, I think one could implement macros of a sort in a similar way.
The command line parameter would pass the name of a class which is actually extracted
at AST-time, and whose methods and other class variables represent macro definitions
to be used in the processing of the rest of the module's AST, before compilation per se.

Thus you could implement e.g. in-lining, so that

----
#example.py
class inline:
    def mac(acc, x, y):
        acc += x*y

tot = 0
for i in xrange(10):
    mac(tot, i, i)
----

Could be run with

python -macros inline example.py

and get the same identical .pyc as you would with the source

----
#example.py
tot = 0
for i in xrange(10):
    tot += i*i
----

IOW, a copy of the macro body AST is substituted for the macro call AST, with
parameter names translated to actual macro call arg names. (Another variant
would also permit putting the macros in a separate module, and recognize their
import into other modules, and "do the right thing" instead of just translating
the import. Maybe specify the module by python -macromodule inline example.py
and then recognize "import inline" in example.py's AST).

Again, I just have a hunch I could make this work (and a number of people
here could beat me to it if they were motivated, I'm sure). Also have a hunch
I might need some flame shielding. ;-)

OTOH, it could be an easy way to experiment with some kinds of language
tweaks. The only limitation really is the necessity for the source to
look legal enough that an AST is formed and preserves the requisite info.
After that, there's no limit to what an AST-munger could do, especially
if it is allowed to call arbitrary tools and create auxiliary files such
as e.g. .dlls for synthesized imports plugging stuff into the final translated context ;-)
(I imagine this is essentially what the various machine code generating optimizers do).

IMO the concept of modules and their (optionally specially controlled) translation
and use could evolve in many interesting directions. E.g., __import__ could grow
keyword parameters too ... Good thing there is a BDFL with a veto, eh? ;-)

Should I bother trying to implement this import for with and macros from
the pieces I have (plus imp, to do it "right") ?

BTW, I haven't experimented with command line dependent site.py/sitecustomize.py stuff.
Would that be a place to do sessionwise import hooking and could one rewrite sys.argv
so the special import command line opts would not be visible to subsequent
processing (and the import hook would be in effect)? IWT so, but probably should read
site.py again and figure it out, but appreciate any hints on pitfalls ;-)

Regards,
Bengt Richter
Nov 2 '05 #79
Steven D'Aprano wrote:
> 0x00000100 is one of a number of unique start codes in the MPEG2
> standard. It is guaranteed to be unique in the video stream, however
> when searching for codes within the video stream, make sure you're in
> the video stream!

I know I am in the cases I am interested in.

> And heaven help you if you want to support MPEGs that are slightly
> broken...

I don't. This tool is for use in-house only. And on MPEGs that are
generated in-house too.

/David
Nov 2 '05 #80
