File IO question.

Hi All,

I'm hoping that someone might have some pointers or
examples on how to proceed with a solution to the
following problem:

A test application, which produces a trace file, is
being run for very long periods of time. Say 72 hours
or more.

The application is often running on older PCs that
have relatively small hard drives in comparison to
how big the trace file can become.

The trace files tend to accumulate as they're not
always deleted once archived to a network database,
and oftentimes a machine will crash in the middle of
a very long test due to the hard drive filling up.

There are dozens of PCs being used for the tests, so
buying bigger hard drives isn't really feasible. And
I'm guessing the hard drives would fill up eventually
anyway regardless of the size of the hard drive; the
crashes just wouldn't occur quite as often.

The nice thing is that the trace files compress quite
well.

I messed around with the mmap and file object stuff
as well as the win32 extensions, thinking that I could
extract and compress the data that was being written
to the trace file by the application, in chunks.

Although I was able to get it to work on a contrived
setup, it didn't work when used with the real
application.

Any hints on how to get something similar to the above
to work, or recommendations on alternate solutions,
would be *greatly* appreciated.

Thanks,
Joe

Jul 18 '05 #1
You answered your own question:

"I'm guessing the hard drives would fill up eventually
anyway regardless of the size of the hard drive; the
crashes just wouldn't occur quite as often."

If this statement is true, compressing the files won't
help either, as that is effectively the same as having
a larger hard drive.

But, you could limit the size and number of trace files
and periodically throw away the oldest ones to keep the
trace files at a nearly constant total size. I just can't
tell from your description exactly what is getting written
to the trace file that makes it so large (debugging info?).
Maybe you could log only errors/warnings to the trace
file so it grows more slowly?
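
For example, here is a minimal sketch of that idea (the directory,
file pattern and size cap are just placeholders for whatever your
setup actually uses):

import glob
import os

TRACE_DIR = r"C:\traces"                 # placeholder location
PATTERN = "*.trc"                        # placeholder trace file pattern
MAX_TOTAL_BYTES = 500 * 1024 * 1024      # keep at most ~500 MB of traces

def prune_old_traces():
    """Delete the oldest trace files until the total size is under the cap."""
    files = glob.glob(os.path.join(TRACE_DIR, PATTERN))
    files.sort(key=os.path.getmtime)     # oldest first
    total = sum(os.path.getsize(f) for f in files)
    for f in files:
        if total <= MAX_TOTAL_BYTES:
            break
        total -= os.path.getsize(f)
        os.remove(f)

if __name__ == "__main__":
    prune_old_traces()

You could run something like this from a scheduled task, or call
prune_old_traces() from the test harness between runs.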

Larry Bates
Syscon, Inc.

"J Poirier" <oo**@yahoo.com> wrote in message
news:ma*************************************@pytho n.org... Hi All,

I'm hoping that someone might have some pointers or
examples on how to proceed with a solution to the
following problem:

A test application, which produces a trace file, is
being run for very long periods of time. Say 72 hours
or more.

The application is often running on older PCs that
have relatively small hard drives in comparison to
to how big the trace file can become.

The trace files tend to accumulate as they're not
always deleted once archived to a network database,
and often times a machine will crash in the middle of
a
very long test due to the hard drive filling up.

There are dozens of PCs being used for the tests so
buying bigger hard drives isn't really feasible. And
I'm guessing the hard drives would fill up eventually
anyway regardless of the size of the hard drive, the
crashes just wouldn't occur quite as often.

The nice thing is that the trace files compress quite
well.

I messed around with the mmap and file object stuff
as well as the win32 extensions thinking that I could
extract and compress the data that was being written
to
the trace file, by the application, in chuncks.

Although I was able to get it to work on a contrived
setup, it didn't work when used with the real
application.

Any hints on how to get something similar to the above
to work or recommendations on alternate solutions
would
be *greatly* appreciated.

Thanks,
Joe

__________________________________
Do you Yahoo!?
Yahoo! Mail - 50x more storage than other providers!
http://promotions.yahoo.com/new_mail

Jul 18 '05 #2
J Poirier wrote:
A test application, which produces a trace file, is
being run for very long periods of time. Say 72 hours
or more.


If the test application is a Python app, you could always replace
the appropriate
open('name', 'w')
with
bz2.BZ2File('name.bz2', 'w')
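
For what it's worth, a quick sketch of that swap (the file name is
just a placeholder); BZ2File exposes the same write()/close()
interface as a regular file object, so the rest of the tracing code
shouldn't need to change:

import bz2

# Before: trace = open('trace.txt', 'w')
# After:  data is compressed transparently as it is written.
trace = bz2.BZ2File('trace.txt.bz2', 'w')
trace.write(b'test step 1: passed\n')    # BZ2File expects bytes in recent Pythons
trace.close()

One caveat: if the process dies without close() ever being called,
whatever is still buffered at the tail of the compressed stream may
be lost.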
Jul 18 '05 #3
J Poirier <oo**@yahoo.com> wrote in message news:<ma*************************************@python.org>...

Here is my piece of advice:

Assuming that all computers have access to the network at all
times:

1. For your trace files, use one of the logger classes.
2. Restrict the size of the file to some reasonable limit.
3. Name the log files with a combination of the computer name and a
timestamp.
4. As soon as the limit is reached, the logger will notify the client
that it is about to change the file. As a reaction to this callback
you can send the file to some server (see below) and delete the local
copy. (A sketch of this follows the list.)
5. At application startup, send all remaining files to the server and
delete the local copies.
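
A minimal sketch of items 1-4 using the standard logging module.
RotatingFileHandler has no built-in "about to change the file"
callback, so this subclasses it and hooks doRollover; send_to_server
and the size limit here are placeholders:

import logging
import logging.handlers
import os
import socket
import time

def send_to_server(archive_name, path):
    # Placeholder: copy to a network share, or push via XML-RPC (option 2 below).
    pass

class ArchivingHandler(logging.handlers.RotatingFileHandler):
    """Rotate the trace file at a size limit, then ship and delete the rotated copy."""

    def doRollover(self):
        # Let the base class close the current file and rename it to "<name>.1".
        super().doRollover()
        rotated = self.baseFilename + ".1"
        if os.path.exists(rotated):
            # Item 3: name the archive after the computer and a timestamp.
            archive_name = "%s_%s.trc" % (socket.gethostname(),
                                          time.strftime("%Y%m%d_%H%M%S"))
            send_to_server(archive_name, rotated)
            os.remove(rotated)            # free the local disk space

handler = ArchivingHandler("trace.log", maxBytes=10 * 1024 * 1024, backupCount=1)
logger = logging.getLogger("trace")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

Item 5 is then just a loop at startup over any leftover archives,
calling the same send_to_server().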

"Some server":

Depending on how much you want to "invest", you have several options:

1. Use a shared network directory and just copy the file. The drawback:
all computers have access to this shared directory, which can be a
problem in some cases.

2. Use an XML-RPC server/client combination. This means that you run
SimpleXMLRPCServer or something like it on a dedicated machine and
embed the XML-RPC client into your application. You can even do
filtering/compression/decompression on the fly. XML-RPC is
transactional, so to simplify the protocol, put the file name as one
parameter and the file content as another parameter.
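
A bare-bones sketch of that pair. The host name, port and function
name are placeholders, and the module names shown are the current
xmlrpc.server / xmlrpc.client spellings of SimpleXMLRPCServer and
xmlrpclib:

# Server side, running on the dedicated archive machine.
from xmlrpc.server import SimpleXMLRPCServer

def store_trace(filename, payload):
    # payload is an xmlrpc Binary object; .data holds the raw bytes.
    # A real server should sanitize filename before using it as a path.
    with open(filename, "wb") as f:
        f.write(payload.data)
    return True

server = SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(store_trace)
server.serve_forever()

And the client side, embedded in the test application, sending one
file per call with the name and content as the two parameters:

# Client side.
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://archive-server:8000")
with open("somepc_20050718.trc.bz2", "rb") as f:
    proxy.store_trace("somepc_20050718.trc.bz2",
                      xmlrpc.client.Binary(f.read()))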

Jul 18 '05 #4
