hi,
what i want to achieve:
i have a cgi script that writes an entry to a text file,
like a log entry (when it was invoked, when its work ended).
it's one line of text.
the problem is:
what happens if 2 users invoke the cgi at the same time?
and it will happen, because i am now trying to stress-test it, so i will
start 5-10 requests in parallel and so on.
so, how does one synchronize several processes in python?
first idea was that the cgi will create a new temp file every time,
and at the end of the stress-test, i'll collect the content of all those
files. but that seems like a stupid way to do it :(
another idea was to use a simple database (sqlite?) which probably has
this problem solved already...
any better ideas?
thanks,
gabor
gabor <ga***@nekomancer.net> writes: so, how does one synchronize several processes in python?
first idea was that the cgi will create a new temp file every time, and at the end of the stress-test, i'll collect the content of all those files. but that seems as a stupid way to do it :(
There was a thread about this recently ("low-end persistence
strategies") and for Unix the simplest answer seems to be the
fcntl.flock function. For Windows I don't know the answer.
Maybe os.open with O_EXCL works.
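The O_EXCL idea can be sketched roughly like this: use a separate lock file created with os.O_CREAT | os.O_EXCL as a crude cross-process mutex (the file names and retry parameters here are illustrative, not anything from the thread):

```python
import errno
import os
import time

LOCKFILE = "app.log.lock"  # illustrative lock-file name

def append_line(path, line, retries=50):
    """Append one line to path, guarded by an O_EXCL lock file."""
    for _ in range(retries):
        try:
            # O_CREAT | O_EXCL fails with EEXIST if the lock file
            # already exists, i.e. another process holds the lock.
            fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            time.sleep(0.01)  # lock held elsewhere; back off and retry
    else:
        raise RuntimeError("could not acquire lock")
    try:
        with open(path, "a") as f:
            f.write(line + "\n")
    finally:
        os.close(fd)
        os.unlink(LOCKFILE)  # release the lock

append_line("app.log", "request handled")
```

One caveat: if a process dies between acquiring and releasing, the stale lock file has to be cleaned up by hand, which is one reason flock-style advisory locks (released automatically on process exit) are usually preferred where available.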
gabor <ga***@nekomancer.net> wrote: so, how does one synchronizes several processes in python?
This is a very hard problem to solve in the general case, and the answer
depends more on the operating system you're running on than on the
programming language you're using.
On the other hand, you said that each process will be writing a single line
of output at a time. If you call flush() after each message is written,
that should be enough to ensure that each line gets written in a single
write system call, which in turn should be good enough to ensure that
individual lines of output are not scrambled in the log file.
If you want to do better than that, you need to delve into OS-specific
things like the flock function in the fcntl module on unix.
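On Unix, the fcntl.flock approach looks roughly like this (the file name is made up for the example):

```python
import fcntl

def log_line(path, line):
    """Append one line under an exclusive advisory lock (Unix only)."""
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

log_line("cgi.log", "invoked")
log_line("cgi.log", "work finished")
```

Because the lock is released automatically when the process exits, a crashed CGI process cannot leave the log permanently locked.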
Roy Smith wrote: gabor <ga***@nekomancer.net> wrote: On the other hand, you said that each process will be writing a single line of output at a time. If you call flush() after each message is written, that should be enough to ensure that each line gets written in a single write system call, which in turn should be good enough to ensure that individual lines of output are not scrambled in the log file.
Unfortunately this assumes that the open() call will always succeed,
when in fact it is likely to fail sometimes when another process has
already opened the file but not yet finished writing to it, AFAIK.
If you want to do better than that, you need to delve into OS-specific things like the flock function in the fcntl module on unix.
The OP was probably on the right track when he suggested that things
like SQLite (conveniently wrapped with PySQLite) had already solved this
problem.
-Peter
Peter Hansen <pe***@engcorp.com> writes: The OP was probably on the right track when he suggested that things like SQLite (conveniently wrapped with PySQLite) had already solved this problem.
But they haven't. They depend on messy things like server processes
constantly running, which goes against the idea of a cgi that only
runs when someone calls it.
Peter Hansen <pe***@engcorp.com> wrote: The OP was probably on the right track when he suggested that things like SQLite (conveniently wrapped with PySQLite) had already solved this problem.
Perhaps, but a relational database seems like a pretty heavy-weight
solution for a log file.
On Fri, May 27, 2005 at 09:27:38AM -0400, Roy Smith wrote: Peter Hansen <pe***@engcorp.com> wrote: The OP was probably on the right track when he suggested that things like SQLite (conveniently wrapped with PySQLite) had already solved this problem. Perhaps, but a relational database seems like a pretty heavy-weight solution for a log file.
On the other hand, it works ;-)
-- Gerhard
--
Gerhard Häring - gh@ghaering.de - Python, web & database development
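For what it's worth, logging through SQLite can be sketched with the stdlib sqlite3 module (the table and column names here are invented for the example); SQLite serializes writers itself, so concurrent CGI processes just need to retry while the database is locked:

```python
import sqlite3
import time

def log_event(db_path, message, retries=10):
    """Insert one log row; SQLite's own file locking handles
    concurrent writer processes for us."""
    for _ in range(retries):
        try:
            conn = sqlite3.connect(db_path, timeout=5.0)
            with conn:  # commits on success, rolls back on error
                conn.execute(
                    "CREATE TABLE IF NOT EXISTS log (ts REAL, message TEXT)")
                conn.execute("INSERT INTO log VALUES (?, ?)",
                             (time.time(), message))
            conn.close()
            return
        except sqlite3.OperationalError:
            time.sleep(0.1)  # database locked by another writer; retry
    raise RuntimeError("could not write log entry")

log_event("cgi_log.db", "request started")
log_event("cgi_log.db", "request finished")
```

No server process is needed: SQLite is an embedded library, and each CGI invocation simply opens the database file.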
Sorry, why is the temp file solution 'stupid'? (not
aesthetic-pythonistic?) - it looks OK: simple and direct, and
certainly less 'heavy' than any db stuff (even embedded).
And collating in a 'official log file' can be done periodically by
another process, on a time-scale that is 'useful' if not
instantaneous...
Just trying to understand here...
JMD
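The scheme jean-marc describes can be sketched as: each request writes its own uniquely named file via tempfile (which guarantees unique names without any locking), and a separate collator merges the fragments later. Directory and prefix names here are illustrative:

```python
import glob
import os
import tempfile

LOG_DIR = "loglines"  # illustrative fragment directory

def write_entry(line):
    """Each CGI invocation writes one uniquely named file; no locking needed."""
    os.makedirs(LOG_DIR, exist_ok=True)
    fd, _path = tempfile.mkstemp(prefix="entry-", dir=LOG_DIR)
    os.write(fd, (line + "\n").encode())
    os.close(fd)

def collate(logfile):
    """Run later by a single process: merge all fragments into one log."""
    with open(logfile, "a") as out:
        for name in sorted(glob.glob(os.path.join(LOG_DIR, "entry-*"))):
            with open(name) as f:
                out.write(f.read())
            os.remove(name)

write_entry("request 1 done")
write_entry("request 2 done")
collate("combined.log")
```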
On 2005-05-27, Peter Hansen <pe***@engcorp.com> wrote: Roy Smith wrote: gabor <ga***@nekomancer.net> wrote: On the other hand, you said that each process will be writing a single line of output at a time. If you call flush() after each message is written, that should be enough to ensure that each line gets written in a single write system call, which in turn should be good enough to ensure that individual lines of output are not scrambled in the log file.
Unfortunately this assumes that the open() call will always succeed, when in fact it is likely to fail sometimes when another process has already opened the file but not yet finished writing to it, AFAIK.
Not in my experience. At least under Unix, it's perfectly OK
to open a file while somebody else is writing to it. Perhaps
Windows can't deal with that situation?
--
Grant Edwards                         grante at visi.com
Yow! FOOLED you! Absorb EGO SHATTERING impulse rays, polyester poltroon!!
Grant Edwards wrote: On 2005-05-27, Peter Hansen <pe***@engcorp.com> wrote: Unfortunately this assumes that the open() call will always succeed, when in fact it is likely to fail sometimes when another process has already opened the file but not yet finished writing to it, AFAIK.
Not in my experience. At least under Unix, it's perfectly OK to open a file while somebody else is writing to it. Perhaps Windows can't deal with that situation?
Hmm... just tried it: you're right! On the other hand, the results were
unacceptable: each process has a separate file pointer, so it appears
whichever one writes first will have its output overwritten by the
second process.
Change the details, but the heart of my objection is the same.
-Peter
On 05/27/2005-06:02PM, Peter Hansen wrote: Hmm... just tried it: you're right! On the other hand, the results were unacceptable: each process has a separate file pointer, so it appears whichever one writes first will have its output overwritten by the second process.
Did you open the files for 'append' ?
Peter Hansen wrote: Grant Edwards wrote: Not in my experience. At least under Unix, it's perfectly OK to open a file while somebody else is writing to it. Perhaps Windows can't deal with that situation?
Hmm... just tried it: you're right!
Umm... the part you were right about was NOT the possibility that
Windows can't deal with the situation, but the suggestion that it might
actually be able to (since apparently it can). Sorry to confuse.
-Peter
Christopher Weimann wrote: On 05/27/2005-06:02PM, Peter Hansen wrote:
Hmm... just tried it: you're right! On the other hand, the results were unacceptable: each process has a separate file pointer, so it appears whichever one writes first will have its output overwritten by the second process.
Did you open the files for 'append' ?
Nope. I suppose that would be a rational thing to do for log files,
wouldn't it? I wonder what happens when one does that...
-Peter
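What happens with append mode is easy to check. With "a" (O_APPEND underneath) the kernel moves the file offset to end-of-file on every write, so the two file pointers no longer clobber each other. A small single-process demonstration:

```python
# Two independent file objects on the same file, both in append mode.
# Each write() lands at the current end of file, so neither overwrites
# the other, unlike two writers opened in "w" or "r+" mode.
f1 = open("demo.log", "a")
f2 = open("demo.log", "a")

f1.write("first writer\n")
f1.flush()
f2.write("second writer\n")   # appended after the first line, not at offset 0
f2.flush()

f1.close()
f2.close()

print(open("demo.log").read())
```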
gabor wrote: the problem is: what happens if 2 users invoke the cgi at the same time?
Would BerkeleyDB support that?
Hi!
On Windows, with PyWin32, here is a little sample:

import time
import win32file, win32con, pywintypes

def flock(file):
    hfile = win32file._get_osfhandle(file.fileno())
    win32file.LockFileEx(hfile, win32con.LOCKFILE_EXCLUSIVE_LOCK,
                         0, 0xffff, pywintypes.OVERLAPPED())

def funlock(file):
    hfile = win32file._get_osfhandle(file.fileno())
    win32file.UnlockFileEx(hfile, 0, 0xffff, pywintypes.OVERLAPPED())

file = open("FLock.txt", "r+")
flock(file)
file.seek(123)
for i in range(500):
    file.write("AAAAAAAAAA")
    print i
    time.sleep(0.001)
#funlock(file)
file.close()
Michel Claveau
Well, I just tried it on Linux anyway. I opened the file in two Python
processes using append mode.
I then wrote a simple function that writes and then flushes whatever it
is passed (foo being the file opened in append mode):

foo = open("myfile.txt", "a")

def write(msg):
    foo.write("%s\n" % msg)
    foo.flush()
I then opened another terminal and did 'tail -f myfile.txt'.
It worked just fine.
Maybe that will help. Seems simple enough to me for basic logging.
Cheers,
Bill
jean-marc wrote: Sorry, why is the temp file solution 'stupid'?, (not aesthetic-pythonistic???) - it looks OK: simple and direct, and certainly less 'heavy' than any db stuff (even embedded)
And collating in a 'official log file' can be done periodically by another process, on a time-scale that is 'useful' if not instantaneous...
Just trying to understand here...
actually this is what i implemented after asking the question, and it
works fine :)
i just thought that maybe there is a solution where i don't have to deal
with 4000 files in the temp folder :)
gabor
Isn't a write to a file that's opened in append mode atomic on most
operating systems? At least on modern Unix systems; man open(2) should
give more information about this.
Like:
f = file("filename", "a")
f.write(line)
f.flush()
as long as the line fits into the stdio buffer. Otherwise os.write can be used.
As this depends on the OS support for append, it is not portable. But
neither is locking. And I am not sure if it works for NFS-mounted files.
--
Piet van Oostrum <pi**@cs.uu.nl>
URL: http://www.cs.uu.nl/~piet [PGP]
Private email: pi**@vanoostrum.org
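Piet's os.write suggestion can be sketched as: open the file with O_APPEND and issue the whole line in a single write() system call, which is atomic for reasonably sized writes on local Unix filesystems (not guaranteed over NFS):

```python
import os

def append_atomic(path, line):
    """One os.write() call on an O_APPEND descriptor: the kernel moves
    the offset to end-of-file and writes the whole buffer as a single
    operation, so concurrent writers cannot interleave within a line."""
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, (line + "\n").encode())
    finally:
        os.close(fd)

append_atomic("atomic.log", "pid %d was here" % os.getpid())
```

This sidesteps stdio buffering entirely, so there is no risk of a flush splitting one logical line across two system calls.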
Roy Smith wrote: Peter Hansen <pe***@engcorp.com> wrote:
The OP was probably on the right track when he suggested that things like SQLite (conveniently wrapped with PySQLite) had already solved this problem.
Perhaps, but a relational database seems like a pretty heavy-weight solution for a log file.
Excel seems like a pretty heavyweight solution for most of the
applications it's used for, too. Most people are interested in solving a
problem and moving on, and while this may lead to bloatware it can also
lead to the inclusion of functionality that can be hugely useful in
other areas of the application.
regards
Steve
--
Steve Holden +1 703 861 4237 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/
Python Web Programming http://pydish.holdenweb.com/