I'm not sure whether this question belongs entirely here or in a Perl
group--but it probably requires knowledge of both.
I've written a Perl module, currently in use, which does asynchronous
searches of library databases anywhere in the world. It forks off a
separate process for each database it contacts. In the past I've had
it search successfully through more than 1000 databases, reporting
back one record from each.
The processes are forked out 5 at a time, every 25 seconds. The
forked processes are controlled by Event timers, so that they time
out after 20 seconds if they can't connect to the server they are
supposed to query. But if they do connect, they are left in play.
This means that a backlog of forked processes can build up while
waiting to get the records they've requested, especially when
querying large numbers of databases.
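For concreteness, the batching scheme above can be sketched roughly
as follows. This is a minimal stand-in, not the module itself:
query_database is a hypothetical worker routine, and a plain alarm()
is used where the real code uses Event timers.

```perl
use strict;
use warnings;
use POSIX ":sys_wait_h";

# Fork one child per database, at most $batch_size at a time, pausing
# $batch_delay seconds between batches.  Each child is killed by
# SIGALRM if it has not finished within $timeout seconds.
sub run_batches {
    my ($databases, $query, $batch_size, $batch_delay, $timeout) = @_;
    my @pending = @$databases;
    while (@pending) {
        my @batch = splice @pending, 0, $batch_size;
        for my $db (@batch) {
            my $pid = fork();
            die "fork failed: $!" unless defined $pid;
            if ($pid == 0) {          # child
                alarm $timeout;       # give up if we can't connect
                $query->($db);        # e.g. query_database($db)
                exit 0;
            }
        }
        # Reap any children that have already finished, without blocking.
        1 while waitpid(-1, WNOHANG) > 0;
        sleep $batch_delay if @pending;
    }
    # Wait for the stragglers.
    1 while waitpid(-1, 0) > 0;
}
```

In the real module the children stay "in play" once connected, which
is exactly why the non-blocking reap above can fall behind the rate
at which new batches are forked.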
Each record averages about 1k. There is, in addition, about 5k of
overhead for each forked process. Recently, I've wanted to use this
module to again query upwards of 1000 databases but to bring back
between 10 and 25 records each. So, we now have as much as 30k
devoted to each forked process. The result was a great deal of disk
thrashing and repeated reports to the terminal from the operating
system that it was out of memory and had killed one of the forked
processes. During this time the terminal was essentially locked and
wouldn't respond to the keyboard, so there was nothing I could do but
wait or reboot. The disk thrashing, I assume, was a sign of memory
swapping.
I tried to solve the problem by running a copy of top from the main
program before forking off each batch of 5 processes. I examined the
output from top to determine whether the main process had gone above
60% of memory capacity. If so, I imposed a 5 minute timeout period.
This worked like a charm. But when I looked at what was really
happening, it turned out that the sleep period never had to be set.
Memory never exceeded about 5% of the total memory resources.
So, I thought, maybe it just needs the extra time between each batch
of 5 forks. Instead of using the call to top, I implemented a 60
second timeout between each batch of 5 forks. This was better than
nothing, but there was nevertheless a significant memory drain and
eventually the terminal froze. By using the ps command I could see
that there was a huge backlog of forked processes in memory--this was
not the case when I used top. And the call to top took only about 6
seconds.
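A quick way to watch that backlog from inside the main program is to
count our own children with ps. This is a sketch assuming a procps
ps that supports --ppid (Linux); child_count is a hypothetical
helper, not part of the module.

```perl
use strict;
use warnings;

# Count how many child processes ps still shows for us -- a rough
# gauge of the backlog of forked searchers.  Assumes a procps ps
# with the --ppid option.
sub child_count {
    my @pids = `ps -o pid= --ppid $$`;
    return scalar grep { /\d/ } @pids;
}
```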
So, it seems that there is something soothing to the operating
system in running the external program--top--which has nothing to do
with giving the system more time to process the forks.
Myron Turner
www.room535.org