
Linux, Perl, and Memory problem

I'm not sure whether this question belongs entirely here or in a Perl
group, but it probably requires knowledge of both.

I've written a Perl module, currently in use, which does asynchronous
searches of library databases anywhere in the world. It forks off a
separate process for each database it contacts. In the past
I've had it search successfully through more than 1000 databases,
reporting back one record from each.

The processes are forked off 5 at a time, every 25 seconds. The
forked processes are controlled by Event timers, so that they time
out after 20 seconds if they can't connect to the server they are
supposed to query. But if they do connect, they are left in play.
This means that a backlog of forked processes can build up while
waiting to get the records they've requested, especially when querying
large numbers of databases.
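
In outline, the forking loop works like this (a sketch only:
load_database_list, connect_to, and fetch_records are stand-ins for the
module's real functions, and a plain SIGALRM timeout stands in for the
Event timers):

use strict;
use warnings;

my @databases = load_database_list();   # stand-in: however the target list is read
while (my @batch = splice(@databases, 0, 5)) {
    for my $db (@batch) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {                # child: query one database
            local $SIG{ALRM} = sub { exit 1 };
            alarm 20;                   # give up if no connection within 20 seconds
            my $conn = connect_to($db); # stand-in for the real connect
            alarm 0;                    # connected: cancel the timeout, stay in play
            fetch_records($conn);       # stand-in for the real retrieval
            exit 0;
        }
    }
    sleep 25;                           # next batch of 5 in 25 seconds
}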

Each record averages about 1k. There is, in addition, about 5k of
overhead for each forked process. Recently, I've wanted to use this
module to again query upwards of 1000 databases, but to bring back
between 10 and 25 records each. So we now have as much as 30k devoted to
each forked process. The result was a great deal of disk thrashing
and repeated reports to the terminal from the operating system that
it was out of memory and had killed one of the forked processes.
During this time the terminal was essentially locked and wouldn't
respond to the keyboard, so there was nothing I could do but wait or
reboot. The disk thrashing, I assume, was a sign of memory swapping.

I tried to solve the problem by running a copy of top from the main
program before forking off each batch of 5 processes. I
examined the output from top to determine whether the main process
had gone above 60% of memory capacity. If so, I implemented a 5-minute
timeout period. This worked like a charm. But when I looked at what
was really happening, it turned out that the sleep period never had to
be set: memory never exceeded about 5% of the total memory resources.
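
The check itself was simple; roughly (a sketch only, since top's column
layout varies between versions and the real parsing code may differ):

# Assumed approach: one batch-mode snapshot of top, scraped for this
# process's %MEM column (the column layout is an assumption here).
sub memory_pressure_high {
    my @out = `top -b -n 1 -p $$`;      # -b: batch mode, -n 1: one iteration
    for my $line (@out) {
        my @f = split ' ', $line;
        next unless @f > 9 && $f[0] eq $$;
        return $f[9] > 60;              # %MEM field in the assumed layout
    }
    return 0;                           # couldn't parse: assume we're fine
}

sleep 300 if memory_pressure_high();    # the 5-minute timeout described above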

So, I thought, maybe it just needs the extra time between each batch
of 5 forks. Instead of using the call to top, I implemented a 60-second
timeout between each batch. This was better than nothing, but there was
nevertheless a significant memory drain, and eventually the terminal
froze. By using the ps command I could see that there was a huge
backlog of forked processes in memory; this was not the case when I
used top. And the call to top took only 6 seconds.
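
The backlog can also be counted from inside the program rather than
from a second terminal, for example (a sketch using standard ps
options):

# Count this process's live children by scraping ps (pid/ppid pairs).
my @kids = grep { (split ' ')[1] == $$ } `ps -eo pid=,ppid=`;
warn "Parent $$ has ", scalar @kids, " forked processes outstanding\n";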

So, it seems there is something soothing to the operating system in
running the external program, top, which has nothing to do with giving
the system more time to process the forks.

Myron Turner
www.room535.org
Jul 19 '05 #1
On Mon, 22 Mar 2004 16:11:10 GMT, mt*****@ms.umanitoba.ca (Myron
Turner) wrote:
Sorry, I left off the last point, which is that I'd like to know what
is happening here so that I can address the problem without just
blindly inserting a call to Linux's top command between each batch of
5 forked processes.

Myron Turner
www.room535.org
Jul 19 '05 #2
Myron Turner wrote:
By using the ps command I could see
that there was a huge backlog of forked processes in memory


You've clearly got a logic-flow error; too many processes are being
created all at once. Fix that first.

Try outputting debugging messages before and after each fork.

warn "Process $$ about to fork\n";
$child_pid = fork();
if ($child_pid) {
warn "Parent process $$ created child $child_pid\n";
} else {
warn "New child process $$ created\n";
}

Check the messages going to STDERR to verify that the expected
number of child processes are being created in the proper order.
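
Also, if ps shows children that have already exited, they may be
zombies that were never waited on; a SIGCHLD reaper like this minimal
sketch will clear them as they finish:

use POSIX ':sys_wait_h';

# Reap children that have already exited so they don't linger as
# zombies in the process table (where ps would still show them).
$SIG{CHLD} = sub {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        warn "Reaped child $pid, exit status ", $? >> 8, "\n";
    }
};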
-Joe

P.S. Please post to comp.lang.perl.misc next time.
Jul 19 '05 #3
On Mon, 22 Mar 2004 20:26:38 GMT, Joe Smith <Jo*******@inwap.com>
wrote:
Myron Turner wrote:
By using the ps command I could see
that there was a huge backlog of forked processes in memory


You've clearly got a logic-flow error; too many processes are being
created all at once. Fix that first.

That wasn't the case. I was able to fix the problem by monitoring
disk activity and allowing the user to tailor throughput to his/her
own memory resources.
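
In outline (a sketch of the approach, not the exact code; sampling swap
activity via vmstat is one way to measure the disk activity, and the
threshold is the user-tunable part):

# Assumed throttle: sample swap activity with vmstat and hold off the
# next batch while it is above a user-tunable threshold.
my $max_swap_rate = 100;               # pages/sec; the user-adjustable knob

sub swapping_heavily {
    my @lines = `vmstat 1 2`;          # two samples, one second apart
    my @f = split ' ', $lines[-1];     # last line is the fresh sample
    my ($si, $so) = @f[6, 7];          # si/so: pages swapped in/out per sec
    return ($si + $so) > $max_swap_rate;
}

sleep 60 while swapping_heavily();     # pause between batches until it calms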

Myron Turner
www.room535.org
Jul 19 '05 #4

