Bytes IT Community

C# Fork Help

Now before I go into detail I just want to say that this is purely for
my own benefit and has no real world usage. I remember way back when
the tool for *nix systems called forkbomb was created. I recently
explained it to a friend of mine and at that time decided to see if I
could replicate it in C#. I have created an application that, to me at
least, mimics what fork would/should do on *nix/bsd systems. I know
that fork spawns a new process, basically identical to the parent
process. For C# this was rather easy, at least getting a new process
started. What I have is this.

while (rc >= 0)
{
    rc = fork();
    if (rc == 0) break;
}

static int fork()
{
    // requires: using System.Diagnostics;
    int pid = Process.GetCurrentProcess().Id;
    int val = 0;

    try
    {
        Console.WriteLine("Pid {0}", pid);

        // strip the ".vshost" suffix added by the Visual Studio hosting process
        string fileName =
            Process.GetCurrentProcess().MainModule.FileName.Replace(".vshost", "");

        ProcessStartInfo info = new ProcessStartInfo(fileName);
        info.UseShellExecute = false;

        Process processChild = Process.Start(info);

        val = 1;
    }
    catch
    {
        val = 0;
    }

    return val;
}

Now when I run this, all hell breaks loose and my system will get to
around 1000 processes in roughly 20 seconds and then crash. Is this
essentially what fork() would do in the above mentioned environments?
Or what would be a better solution to this?

Thanks in advance

Joe

Apr 9 '07 #1
5 Replies


On Sun, 08 Apr 2007 20:12:00 -0700, JoeW <te********@gmail.com> wrote:
[...]
Now when I run this, all hell breaks loose and my system will get to
around 1000 processes in roughly 20 seconds and then crash. Is this
essentially what fork() would do in the above mentioned environments?
Sort of. That is, it's been a very long time since I did any Unix
programming, but it's my understanding that Unix's idea of a process and a
thread is much more lightweight than those concepts in Windows. As such,
you can probably get quite a bit further growing the number of processes,
and possibly get far enough that the number of processes stabilizes.
Still, eventually even on a Unix computer it's possible that you'd run out
of resources and at a minimum, not be able to start any more processes (at
least not until the earlier ones have had a chance to finish running and
exit).

For what it's worth, I think your program would be more "bomb-like" :) if
each process created two more processes, rather than just one. ;)

I don't believe that Windows should actually crash when you reach the
maximum number of processes, but I can easily believe that it does. It's
been my experience that Windows itself (at least the NT-based line, which
includes XP and Vista) is generally pretty stable, but vulnerable to
poorly-written third-party components. In particular, drivers can very
easily send the OS into la-la land and cause it to crash. IMHO, an
interesting experiment would be to boot Windows into safe mode and run
your program and see what happens there. (Of course, it's always possible
that it is in fact Windows itself that's vulnerable to this condition...I
wouldn't rule anything out without testing it myself).
Or what would be a better solution to this?
Define "better solution". What are you trying to accomplish? Wantonly
creating thousands of new processes without limit isn't much of a
"solution" to anything, except overloading and possibly crashing your
computer. So I'd say that your current code does that pretty well. :)

If you have a different objective, then you should explain what that
objective is, and then perhaps someone can comment on whether there's a
"better solution" for that objective or not.

Pete
Apr 9 '07 #2

"JoeW" <te********@gmail.comwrote in message
news:11**********************@p77g2000hsh.googlegr oups.com...
Now before I go into detail I just want to say that this is purely for
my own benefit and has no real world usage. I remember way back when
the tool for *nix systems called forkbomb was created. I recently
explained it to a friend of mine and at that time decided to see if I
could replicate it in C#. I have created an application that, to me at
least, mimics what fork would/should do on *nix/bsd systems. I know
that fork spawns a new process, basically identical to the parent
process. For C# this was rather easy, at least getting a new process
started. What I have is this.

while (rc >= 0)
{
rc = fork();
if (rc == 0) break;
}

static int fork()
{
int pid = Process.GetCurrentProcess().Id;
int val = 0;

try
{
Console.WriteLine("Pid {0}", pid);

string fileName =
Process.GetCurrentProcess().MainModule.FileName.Re place(".vshost",
"");

ProcessStartInfo info = new ProcessStartInfo(fileName);

info.UseShellExecute = false;

Process processChild = Process.Start(info);

val = 1;
}
catch
{
val = 0;
}

return val;
}

Now when I run this, all hell breaks loose and my system will get to
around 1000 processes in roughly 20 seconds and then crash. Is this
essentially what fork() would do in the above mentioned environments?
Or what would be a better solution to this?

Thanks in advance

Joe


Actually, your system won't get to 1000 running processes. The system reserves a process control structure, but Process.Start returns before the process is actually loaded and initialized, so the PIDs you see don't reflect running processes.

The chaotic behavior is the result of resource exhaustion: Windows cannot create that many processes, especially not that many "CLR" processes. Each CLR process starts with 3 threads, each with a default committed stack of 1MB; add to that ~5MB of non-shared memory for the program (CLR version dependent), and you easily end up with a minimum of ~8MB of virtual memory per process.

Now, say you have 2GB of RAM with a total of 4GB of VM available. That means the system starts thrashing before even 250 processes get created (250 x ~8MB is already your 2GB of physical RAM) and becomes highly unresponsive. When the system reaches its maximum VM occupation, "hell breaks loose". The OS tries to keep control, but it can no longer create processes; worse, it can no longer "initialize" the already-created ones. Instead it starts recovery actions, displaying dialogs with all kinds of failure messages and possibly launching DrWatson instances (yet more processes), which in turn require more resources such as window handles and non-pageable pool memory for the USER objects behind the dialogs. But as I said, your system has become highly unresponsive: you can hardly act on the dialog requests, and even if you can click a button and thereby free a resource, the system will immediately reassign it to another request. You finally end up with a system that has so many user requests pending that it has exhausted its non-pageable pool and USER objects. You get the "false" impression that the system crashed, but it has not (at least not on XP and higher).

If you can manage to respond to all pending requests, it should be possible to bring the system back to a state where you can kill the running processes and finally return to normal.

Note that the above is true for 64-bit systems as well; at some point you will hit a wall, whether the RAM limit, the VM limit, the non-pageable pool and USER object limits, or something else. Even with 8GB RAM and 12GB of VM space, I wasn't able to create 1000 processes (on W2K3 64-bit) using the sample you posted, although I was able to recover.

Willy.

Apr 9 '07 #3

On Apr 9, 1:06 am, "Peter Duniho" <NpOeStPe...@nnowslpianmk.com> wrote:
[...]
Well by "better solution" I mean more efficient, as of now it only
creates a single new process each time. To me this is a very simple
recursive solution but I want something like O(2^n), you know 1 makes
2, 2 makes 4, etc. Again I know you guys probably have better things
to do than help me 'forkbomb" my system but its all for the sake of
learning and I thank you for the help.

Apr 9 '07 #4

On Mon, 09 Apr 2007 07:30:03 -0700, JoeW <te********@gmail.com> wrote:
[...]
Well, if you find that 20 seconds is too long a time to wait for your
computer to become unusable, then as I mentioned you could simply start
two processes in your code instead of one. Though, even with that change
I think that you will find that the program has no more practical value
than the way it is now. :)

Note that the "O" notation can't be used to describe what you want. Even
adding a second call to start a second process in your code, the code
itself is still basically constant order. This isn't important here, but
if and when you start using "O" notation to describe actual algorithms, it
will be important for you to understand how it's used and you've used it
incorrectly here.

Pete
Apr 9 '07 #5

JoeW wrote:
>[...] Is this essentially what fork() would do in the above mentioned environments?
If I remember correctly, fork starts a new process on Unix systems that
has the same memory content as its parent process, and the application
continues in both processes by returning from the fork call. In
contrast, on Windows a newly started process begins at its main entry
function and has not "copied", or better said mapped, the memory of its
parent process.

To really simulate fork() you would have to dig somewhat deeper (memory
handling etc.). In managed applications it's somewhat easier to share
memory between assembly instances, though it's still not the same as
what fork accomplishes.

>Or what would be a better solution to this?
On Windows you IMHO should use threads. They share the same memory, so
they are somewhat comparable to fork, except that they continue to
share memory, IIRC, in contrast to fork. And to simulate fork returning
in two threads you would also have to dig very deep into Windows
internals, meaning you would have to start a new "child thread" with
the same stack as its parent thread.

To make a long story short: IMHO fork cannot be simulated *easily* on
Windows. Use threading on both operating systems.
Hope it helps.

Andre
Apr 10 '07 #6

This discussion thread is closed; replies have been disabled.