I want to preface this reply by noting that it's been a few years since
I've done any significant network code. My working knowledge is limited
these days, and lots of details are fuzzy. You'll get much better advice
in a Winsock- and/or TCP/IP-specific newsgroup for questions like this.
That said, here's what I have to offer...
On Mon, 07 Jan 2008 16:47:07 -0800, Chizl <Ch***@NoShitMail.com> wrote:
> First of all thanks for taking the time to respond.. I have some
> comments below.
> "Peter Duniho" <Np*********@nnowslpianmk.com> wrote in message
> news:op***************@petes-computer.local...
>> My recollection is that the operating system has its own maximum
>> backlog value. Trying to set the backlog above this value will result
>> in either no effect, or setting to the maximum value. In neither case
>> will you get the backlog you're asking for.

> Doesn't that defeat the purpose of the call?
I don't see how. Just because there's a maximum, that doesn't make being
able to set the value a useless operation.
You'll probably find it useful to read the Winsock doc page for the
listen() function:
http://msdn2.microsoft.com/en-us/lib...68(VS.85).aspx
Note the comment near the bottom:

    The backlog parameter is limited (silently) to a reasonable
    value as determined by the underlying service provider. Illegal
    values are replaced by the nearest legal value. There is no
    standard provision to find out the actual backlog value.
Note also that it says that if you request a "reasonable maximum" for the
backlog on a normal TCP socket, it will use "several hundred".
Unfortunately, it's not more specific than that, so I have no way to know
whether 1024 is going to be respected or not. I suspect not though.
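Since there's no standard way to query the effective backlog, about the only thing you can do is measure it empirically. Here's a rough sketch of one way to do that (typed into the newsreader, not compiled; the port and counts are made up): start a listener with a large requested backlog, never accept anything, and count how many connects succeed before one fails.

```csharp
// Rough sketch: probe the effective backlog empirically.  The listener
// never calls AcceptSocket(), so completed connections pile up in the
// backlog; the first connect that can't complete tells you roughly
// where the OS clamped the value you requested.
using System;
using System.Net;
using System.Net.Sockets;

class BacklogProbe
{
    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Loopback, 9000);
        listener.Start(1024);   // requested backlog; may be silently clamped

        int completed = 0;
        try
        {
            for (int i = 0; i < 2000; i++)
            {
                Socket s = new Socket(AddressFamily.InterNetwork,
                    SocketType.Stream, ProtocolType.Tcp);
                // Short timeout so a connect that lands past the backlog
                // fails quickly instead of hanging.
                IAsyncResult ar = s.BeginConnect(IPAddress.Loopback, 9000, null, null);
                if (!ar.AsyncWaitHandle.WaitOne(250, false))
                {
                    s.Close();
                    break;  // connect didn't complete -- backlog is full
                }
                s.EndConnect(ar);
                completed++;
            }
        }
        catch (SocketException)
        {
            // "Connection refused" also indicates the backlog is exhausted.
        }
        Console.WriteLine("Effective backlog appears to be about {0}", completed);
        listener.Stop();
    }
}
```

The number you get is only approximate (the TCP stack may retry SYNs behind your back), but it should tell you whether 1024 was honored or clamped to something much smaller.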
Unfortunately, I don't have any recent working knowledge of this. All I
can tell you is what my recollection is from having used the API in the
past.
Basically: if you're only trying to connect up to a couple hundred clients
at once and not all of them succeed, that suggests to me that either
you're congesting the network or the backlog is not getting set to 1024
(or both could be true at the same time, I suppose).
Of course, there's also the possibility that you're not running into a
backlog problem at all, and that you're getting errors simply because
you aren't calling accept frequently enough. If that's the case, then no
matter what you set the backlog to, it won't fix the problem. I don't
recall exactly what the exchange between the endpoints looks like when a
connection request lands in the backlog but hasn't yet been accepted, so
I can't say off the top of my head whether that's what's happening here.
>> One suggestion: in the code you posted, rather than having a thread
>> sit and loop calling BeginAcceptSocket() over and over, just have the
>> EndAcceptSocket() callback call it (and do so right away). [...]

> I'll look deeper into this, but I got this info from MS.
> http://msdn2.microsoft.com/en-us/lib...eptsocket.aspx
For better or worse, the MSDN doc samples are not always the best. Also,
note that the sample you're looking at does not actually handle multiple
connection requests. In other words, that sample doesn't really even
claim to be illustrating how to write code that asynchronously handles
multiple connection requests. It's just a very basic illustration as to
how the async API can be used.
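For what it's worth, the pattern I'm describing looks something like this (again, typed into the newsreader rather than compiled; the class and method names are made up to fit):

```csharp
// Sketch of the self-rearming accept pattern: the callback posts the
// next BeginAcceptSocket() immediately, so the listener is never idle
// while you deal with the connection you just accepted.
using System;
using System.Net;
using System.Net.Sockets;

class Server
{
    private TcpListener _listener;

    public void Start()
    {
        _listener = new TcpListener(IPAddress.Any, 8080);
        _listener.Start(1024);
        _listener.BeginAcceptSocket(AcceptCallback, null);
    }

    private void AcceptCallback(IAsyncResult ar)
    {
        Socket client;
        try
        {
            client = _listener.EndAcceptSocket(ar);
        }
        catch (ObjectDisposedException)
        {
            return;  // listener was stopped; don't re-arm
        }

        // Re-arm the accept *first*, then deal with the new client.
        _listener.BeginAcceptSocket(AcceptCallback, null);

        // Hand the client off to async i/o rather than creating a
        // thread for it.
        HandleClient(client);
    }

    private void HandleClient(Socket client) { /* BeginReceive, etc. */ }
}
```

Note there's no dedicated accept thread at all: the thread pool runs the callback, and the only work done before re-arming is retrieving the socket.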
>> You have other performance-hindering design choices there as well:
>> * your accept callback really should be doing very little. If
>> you have some heavyweight processing you want to do when accepting a
>> client, you should handle that elsewhere if handling a large number
>> of simultaneous connections is a priority for you. Otherwise, you get
>> stuck with a thread that's doing something other than accepting
>> connections.

> I'm assuming you mean my MyWebServer.AddConnection() call. That's
> doing nothing but checking current connections < max connections, then
> incrementing a counter.
Okay...it's good that's not expensive, but the rest of the method
definitely is expensive. Creating a thread and starting it isn't cheap.
Hopefully your CSocket constructor is low-cost, but even so...doing all of
this stuff can add up.
>> * You should not be creating a new thread for each connection.
>> Use the async i/o methods for the Socket class instead (e.g.
>> BeginReceive). Using a thread for each connection, you will
>> unnecessarily limit the maximum number of connections you can handle
>> -- the number of threads any given process can create is much lower
>> than the number of sockets a process could theoretically handle --
>> and at the same time hurt performance because of the constant thread
>> context switches that will be required to deal with multiple active
>> connections.

> The thread I'm creating is after I've released the callback. In VC++
> I've tested a spawn of over 2000 threads; you're saying in C# that
> isn't possible?
I'm surprised you got over 2000 threads even in unmanaged code. The
theoretical maximum on 32-bit Windows, with the default 1MB stack for
each thread, is 2048 (2GB of user-mode address space / 1MB per stack),
and that assumes that _nothing_ else is consuming any of that address
space. Obviously, any application that does anything interesting is
going to allow fewer threads than that.
On 64-bit Windows, it's completely different. The theoretical limit will
be bounded more by your disk space than anything else, but even there
you're going to run into performance issues first.
Finally, in the context of a server, 2000 connections is a drop in the
bucket. A properly written server can handle hundreds of thousands of
active connections, and you just are not going to reach that scale using
one thread per connection.
The bottom line: creating lots of threads is both unnecessary and
inefficient. If the number of active connections you expect to manage is
in the hundreds, then you can get away with that design. If it's only
dozens, then it might even work well. But once you get into thousands of
connections or more, the cost of the threads really starts affecting your
throughput and scalability.
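As a sketch of what I mean by using the async i/o methods instead of a thread per connection (untested, names made up):

```csharp
// Sketch: a per-connection object driven entirely by BeginReceive/
// EndReceive callbacks.  No dedicated thread per connection -- thread
// pool threads service whichever sockets actually have data.
using System;
using System.Net.Sockets;

class Connection
{
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[4096];

    public Connection(Socket socket)
    {
        _socket = socket;
        _socket.BeginReceive(_buffer, 0, _buffer.Length,
            SocketFlags.None, ReceiveCallback, null);
    }

    private void ReceiveCallback(IAsyncResult ar)
    {
        int bytes;
        try
        {
            bytes = _socket.EndReceive(ar);
        }
        catch (SocketException)
        {
            _socket.Close();
            return;
        }

        if (bytes == 0)          // graceful close from the remote end
        {
            _socket.Close();
            return;
        }

        ProcessData(_buffer, bytes);

        // Post the next receive; this callback chain replaces the read
        // loop a per-connection thread would otherwise run.
        _socket.BeginReceive(_buffer, 0, _buffer.Length,
            SocketFlags.None, ReceiveCallback, null);
    }

    private void ProcessData(byte[] data, int count)
    {
        // Whatever your per-request work is goes here (or gets queued
        // elsewhere if it's heavy).
    }
}
```

The point is that a connection that's idle costs you nothing but the object and its buffer, instead of a whole thread and its 1MB stack.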
Also, even if you weren't creating a new thread for each connection,
having two different threads managing the accept logic is going to slow
your code down significantly. But adding a new thread for each connection
just makes it worse, because not only will Windows have to context switch
from the thread calling EndAcceptSocket() back to the one calling
BeginAcceptSocket(), it _also_ has to context switch to the thread you
just created as well. In other words, the one-thread-per-connection
design is not only inherently inefficient, it synergistically worsens the
problems with the other part of your design.
Either problem independently is something worth fixing. But together,
they can really hurt.
Now, is all of this the reason for the behavior you're seeing? I've no
idea. It's not really possible to say without more information about what
exactly is causing the connection failures. But it's certainly a
possibility.
Pete