Bytes | Developer Community
Question about multi-threading

Hello. I have a question about multi-threading: whether or not it is the best
way to achieve the following.

I have a server application that listens on a port through a socket.
A client will send messages to that application (many messages).
Because the number of messages is huge, the server application should only
hand each received message to another process (a thread?) that manages the
message, and then move on to the next message.

I want a limited number of threads, so the server app just looks for one
that is free and sends the message to it. Once a thread finishes its work,
it waits to be called again by the server app.

Are threads the solution for that?


Regards,

Diego F.
Apr 25 '07 #1
Diego,

    Yes, I believe they are, but you will need to hold off on accepting or
processing new messages (the semantics are up to you) until the previous
ones complete.

    Are you locked into the protocol that you are using for requests from
the clients, or are you able to dictate the message format? If you can
dictate it, I would recommend using WCF to handle the communications (you can
use a TCP connection for the transport and a binary encoder for the format of
the messages), as it has this kind of tuning built in.

    If the format of the messages is locked and you have to implement this
yourself, you still might want to create a custom transport in WCF on the
service side, as well as a custom encoder, and allow WCF to handle all the
plumbing involving threading and the like.

Hope this helps.
--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com

"Diego F." <di********@msn.com> wrote in message
news:OC**************@TK2MSFTNGP03.phx.gbl...
>[...]
Apr 25 '07 #2
"Diego F." <di********@msn.com> wrote in message
news:OC**************@TK2MSFTNGP03.phx.gbl...
>[...]
>Are threads the solution for that?
Sure. Rather than "send each received message to another process", I might
write "enqueue each received message to another thread".

Assuming that the order of processing is unimportant, it seems to me that an
easy solution is to use the BackgroundWorker class. With this, .NET manages
your pool of threads. You can add a task to the pool, and if there's a
thread available to work on the task, it is assigned the task and runs. If
all of the threads are already busy, your task is kept in a queue until
there's a thread available.

In this paradigm, a "task" is represented simply as a delegate which you add
to the BackgroundWorker's DoWork event. You can pass unique data to the
RunWorkerAsync() method, and extract that data from the Argument property of
the DoWorkEventArgs passed into your DoWork delegate. The call to
RunWorkerAsync() enqueues your task, and when there's a thread available to
run the task, your DoWork delegate is called for the actual processing.
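That pattern might be sketched roughly as follows; the MessageDispatcher
wrapper and the string payload are invented here for illustration, they are
not from the thread:

```csharp
using System.ComponentModel;

class MessageDispatcher
{
    // Hand one received message off for background processing and return
    // immediately, so the server can go read the next message.
    public void Dispatch(string message)
    {
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += HandleMessage;   // the "task" is this delegate
        worker.RunWorkerAsync(message);   // enqueues the task; returns at once
    }

    void HandleMessage(object sender, DoWorkEventArgs e)
    {
        string message = (string)e.Argument;  // data passed to RunWorkerAsync
        // ... process the message here ...
    }
}
```

Note that each BackgroundWorker instance runs one operation at a time, so
this sketch creates one per message and lets the runtime schedule them.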

I read the reply from Nicholas recommending the use of WCF for this purpose.
I have to admit, I don't know anything about WCF; maybe it has this sort of
thing built right in. If it does, maybe Nicholas could be a bit more
explicit as to what part of WCF would be useful for doing this sort of
data-processing delegation.

Pete

Apr 25 '07 #3
Peter,

Assuming that the OP can rewire the client and server to use WCF, it
would be a good fit here. In particular, I'm assuming that the OP would
want to use the TCP binding in WCF.

With WCF, at least for the Tcp binding, you can set the ListenBacklog
and MaxPendingAccepts properties in the config file (and through code, if
you prefer).

    The ListenBacklog property lets you set the maximum number of queued
connection requests that can be pending, while the MaxPendingAccepts
property indicates the maximum number of concurrent accepting threads the
endpoint will have.
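For reference, those two knobs can also be set in code by composing a custom
binding; a minimal sketch (the numeric values are illustrative, not
recommendations):

```csharp
using System.ServiceModel.Channels;

class BindingFactory
{
    public static CustomBinding CreateTunedTcpBinding()
    {
        TcpTransportBindingElement tcp = new TcpTransportBindingElement();
        tcp.ListenBacklog = 200;     // connection requests allowed to pend
        tcp.MaxPendingAccepts = 10;  // concurrent accepts outstanding at once

        // Encoder first, transport last, per custom-binding stacking rules.
        return new CustomBinding(
            new BinaryMessageEncodingBindingElement(),
            tcp);
    }
}
```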
--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com

"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote in message
news:13*************@corp.supernews.com...
>[...]
Apr 25 '07 #4
"Nicholas Paldino [.NET/C# MVP]" <mv*@spam.guard.caspershouse.com> wrote in
message news:uJ**************@TK2MSFTNGP04.phx.gbl...
>[...]
>The ListenBacklog property lets you set the maximum number of queued
>connection requests that can be pending, while the MaxPendingAccepts
>property indicates the maximum number of concurrent accepting threads the
>endpoint will have.
It's not clear to me that that's what the OP wants. You seem to be talking
about having control over the lower-level aspects of making TCP connections,
while the OP's question seems to me to just be talking about separating his
i/o (the network communications) from the processing (the act of "managing
the message").

Do I misunderstand the original post, or is there something in WCF that is
particularly suited to addressing the latter?

Pete

Apr 25 '07 #5
Peter,

The OP wants to have a limited number of threads to process incoming
requests that come in on a port. This is what WCF is doing with the
MaxPendingAccepts property. It basically says "hey, you can have

    The thing is, while the service is only going to process a limited
number of calls at once, the OP will still have to worry about queuing the
incoming calls that arrive while his service is running. Then, when the
operations are done, the OP also has to worry about dispatching those
requests.

    That's where the ListenBacklog property comes in, as it indicates how
many pending requests can be backlogged before requests start to be turned
away (the OP might want to turn this way up).

The reason to recommend WCF in this case is because it handles all of
this for you. You just have to define your interface, and code against it
(within the model, of course). WCF will handle listening for the request,
managing how many requests are processed at the same time, denying other
requests if too many are coming in, etc, etc for you. The OP is going to
have to worry about this himself if he codes it from scratch.
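The "define your interface and code against it" model might look roughly
like this; the IMessageSink contract, the address, and the throttle value
are all invented for illustration (ServiceThrottlingBehavior.MaxConcurrentCalls
is one WCF knob for capping how many requests are processed at once):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
interface IMessageSink
{
    [OperationContract(IsOneWay = true)]  // fire-and-forget from the client
    void Submit(string message);
}

class MessageSink : IMessageSink
{
    public void Submit(string message)
    {
        // ... process the message here ...
    }
}

class Program
{
    static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(MessageSink),
            new Uri("net.tcp://localhost:9000/sink"));  // address illustrative
        host.AddServiceEndpoint(typeof(IMessageSink), new NetTcpBinding(), "");

        // Cap concurrent processing; WCF queues requests beyond the cap.
        ServiceThrottlingBehavior throttle = new ServiceThrottlingBehavior();
        throttle.MaxConcurrentCalls = 8;  // illustrative limit
        host.Description.Behaviors.Add(throttle);

        host.Open();
        Console.ReadLine();  // keep the service running until Enter is pressed
        host.Close();
    }
}
```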
--
- Nicholas Paldino [.NET/C# MVP]
- mv*@spam.guard.caspershouse.com
"Peter Duniho" <Np*********@NnOwSlPiAnMk.com> wrote in message
news:13*************@corp.supernews.com...
>[...]
Apr 25 '07 #6
There are a number of different architectures available for you to pick
from. In general, the best architecture for building a "big" socket
application is to use async sockets, and let Windows (and the CLR) manage
the majority of your threading for you. If possible, you should avoid
creating and managing your own threads, as there's a lot of complexity
inherent in doing so.
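For .NET 2.0 of this era, the async-socket shape is the Begin/End pattern; a
minimal receive loop might look like this (the buffer size and the hand-off
point are illustrative):

```csharp
using System;
using System.Net.Sockets;

class AsyncReceiver
{
    readonly Socket socket;
    readonly byte[] buffer = new byte[4096];

    public AsyncReceiver(Socket connectedSocket)
    {
        socket = connectedSocket;
    }

    public void Start()
    {
        // The CLR's I/O completion threads call OnReceive when data arrives;
        // this code never creates or manages a thread itself.
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None,
            OnReceive, null);
    }

    void OnReceive(IAsyncResult ar)
    {
        int read = socket.EndReceive(ar);
        if (read == 0)        // peer closed the connection
        {
            socket.Close();
            return;
        }
        // ... hand buffer[0..read) off for processing ...
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None,
            OnReceive, null); // post the next receive immediately
    }
}
```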

I build big socket servers for a living, and some time ago I wrote up a
bunch of the architectural iterations that I've been through when doing this
in .Net.

http://www.coversant.com/Coversant/B...0/Default.aspx

--
Chris Mullins, MCSD.NET, MCPD:Enterprise, Microsoft C# MVP
http://www.coversant.com/blogs/cmullins

"Diego F." <di********@msn.com> wrote in message
news:OC**************@TK2MSFTNGP03.phx.gbl...
>[...]
Apr 25 '07 #7
"Nicholas Paldino [.NET/C# MVP]" <mv*@spam.guard.caspershouse.com> wrote in
message news:%2******************@TK2MSFTNGP03.phx.gbl...
>Peter,
>
>    The OP wants to have a limited number of threads to process incoming
>requests that come in on a port. This is what WCF is doing with the
>MaxPendingAccepts property. It basically says "hey, you can have
I think you left out some words. :)

As far as "wants to have a limited number of threads to process incoming
requests that come in on a port" goes...

I admit to not being familiar with WCF, but if I read the docs right, using
the ListenBacklog property assumes that each request is made with a new TCP
connection. It's equivalent to the "backlog" parameter of Socket.Listen().
I don't read the original post as asking for that functionality (and
frankly, I don't really see how WCF provides significantly easier access to
that functionality than simply setting it with the plain Socket.Listen()
method).
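For comparison, the plain-socket equivalent is just the backlog argument at
listen time; the class name here is invented for illustration:

```csharp
using System.Net;
using System.Net.Sockets;

class PlainListener
{
    public static Socket Start(int port, int backlog)
    {
        Socket listener = new Socket(AddressFamily.InterNetwork,
            SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, port));
        listener.Listen(backlog);  // connections the OS queues before refusing
        return listener;
    }
}
```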

I guess only the OP can really clear this up, as it comes down to a
disagreement as to what he actually meant. Hopefully between the two of us,
someone's provided him with a useful answer. :)

Pete

Apr 25 '07 #8