Bytes | Software Development & Data Engineering Community

server scenario - variables in the right spot?

i think i just realized i'm an idiot. again.

(not syntactically correct code... just pieces to illustrate)

class StateObject
{
    // members like Socket, receive buffer, receive buffer size, StringBuilder, etc.
}

class Program
{
    // class globals
    public static AutoResetEvent clientConnected = new AutoResetEvent(false);
    public static AutoResetEvent sendDone = new AutoResetEvent(false);
    public static AutoResetEvent receiveDone = new AutoResetEvent(false);

    static void Main(string[] args)
    {
        // main server thread
        StartServer();
    }

    static void StartServer()
    {
        // main server thread
        // set up server socket, bind, listen, etc.
        while (true)
        {
            listenerSocket.BeginAccept(AcceptCallback, listenerSocket);
            clientConnected.WaitOne();
        }
    }

    static void AcceptCallback(IAsyncResult ar)
    {
        // thread for handling this client
        clientConnected.Set(); // signal main thread to continue listening for and accepting other clients
        StateObject state = new StateObject();
        state.ComSocket = ((Socket)ar.AsyncState).EndAccept(ar);
        Receive(state);
        receiveDone.WaitOne();
        Send(state);
        sendDone.WaitOne();

        // ...and so on, handling the client's session
    }

    static void Receive(StateObject state)
    {
        // does a state.ComSocket.BeginReceive, passing state as the state object
    }

    static void ReceiveCallback(IAsyncResult ar)
    {
        // client's worker thread
        // ...read the data, and then:
        receiveDone.Set(); // notify the client's thread that the receive is done
    }
}

my question pertains to the class global variables, which above are all AutoResetEvents, though I don't think the type matters with respect to my question (please let me know if it does). When I follow the scenario of a single client connecting, this works out fine... the actual code works too. But when I think about the scenario of many clients connecting, I began questioning the placement of those AutoResetEvents (class globals). What happens when many clients are being serviced at the same time? Will they all be using those SAME AutoResetEvents and screwing each other up?

If so, I assume moving them into the StateObject class, so each client gets a separate instance, would solve the problem? Would that be a typical solution, or am I way off here with my server's code structure?

I've used similar setups for clients, where it's one instance of your program running, so there was no issue. Now that I'm 'attempting' to create a server, the scenario is obviously different. Although I know it's not technically accurate, I'm thinking of each client session as another instance of the program running, so those variables that are visible to the whole program wind up being shared by all the client sessions?

I need enlightenment. again.

Jun 7 '07 #1
On Thu, 07 Jun 2007 12:21:21 -0700, David <no****@nospam.com> wrote:
i think i just realized i'm an idiot. again.
For what it's worth, I've seen idiots around here, and you ain't it. :)
[...] What happens when many clients are being serviced at the same time? Will they all be using those SAME AutoResetEvents and screwing each other up?
[...]
Well, the biggest issue I see is that you should not need the events at
all. Instead, for example, your AcceptCallback() method should post
another Socket.BeginAccept(). Likewise sending and receiving.

You are correct that there is some potential for interference between
clients (though actually, the accept case is probably okay...it's the
connection-specific stuff with sending and receiving where things get
messy), but that all is avoided if you take advantage of the inherent
threading model that the asynch API provides in the first place.
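To sketch what I mean on the accept side (rough, untested code, not your actual names; the port and backlog numbers are made up):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

static class Server
{
    static Socket listener;

    static void StartServer()
    {
        listener = new Socket(AddressFamily.InterNetwork,
                              SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 11000)); // port is arbitrary
        listener.Listen(100);
        listener.BeginAccept(AcceptCallback, listener);      // no loop, no event
    }

    static void AcceptCallback(IAsyncResult ar)
    {
        Socket l = (Socket)ar.AsyncState;
        l.BeginAccept(AcceptCallback, l);    // repost the accept right away
        Socket client = l.EndAccept(ar);
        // ...create a per-client StateObject and call BeginReceive on 'client'...
    }
}
```

The accept callback keeps the listener "primed" itself, so the main thread never has to loop or wait.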

Pete
Jun 7 '07 #2
thanks Peter. I figured I wasn't using the async model correctly. I'm having a hard time visualizing how the client-server conversation looks (in code) when done as you suggest (without waithandles). Inevitably there is a conversation in which one side can't take the next step until an answer is received, so how is that 'wait' handled when the async API is used as it's supposed to be used?

I can grasp the BeginAccept callback posting another BeginAccept, as opposed to the loop structure in my example (in fact, I think I'll make that change). But as you put it, the 'connection-specific stuff with sending and receiving' is where I can't visualize the async structure, other than how I did it (using waithandles)... it seems my head needs that 'control' or 'main' thread to keep things straight. For example, in some cases my BeginReceive callback posts another BeginReceive until all data is received, but it's another thread that started that receive operation in the first place and is waiting to do something with what is received.

You *are* suggesting I would not need that 'control' or 'main' thread approach if I used the async API better, right? What would that look like? No waithandles? Makes my brain twist in ways it's not meant to... hehe. :)

Jun 7 '07 #3
Well, I was going to point you to the Socket samples on MSDN (there are
links to them at the bottom of the Socket.BeginReceive method page, for
example), but after looking at them I'm not sure they are the best place
to look for guidance. For one, they have the same funny
"while(true){ accept code with a waitable event }" paradigm you had in
your code, and for another they have a bug in the sending code in which
they don't check the number of bytes sent.

But the samples *do* illustrate what I was talking about, with respect to
posting a new receive or send within the callback for the one being
handled. So I suppose that counts for something. Now, all that said, you
had a specific question in there somewhere. Let's see if I can remind
myself where it was... :)

On Thu, 07 Jun 2007 14:11:31 -0700, David <no****@nospam.com> wrote:
thanks Peter. I figured I wasn't using the async model correctly. I'm having a hard time visualizing how the client-server conversation looks (in code) when done as you suggest (without waithandles). Inevitably there is a conversation in which one side can't take the next step until an answer is received. So how is this 'wait' handled using asynch api as its supposed to be used?
Both sides can be waiting to receive at the same time, even as one side is
also doing something else (like preparing to send, and then sending).
That's the beauty of the asynch mechanism. You just provide the Socket
class with a buffer where it can put data, and then if and when it gets
data, it puts the data there and tells you about it. Likewise for
sending, you give it a buffer where it gets data, and then when it's
finished sending at least some of the data, it tells you about it and you
send whatever more data needs to be sent (which could be the remainder of
the data you tried to send earlier, or some new data, or none at all).

It's hard for me to give a specific example, since your original code was
sort of a mix of actual code and pseudocode. But as the general idea
goes, here's a sample of what the server might look like (also pseudocode,
so I don't have to bother looking everything up at the moment :) ):

void StartServer()
{
    create a listening socket
    call Socket.BeginAccept()
}

void AcceptCallback()
{
    call Socket.BeginAccept() on the listening socket
    call Socket.EndAccept() with the current AsyncResult to get the connected socket
    call Socket.BeginReceive() on the connected socket
    optionally, call Socket.BeginSend() on the connected socket to send data to the
        client (you would skip this if the client is expected to initiate the
        conversation)
}

void ReceiveCallback()
{
    call Socket.BeginReceive() on the connected socket
    call Socket.EndReceive() with the current AsyncResult to get the latest data
    append the received data to your buffer, and if you've gotten enough data to do
        something with, do it (make sure you don't lose track of any extra data you
        might have received that would be part of a subsequent transmission... if
        your protocol is strictly alternating, this shouldn't normally be a problem)

    NOTE: if your processing is inexpensive, you may in fact find yourself doing it
        right here, and if so, that processing may in fact lead to your code calling
        Socket.BeginSend() from within the receive callback. There's nothing wrong
        with doing that at all.
}

void SendCallback()
{
    call Socket.EndSend() with the current AsyncResult to see how much data was sent
    subtract the number of bytes sent from the number of bytes you need to send; if
        the result is greater than zero, you still have more data to send, so call
        Socket.BeginSend() to do that
}

The client would be very similar, except that instead of a StartServer() method it would have a ConnectServer() method, where it calls BeginConnect() instead of BeginAccept(). Instead of the AcceptCallback() method it would have a ConnectCallback() method, but the internals would be very similar: there would be no reposting of the BeginConnect(), of course, but you would call BeginReceive() (as well as possibly BeginSend(), if the client is the one expected to initiate the conversation in your protocol).

One thing I'll point out in the above is that, the way I've written it, the first thing I do in the accept and receive methods is post another one, *before* finishing the most recent one. I realize this may seem like more confusion on top of what might already be a bit confusing. :) You can, if you like, put the new calls to BeginAccept() and BeginReceive() at the end, but I have read posts from people who, when doing that, have seen performance suffer (and in the case of UDP, have actually seen more data being lost than would be normal). There's no harm in posting multiple receives, and in fact this is not uncommon in the native Winsock use of i/o completion ports (on which the Socket class is based).

Because the above is just pseudocode, it should be apparent that a lot of
details are left out. In particular, you have to write code to manage
your send and receive buffers, to keep track of how much data you are
trying to send or receive, where that data is, and how much has been sent
or received so far. But those are fairly basic details, and not relevant
to the bigger picture of how the async stuff works.
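To make the receive and send pieces a little more concrete (again a sketch, untested; the StateObject members here are invented, not from your code):

```csharp
using System;
using System.Net.Sockets;

class StateObject
{
    public Socket ComSocket;
    public byte[] ReceiveBuffer = new byte[8192];
    public byte[] SendData;  // the data currently being sent
    public int SentSoFar;    // how much of SendData has actually gone out
}

static class Callbacks
{
    public static void ReceiveCallback(IAsyncResult ar)
    {
        StateObject state = (StateObject)ar.AsyncState;
        int read = state.ComSocket.EndReceive(ar);
        if (read > 0)
        {
            // copy the bytes out first, since we're about to reuse the buffer
            byte[] chunk = new byte[read];
            Array.Copy(state.ReceiveBuffer, chunk, read);

            // repost the receive before processing what we just got
            state.ComSocket.BeginReceive(state.ReceiveBuffer, 0,
                state.ReceiveBuffer.Length, SocketFlags.None,
                ReceiveCallback, state);

            // ...append 'chunk' to the connection's accumulated data, and
            // process it if a complete "command" has now arrived...
        }
    }

    public static void SendCallback(IAsyncResult ar)
    {
        StateObject state = (StateObject)ar.AsyncState;
        state.SentSoFar += state.ComSocket.EndSend(ar);
        int remaining = state.SendData.Length - state.SentSoFar;
        if (remaining > 0)
        {
            // partial send: post another send for the rest of the data
            state.ComSocket.BeginSend(state.SendData, state.SentSoFar,
                remaining, SocketFlags.None, SendCallback, state);
        }
    }
}
```

Note how SendCallback never assumes the whole buffer went out in one call; it keeps reposting until SentSoFar reaches SendData.Length.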
[...] it seems my head needs that 'control' or 'main' thread to keep things straight. For example, in some cases my beginReceive callback posts another beginReceive until all data is received, but its another thread that started that receive operation in the first place and is waiting to do something with what is received...
Well, the point of the above is that you are *always* ready to receive,
even if you haven't done anything yet for which you'd expect to be able to
receive. There's no harm in being ready to receive, and presumably at the
point in which you do something (like send data to the other end) that
would cause data to be sent back to you, your data structures are set up
to be ready to handle that case. That way in the receive callback, it can
just handle the received data normally.

Hope that helps.

Pete
Jun 7 '07 #4
yes, very helpful, thank you.

I used the links below to get me started... as you mentioned, that's where I got the loop with the resetEvent to control incoming connections. I just changed them to AutoResetEvents because I didn't see why a manual one was needed (more code).

http://msdn2.microsoft.com/en-us/lib...5f(vs.80).aspx
http://msdn2.microsoft.com/en-us/lib...DownFilterText
http://msdn2.microsoft.com/en-us/lib...2a(VS.80).aspx

these examples do use the waithandle events, but I understand now that that is likely just to keep the code short, in order to illustrate the point at hand... I didn't realize this, and so used them.

- AcceptCallback calling BeginAccept again, instead of the loop with waithandles. No problem. Got it.

- ReceiveCallback calling BeginReceive again: I do this, but not exactly how you illustrate; I first check if there is more data to receive and only do it then. But I now understand what you are saying about always being ready to receive (calling BeginReceive immediately inside the ReceiveCallback). This seems like an important modification I need to consider, but it has a domino effect on my whole structure, of course. If I'm understanding this correctly (a big 'if', hehe), then I would need to change my 'protocol' by embedding 'control' information in order to keep things in order. For example: currently I'm just sending back and forth the 'commands', the 'parameters' needed by those commands, and the 'results' of running those commands. The 'control' of this happening is within my program's sequential execution, whereas what I envision you are saying is more or less a perpetual sending/receiving motion, where within the data received would lie what it is, what to do with it, where you are in a multi-step process, etc...

to try to illustrate (what I'm doing now; just the connection-specific stuff, client-server communication; assume this is CommandA, which has a few steps; assume everywhere I say 'wait' I mean using an AutoResetEvent; all sends and receives use the async Begin*):

- client sends CommandA and waits for confirmation from the server of its receipt

- server receives the command and, after verifying it can work with it, sends back confirmation of its receipt and waits for the next part

- client receives confirmation of the command's receipt from the server, so it *now knows* it can send, let's just say, part2 of CommandA; it sends and waits for confirmation

- server receives part2 of CommandA, sends back confirmation of receipt, and begins executing CommandA

- client receives the server's confirmation of receipt of part2 of CommandA and waits for results

- server finishes executing and sends the results back to the client

- client receives the results

assume there are some commands that may have more than 2 parts (trying to keep the example short). The 'control' in this case is my code on both sides sequentially going through those steps. Now, with this perpetual send/receive machine (that sounds cool), I'm envisioning the 'control' having to be within the data sent, so that it would be something like:

- server receives data, breaks it down per the protocol, determines that it is part2 of CommandA already in progress, and proceeds appropriately. Like a big switch statement based on the 'command' part of the protocol, then within that case potentially another switch statement or other control structure for the particular 'part' of the command, or wherever in the command's total process the data belongs. This way there are no waithandles; control is removed from the thread where it was and placed in the protocol, allowing this 'perpetual machine' to run.

does this sound like I'm getting it?

well, either way, I appreciate your help. Sorry my posts are so long, but I don't know any other way to get out my thoughts/questions. I realize what I really should do is go get a good book specifically on async network programming for .NET. I did read a .NET network programming book, but it was for beginners (which I am) and didn't go into nearly enough detail on the async pattern to actually use it effectively.

Jun 8 '07 #5
On Fri, 08 Jun 2007 08:42:40 -0700, David <no****@nospam.com> wrote:
[...]
- ReceiveCallback calling beginReceive again: I do this but not exactly how you illustrate, I first check if there is more data to receive and only do it then. But I now understand what you are saying about always being ready to receive (calling beginReceive immediately inside the ReceiveCallback). This seems like an important modification I need to consider. But it has a domino effect on my whole structure, of course.
Assuming you are using the same receive callback for each BeginReceive(),
then I don't see what the difference is. If you are using different
receive callbacks depending on the state of your connection, then yes...I
can see that you would have a hard time posting a new receive before
you've processed the current one. But then, I'd suggest that's not a good
design anyway.
If i'm understanding this
correctly (a big 'if' hehe), then I would need to change my 'protocol' by
embedding 'control' information in order to keep things in order.
This I definitely don't feel is true. How you receive the data should not
affect what is in the protocol at all. If you are currently using
different receive callbacks depending on where you are in the protocol
"conversation", then yes...that would have to change. But I think it
should anyway. Other than that, nothing else would need to change.

If you are using different receive callbacks depending on the state of the
conversation, then you are essentially maintaining that state in code.
IMHO, it is better to be data-driven, and to use your data structures to
maintain the state. If you do that, then you can use a single receive
callback to process all inbound data. What that callback does at any
given moment would depend on the state of your data structures, and it
would never be a problem to have a receive posted and ready to process
incoming data.
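As a sketch of what "state in the data structures" might look like (the enum, names, and message format here are all invented, not from your code):

```csharp
using System;

// Where we are in the conversation, kept per connection rather than in
// the protocol itself or in separate callbacks.
enum ConversationState { AwaitingCommand, AwaitingPart2 }

class Connection
{
    public ConversationState State = ConversationState.AwaitingCommand;
    // ...socket, receive buffer, partially assembled command, etc...
}

static class Protocol
{
    // One handler for all inbound messages; what it does depends on the
    // connection's state, not on which callback happened to run.
    public static string HandleMessage(Connection conn, string message)
    {
        switch (conn.State)
        {
            case ConversationState.AwaitingCommand:
                conn.State = ConversationState.AwaitingPart2;
                return "confirm " + message;    // e.g. confirm receipt
            case ConversationState.AwaitingPart2:
                conn.State = ConversationState.AwaitingCommand;
                return "result of " + message;  // e.g. execute and reply
            default:
                return "error";
        }
    }
}
```

The same HandleMessage runs for every completed receive; the per-connection state decides what the bytes mean.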
For example: currently, i'm just sending back and forth the 'commands', 'parameters' needed by those commands, and the 'results' of running those commands. The 'control' of this happening is within my programs sequential execution, whereas what I envision what you are saying is more or less a perpetual sending/receiving motion where within the data received would lie what it is, what to do with it, where you are in a multi-step process etc...
Having a sequential process that you expect your application protocol to
follow doesn't rule out using a single receive callback, and always
posting a new receive as soon as you start processing the current one.
The sequence of commands should be tracked in your data structures, and by
doing so you can use the same code to process any command.
to try to illustrate (what I'm doing now) (just the connection-specific stuff, client-server communication, assume this is CommandA which has a few steps, assume everywhere I say 'wait' i mean using an autoResetEvent, all sends and receives use the async begin*)

- client sends commandA and waits for confirmation from server of its receipt
The client can call BeginReceive() as soon as its connected
(BeginConnect() completes). There is no need to delay that until after
sending "commandA". The posted receive won't be completed until after the
command has been sent and replied to, of course, but there's no harm in
being ready to receive beforehand.
- server receives command and, after verifying it can work with it, sends back confirmation of its receipt and waits for next part
The server can call BeginReceive() as soon as it's connected
(BeginAccept() completes). This is perhaps more obvious since in your
application protocol it appears that the client initiates the
communications. But even if the server started things, it could still
call BeginReceive() as soon as it's connected.

As the very first thing in the receive callback, it would call another
BeginReceive(). Assuming it's already received all the data the client
sent (including the data currently being processed), that receive wouldn't
complete until after the server gets a chance to reply ("confirmation of
its receipt"), but it's not harmful to have the receive posted and waiting.
- client receives confirmation of command's receipt from server so *now knows* it can send, lets just say, part2 of commandA, it sends and waits for confirmation
In the client data structure, it should of course keep track of where in
the conversation it is. If the next thing to do is wait for confirmation
of the receipt of "commandA" and then send "part2" of "commandA", then the
data structure should reflect that somehow. Then when it completes
another receive, it knows what to do with that.
- server receives part2 of commandA and sends back confirmation of receipt and begins executing commandA

- client receives server's confirmation of receipt of part2 of commandA and waits for results

- server finishes executing and sends results back to client

- client receives results
As above, assuming the state of the conversation is maintained in each data structure (client and server), then the client and server simply take the appropriate action according to the state of the conversation. Having an extra receive (or several) posted isn't a problem.

You can have as many "parts" or "commands" or whatever as you like. It
doesn't matter...you can still keep track of the sequence in data and then
the code requires just a single receive callback that does particular
things based on the state of the conversation.

Note that nothing about the above requires a change to the application
protocol itself. Just the implementation of the code that handles the
protocol.
assume there are some commands that may have more than 2 parts (trying to keep example short). The 'control' in this case is my code on both sides sequentially going through those steps. Now, with this perpetual send/receive machine (that sounds cool) I'm envisioning the 'control' having to be within the data sent so that it would be something like:

- server receives data, breaks it down per the protocol determining that it is part2 of commandA already in progress, and proceeds appropriately. Like a big switch statement based on the 'command' part of the protocol, then within that case, potentially another switch statement or other control structure for the particular 'part' of the command or where in the commands total process the data belongs. This way there is no waithandles and control is removed from the thread where it was and placed in the protocol, allowing this 'perpetual machine' to run.

does this sound like I'm getting it?
Almost. The main problem in the above is that you are assuming that you
need to change the application protocol so that it includes the "state"
information within it, when in fact the state information can be kept
locally in data structures.

If you want a protocol where various commands can be sent at random times,
then of course you would need some sort of data within the protocol to
tell you what command you're dealing with at any given time. But if the
protocol is strictly a sequential conversation, that state can be easily
maintained within the client and server data structures, rather than being
sent explicitly over the network.

Now, all that said, a couple of other wrinkles you may not be aware of:

* Any given receive may result in any number of bytes between 1 and
the total number of bytes sent but not yet received. This means that in
your receive callback (whether you have many or just one), you may or may
not receive enough data to complete a "command". You need to include
logic to keep track of what command you're working on, and how much of it
you've received so far, so that as you receive new data you can append it
to the command you're working on currently, and correctly detect when
you've actually received enough data to process (that is, the "command"
has been completed).
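For instance, if the application protocol used a simple (hypothetical) length-prefixed framing -- one length byte followed by that many payload bytes -- the reassembly bookkeeping might look like:

```csharp
using System;
using System.Collections.Generic;

class CommandAssembler
{
    private readonly List<byte> pending = new List<byte>();

    // Feed whatever bytes one receive produced; returns any commands that
    // are now complete. A command may arrive split across many receives,
    // and one receive may complete several commands.
    public List<byte[]> Append(byte[] data, int count)
    {
        for (int i = 0; i < count; i++) pending.Add(data[i]);

        var complete = new List<byte[]>();
        // first pending byte is the payload length for the next command
        while (pending.Count > 0 && pending.Count >= 1 + pending[0])
        {
            int len = pending[0];
            complete.Add(pending.GetRange(1, len).ToArray());
            pending.RemoveRange(0, 1 + len);  // consume prefix + payload
        }
        return complete;
    }
}
```

The receive callback just feeds every chunk in; the assembler decides when a "command" has actually been completed.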

* Perhaps more confusingly, while it is not a problem to have multiple
receives posted via BeginReceive(), you do need to keep in mind that the
callbacks can be executed in any order for any receive that has completed,
due to threading issues. The buffers will be filled in the order in which
you post them; that much is guaranteed. However, it is possible to have a
BeginReceive() that you posted second, execute its callback first. That
is, you would wind up running the code in the callback that actually
processes the data for that buffer before the code in the same callback
that processes the data for the first buffer.

In the second issue, the usual implementation for dealing with it would be
to maintain a list of the buffers you've posted for receiving, flagging
them as completed when you get the callback, and treating the list as a
queue for the purpose of processing the data, locking it before processing
anything, and only processing data from the beginning up to but not
including the first one that has _not_ been marked as completed. Doing so
will result in a buffer being marked as completed in one callback thread,
but possibly being actually processed in a different one. For example,
suppose you have two buffers posted for a receive, and the callback for
the second buffer is actually called first:

The callback code:
lock the queue
mark buffer as received
while the first buffer in the queue exists and is marked received
{
process the buffer
}
unlock the queue

What each thread does (remember, the callback for Buffer 2 runs first):

Buffer 2's thread:
    lock the queue
    mark the buffer (Buffer 2) as received
    first buffer (Buffer 1) in the queue isn't marked yet, so loop doesn't execute
    unlock the queue

Buffer 1's thread:
    lock the queue
    mark the buffer (Buffer 1) as received
    first buffer (Buffer 1) in the queue is now marked, so process it
    second buffer (Buffer 2) in the queue is also marked, so process it as well
    no more buffers, so exit loop
    unlock the queue
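That queue-draining logic can be sketched as follows (a Python illustration with hypothetical names; in the real scenario the callbacks arrive on pool threads, which is why the lock matters):

```python
import threading

# Illustrative sketch of the posted-buffer queue described above: buffers
# are processed strictly in the order they were posted, even when their
# completion callbacks run out of order. All names are hypothetical.

class Buffer:
    def __init__(self, index):
        self.index = index
        self.received = False
        self.data = None

queue = [Buffer(0), Buffer(1)]   # posted in order: Buffer 1, then Buffer 2
processed = []                   # records the order the data was processed
lock = threading.Lock()

def on_receive_complete(buffer, data):
    """The callback: mark the buffer, then drain completed buffers in order."""
    with lock:
        buffer.received = True
        buffer.data = data
        # Process from the front of the queue, stopping at the first
        # buffer that has not completed yet.
        while queue and queue[0].received:
            processed.append(queue.pop(0).data)

# Simulate the callbacks firing out of order: Buffer 2 completes first.
on_receive_complete(queue[1], "data for buffer 2")
on_receive_complete(queue[0], "data for buffer 1")

print(processed)  # ['data for buffer 1', 'data for buffer 2'] -- posted order
```

Note how the thread that completes Buffer 1 ends up processing both buffers, exactly as in the trace above.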

The first issue above is mandatory. You absolutely have to deal with it.
The second is not, and if you find it too confusing, that's a good reason
to _not_ post a new receive as the very first thing in the receive
callback, and rather to post it after you've finished processing the
current buffer.

Note that you still don't need event handles; the only difference is where, in the processing of the current receive event, you wind up posting a new buffer for receiving. Doing it first is more efficient, but more complicated. For simplicity's sake you may prefer to do it last, after you're done processing the current data. Note that doing so doesn't change _whether_ you post a new receive (i.e. you will always post a new receive regardless), nor does it change _how_ you post it (i.e. you will always post the same size buffer, or perhaps even the exact same buffer you posted before). It only changes _where_ you post the new receive.
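As a rough analog of the simpler "post it last" option, here is a blocking loop in Python (assuming an illustrative newline-delimited protocol, not the thread's actual code): the next receive is only issued after the current data has been fully processed.

```python
import socket

# Blocking-loop analog of "post the new receive last": each trip around
# the loop fully processes the current buffer before the next recv().
# The framing (newline-delimited commands) is a hypothetical example.

server, client = socket.socketpair()
client.sendall(b"commandA\npart2\n")
client.shutdown(socket.SHUT_WR)   # peer is done sending

pending = b""
commands = []
while True:
    data = server.recv(16)        # the "current receive"
    if not data:
        break                     # connection closed by peer
    pending += data               # process the current buffer first...
    while b"\n" in pending:
        cmd, pending = pending.split(b"\n", 1)
        commands.append(cmd.decode())
    # ...and only then "post" the next receive (top of the loop).

print(commands)  # ['commandA', 'part2']
```

With only one receive outstanding at a time, the out-of-order-callback issue above simply cannot arise, at the cost of a small window where no receive is posted.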
>well, either way, I appreciate your help. Sorry my posts are so long but I don't know any other way to get out my thoughts/questions. I realize what I really should do is go get a good book specifically on async network programming for .net.
I don't know if there is one. You may find Winsock books useful, but even there I'm not aware of any that are considered _great_. "Network Programming for Microsoft Windows, Second Edition" by Ohlund and Jones, as well as "Windows Sockets Network Programming (Addison-Wesley Advanced Windows Series)" by Quinn and Shute, may be useful to you, but they aren't specific to .NET. In fact, the topics that would be most applicable to .NET would be those regarding I/O completion ports, and frankly, because .NET simplifies the use of I/O completion ports so much (in a good way), I'm not convinced that learning all the intricacies of doing them in Winsock is necessary.

Pete
Jun 8 '07 #6
thanks again Pete. I'm embarrassed to have to ask for clarification on this
as it shows my lack of experience but:

you said: "IMHO, it is better to be data-driven, and to use your data
structures to
maintain the state."

If I'm correct in what I think you mean by 'data structures', then I also think I'm all set. Good to go. By 'data structures', are you referring to whatever custom structures, like classes or structs, you use in code to keep track of what I was calling 'control' data? And if so, is that the same thing you're referring to as 'state' data? For example: a MyStateClass with members for keeping track of connection-specific attributes like what command is being executed, maybe how many parts, or steps, the command has, and a way to mark which steps are done or not done, etc. Maybe even a MyStateClass with a MyCommand class as a member: MyStateClass could have members like the socket and buffers and attributes used by them, and the MyCommand class could have members like TheCommand, NumberOfSteps, LastStepCompleted, NextStepToComplete, etc.
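Those data structures might be sketched like this (a Python analog of the hypothetical MyStateClass/MyCommand classes; all names and the two-step command are illustrative, not an actual implementation):

```python
# Sketch of the data-driven state described above: the conversation's
# position lives in data, so a single receive handler can decide what
# to do next by inspecting the state rather than by which callback ran.

class MyCommand:
    def __init__(self, name, number_of_steps):
        self.name = name
        self.number_of_steps = number_of_steps
        self.last_step_completed = 0

    @property
    def next_step(self):
        return self.last_step_completed + 1

    @property
    def finished(self):
        return self.last_step_completed >= self.number_of_steps

    def complete_step(self):
        self.last_step_completed += 1

class MyStateClass:
    """Per-connection state: socket, receive buffer, command in progress."""
    def __init__(self, sock=None):
        self.socket = sock
        self.receive_buffer = b""
        self.current_command = None

def handle_received(state, payload):
    """Single handler for all inbound data; behavior depends on state."""
    if state.current_command is None:
        state.current_command = MyCommand(payload, number_of_steps=2)
    else:
        state.current_command.complete_step()
    return state.current_command

state = MyStateClass()
handle_received(state, "commandA")           # starts commandA
handle_received(state, "part2 of commandA")  # completes step 1
print(state.current_command.next_step)       # 2
print(state.current_command.finished)        # False
```

The single `handle_received` stands in for the single receive callback: it never needs to know which receive completed, only what the per-connection state says should happen next.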

So if that's what you mean by data structures (custom classes), and if the use of them as I described above is what you mean by being data-driven and using your data structures to maintain the state, then I totally get it now. If not, well, I'll keep trying, and thank you anyway! You have been a great help, and even if I'm not getting this one I *have* learned a good deal from you (on previous posts as well).
Jun 9 '07 #7
On Sat, 09 Jun 2007 07:30:15 -0700, David <no****@nospam.com> wrote:
>[...]
>MyStateClass could have members like the socket and buffers and attributes used by them and the MyCommand class could have members like TheCommand, NumberOfSteps, LastStepCompleted, NextStepToComplete, etc..
That all sounds very reasonable to me, and as far as I can tell from your description is in fact what I was describing.
>[...]
>If not, well, I'll keep trying and thank you anyway! you have been a great help and even if I'm not getting this one I *have* learned a good deal from you (on previous posts as well).
You're most welcome. :)
You're most welcome. :)

Pete
Jun 9 '07 #8
