Bytes IT Community

implementing a time-bound wait on a socket (TCP)

Problem:

I send a lot of requests to an application (running on a different
box, of course), and I receive responses back from it.
Below, socket corresponds to: Socket socket = new
Socket(AddressFamily.InterNetwork, SocketType.Stream,
ProtocolType.Tcp);

in my Wait method I have the following:

public void Wait(uint milliseconds)
{
    while (/* remain idle for the passed number of "milliseconds" */)
    {
        if (socket.Poll(1, SelectMode.SelectRead))
        {
            // Reads data off the buffer and invokes the client's
            // registered callbacks.
            ProcessSocket(socket);
        }
        else
        {
            return; // returns after Poll has expired
        }
    }
}

Hence when a client calls Wait, he gets all the callbacks, and then
Wait blocks for the given number of milliseconds before assuming that
nothing else is coming from the wire. I have trouble implementing the
last point.

Oct 20 '08 #1
21 Replies


On Mon, 20 Oct 2008 12:10:51 -0700, puzzlecracker <ir*********@gmail.com>
wrote:
[...]
Hence when a client calls Wait, he gets all the callbacks, and then
Wait blocks for the given number of milliseconds before assuming that
nothing else is coming from the wire. I have trouble implementing the
last point.
Don't call Socket.Poll(). Ever. It's just not the right way to implement
things. Problems include that you are unnecessarily using the CPU and the
Poll() method isn't reliable (the Socket may become unreasonable between
the time Poll() says it's readable and the time you get around to actually
trying to read it).

Correct approaches to the problem involve keeping a separate timer (for
example, using the System.Threading.Timer class) that performs some
appropriate action. If you are looking for a "wait since last read any
data", then you need to reset the timer each time you actually
successfully read data. An appropriate action might be to close the
socket, but without knowing your specific goals, it's impossible to say
for sure.
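A minimal sketch of that idea, assuming a one-shot System.Threading.Timer (the IdleWatchdog name and its onIdle callback are illustrative, not part of any API):

```csharp
using System;
using System.Threading;

// A sketch of the timer-based idle timeout described above: fires onIdle
// if Reset() is not called within timeoutMs.  Reset it after every
// successful read to push the deadline back.
class IdleWatchdog : IDisposable
{
    private readonly Timer _timer;
    private readonly int _timeoutMs;

    public IdleWatchdog(int timeoutMs, Action onIdle)
    {
        _timeoutMs = timeoutMs;
        // One-shot: fires once after timeoutMs unless Reset() re-arms it.
        _timer = new Timer(delegate { onIdle(); }, null, timeoutMs, Timeout.Infinite);
    }

    // Call this after every successful read.
    public void Reset()
    {
        _timer.Change(_timeoutMs, Timeout.Infinite);
    }

    public void Dispose()
    {
        _timer.Dispose();
    }
}
```

The appropriate action (closing the socket, raising an event, whatever fits the goal) goes in the onIdle delegate.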

Also, that assumes that "assuming that nothing else is coming from the
wire" is a valid approach. If you have no control over the application
protocol, maybe that's correct. But generally speaking, it's a poor way
to deal with network i/o. Communications should have a well-defined end,
so that timeouts aren't required at all.

Pete
Oct 20 '08 #2

Don't call Socket.Poll(). Ever. It's just not the right way to implement
things. Problems include that you are unnecessarily using the CPU and the
Poll() method isn't reliable (the Socket may become unreasonable between
the time Poll() says it's readable and the time you get around to actually
trying to read it).
Well, I inherited the project from the previous developer, who used Poll,
among other things. The ongoing design on the client side is not in
the best state, and I am trying to sort it out.
Correct approaches to the problem involve keeping a separate timer (for
example, using the System.Threading.Timer class) that performs some
appropriate action. If you are looking for a "wait since last read any
data", then you need to reset the timer each time you actually
successfully read data. An appropriate action might be to close the
socket, but without knowing your specific goals, it's impossible to say
for sure.
The simplistic version of the protocol: you send a request, via the
socket, to the remote application, and it "immediately" replies with a
response in a byte buffer (there are no intentional delays on the
application side). The buffer contains various headers, sub-headers,
and the message. After parsing it, I forward the response to clients via
subscribed events. So the client sends many requests, then waits for
the responses. I want the client to be able to specify how much time to
allocate for the [post] responses before sending an additional set of
requests or closing the connection.
Also, that assumes that "assuming that nothing else is coming from the
wire" is a valid approach. If you have no control over the application
protocol, maybe that's correct. But generally speaking, it's a poor way
to deal with network i/o. Communications should have a well-defined end,
so that timeouts aren't required at all.
Any good references you can recommend for designing this sort of
application?

Thanks...

Oct 20 '08 #3

On Mon, 20 Oct 2008 13:43:46 -0700, puzzlecracker <ir*********@gmail.com>
wrote:
>Don't call Socket.Poll(). Ever. [...]

Well, I inherited the project from the previous developer, who used Poll,
among other things. The ongoing design on the client side is not in
the best state, and I am trying to sort it out.
Whether you wrote it or you inherited it, calling Poll() is still bad.
You don't need to defend the code to me; I'm just sharing what I know
about network programming.
The simplistic version of the protocol: you send a request, via the
socket, to the remote application, and it "immediately" replies with a
response in a byte buffer (there are no intentional delays on the
application side). The buffer contains various headers, sub-headers,
and the message. After parsing it, I forward the response to clients via
subscribed events. So the client sends many requests, then waits for
the responses. I want the client to be able to specify how much time to
allocate for the [post] responses before sending an additional set of
requests or closing the connection.
I am skeptical of that design. Use it at your own risk.
Any good references you may recommend to design this sort of
applications?
The Winsock FAQ is actually a good place to start, even though it's not at
all specific to .NET. Most of the network programming issues are the same
regardless of what API you're using. The other must-read is the "Windows
Sockets Lame List" (sorry, the title is much funnier if you're familiar
with the old Seattle-based TV show "Almost Live").

The FAQ: http://tangentsoft.net/wskfaq/
The Lame List: http://tangentsoft.net/wskfaq/articles/lame-list.html

In the Lame List, see in particular the comments regarding calling recv()
with MSG_PEEK and the use of ioctlsocket() with FIONREAD. Calling Poll() is
basically the same thing, with the same problems (note that the Lame List
doesn't describe _all_ the problems, just what the author felt was the
most serious one(s)).

Pete
Oct 20 '08 #4

On Oct 20, 6:31 pm, "Peter Duniho" <NpOeStPe...@nnowslpianmk.com>
wrote:
On Mon, 20 Oct 2008 13:43:46 -0700, puzzlecracker <ironsel2...@gmail.com>
wrote:
Don't call Socket.Poll(). Ever. [...]
Well, I inherited the project from the previous developer, who used Poll,
among other things. The ongoing design on the client side is not in
the best state, and I am trying to sort it out.

Whether you wrote it or you inherited it, calling Poll() is still bad.
You don't need to defend the code to me; I'm just sharing what I know
about network programming.
The simplistic version of the protocol: you send a request, via the
socket, to the remote application, and it "immediately" replies with
a response in a byte buffer (there are no intentional delays on the
application side). The buffer contains various headers, sub-headers,
and the message. After parsing it, I forward the response to clients via
subscribed events. So the client sends many requests, then waits for
the responses. I want the client to be able to specify how much time to
allocate for the [post] responses before sending an additional set of
requests or closing the connection.

I am skeptical of that design. Use it at your own risk.
I would like to steer clear of Winsock and use the native C# stuff.

Why would I have these problems -- "Problems include that you are
unnecessarily using the CPU and the Poll() method isn't reliable (the
Socket may become unreasonable between the time Poll() says it's
readable and the time you get around to actually trying to read
it)" -- if I begin reading right after I call Poll? I just don't see
why/how the socket can become unreasonable.

However, if this is the case, why not use Socket.Receive with
SocketFlags.Peek and not have to resort to Winsock altogether?

thanks

Oct 21 '08 #5

On Tue, 21 Oct 2008 13:40:25 -0700, puzzlecracker <ir*********@gmail.com>
wrote:
I would like to steer clear of Winsock and use the native C# stuff.
I'm not suggesting you do otherwise. But, the .NET (not C#) stuff is
built on top of Winsock, and all the same caveats that apply to Winsock
apply to the .NET stuff too. So if you want to learn how to write .NET
network code, you should start by becoming familiar with Winsock, at least
to some degree.

The caveats that apply to Winsock _should_ be documented in the .NET API,
but they aren't. This is unfortunate, but it simply means that people
writing code for .NET Sockets need to be familiar with caveats that apply
to Winsock, and to networking in general.
Why would I have these problems -- "Problems include that you are
unnecessarily using the CPU and the Poll() method isn't reliable (the
Socket may become unreasonable between the time Poll() says it's
readable and the time you get around to actually trying to read
it)" -- if I begin reading right after I call Poll? I just don't see
why/how the socket can become unreasonable.
First, let me correct my previous statement: I meant to write
"unreadable", not "unreasonable". Sorry about the typo.

Second, one reason the readability state can change is that the network
driver is not required to hang on to data that it's buffered. As long as
it hasn't already acknowledged the packet, it's allowed to toss the data
away, and it might not acknowledge the packet until you retrieve the data.

Basically, there is no documented guarantee that the state returned by
Poll() is anything other than a momentary snapshot of the current state,
and so if you write code that assumes there _is_ such a guarantee, your
code is broken.
However, if this is the case, why not use Socket.Receive with
SocketFlags.Peek and not have to resort to Winsock altogether?
I never said you should resort to Winsock. I did say you should not call
Poll(), and for similar reasons you should not use the SocketFlags.Peek
flag. For more details on why this is, you can read the relevant
commentary in the Winsock resources. Those details should be included in
the .NET documentation, but the fact that they aren't doesn't mean that
they don't apply; it just means you need to seek them elsewhere.

Pete
Oct 22 '08 #6

On Oct 21, 8:09 pm, "Peter Duniho" <NpOeStPe...@nnowslpianmk.com>
wrote:
On Tue, 21 Oct 2008 13:40:25 -0700, puzzlecracker <ironsel2...@gmail.com>
wrote:
I would like to steer clear of Winsock and use the native C# stuff.

I'm not suggesting you do otherwise. But, the .NET (not C#) stuff is
built on top of Winsock, and all the same caveats that apply to Winsock
apply to the .NET stuff too. So if you want to learn how to write .NET
network code, you should start by becoming familiar with Winsock, at least
to some degree.

The caveats that apply to Winsock _should_ be documented in the .NET API,
but they aren't. This is unfortunate, but it simply means that people
writing code for .NET Sockets need to be familiar with caveats that apply
to Winsock, and to networking in general.
Why would I have these problems -- "Problems include that you are
unnecessarily using the CPU and the Poll() method isn't reliable (the
Socket may become unreasonable between the time Poll() says it's
readable and the time you get around to actually trying to read
it)" -- if I begin reading right after I call Poll? I just don't see
why/how the socket can become unreasonable.

First, let me correct my previous statement: I meant to write
"unreadable", not "unreasonable". Sorry about the typo.

Second, one reason the readability state can change is that the network
driver is not required to hang on to data that it's buffered. As long as
it hasn't already acknowledged the packet, it's allowed to toss the data
away, and it might not acknowledge the packet until you retrieve the data.
I see; in this case, the reliable (or rather, more reliable) way to get
data from the socket is to call Receive. This poses an issue, as I
described before, whereby I want to be able to calculate the timeout
from the last packet (or byte[65535] buffer) received and the user's
max inactive wait time. In other words, I would have to start a timer
simultaneously with Socket.Receive. I can already envision the
vagaries of the code.

Then, what's the better alternative for more reliable handling of
network data: socket.Poll, or Receive coupled with a Timer?
Thanks...
Oct 22 '08 #7

On Tue, 21 Oct 2008 17:43:13 -0700, puzzlecracker <ir*********@gmail.com>
wrote:
I see; in this case, the reliable (or rather, more reliable) way to get
data from the socket is to call Receive. This poses an issue, as I
described before, whereby I want to be able to calculate the timeout
from the last packet (or byte[65535] buffer) received and the user's
max inactive wait time. In other words, I would have to start a timer
simultaneously with Socket.Receive. I can already envision the
vagaries of the code.
Well, as I mentioned before, the basic approach you're trying to implement
is really not the best way to do this in the first place. Having a
timeout as part of your communications protocol is almost certainly
unnecessary and so introduces unneeded complexity into your code.

But, if you insist, using a Timer is really not that big of a deal. The
worst issue is the race condition that exists between the Timer and a call
to Receive(). You'll need to decide what to do if the call to Receive()
completes successfully just as your Timer elapses and is handled. I'd
recommend allowing that to be considered a true success and ignoring the
Timer expiration, but one way or the other you need to address that.

Other than that, the implementation can be reasonably simple.
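One way to handle that, along the lines just described: funnel both completion paths through a single first-wins gate, so the Receive completion and the timer callback race to claim the result and the loser does nothing (a sketch; the names here are invented):

```csharp
using System;

// First-wins resolution of the Receive-vs-timer race: both completion
// paths call TryComplete(), and exactly one caller gets true.
class CompletionGate
{
    private readonly object _lock = new object();
    private bool _completed;

    // Returns true for the first caller only; every later call gets false.
    public bool TryComplete()
    {
        lock (_lock)
        {
            if (_completed)
                return false;
            _completed = true;
            return true;
        }
    }
}
```

The receive path treats a winning TryComplete() as a true success; the timer callback treats a winning TryComplete() as a timeout (and might close the socket). A losing call simply returns without acting.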
Then, what's the better alternative for more reliable handling of
network data: socket.Poll, or Receive coupled with a Timer?
The most reliable handling of network data processing would not involve a
timeout on the connection at all. But, if you feel you must have the
timeout, there is no question: a timer is a much better approach than
using the Poll() method.

Pete
Oct 22 '08 #8

On Tue, 21 Oct 2008 23:34:12 -0700, ozbear <oz****@bigpond.com> wrote:
[...]
>The most reliable handling of network data processing would not involve a
timeout on the connection at all.
<snip>
I don't understand why anyone would say that.
Because it's true.
We live in a world of
timeouts, all the way from watchdog timers in operating systems
to alarm clocks next to our beds. In a /perfect/ world with /perfect/
networks what you say might be true, but we don't live in one. Packets
get lost, routers malfunction, and network traffic gets delayed if
rerouted through congested links. All of these things contribute to an
environment where some timeout logic is required in all but the most
naive applications. That is why we have keep-alive probes built into
the very logic of the TCP/IP protocol.
First, keep-alive in TCP has little to do with time-outs (TCP/IP itself,
which is not the same as TCP, doesn't have an inherent keep-alive feature,
nor would it have any reason to). The fact that the default interval is
two hours should be proof enough of that (who wants to wait two hours for
a time-out?).

Secondly, I'm not talking about eschewing timeouts altogether. I'm
talking about implementing a timeout as an integral part of an application
protocol. If you've been following this thread, you should understand
that the timeout being used here is part of the protocol, not part of
error detection and recovery.

The application protocol should have a better-defined demarcation of
end-of-transmission than simply waiting some arbitrary period of time and
calling that good.

Pete
Oct 22 '08 #9

>Secondly, I'm not talking about eschewing timeouts altogether. I'm
talking about implementing a timeout as an integral part of an application
protocol. If you've been following this thread, you should understand
that the timeout being used here is part of the protocol, not part of
error detection and recovery.

The application protocol should have a better-defined demarcation of
end-of-transmission than simply waiting some arbitrary period of time and
calling that good.

Some application domains do require timeouts. Take market data
servers, where information is pushed to a client at indeterminate
frequencies, among other domains. I wonder if there is a common
approach to this architecture... there has to be one....
Oct 22 '08 #10

On Wed, 22 Oct 2008 06:22:11 -0700, puzzlecracker <ir*********@gmail.com>
wrote:
Some application domains do require timeouts. Take market data
servers, where information is pushed to a client at indeterminate
frequencies, among other domains.
Why does that require timeouts _as part of the application protocol_?

Pete
Oct 22 '08 #11

On Oct 22, 2:09 pm, "Peter Duniho" <NpOeStPe...@nnowslpianmk.com>
wrote:
On Wed, 22 Oct 2008 06:22:11 -0700, puzzlecracker <ironsel2...@gmail.com>
wrote:
Some application domains do require timeouts. Take market data
servers, where information is pushed to a client at indeterminate
frequencies, among other domains.

Why does that require timeouts _as part of the application protocol_?

Pete

It doesn't require a timeout per se -- Receive would be sufficient.
However, timeouts handle the case where the client would like to
unsubscribe if data hasn't arrived within a specified timeout window.

On a side note, you've mentioned the race condition between the timeout
and Receive. Is there a way to avoid the race without making the
assumption you mentioned?

thanks
Oct 22 '08 #12

On Wed, 22 Oct 2008 14:20:09 -0700, puzzlecracker <ir*********@gmail.com>
wrote:
On Oct 22, 2:09 pm, "Peter Duniho" <NpOeStPe...@nnowslpianmk.com>
wrote:
>On Wed, 22 Oct 2008 06:22:11 -0700, puzzlecracker
<ironsel2...@gmail.com>
wrote:
Some application domains do require timeouts. Let's say market data
servers, where information is pushed to a client at indeterminate time
frequencies, among other domains.

Why does that require timeouts _as part of the application protocol_?

Pete


It doesn't require a timeout per se -- Receive would be sufficient.
However, timeouts handle the case where the client would like to
unsubscribe if data hasn't arrived within a specified timeout window.
Why would the client need to do that? Can't you just provide the user
with a way to explicitly disconnect? For example, a button they can push
if they don't want to remain connected? What downside is there to
remaining connected?

In any case, yes...it's true that using a timeout since last
communications as a guide for when to disconnect is sometimes used. But
it's done as a single-sided feature only based on specific needs of one
endpoint or the other, not as part of the application protocol itself.
Your previous messages seemed to imply you were doing the latter, not the
former.
On a side note, you've mentioned the race condition between timeout
and Receive. Is there a way to avoid the race without making a
mentioned assumption?
No. The race condition is inherent in having two different code paths
both using the same resource.

Pete
Oct 22 '08 #13

Why would the client need to do that? Can't you just provide the user
with a way to explicitly disconnect? For example, a button they can push
if they don't want to remain connected? What downside is there to
remaining connected?
In any case, yes...it's true that using a timeout since last
communications as a guide for when to disconnect is sometimes used. But
it's done as a single-sided feature only based on specific needs of one
endpoint or the other, not as part of the application protocol itself.
Your previous messages seemed to imply you were doing the latter, not the
former.
I provide an API to the client, so that he doesn't have to use the GUI
application. I want the client to be able to specify that he wants to
stop receiving the server's callbacks after some inactivity period....

Here is a pseudo-client API example:

APIConnector client = new APIConnector(); // perhaps this should be
                                          // implemented as a factory?
client.OnLogin += /* delegate */;
client./*other events*/ += /* other delegates */;

client.Login();
client.DoAction1();
client.DoAction2();

/* As the client makes these calls, he receives callbacks as they
arrive from the server, asynchronously. */

Now, after the client makes his last call, he wants to wait some time
and then disconnect, hence:

uint timeoutInMilliseconds = /* ... */;
client.WaitUntilInactive(timeoutInMilliseconds);
client.Destroy();

No. The race condition is inherent in having two different code paths
both using the same resource.
That's where Monitor comes in.
Btw, I want to handle callbacks asynchronously, simply by calling
Socket.BeginReceive after the socket connects to the server:

byte[] buffer;

socket.BeginReceive(buffer, 0, buffer.Length, 0,
    new AsyncCallback(this.ReadCallback), socket);

and in ReadCallback, I will process the events and dispatch to the
delegates.

BTW, will AsyncCallback continuously (assuming no errors on the link)
read data into the buffer and invoke callbacks?

Also, can calls to AsyncCallback starve the main thread? Perhaps I
should change the priority?

Thanks
Oct 23 '08 #14

On Wed, 22 Oct 2008 19:25:56 -0700, puzzlecracker <ir*********@gmail.com>
wrote:
[...]
Now, after the client makes his last call, he wants to wait some time
and then disconnect, hence:

uint timeoutInMilliseconds = /* ... */;
client.WaitUntilInactive(timeoutInMilliseconds);
client.Destroy();
Well, you can use that design if you like. I wouldn't. It creates a
situation where the APIConnector class can become invalid without any
interaction by the client code. It would be better to have the client
code manage the timeout itself, and provide a
Disconnect()/Logout()/whatever method that the client code can call to do
the disconnect. That allows, and even requires, that the client code
itself manage _all_ conditions that might lead to a forced disconnect.

At the very least, I hope that your APIConnector class has an event that
is raised when the APIConnector instance times out.
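Something as simple as this would do (a sketch only; the event name is invented here for illustration):

```csharp
using System;

// Sketch: surface the idle timeout as an event, so the client code is
// told when the connector gave up instead of it silently becoming invalid.
class APIConnector
{
    public event EventHandler InactivityTimeout;

    // The connector's internal timer would call this when the idle
    // period elapses.
    protected virtual void OnInactivityTimeout()
    {
        EventHandler handler = InactivityTimeout;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}
```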
>No. The race condition is inherent in having two different code paths
both using the same resource.

That's where Monitor comes in.
No, it's not. A thread synchronization object like Monitor can be helpful
in managing race conditions, by ensuring that your data structures remain
uncorrupted and coherent, but it cannot _remove_ the race condition. In
this design, you will always have the race condition.
Btw, I want to handle callbacks asynchronously, simply by calling
Socket.BeginReceive after the socket connects to the server:

byte[] buffer;

socket.BeginReceive(buffer, 0, buffer.Length, 0,
    new AsyncCallback(this.ReadCallback), socket);

and in ReadCallback, I will process the events and dispatch to the
delegates.

BTW, will AsyncCallback continuously (assuming no errors on the link)
read data into the buffer and invoke callbacks?
No. You have to call BeginReceive() again each time the previous
BeginReceive() completes.
Also, can calls to AsyncCallback starve the main thread? perhaps I
should change the priority?
Calls to your callback method are not executed on the main thread. And
you definitely should not change the priority. Modifying a thread's
priority is nearly always the wrong way to deal with problems, even
starvation ones.

In this case, you should have no starvation issues, as long as you code
your callback methods carefully. They should do a minimum of work, just
so that they don't hog the IOCP thread pool, but generally speaking the
framework will manage the IOCP thread pool well enough to ensure that if
i/o completes, it will get handled in a reasonably timely manner.

Pete
Oct 23 '08 #15

Well, you can use that design if you like. I wouldn't. It creates a
situation where the APIConnector class can become invalid without any
interaction by the client code. It would be better to have the client
code manage the timeout itself, and provide a
Disconnect()/Logout()/whatever method that the client code can call to do
the disconnect. That allows, and even requires, that the client code
itself manage _all_ conditions that might lead to a forced disconnect.

At the very least, I hope that your APIConnector class has an event that
is raised when the APIConnector instance times out.
The client cannot manage timeouts without the help of APIConnector,
since it doesn't know when packets arrive.

No. You have to call BeginReceive() again each time the previous
BeginReceive() completes.
Then my option is to call BeginReceive again inside ReadCallback,
recursively. I feel this could cause a problem, but I'm not sure. Or is
there a different way I can implement that sort of design?

Also, can calls to AsyncCallback starve the main thread? Perhaps I
should change the priority?

Is it better to let the client itself ask for callbacks? In other words,
should the client make calls to the server, then call Wait() to get
all the callbacks, and proceed with an eventual call to
Disconnect/Destroy?... Then again, Wait would have to be constrained by
a timeout -- back to square one.

Thanks
Oct 23 '08 #16

On Wed, 22 Oct 2008 20:28:20 -0700, puzzlecracker <ir*********@gmail.com>
wrote:
>Well, you can use that design if you like. I wouldn't. It creates a
situation where the APIConnector class can become invalid without any
interaction by the client code. It would be better to have the client
code manage the timeout itself, and provide a
Disconnect()/Logout()/whatever method that the client code can call to
do
the disconnect. That allows, and even requires, that the client code
itself manage _all_ conditions that might lead to a forced disconnect.

At the very least, I hope that your APIConnector class has an event that
is raised when the APIConnector instance times out.

The client cannot manage timeouts without the help of APIConnector,
since it doesn't know when packets arrive.
But the client shouldn't be executing a timeout on the basis of packets.

Just to be clear: in this thread, you've used the term "packets" even
though you seem to be describing a TCP connection. In reality, at the
Socket level you don't see packets with TCP, you see bytes.

So, when you write "packets", you either mean some arbitrary series of
bytes that have been received in one operation (i.e. what you actually
receive on a TCP socket), or you mean a logical message defined by the
application level protocol (a common enough (mis)-use of the word
"packets").

Now, if you mean the latter, then the client surely does see those, since
your APIConnector is providing those to the client as they arrive. If you
mean the former, then all you can accomplish by implementing a timeout
between these arbitrary series of bytes is to artificially introduce
errors into your network i/o when you otherwise wouldn't have had any.

Since you say the client doesn't know when the "packets" arrive, I can
only assume you're talking of the latter, and IMHO causing an error to
occur when one otherwise wouldn't have is simply not a good design.
>No. You have to call BeginReceive() again each time the previous
BeginReceive() completes.

Then my option is to call BeginReceive again inside ReadCallback,
recursively. I feel this could cause a problem, but I'm not sure. Or is
there a different way I can implement that sort of design?
It's not recursive, unless the operation is able to complete immediately,
which is rare and not a problem at all. Calling BeginReceive() from your
callback method is in fact the standard way to use BeginReceive() and the
other asynchronous methods.
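In sketch form (buffer handling and error recovery omitted; the Receiver class and its DataReceived event are illustrative names, not .NET API):

```csharp
using System;
using System.Net.Sockets;

// The standard BeginReceive loop: the completion callback processes the
// data and then posts the next BeginReceive.  This is not recursion in
// the usual case, because the callback runs later on an IOCP thread.
class Receiver
{
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[65536];

    // Stand-in for dispatching parsed messages to subscribed delegates.
    public event Action<int> DataReceived;

    public Receiver(Socket socket)
    {
        _socket = socket;
    }

    public void Start()
    {
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
            new AsyncCallback(ReadCallback), null);
    }

    private void ReadCallback(IAsyncResult ar)
    {
        int read = _socket.EndReceive(ar);
        if (read == 0)
            return; // remote side closed the connection

        Action<int> handler = DataReceived;
        if (handler != null)
            handler(read); // parsing/dispatch would happen here

        // Post the next read; the loop continues until the socket closes.
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
            new AsyncCallback(ReadCallback), null);
    }
}
```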
Also, can calls to AsyncCallback starve the main thread? perhaps I
should change the priority?

Is it better to let the client itself ask for callbacks? In other words,
should the client make calls to the server, then call Wait() to get
all the callbacks, and proceed with an eventual call to
Disconnect/Destroy?... Then again, Wait would have to be constrained by
a timeout -- back to square one.
The whole point of using asynchronous methods is to avoid having to block
any threads. So I wouldn't introduce something to cause a thread to
block. It's perfectly reasonable to have events in your APIConnector
class that are raised when some i/o has completed in the class. You can
manage those events however you like, but IMHO it makes the most sense to
just raise them as you receive data from the server and have enough data
to represent an event.

Pete
Oct 23 '08 #17

"puzzlecracker" <ir*********@gmail.com> wrote in message
news:ee**********************************@s1g2000prg.googlegroups.com...
Problem:

I send a lot of requests to the application (running on a different
box, of course), and I receive back responses from the app .
Below: socket corresponds to Socket socket=new
Socket(AddressFamily.InterNetwork, SocketType.Stream,
ProtocolType.Tcp);

in my Wait method I have the following:

public void Wait(uint milliseconds)
{
    while (/* remain idle for the passed number of "milliseconds" */)
    {
        if (socket.Poll(1, SelectMode.SelectRead))
        {
            // Reads data off the buffer and invokes the client's
            // registered callbacks.
            ProcessSocket(socket);
        }
        else
        {
            return; // returns after Poll has expired
        }
    }
}

Hence when a client calls Wait, he gets all the callbacks, and then
Wait blocks for the given number of milliseconds before assuming that
nothing else is coming from the wire. I have trouble implementing the
last point.
Take a look at Socket.Select. Set up the socket you want to read from in
the checkRead list and call it with the timeout you want. The call will
return when data is available or there is a timeout.

I think that is the building block you need.
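A sketch of that building block, assuming the single-socket case from this thread (SelectWaiter and BytesRead are invented names; note that Select's timeout argument is in microseconds):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;

// Wait() rebuilt on Socket.Select: block until the socket is readable or
// the timeout expires, and keep reading as long as data keeps arriving.
class SelectWaiter
{
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[65536];

    // Stand-in for the client callbacks described in this thread.
    public event Action<int> BytesRead;

    public SelectWaiter(Socket socket)
    {
        _socket = socket;
    }

    public void Wait(uint milliseconds)
    {
        while (true)
        {
            List<Socket> checkRead = new List<Socket> { _socket };
            // Select's timeout is in microseconds; on return, checkRead
            // holds only the sockets that are actually readable.
            Socket.Select(checkRead, null, null, (int)(milliseconds * 1000));
            if (checkRead.Count == 0)
                return; // timed out: assume nothing else is coming

            int read = _socket.Receive(_buffer);
            if (read == 0)
                return; // remote side closed the connection

            Action<int> handler = BytesRead;
            if (handler != null)
                handler(read);
        }
    }
}
```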

Regards,
Steve
Oct 24 '08 #18

On Oct 23, 9:43 pm, "Steve" <nospam_steve...@comcast.net> wrote:
"puzzlecracker" <ironsel2...@gmail.com> wrote in message

news:ee**********************************@s1g2000prg.googlegroups.com...
Problem:
I send a lot of requests to an application (running on a different
box, of course), and I receive responses back from it.
Below, socket corresponds to: Socket socket = new
Socket(AddressFamily.InterNetwork, SocketType.Stream,
ProtocolType.Tcp);
in my Wait method I have the following:
public void Wait(uint milliseconds)
{
    while (/* remain idle for the passed number of "milliseconds" */)
    {
        if (socket.Poll(1, SelectMode.SelectRead))
        {
            // Reads data off the buffer and invokes the client's
            // registered callbacks.
            ProcessSocket(socket);
        }
        else
        {
            return; // returns after Poll has expired
        }
    }
}
Hence when a client calls Wait, he gets all the callbacks, and then
Wait blocks for the given number of milliseconds before assuming that
nothing else is coming from the wire. I have trouble implementing the
last point.

Take a look at Socket.Select. Set up the socket you want to read from in
the checkRead list and call it with the timeout you want. The call will
return when data is available or there is a timeout.
Steve, I am quite aware of Select; however, I think it will have
issues similar to the Poll method. In addition, I only anticipate one
socket connection, to the main (remote) application. What can Select
do differently than Poll, other than taking a list of connections
rather than just one?

Thanks
Oct 24 '08 #19

Well, you can use that design if you like. I wouldn't. It creates a
situation where the APIConnector class can become invalid without any
interaction by the client code. It would be better to have the client
code manage the timeout itself, and provide a
Disconnect()/Logout()/whatever method that the client code can call to do
the disconnect. That allows, and even requires, that the client code
itself manage _all_ conditions that might lead to a forced disconnect.
At the very least, I hope that your APIConnector class has an event that
is raised when the APIConnector instance times out.
Exactly, we need to define what timeout means for APIConnector. Is it
configurable by the client when he/she creates an instance of the
APIConnector class? My design idea was for the client to spell it out to
the APIConnector: "at this point in time, I want you to tell me when
you have been idle long enough," via a WaitUntilInactive call. Curious what
you think of it.

But the client shouldn't be executing a timeout on the basis of packets.
Just to be clear: in this thread, you've used the term "packets" even
though you seem to be describing a TCP connection. In reality, at the
Socket level you don't see packets with TCP, you see bytes.
Sorry, yes, I was referring to "packet" as TCP-level data, which is just
an array of bytes... umm, I do see packets with Ethereal :)))
Now, if you mean the latter, then the client surely does see those, since
your APIConnector is providing those to the client as they arrive. If you
mean the former, then all you can accomplish by implementing a timeout
between these arbitrary series of bytes is to artificially introduce
errors into your network i/o when you otherwise wouldn't have had any.

Since you say the client doesn't know when the "packets" arrive, I can
only assume you're talking of the latter, and IMHO causing an error to
occur when one otherwise wouldn't have is simply not a good design.
No. You have to call BeginReceive() again each time the previous one completes.
It's not recursive, unless the operation is able to complete immediately,
which is rare and not a problem at all. Calling BeginReceive() from your
callback method is in fact the standard way to use BeginReceive() and the
other asynchronous methods.
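The re-arm pattern described above might be sketched like this (the names `OnReceive` and `ProcessBytes`, and the 4096-byte buffer size, are illustrative, not from the thread):

```csharp
using System;
using System.Net.Sockets;

class Receiver
{
    private readonly byte[] buffer = new byte[4096];

    public void StartReceive(Socket socket)
    {
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None,
                            OnReceive, socket);
    }

    private void OnReceive(IAsyncResult ar)
    {
        Socket socket = (Socket)ar.AsyncState;
        int read = socket.EndReceive(ar);
        if (read == 0)
        {
            socket.Close();          // remote side closed gracefully
            return;
        }
        ProcessBytes(buffer, read);  // hand the bytes to the client callbacks
        StartReceive(socket);        // re-arm: a new async op, not recursion
    }

    private void ProcessBytes(byte[] data, int count) { /* client-specific */ }
}
```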
Thanks

Oct 24 '08 #20

P: n/a
>"puzzlecracker" <ir*********@gmail.com> wrote in message
>news:e6b314fd-4806-4c54-bdf6-f6**********@t54g2000hsg.googlegroups.com...
[snip]
>>
Take a look at Socket.Select. Set up the socket you want to read from in
the checkRead list and call it with the timeout you want. The call will
return when data is available or there is a timeout.

Steve, I am quite aware of Select; however, I think it will have
issues similar to the Poll method. In addition, I only anticipate one
socket connection to the main (remote) application. What can
Select do differently than Poll, other than taking a list of
connections rather than just one?

Thanks
Sorry, on closer examination it appears that Poll is basically Select for
one socket. I've done a fair amount of TCP/IP programming in other languages
using BSD sockets and Winsock, and the standard answer there is to use
select.

The way I have handled timeouts with messages using TCP/IP sockets in a
quasi-realtime environment is to create my own layer on top of the socket
stream.

I divide the stream into messages. Each message is preceded by a header that
includes the length and a numeric command code. This allows the receiver to
identify when an entire message has been received and to deal with the
message based on the command code. One command code I define is "are you
there"; another is "acknowledge". I have a separate thread that
periodically sends an "are you there" message. If it doesn't receive an
"acknowledge" in a reasonable amount of time, it triggers recovery action.

I have found that this recovery code seldom gets executed.
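The sending side of such a framing scheme might be sketched like this (the 4-byte field sizes and the particular command-code values are assumptions for illustration, not Steve's actual layout):

```csharp
using System;
using System.Net.Sockets;

class Framing
{
    // Hypothetical command codes, per the "are you there"/"acknowledge" idea.
    public const int CmdAreYouThere = 1;
    public const int CmdAcknowledge = 2;

    // Assumed header layout: 4-byte payload length, then 4-byte command code.
    public static void SendMessage(Socket socket, int command, byte[] payload)
    {
        byte[] frame = new byte[8 + payload.Length];
        BitConverter.GetBytes(payload.Length).CopyTo(frame, 0);
        BitConverter.GetBytes(command).CopyTo(frame, 4);
        payload.CopyTo(frame, 8);
        socket.Send(frame); // one Send keeps header and payload together
    }
}
```

The receiver then reads until it has the 8 header bytes, extracts the length, and keeps reading until the full payload has arrived; only then does it act on the command code.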

In reading about TCP/IP I have found that it is kind of a two-edged sword.
On the one hand it is really nice in that it creates the abstraction of a
continuous stream of bytes across a network that is very convenient to use.
On the other hand it was set up for systems where delays of a few seconds or
even a few minutes are acceptable when recovering from communication errors. It
makes sense if you think about TCP/IP as being a protocol for sending
files from coast to coast. But when you're trying to send them across a
room and want delivery in less than 100 msec, it makes it difficult.

Regards,
Steve
Oct 26 '08 #21

P: n/a
On Mon, 27 Oct 2008 08:27:15 -0700, puzzlecracker <ir*********@gmail.com>
wrote:
Is it a good idea to use Poll and then check if socket is available.
socket.IsAvailable()??
There is no "IsAvailable()"...I assume you mean Socket.Available.

Not in my opinion. It's a waste to use either. For a blocking socket,
just call Receive(), not Poll(). While Poll() _might_ make sense for a
non-blocking socket, if you want non-blocking semantics in .NET it's
better to use BeginReceive().

In either case, IMHO the MSDN documentation is flat out wrong when it
writes "If you are using a non-blocking Socket, Available is a good way to
determine whether data is queued for reading". As outlined in the Lame
List, checking for available data is pointless. The information can
change between the time you check and the time you try to use it. You
should simply try to read as much data as you're prepared to handle, and
deal with whatever the network stack actually winds up giving you. The
only 100% reliable way to know for sure how much data will be returned by
a call to Receive() is to actually call Receive().
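In other words, a receive loop along these lines (a sketch for a blocking socket; `Process` is a placeholder for application code):

```csharp
byte[] buffer = new byte[4096];
int read;
// Receive() blocks until at least one byte arrives (or the peer closes).
while ((read = socket.Receive(buffer)) > 0)
{
    // 'read' can be anything from 1 to buffer.Length bytes; accumulate
    // and parse rather than assuming message boundaries line up.
    Process(buffer, read);
}
// read == 0 means the remote end closed the connection.
```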

Pete
Oct 27 '08 #22
