Bytes IT Community

Close a blocked socket

I have a synchronous socket application. The client is blocked in
Socket.Receive(...) on one thread, and another thread calls Socket.Close().
This unblocks the blocked thread, but the server still shows the socket as
connected. Any ideas?

Thanks.
Apr 10 '07 #1
14 Replies


Use asynchronous, non-blocking sockets. Using blocking sockets in
situations where connections may fail is asking for trouble.

Michael


Apr 10 '07 #2

On Mon, 09 Apr 2007 18:10:00 -0700, MikeZ
<Mi***@discussions.microsoft.com> wrote:
> I have a sync socket application. The client is blocked with
> Socket.Receive(...) in a thread, another thread calls Socket.Close().
> This unblock the blocked thread. But the socket server is still
> connected. Any idea?
What does "the socket server is still connected" mean? Do you mean that the
server's socket for the connection (the one returned by the accept call
when the client connected) doesn't indicate that the connection has been
closed or reset?

If so, you should probably check your linger options. If you use a zero
timeout when closing a socket, then the connection is simply aborted
without a graceful shutdown and the server won't know that the connection
has been closed. IMHO, it's better to use the Shutdown method first, to
explicitly initiate a graceful shutdown rather than relying on the
behavior of the Close method (since what it does depends on a variety of
other things).
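The thread is about .NET, but the semantics here are plain BSD sockets, so a minimal Python sketch can illustrate why calling Shutdown before Close matters (the socketpair setup below is illustrative scaffolding, not anyone's actual code): after a graceful shutdown, the peer's blocking receive returns 0 bytes, which is exactly the notification the server in the question never sees.

```python
import socket

# Stand-in for a connected client/server pair (illustrative setup only;
# in the real application these are the two ends of a TCP connection).
server, client = socket.socketpair()

# Graceful shutdown: announce "no more data" to the peer, then close.
# This is the moral equivalent of Socket.Shutdown followed by Socket.Close.
client.shutdown(socket.SHUT_WR)
client.close()

# The peer now observes the shutdown: a blocking recv() returns b''
# (0 bytes), the conventional end-of-stream signal.
data = server.recv(1024)
server.close()
```

With a zero-linger abort instead of a shutdown, that 0-byte read never arrives, which is the "server still thinks it's connected" symptom.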

All that said, since the connection could be aborted for other reasons
without the server being notified, you still need logic in the server to
deal with that condition. A graceful shutdown is nice, especially to ensure
that pending data enqueued for sending really gets sent. But there's no way
to guarantee one, so your server needs to be prepared to deal with the case
where it doesn't happen.

Pete
Apr 10 '07 #3

On Mon, 09 Apr 2007 19:50:20 -0700, Michael Rubinstein
<mSPAM_REMOVEr@m®ubinstein.com> wrote:
> Use asynchronous non-blocking sockets. Implementing blocking sockets
> in situations where connections may fail is looking for trouble.
I disagree. There are a variety of reasons to use non-blocking sockets
rather than blocking ones, but the criterion "where connections may fail"
is not one of them (and is meaningless anyway, since any connection can
fail; using that as a reason not to use blocking sockets would mean no one
would ever use blocking sockets for connection-oriented protocols, which is
obviously not the case).

Pete
Apr 10 '07 #4

Pete, you are right. Mike did not mention failed connections.

Michael

Apr 10 '07 #5

Peter,

On the server, the Socket.Connected property is TRUE even after the socket
has been closed by the client.

I implemented a socket pool on the server side, so I need to know each
socket's status and close dead sockets, so that other clients can open new
connections when the pool reaches its maximum size.

The server side uses async sockets and the client uses sync sockets. I
found that when the server is sending data to the client and the client
closes the socket, the server knows the socket is disconnected. But when
the server is not sending any data and the client closes the connection,
the server still thinks the socket is connected.

For now I am using the last send time to manage the socket pool, but it is
not a perfect solution.

Thanks.

Apr 10 '07 #6

On Tue, 10 Apr 2007 05:07:35 -0700, Michael Rubinstein
<mSPAM_REMOVEr@m®ubinstein.com> wrote:
> Pete, you are right. Mike did not mention failed connections.
Well, what I meant was that even if he did mention failed connections,
that's not an argument against blocking sockets.

You may not be in agreement with that opinion. :)
Apr 10 '07 #7

On Tue, 10 Apr 2007 09:48:01 -0700, MikeZ
<Mi***@discussions.microsoft.com> wrote:
> [...]
> The server side is Async socket, and client is Sync socket. I found when
> server is sending data to client, and client close the socket, server
> know the socket is disconnected. When server does not send any data to
> client, and client close the connection, server still think the socket
> is connected.
As I wrote in my previous message, you may want to look at the linger
options for the socket, and/or simply use Shutdown before calling Close on
the socket.

If you just close the socket on the client side without doing a graceful
shutdown of the connection, the server isn't notified and you wind up with
exactly the situation you're talking about.
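A short Python sketch (again using a socketpair as an illustrative stand-in for the TCP connection, not the poster's code) makes this concrete: nothing on the server's socket object changes by itself when the client goes away; the closure only becomes visible once the server actually performs I/O on the socket.

```python
import select
import socket

server, client = socket.socketpair()
client.close()  # client goes away; the server has done no I/O yet

# The server object itself does not "know" anything changed.  Only when
# the server touches the socket does the close show up: select() reports
# the socket readable, and the pending event turns out to be end-of-stream.
readable, _, _ = select.select([server], [], [], 1.0)
closed = bool(readable) and server.recv(1024) == b''
server.close()
```

This is why a property like Connected can keep reporting TRUE: it reflects the state as of the last I/O operation, and a server that never reads or writes never learns anything new.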

Pete
Apr 10 '07 #8

Pete, my opinion is that blocking sockets should be used only when there is
a compelling reason to do so. I must admit, I can't name a single one.
Must be my ignorance <g>. I suspect the popularity of blocking sockets is
due more to the fact that the earlier (eight years ago or so) MSDN examples
used blocking sockets, while the non-blocking Winsock samples were
published later and are less well known.

Michael

Apr 10 '07 #9

On Tue, 10 Apr 2007 12:58:12 -0700, Michael Rubinstein
<mSPAM_REMOVEr@m®ubinstein.com> wrote:
> Pete, my opinion is that blocking sockets should be used only when
> there is a compelling reason for doing so.
I can agree with that. But "compelling reason" is in the eye of the
beholder.
> I must admit, I can't name a single one.
Personally, I see no reason to not use blocking sockets if one is dealing
with a very simple situation (say, peer-to-peer application where you only
ever have one connection). Before .NET, using plain old Winsock, I would
extend this to be a simple situation in which there's no window message
pump, since WSAAsyncSelect is a pretty easy and convenient way to handle
socket i/o without adding a new thread.

In .NET, the async use of Sockets is quite nice and, even more important,
scales very well due to its use of IOCP. But it may be easier for a
programmer to conceptualize his i/o algorithm by dedicating a thread or
two to the socket i/o and using blocking sockets. It is much more
important for the code to be written correctly than for it to be written
using some particular paradigm, and if using blocking sockets advances
this goal of correctness, then it seems to me that's a good reason to use
blocking sockets.
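The dedicated-thread pattern described above can be sketched in a few lines of Python (the names and socketpair setup are illustrative assumptions): one thread sits in a blocking receive loop, and the loop exits naturally when the peer shuts the connection down.

```python
import socket
import threading

def reader(sock, received):
    # Dedicated I/O thread: block in recv() until the peer closes.
    while True:
        chunk = sock.recv(1024)
        if not chunk:            # b'' means the peer shut down gracefully
            break
        received.append(chunk)
    sock.close()

server, client = socket.socketpair()
received = []
t = threading.Thread(target=reader, args=(server, received))
t.start()

client.sendall(b'hello')
client.shutdown(socket.SHUT_WR)  # unblocks the reader cleanly
client.close()
t.join()
```

The appeal is exactly the conceptual simplicity argued for here: the reader is straight-line code with no callbacks, and correctness is easy to see at a glance.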

In the simple scenario I mention above, I certainly see no real advantage
to using the Begin/End pattern over straight blocking sockets.

But my main point is that whatever one thinks about blocking versus
non-blocking, I really don't see how the question of whether a connection
can fail or not comes into it.

Pete
Apr 10 '07 #10

Pete, interesting arguments. I am basically on the same path, except that I
took a different turn - you decided to go blocking, me - non-blocking, and
here is the funny part - for the same reason. I came to the conclusion,
before .NET, that the non-blocking model is easier and should scale better;
you decided the other way.
> But it may be easier for a programmer to conceptualize his i/o algorithm
> by dedicating a thread or two to the socket i/o and using blocking
> sockets.
I am running up to 30 simultaneous connections over the Internet and can
easily scale up to 100 or more if needed. Poor man's IIS.
> In .NET, the async use of Sockets is quite nice and, even more important,
> scales very well due to its use of IOCP.
I am not impressed by the .NET Socket class. I rewrote my servers and
clients in .NET, a complete rewrite from Win32. I don't see any benefits on
the socket side except for the PR part (oh, .NET, good). There is another
thread today, 'BeginAccept Callback problem', that touches on a problem
with asynchronous socket callbacks. Not much of a problem to worry about,
except that under Win32 it did not exist.
> But my main point is that whatever one thinks about blocking versus
> non-blocking, I really don't see how the question of whether a connection
> can fail or not comes into it.
Some connections will fail. If you are better off handling failed
connections (on both sides) using blocking sockets, then good for you. I
have a good connection to the Internet on the server side. However, the
clients use laptops on wireless LANs. They shut down their machines
literally - fold the laptop and walk out of the office. My servers handle
it quite well. If I had chosen the blocking path, I would probably have
ruined the project.

Happy blocking, Michael

Apr 10 '07 #11

On Tue, 10 Apr 2007 16:21:26 -0700, Michael Rubinstein
<mSPAM_REMOVEr@m®ubinstein.com> wrote:
> Pete, interesting arguments. I am basically on the same path, except
> that I took a different turn - you decided to go blocking, me -
> non-blocking
I didn't "decide to go blocking". I'm just pointing out situations in
which I believe blocking is fine, or perhaps even preferable.
> I am running up to 30 simultaneous connections over the Internet and can
> easily scale up to 100 or more if the needed. Poor men's IIS.
That doesn't show scalability. Scalable is being able to handle tens of
thousands or hundreds of thousands of connections.
> I am not impressed by .NET Socket class. I rewrote my servers and
> clients in .NET, complete rewrite from Win32. I don't see any
> benefits on the socket side except for the PR part (oh .NET, good).
I don't see how with only 30 connections you'd see any difference. But if
you are not using IOCP, then as you approach tens of thousands of
connections, .NET Sockets will outperform a straight Winsock
implementation.

If I can find the link, I'll post it. Someone else here in this newsgroup
has used .NET for a truly large system and shown that it scales extremely
well.

In any case, the question isn't whether .NET performs BETTER. It's
whether it performs worse.

If it offers similar performance, but in the programming environment
provided with .NET, then it's better for the person doing .NET
programming. I've seen no indication that .NET performs worse than
Winsock, and because it's easier to take advantage of IOCP under .NET
Sockets than it is to do so under Winsock, most applications will show a
clear performance benefit using .NET Sockets over Winsock, because those
applications wouldn't have been done using IOCP in Winsock (and of course,
of the programmers who do attempt to use IOCP under Winsock, many of them
will get the implementation wrong...again, points for .NET).
> There is another thread today:
> 'BeginAccept Callback problem'. Touches on the problem with asynchronous
> socket callbacks. Not much of a problem to worry about, except that under
> Win32 it did not exist.
Mainly because the Winsock API doesn't use exceptions. The exception that
thread is complaining about is completely harmless, and is no different
from any other informational exceptions that occur in .NET. .NET uses
exceptions to convey information. The exception mentioned in that thread
is "by design" according to the docs. And personally, I like it that when
you call BeginAccept you know that you'll wind up in your callback
whatever happens. Stops me from wondering if that async result just got
discarded or is still hanging out somewhere when I abort an i/o operation.
> Some connections will fail. If you are better of handling failed
> connections (on both sides) using blocking sockets, then good for you.
I'm not saying one is better off. I'm saying that one is neither better off
nor worse off. The question of failed connections is irrelevant to the
question of blocking vs. non-blocking.
> I have good connection to the Internet on the server side. However,
> clients use laptops on wireless lans. They shutdown their machines
> literally - fold the laptop and walk out of the office. My servers
> handle it quite well. If I would choose the blocking path, I would
> probably ruin the project.
There's absolutely no reason that using blocking sockets should "ruin the
project" in the usage scenario you describe. You can handle
disconnections just as easily with blocking sockets as with non-blocking.

Pete
Apr 11 '07 #12

On Tue, 10 Apr 2007 19:05:59 -0700, Peter Duniho
<Np*********@nnowslpianmk.com> wrote:
> [...]
> If I can find the link, I'll post it. Someone else here in this
> newsgroup has used .NET for a truly large system and shown that it
> scales extremely well.
FYI, here's the article I mentioned (by Chris Mullins):

http://www.coversant.net/Coversant/B...0/Default.aspx
Apr 11 '07 #13

Pete, I appreciate your taking the time to explain your point.
> Mainly because the Winsock API doesn't use exceptions. The exception that
> thread is complaining about is completely harmless, and is no different
> from any other informational exceptions that occur in .NET. .NET uses
> exceptions to convey information. The exception mentioned in that thread
> is "by design" according to the docs. And personally, I like it that when
> you call BeginAccept you know that you'll wind up in your callback
> whatever happens. Stops me from wondering if that async result just got
> discarded or is still hanging out somewhere when I abort an i/o operation.
Here I disagree with you completely. To me, a callback that contains a
reference to a disposed object is an indication of a flawed design. Sure,
the callback should happen no matter what, but under no circumstances
should it contain a reference to a disposed object. Not by design. In my
eyes it is a serious flaw - there should be a mechanism by which the socket
would report that it is not functional and should be disposed. Instead the
system discards the socket, and the program code finds out about it the
hard way. The fact that I can work around it, as everybody else does, is
not a good excuse.
An async socket under Win32 would always send a message to the registered
window, so the program code could respond accordingly, close the socket
upon FD_CLOSE, and analyze the wParam to determine whether the connection
was closed gracefully or was interrupted. Under .NET Sockets, the only way
to determine on the server side that the client socket disconnected
gracefully is when EndRead() returns 0 bytes. It works, but it is an odd
approach. If the client disconnected ungracefully, then the program code
'finds out' when BeginSend() fails or, less often, when EndReceive()
throws an exception. It does not happen too often; however, under Win32
these situations did not occur at all. The program code would become
'aware' of the disconnect and never try an action destined to fail. With
30 connections it is not a big deal, but with a larger number of
connections it could become a problem. Exceptions take much longer to
process than regular code.

Michael
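The two cases distinguished above map directly onto plain socket semantics, and a Python sketch over loopback TCP can show both (the tcp_pair helper is hypothetical scaffolding, not anyone's real code): a graceful disconnect surfaces as a 0-byte read, while an abortive close with a zero linger timeout surfaces as a connection-reset error.

```python
import socket
import struct

def tcp_pair():
    # Hypothetical helper: build a connected TCP pair over loopback.
    listener = socket.socket()
    listener.bind(('127.0.0.1', 0))
    listener.listen(1)
    a = socket.socket()
    a.connect(listener.getsockname())
    b, _ = listener.accept()
    listener.close()
    return a, b

# Case 1 - graceful disconnect: the read side sees 0 bytes
# (the analogue of EndRead() returning 0).
client, server = tcp_pair()
client.shutdown(socket.SHUT_WR)
client.close()
graceful = server.recv(1024) == b''
server.close()

# Case 2 - abortive close: SO_LINGER with a zero timeout makes close()
# send an RST, and the read side fails with a reset instead of EOF
# (the analogue of EndReceive() throwing an exception).
client, server = tcp_pair()
client.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                  struct.pack('ii', 1, 0))
client.close()
try:
    server.recv(1024)
    aborted = False
except ConnectionResetError:
    aborted = True
server.close()
```

Either way, the disconnect only becomes observable through an I/O attempt, which is the underlying reason both camps in this thread still need read-driven disconnect detection.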

Apr 11 '07 #14

Peter, thanks for the link, quite valuable. I handle connections pretty
much the same way as described, only on a much smaller scale (and budget <g>).

Cheers, Michael

Apr 11 '07 #15
