Bytes | Software Development & Data Engineering Community
Problem determining when a remote host closes a socket, using TcpClient

I am using the TcpClient to connect to a web site. I then open a
NetworkStream and read the contents that are being sent back. The
problem is that I have no idea when the remote host is finished
sending data. It is sending a web page but it doesn't use the
Content-Length header to indicate the size.

While I can use the DataAvailable property, it is well known that you
cannot rely on it to tell you that no more data will ever arrive; it
only reports whether data is currently buffered and ready to read.

What I really want is to know when the host has closed the underlying
socket. I have a loop where I continually read from the host while
DataAvailable is true. At some point, I was hoping, the remote host
would close the socket and I would catch an exception indicating a
closed socket. But my try...catch never gets an exception and I remain
in the loop forever.
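For reference, the read-until-close loop described above can be sketched as follows. On a graceful close by the remote host, a blocking read returns zero bytes (EOF) rather than raising an exception, which would explain why the try...catch never fires. The sketch below illustrates this with raw Python sockets and a throwaway local server; the server, port, and response text are stand-ins for illustration, not part of the original post:

```python
# Demonstrates TCP close semantics: when the peer closes the connection
# gracefully, a blocking read returns zero bytes -- it does NOT raise.
import socket
import threading

def serve_once(listener):
    # Accept one connection, send a short response, then close.
    conn, _ = listener.accept()
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
    conn.close()  # the "remote host" closes the socket after sending

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # OS picks a free port
listener.listen(1)
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
chunks = []
while True:
    data = client.recv(4096)  # blocks until data arrives or peer closes
    if not data:              # zero-length read == remote host closed
        break
    chunks.append(data)
client.close()

response = b"".join(chunks)
print(response)
```

The same convention applies to a .NET NetworkStream: a Read call that returns 0 signals that the remote end has closed the connection, so checking the return value of Read, rather than waiting for an exception, is the usual way to end the loop.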

I can't be 100% certain that the remote host is closing the
connection, but if I have read all of the data, why wouldn't it?

How can I get my code to capture an exception indicating a closed
socket? Maybe TcpClient is the wrong way to go?

Thanks,
Johann
Nov 16 '05 #1

This discussion thread is closed; replies have been disabled.
