Blocking, end of stream, async read/write: do they cause delay?

Will TcpClient.GetStream().Read()/ReadByte() block until at least one byte
of data can be read?

In a client/server application, what does "end of stream" or "no more data
available" mean? The client might send data only once every few seconds or
minutes. Is there an "end" at all?

In a client/server application, if the server side calls BeginRead() again
inside the EndRead() callback to create an endless loop that receives
messages from the client, is this a better approach than the "one thread per
client" approach?

I understand that an async read/write puts the job into a queue and later
uses the system thread pool to execute it. The question is: when will it be
executed? Does it wait only until the hardware is available, or also until a
waiting thread is available? If there are too many clients, will this async
approach add so much delay that a client cannot get its response from the
server in a timely manner? And can that be avoided with the "one thread per
client" model?

Thanks!
Ryan
Jun 27 '08 #1
On Mon, 19 May 2008 20:58:31 -0700, Ryan Liu <rl**@powercati.com> wrote:
> Will TcpClient.GetStream().Read()/ReadByte() block until at least one byte
> of data can be read?
Assuming the socket is in blocking mode, yes. Otherwise, you could get an
exception equivalent to the WSAEWOULDBLOCK error.
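
For context, here is a minimal sketch (not from the original posts) of that difference; the host name and port are placeholders. In blocking mode the call simply waits, while on a socket switched to non-blocking mode an equivalent receive fails immediately with an error corresponding to WSAEWOULDBLOCK:

using System;
using System.Net.Sockets;

class BlockingModeDemo
{
    static void Main()
    {
        // Placeholder endpoint, for illustration only.
        TcpClient client = new TcpClient("example.com", 5000);
        NetworkStream stream = client.GetStream();
        byte[] buffer = new byte[1024];

        // Default (blocking) mode: Read() waits until at least one byte has
        // arrived, or returns 0 once the peer has closed the connection.
        int bytesRead = stream.Read(buffer, 0, buffer.Length);
        Console.WriteLine("Got {0} byte(s)", bytesRead);

        // Non-blocking mode on the underlying socket: the same kind of
        // receive call fails immediately if no data is queued.
        client.Client.Blocking = false;
        try
        {
            client.Client.Receive(buffer);
        }
        catch (SocketException ex)
        {
            if (ex.SocketErrorCode == SocketError.WouldBlock)
            {
                // Nothing to read right now; this is the WSAEWOULDBLOCK case.
            }
        }
    }
}
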
> In a client/server application, what does "end of stream" or "no more data
> available" mean? The client might send data only once every few seconds or
> minutes. Is there an "end" at all?
The stream you get from a TCP socket will only reach the end when the
connection is closed.
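
As a concrete illustration (a sketch, not code from the thread): a blocking receive loop just keeps calling Read(), and the only time it reports "no more data" is when Read() returns 0, which happens after the remote side closes or shuts down its end of the connection.

using System;
using System.Net.Sockets;
using System.Text;

class ReceiveLoop
{
    // Reads from the given stream until the remote side closes the connection.
    static void ReadUntilClosed(NetworkStream stream)
    {
        byte[] buffer = new byte[4096];
        while (true)
        {
            // Blocks until at least one byte arrives; the client may pause
            // for seconds or minutes between sends and this still just waits.
            int bytesRead = stream.Read(buffer, 0, buffer.Length);
            if (bytesRead == 0)
            {
                // 0 means end of stream: the peer closed the connection.
                break;
            }
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, bytesRead));
        }
    }
}
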
> In a client/server application, if the server side calls BeginRead() again
> inside the EndRead() callback to create an endless loop that receives
> messages from the client, is this a better approach than the "one thread
> per client" approach?
Yes. And this is better on the client side too. Using the async API on
the network i/o classes (well, for sure the Socket class...I'm pretty sure
TcpClient, NetworkStream, etc. all follow) will implicitly use IOCP when
available (which is on any NT-based version of Windows), and IOCP is the
most scalable API to use with network i/o.

I personally find the async API more convenient and easier to understand,
but then I'm odd that way. :) YMMV.
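
Here is a minimal sketch of the pattern the question describes (re-arming BeginRead() from inside the read callback). The names AsyncReader, HandleRead, and ProcessMessage are made up for illustration; real code would also wrap EndRead()/BeginRead() in try/catch, since either can throw when the connection drops, and would assemble partial reads into complete application messages.

using System;
using System.Net.Sockets;

class AsyncReader
{
    private readonly NetworkStream _stream;
    private readonly byte[] _buffer = new byte[4096];

    public AsyncReader(TcpClient client)
    {
        _stream = client.GetStream();
    }

    public void Start()
    {
        // Kick off the first asynchronous read.
        _stream.BeginRead(_buffer, 0, _buffer.Length, HandleRead, null);
    }

    private void HandleRead(IAsyncResult ar)
    {
        int bytesRead = _stream.EndRead(ar);
        if (bytesRead == 0)
        {
            // The client closed the connection; stop the loop.
            return;
        }

        ProcessMessage(_buffer, bytesRead);

        // Re-arm the read; this is the "endless loop" from the question.
        _stream.BeginRead(_buffer, 0, _buffer.Length, HandleRead, null);
    }

    private void ProcessMessage(byte[] data, int count)
    {
        // Placeholder: parse and dispatch application messages here.
    }
}
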
> I understand that an async read/write puts the job into a queue and later
> uses the system thread pool to execute it. The question is: when will it be
> executed? Does it wait only until the hardware is available, or also until
> a waiting thread is available? If there are too many clients, will this
> async approach add so much delay that a client cannot get its response from
> the server in a timely manner? And can that be avoided with the "one thread
> per client" model?
As I wrote above, IOCP is the most scalable approach, and that's what the
async API uses. It works by having a pool of threads waiting on i/o
completions. One thread can handle i/o completions for any number of
sockets. This means that as i/o completions are queued by the OS, the
same thread can just keep pulling the completions off the queue and
processing them, avoiding a thread context switch just to process a
different socket.

As for when the thread will get to run, it will mostly run according to
the usual Windows scheduling mechanism. The IOCP threads all sit and wait
on a queue. As soon as something's in the queue, the threads become
runnable. They are scheduled according to the normal round-robin
rotation, so once the first runnable IOCP thread gets to run, it dequeues
the completion and processes it. If there are still i/o completions in
the queue, that one thread will keep dequeuing them. If another CPU core
becomes available and a second IOCP thread reaches the head of the line in
the round-robin scheme, then that second thread will dequeue an i/o
completion and process it.

Each IOCP thread, once it gets a chance to run, will not yield until
either the queue is empty or it's pre-empted. In this way, the threads
are assured of consuming their entire thread quantum as long as there's
actually work to be done.

Finally, Windows knows that IOCP is special and actually does manage
thread scheduling and some other things to help ensure that the CPUs are
used most efficiently with IOCP. For example, if I recall correctly the
Windows scheduler won't switch to another IOCP thread just because one
IOCP thread is done with its quantum. It's smart enough that, once an
IOCP thread gets to run, that thread will keep running through multiple
quanta as long as there aren't non-IOCP threads that are runnable.
Again, this minimizes context switching.
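
One way to see this from managed code (a diagnostic sketch, not part of the original reply): callbacks for the Begin/End socket API typically run on the CLR thread pool's I/O-completion threads, and you can observe how few of them are in use even with many connected sockets. The counts printed will of course vary by machine.

using System;
using System.Threading;

static class IocpDiagnostics
{
    // Call this from inside an EndRead callback to see which pool it runs on.
    public static void ReportCurrentThread()
    {
        int maxWorker, maxIo, availWorker, availIo;
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
        ThreadPool.GetAvailableThreads(out availWorker, out availIo);

        Console.WriteLine(
            "Pool thread: {0}, I/O-completion threads in use: {1} of {2}",
            Thread.CurrentThread.IsThreadPoolThread,
            maxIo - availIo,
            maxIo);
    }
}
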

Pete
Jun 27 '08 #2
Thanks a lot, Peter.

Just one more question:

"Peter Duniho" <Np*********@nnowslpianmk.comдÈëÏûÏ¢ÐÂÎÅ:op****** *********@petes-computer.local...
> On Mon, 19 May 2008 20:58:31 -0700, Ryan Liu <rl**@powercati.com> wrote:
>> Will TcpClient.GetStream().Read()/ReadByte() block until at least one byte
>> of data can be read?
>
> Assuming the socket is in blocking mode, yes. Otherwise, you could get an
> exception equivalent to the WSAEWOULDBLOCK error.

What does "block" mean here? Does it consume resources such as CPU? Does it
occupy a thread? Is it just a hardware thing, i.e. waiting for the hardware
to notify, or does a thread actually poll the status of the stream?

Jun 27 '08 #3
On Tue, 20 May 2008 04:22:49 -0700, Ryan Liu <rl**@powercati.com> wrote:
>> Assuming the socket is in blocking mode, yes. Otherwise, you could get an
>> exception equivalent to the WSAEWOULDBLOCK error.
>
> What does "block" mean here? Does it consume resources such as CPU? Does it
> occupy a thread? Is it just a hardware thing, i.e. waiting for the hardware
> to notify, or does a thread actually poll the status of the stream?
To "block" means for the thread to become unrunnable, and thus stop
executing, and wait for something to happen.

A thread that is unrunnable doesn't consume CPU time. To "occupy a
thread" is ambiguous, but if you mean that the thread is stuck and can't
do anything else, then yes...the thread is occupied. But it's not
runnable, so the only resources that are consumed are the memory-based
resources allocated to that thread (and if a thread remains unrunnable for
long enough, those resources will get moved from physical memory to the
swap file, removing contention for that last high-value resource).
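
A quick way to convince yourself of this (a sketch, not from the reply): start a batch of threads that all block immediately and watch the process in Task Manager. CPU usage stays near zero; the only cost is the per-thread memory, mostly stack space. The count of 200 below is arbitrary.

using System;
using System.Threading;

class BlockedThreadsDemo
{
    static void Main()
    {
        // An event that is never signaled, so every thread blocks forever.
        ManualResetEvent never = new ManualResetEvent(false);

        for (int i = 0; i < 200; i++)
        {
            Thread t = new Thread(delegate() { never.WaitOne(); });
            t.IsBackground = true;
            t.Start();
        }

        Console.WriteLine("200 threads blocked; check CPU and memory usage now.");
        Console.ReadLine();
    }
}
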

It's not possible to answer the question about interrupt vs polling in a
general sort of way. However, you can depend on the OS using
interrupt-driven management as much as possible, and where polling exists
it will be at a very low level in the OS, almost never in a way that would
affect your own code. When your own code blocks on i/o, you can rest
assured that you're essentially consuming no CPU resources at all.

Pete
Jun 27 '08 #4
