If the client code (i.e., WebRequest) is running on NT, then the NT I/O
completion port mechanism is used. This will actually improve your
performance, because you are now downloading 4 pages at once instead of one
at a time. Also, completion of the async I/O is signalled using I/O
completion ports, which is a very fast way of doing async I/O.
But there are some factors that could affect your performance.
First, if all the requests are going to the same server, and the server you
are accessing is an HTTP/1.1 server, then there is a default limit of 2
connections per server. So, the first two requests will start downloading
data, and the third one will block until one of the first two completes. You
can change this behavior by increasing the connection limit using
ServicePoint/ServicePointManager.
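A minimal sketch of raising that limit, assuming .NET 1.x-era System.Net; the
"http://somexmlsource1" host is just the placeholder from the original post:

```csharp
using System;
using System.Net;

class ConnectionLimitDemo
{
    static void Main()
    {
        // Raise the process-wide default (2 per server for HTTP/1.1)
        // for all ServicePoints created after this point.
        ServicePointManager.DefaultConnectionLimit = 4;

        // Or raise it for one particular server only.
        Uri uri = new Uri("http://somexmlsource1");
        ServicePoint sp = ServicePointManager.FindServicePoint(uri);
        sp.ConnectionLimit = 4;

        Console.WriteLine(ServicePointManager.DefaultConnectionLimit);
    }
}
```

FindServicePoint does not touch the network, so this can run before any
request is issued.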
Secondly, if the page that initiates the downloads is being hit multiple
times, then you might have multiple HTTP requests in flight at once. If the
page being downloaded from the backend server is huge, or if the network
bandwidth is low, then you might accumulate a lot of outstanding HTTP
requests. So, you might see a lot of handles being created, and some memory
being consumed from the non-paged pool.
Third, you will have to make sure that you close the connection once a
request is done. You can do this by calling Close() on the response stream
or on the HttpWebResponse object. If you don't do this, the connection will
not be freed up, and you will block subsequent requests.
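One way to guarantee the cleanup is a using block in the callback, so Close()
runs even if reading the body throws. This is a sketch only; the RespCallback
name and the idea of passing the request as async state follow the snippet in
the original question:

```csharp
using System;
using System.IO;
using System.Net;

class CloseDemo
{
    // Callback wired up at runtime via BeginGetResponse(..., wreq).
    public static void RespCallback(IAsyncResult ar)
    {
        HttpWebRequest req = (HttpWebRequest)ar.AsyncState;
        // 'using' disposes (and therefore closes) the response and the
        // stream on every exit path, returning the pooled connection.
        using (HttpWebResponse resp = (HttpWebResponse)req.EndGetResponse(ar))
        using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
        {
            string body = reader.ReadToEnd();
            // ... process body ...
        }
    }

    static void Main()
    {
        Console.WriteLine("ok");
    }
}
```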
Finally, you don't need to do a Thread.Sleep() to wait for the requests to
finish. You can signal an event from your callback and wait for that event
in your main thread. That is a much better way of achieving synchronization.
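A self-contained sketch of that pattern, with a thread-pool work item standing
in for the BeginGetResponse callback (no network involved here):

```csharp
using System;
using System.Threading;

class EventSignalDemo
{
    static void Main()
    {
        const int requests = 3;
        ManualResetEvent[] done = new ManualResetEvent[requests];

        for (int i = 0; i < requests; i++)
        {
            done[i] = new ManualResetEvent(false);
            ManualResetEvent mine = done[i];
            // Stand-in for the async response callback: do the work,
            // then signal this request's event.
            ThreadPool.QueueUserWorkItem(delegate(object state)
            {
                // ... process the response here ...
                mine.Set();
            });
        }

        // Replaces Thread.Sleep(30000): returns as soon as all callbacks
        // have signalled, however long (or short) that takes.
        WaitHandle.WaitAll(done);
        Console.WriteLine("all requests complete");
    }
}
```

Note that WaitHandle.WaitAll cannot be called on an STA thread and is limited
to 64 handles, neither of which matters for three requests in a console app.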
--
feroze
-----------------
This posting is provided as-is. It offers no warranties and assigns no
rights.
See
http://weblogs.asp.net/feroze_daud for System.Net related posts.
----------------
"Scott Allen" <sc***@nospam.odetocode.com> wrote in message
news:i7********************************@4ax.com...
You'll be using 4 threads instead of 1 thread to service a single user
request - that will limit your scalability. On the other hand, when the
site isn't highly utilized you'll have a dramatic performance
increase.
I touch a bit on this subject in the following posts:
http://odetocode.com/Blogs/scott/arc...6/16/1656.aspx
http://odetocode.com/Blogs/scott/arc...6/23/1877.aspx
Notice you don't need the Sleep call at all - you can simply have the
thread block on the EndGetResponse calls. That is the simplest approach.
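A sketch of that approach, under the assumption that the URLs come from the
command line (the hosts in the original snippet are placeholders): start every
request first, then call EndGetResponse on each handle, which blocks until
that particular request has finished.

```csharp
using System;
using System.Net;

class BlockOnEndDemo
{
    static void Main(string[] args)
    {
        if (args.Length == 0)
        {
            Console.WriteLine("no urls given");
            return;
        }

        HttpWebRequest[] reqs = new HttpWebRequest[args.Length];
        IAsyncResult[] pending = new IAsyncResult[args.Length];

        // Kick off all the requests before waiting on any of them,
        // so the downloads overlap.
        for (int i = 0; i < args.Length; i++)
        {
            reqs[i] = (HttpWebRequest)WebRequest.Create(args[i]);
            pending[i] = reqs[i].BeginGetResponse(null, null);
        }

        // EndGetResponse blocks until request i completes; no Sleep needed.
        for (int i = 0; i < args.Length; i++)
        {
            using (WebResponse resp = reqs[i].EndGetResponse(pending[i]))
            {
                Console.WriteLine(args[i] + ": " + resp.ContentLength + " bytes");
            }
        }
    }
}
```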
--
Scott
http://www.OdeToCode.com/blogs/scott/
On Thu, 23 Jun 2005 13:23:02 -0700, "Dave"
<Da**@discussions.microsoft.com> wrote:
I have several webrequests being called consecutively from an .aspx page
that returns XML from sources outside the company. When the page runs it can
take anywhere between 30-60 seconds.
I heard about making webrequests asynchronously and found:
http://samples.gotdotnet.com/quickst.../GETAsync.aspx
I modified the code to add a couple of requests right after each other, as
listed below.
It worked, but...
1.) What is the drawback of this technique in terms of performance in the
long run?
2.) Is there a way to determine when all of these webrequests complete and
just flush the response instead of waiting for the Thread.Sleep to
complete?
HttpWebRequest wreq;
IAsyncResult r;

wreq = (HttpWebRequest) WebRequest.Create("http://somexmlsource1");
r = (IAsyncResult) wreq.BeginGetResponse(new AsyncCallback(this.RespCallback), wreq);

wreq = (HttpWebRequest) WebRequest.Create("http://somexmlsource2");
r = (IAsyncResult) wreq.BeginGetResponse(new AsyncCallback(this.RespCallback), wreq);

wreq = (HttpWebRequest) WebRequest.Create("http://somexmlsource3");
r = (IAsyncResult) wreq.BeginGetResponse(new AsyncCallback(this.RespCallback), wreq);

Thread.Sleep(30000); // Wait 30 seconds to allow requests to complete