Bytes | Software Development & Data Engineering Community
StreamReader.Close() response is very slow - please help

All,

I am interested in reading the text of a web page and parsing it.
After searching on this newsgroup I decided to use the following:

******************************* START OF CODE ************************
String sTemp = "http://cgi3.igl.net/cgi-bin/ladder/teamsql/team_view.cgi?ladd=teamknights&num=238&showall=1";

WebRequest myWebRequest = WebRequest.Create(sTemp);
WebResponse myWebResponse = myWebRequest.GetResponse();
Stream myStream = myWebResponse.GetResponseStream();

// default encoding is utf-8
StreamReader SR = new StreamReader( myStream );

Char[] buffer = new Char[2048];

// Read 256 characters at a time.
int count = SR.Read( buffer, 0, 2000 );

//while (count > 0)
//{
// do some processing - may read all or part
// count = SR.Read(buffer, 0, 2000);
//}

SR.Close(); // Release the resources
myWebResponse.Close();
******************************* END OF CODE ************************

This code should look very familiar because it is all over the
newsgroup and Microsoft support help pages.

The web page has a big table on it and it takes a while to download
(even with a cable modem).

What I observe is the following. If I open and read all the data
(i.e., until count > 0 fails), then stepping over SR.Close() executes
immediately. If I read only 2000 characters, as the example above
shows, stepping over SR.Close() takes a long time (around 10-15
seconds for me). This may be a coincidence, but it seems to take about
the same amount of time as reading all of the data. At this point I am
starting to believe that SR.Close() does not abort reading until the
entire web page has been received. This is not desired; in fact I
parse the data and want to terminate loading early, because the entire
process is slow and not always necessary.

Does anyone know how to terminate the loading of the page so I can
eliminate the delay? I had implemented this in C++ with MFC using
CInternetSession.OpenURL() and did not have this problem.

Thanks in advance.

Todd
Nov 16 '05 #1

Maybe you should take some programming classes.

Sami
www.capehill.net

Nov 16 '05 #2
Sami Vaaraniemi wrote:
Maybe you should take some programming classes.


Hey! It's an arrogant spammer :-P
Nov 16 '05 #3
Joerg, thanks for taking the time to respond to my request for help.

I'm sorry about the comment. I believe the "256 characters" comment
belonged to the original example from which I copied the code. The
2000-character buffer is the actual size I was using in the program
from which I derived the sample code to illustrate my problem in this
newsgroup.

I had not noticed that the entire page of data does not download. That
is a good catch. The code I wrote using Visual Studio 6/C++ does not
have that problem (see original post).

I was able to implement the asynchronous approach using
WebRequest.BeginGetResponse() and WebRequest.EndGetResponse() that
you suggested, and note that it also does not download the entire page
of data, as you described.

What my program does is start to download the data up to a point and
then close the connection. The reason for closing the connection is
that it takes so long to get the entire amount of data, and there is
not always a need for all of it. The implementation I have using
Visual Studio 6/C++ does this and works perfectly. It is very
disappointing that .NET/C# does not behave the same way.

Were you able to get better results using the socket approach?

How about some of you Microsoft gurus taking a look into this problem
and answering the following two questions:

1) What do you do to download the entire page of data?
2) What can you do to close the connection with zero delay (after
reading one or more 2000-character buffers of data)?

It is easy to set up this experiment by cutting and pasting the
sample code I gave into a button event of a simple Windows app.

Thanks in advance!
"Joerg Jooss" <jo*********@gmx.net> wrote in message news:<ei*************@tk2msftngp13.phx.gbl>...
No_Excuses wrote:
All,

I am interested in reading the text of a web page and parsing it.
After searching on this newsgroup I decided to use the following:

******************************* START OF CODE ************************
String sTemp = "http://cgi3.igl.net/cgi-bin/ladder/teamsql/team_view.cgi?ladd=teamknights&num=238&showall=1";

WebRequest myWebRequest = WebRequest.Create(sTemp);
WebResponse myWebResponse = myWebRequest.GetResponse();
Stream myStream = myWebResponse.GetResponseStream();

// default encoding is utf-8
StreamReader SR = new StreamReader( myStream );

Char[] buffer = new Char[2048];

// Read 256 characters at a time.
int count = SR.Read( buffer, 0, 2000 );

//while (count > 0)
//{
// do some processing - may read all or part
// count = SR.Read(buffer, 0, 2000);
//}

SR.Close(); // Release the resources
myWebResponse.Close();
******************************* END OF CODE ************************

This code should look very familiar because it is all over the
newsgroup and Microsoft support help pages.


I doubt that, as the code doesn't do what it advertises ;-)

Char[] buffer = new Char[2048];

// Read 256 characters at a time.
int count = SR.Read( buffer, 0, 2000 );

Why a 2 kB buffer, when you're supposedly reading only 256 chars, but you're
specifying 2000 chars for the Read() call?
The web page has a big table on it and it takes a while to download
(even with a cable modem).

What I observe is the following. If I open and read all the data
(i.e., until count > 0 fails), then stepping over SR.Close() executes
immediately. If I read only 2000 characters, as the example above
shows, stepping over SR.Close() takes a long time (around 10-15
seconds for me). This may be a coincidence, but it seems to take about
the same amount of time as reading all of the data.


Well, this particular page is an insane 6 MB large... the web server does
not help the client either, as there's no Content-Length header provided,
just Connection: close:

HTTP/1.1 200 OK
Date: Sat, 10 Apr 2004 10:20:31 GMT
Server: Apache/1.3.24 (Unix) mod_throttle/3.1.2 PHP/4.2.0
Connection: close
Content-Type: text/html

Even more interestingly, I cannot even download the entire page at all...
neither WebClient nor WebRequest/WebResponse is able to download that
beast. Both stop downloading at the exact same position -- I guess the
underlying TCP stream is prematurely closed. This must be some WinInet
default behaviour (quirk?), as the same thing happens when I download
the page using some ancient Visual J++ code that uses plain TCP. I
think I'll write a plain HTTP client using System.Net.Sockets and see
what happens.

(Note: If the web server returns a Content-Length header, downloading the
page works just fine.)
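For what it's worth, a bare-bones client of that kind could look like the sketch below. It speaks HTTP/1.0 over a raw socket so that the server closes the connection when it is finished (matching the "Connection: close" header shown above); the host and path come from the URL in the original post, and everything else is illustrative:

```csharp
using System;
using System.Net.Sockets;
using System.Text;

class RawHttpGet
{
    static void Main()
    {
        // Talk to the web server directly, bypassing WebRequest/WinInet.
        TcpClient client = new TcpClient("cgi3.igl.net", 80);
        NetworkStream stream = client.GetStream();

        // HTTP/1.0 request: the server closes the connection when done.
        string request = "GET /cgi-bin/ladder/teamsql/team_view.cgi"
            + "?ladd=teamknights&num=238&showall=1 HTTP/1.0\r\n"
            + "Host: cgi3.igl.net\r\n"
            + "Connection: close\r\n\r\n";
        byte[] requestBytes = Encoding.ASCII.GetBytes(request);
        stream.Write(requestBytes, 0, requestBytes.Length);

        // Read until the server closes the socket (no Content-Length).
        byte[] buffer = new byte[4096];
        int total = 0;
        int read;
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            total += read;
            // ... parse or discard buffer[0..read) here ...
        }
        Console.WriteLine("Received {0} bytes", total);

        // Closing a plain socket never drains the remaining body;
        // it simply closes, unlike StreamReader.Close() above.
        stream.Close();
        client.Close();
    }
}
```

This bypasses whatever WinInet is doing, so it should show whether the truncation comes from the stack or from the server.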

[...]
Does anyone know how to terminate the loading of the page so I can
eliminate the delay? I had implemented this in C++ with MFC using
CInternetSession.OpenURL() and did not have this problem.


Use asynchronous I/O -- see WebRequest.Abort(),
WebRequest.BeginGetResponse(), and WebRequest.EndGetResponse().
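As a rough sketch of that approach -- assuming .NET 1.x, and noting that the Begin/End methods live on WebRequest -- calling Abort() before closing should keep Close() from draining the rest of the multi-MB body. The callback handling and the catch clauses are illustrative:

```csharp
using System;
using System.IO;
using System.Net;

class PartialDownload
{
    static void Main()
    {
        string url = "http://cgi3.igl.net/cgi-bin/ladder/teamsql/"
            + "team_view.cgi?ladd=teamknights&num=238&showall=1";
        WebRequest request = WebRequest.Create(url);

        // Start the request asynchronously so it can be aborted later.
        IAsyncResult ar = request.BeginGetResponse(null, null);
        ar.AsyncWaitHandle.WaitOne();
        WebResponse response = request.EndGetResponse(ar);

        StreamReader reader = new StreamReader(response.GetResponseStream());

        // Read only the first chunk we actually need.
        char[] buffer = new char[2048];
        int count = reader.Read(buffer, 0, 2000);
        Console.WriteLine("Read {0} chars", count);

        // Abort tears down the connection immediately, so the Close()
        // calls below should not block to drain the remaining body.
        request.Abort();
        try
        {
            reader.Close();
            response.Close();
        }
        catch (WebException) { /* the aborted request may surface here */ }
        catch (IOException) { /* ...or here; expected after Abort() */ }
    }
}
```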

Cheers,

Nov 16 '05 #4
I am still interested in some help to understand this problem.

You can simply cut and paste the code into the button click event of a
simple Windows app. Put a breakpoint on SR.Close(). If you then step
over this statement you will observe the delay I am talking about. If
you uncomment the code to read the entire page, stepping over this
statement will be immediate.

Any constructive comments would be greatly appreciated. I'm sure that
someone out there has the answer to my question.

Thanks in advance.

Todd
Nov 16 '05 #5
No_Excuses wrote:
All,

I am interested in reading the text of a web page and parsing it.
After searching on this newsgroup I decided to use the following:

******************************* START OF CODE ************************
String sTemp = "http://cgi3.igl.net/cgi-bin/ladder/teamsql/team_view.cgi?ladd=teamknights&num=238&showall=1";
WebRequest myWebRequest = WebRequest.Create(sTemp);
WebResponse myWebResponse = myWebRequest.GetResponse();
Stream myStream = myWebResponse.GetResponseStream();

// default encoding is utf-8
StreamReader SR = new StreamReader( myStream );

Char[] buffer = new Char[2048];

// Read 256 characters at a time.
int count = SR.Read( buffer, 0, 2000 );

//while (count > 0)
//{
// do some processing - may read all or part
// count = SR.Read(buffer, 0, 2000);
//}

SR.Close(); // Release the resources
myWebResponse.Close();
******************************* END OF CODE ************************

This code should look very familiar because it is all over the
newsgroup and Microsoft support help pages.
I doubt that, as the code doesn't do what it advertises ;-)

Char[] buffer = new Char[2048];

// Read 256 characters at a time.
int count = SR.Read( buffer, 0, 2000 );

Why a 2 kB buffer, when you're supposedly reading only 256 chars, but you're
specifying 2000 chars for the Read() call?
The web page has a big table on it and it takes a while to download
(even with a cable modem).

What I observe is the following. If I open and read all the data
(i.e., until count > 0 fails), then stepping over SR.Close() executes
immediately. If I read only 2000 characters, as the example above
shows, stepping over SR.Close() takes a long time (around 10-15
seconds for me). This may be a coincidence, but it seems to take about
the same amount of time as reading all of the data.
Well, this particular page is an insane 6 MB large... the web server does
not help the client either, as there's no Content-Length header provided,
just Connection: close:

HTTP/1.1 200 OK
Date: Sat, 10 Apr 2004 10:20:31 GMT
Server: Apache/1.3.24 (Unix) mod_throttle/3.1.2 PHP/4.2.0
Connection: close
Content-Type: text/html

Even more interestingly, I cannot even download the entire page at all...
neither WebClient nor WebRequest/WebResponse is able to download that
beast. Both stop downloading at the exact same position -- I guess the
underlying TCP stream is prematurely closed. This must be some WinInet
default behaviour (quirk?), as the same thing happens when I download
the page using some ancient Visual J++ code that uses plain TCP. I
think I'll write a plain HTTP client using System.Net.Sockets and see
what happens.

(Note: If the web server returns a Content-Length header, downloading the
page works just fine.)

[...] Does anyone know how to terminate the loading of the page so I can
eliminate the delay? I had implemented this in C++ with MFC using
CInternetSession.OpenURL() and did not have this problem.


Use asynchronous I/O -- see WebRequest.Abort(),
WebRequest.BeginGetResponse(), and WebRequest.EndGetResponse().

Cheers,

--
Joerg Jooss
jo*********@gmx.net

Nov 16 '05 #7

