Hello All,
A couple of weeks ago, I undertook to write a utility that would loop
through various URLs and test whether they were valid. I got some good help
from this list, and was able to write the utility.
Now I have run into a problem that is difficult for me to solve. It is
this: when looping through a large set of URLs, if many of the URLs are
bad, the program will time out. Conversely, if most of the URLs are good,
it performs as expected and runs to completion.
I am including code stubs below that will illustrate this.
Dim req As System.Net.HttpWebRequest
Dim resp As System.Net.HttpWebResponse
Dim s As String
Dim LinkStatus As String

For i As Integer = 0 To 1000
    'A site that resolves quickly; note the explicit scheme,
    'which WebRequest.Create() requires
    s = "http://www.google.com"
    req = CType(System.Net.WebRequest.Create(s), System.Net.HttpWebRequest)
    Try
        resp = CType(req.GetResponse(), System.Net.HttpWebResponse)
        LinkStatus = resp.StatusCode.ToString()
        resp.Close()
    Catch exWeb As System.Net.WebException
        LinkStatus = exWeb.Message
    End Try
Next i
'The preceding block works because requesting www.google.com 1000 times
'is not time-consuming.
'But the next block tries to reach a non-existent site. Even doing this
'"only" 500 times causes the app to time out, evidently because
'GetResponse() takes much longer to fail on a non-existent site.
For i As Integer = 0 To 500
    'This time we try a non-existent site
    s = "http://www.google.edu"
    req = CType(System.Net.WebRequest.Create(s), System.Net.HttpWebRequest)
    Try
        resp = CType(req.GetResponse(), System.Net.HttpWebResponse)
        LinkStatus = resp.StatusCode.ToString()
        resp.Close()
    Catch exWeb As System.Net.WebException
        LinkStatus = exWeb.Message
    End Try
Next i
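From what I have read, HttpWebRequest has a Timeout property (in
milliseconds) that defaults to 100 seconds per request, and the request
method can be switched to HEAD so no page body is downloaded. Below is a
rough sketch of what I have been considering for a single URL; the
5-second timeout is just a value I picked, and I have not verified that
this fully solves the problem:

Dim req2 As System.Net.HttpWebRequest
Dim resp2 As System.Net.HttpWebResponse
Dim url As String = "http://www.google.edu"
Dim status As String

req2 = CType(System.Net.WebRequest.Create(url), System.Net.HttpWebRequest)
req2.Method = "HEAD"   'ask for headers only, not the page body
req2.Timeout = 5000    'fail after 5 seconds instead of the 100-second default
Try
    resp2 = CType(req2.GetResponse(), System.Net.HttpWebResponse)
    status = resp2.StatusCode.ToString()
    resp2.Close()
Catch exWeb As System.Net.WebException
    'A timed-out request surfaces here as a WebException
    'with Status = WebExceptionStatus.Timeout
    status = exWeb.Message
End Try

My thinking is that a short per-request timeout would bound how long each
bad URL can hold up the loop, but I would welcome corrections if I have
misunderstood how Timeout behaves.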
So can anybody provide any pointers or documentation that would help me
solve this problem? I need the program to be able to handle large sets of
invalid URLs.
Thanks very much in advance,
Dave