Bytes IT Community

Perl for checking broken link on a website

P: 5
Hi Guys!


I am new to this forum and I am hoping I can get some help to improve my Perl skills.

My goal is to create a report of all broken links on a website. From my understanding, Perl can go through a server directory, search for htm/html files, and put them in an array. From this array, I believe Perl can search for "http://" strings and simulate clicking each URL. If the URL returns "Page cannot be found", it should create and append to a log file.

My question is: what is the best method to do this, and what libraries and syntax are required?

Also, about searching for the "htm" files on a server: assuming I don't have access to the server directory (which is unlikely), how do I get this done? And if I do have access, how do I get Perl to log in to the server before it searches for "htm" files in a specific directory? Or do I have to put the Perl .pl file in the server's root directory and run it there?

I would really appreciate it if someone could help. Thank you.
Jan 24 '08 #1
11 Replies


eWish
Expert 100+
P: 971
Welcome to TSDN!

I would suggest that you use a module like WWW::Mechanize or LWP::UserAgent to achieve what you are wanting. Also, check out WWW::Mechanize::FAQ.
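For what it's worth, a minimal sketch of the LWP::UserAgent approach might look like the following. The URL is just a placeholder, and the is_broken_status helper and its "anything 400 or above" cutoff are my own simplification, not anything the module provides:

```perl
#!/usr/bin/perl
# Minimal sketch: check one link with LWP::UserAgent.
# The URL below is a placeholder, not a real target.
use strict;
use warnings;
use LWP::UserAgent;

# Treat any 4xx/5xx HTTP status as a broken link (a simplifying assumption;
# a 404 is the classic "Page cannot be found" case).
sub is_broken_status {
    my ($code) = @_;
    return $code >= 400;
}

my $ua  = LWP::UserAgent->new(timeout => 5);
my $url = 'http://example.com/some-page.html';
my $response = $ua->get($url);

if (is_broken_status($response->code)) {
    print "BROKEN: $url (status ", $response->code, ")\n";
}
```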

--Kevin
Jan 24 '08 #2

eWish
Expert 100+
P: 971
You might also check out W3C::LogValidator::LinkChecker. If I read it correctly, it will use your server logs to locate broken links.

--Kevin
Jan 24 '08 #3

P: 5
Thanks for your feedback, Kevin! Anyone else?

I will try to post some sample code later on for your feedback, and also for others to learn from. Cheers!
Jan 24 '08 #4

numberwhun
Expert Mod 2.5K+
P: 3,503
Considering that Kevin pretty much covered it, I doubt anyone else will be throwing out the same links that he did.

Regards,

Jeff
Jan 24 '08 #5

P: 5
Guys, I'm a bit stuck here.

I would like to know how to capture a certain word (in this case "http://") in the file content, but I don't know what the syntax is. Could anyone help? Thanks.
#Capturing .htm or .html filenames in the root directory
my @root_files = <*.htm>;
#Searching through the directory
for ($count = 0; $count < @root_files; $count++)
{
    #opening the files one by one
    my $FILENAME = $root_files[$count];
    open(FILEPOINTER, "< $FILENAME") or die "Can't open $FILENAME : $!";

    #Searching the file content line by line
    do
    {
        my $LINECONTENT = <FILEPOINTER>;
        # I need to capture only the http link if there is any. The "print" below
        # is just for testing, to make sure the file content is shown line by line
        print $LINECONTENT;
    }
    until eof;
    close FILEPOINTER;
}
Jan 25 '08 #6

numberwhun
Expert Mod 2.5K+
P: 3,503
Ok, I think that the best thing for you to do is grab a good Perl book and start reading.

I say this because this line:

my @root_files = <*.htm>;
does not do what you are expecting. You cannot just specify "*.htm" and expect all files with .htm on the end to magically be in the array.

First, what OS are you working on? Windows or Unix/Linux? That will help tell us how you are going to get your directory listing.

Regards,

Jeff
Jan 25 '08 #7

P: 5
Thanks for your reply, Jeff.

I'm working on Linux, and the "my @root_files = <*.htm>" worked as long as the .htm files are in the same directory as the .pl file. I guess I will also need to improve the code to automatically dig deeper into the directory tree if it contains more folders (I need more time; my Perl book is confusing, so if you can give me a hint that'll be great!).
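On the recursion question, one common route is the core File::Find module, which walks a directory tree for you. A sketch (find_html_files is just a name I made up):

```perl
#!/usr/bin/perl
# Sketch: collect .htm/.html files from a whole directory tree using the
# core File::Find module, instead of the flat <*.htm> glob.
use strict;
use warnings;
use File::Find;

sub find_html_files {
    my ($topdir) = @_;
    my @found;
    find(sub {
        # Inside the callback, $_ is the basename and
        # $File::Find::name holds the full path of the current entry.
        push @found, $File::Find::name if -f $_ && /\.html?$/i;
    }, $topdir);
    return @found;
}

my @root_files = find_html_files('.');
print "$_\n" for @root_files;
```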

Right now I am working on the regular expression to grab only the http link:

do
{
    my $LINECONTENT = <FILEPOINTER>;
    $_ = $LINECONTENT;
    #this is where I'm stuck at this stage. The code below prints the whole line instead of just the http://

    if (/a href="?"/)
    {
        print $LINECONTENT;
    }
}
until eof;
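If it helps, one way past that sticking point is to capture just the quoted href values with a global regex match, which also handles several links on one line. The sample HTML below is made up:

```perl
#!/usr/bin/perl
# Sketch: pull every quoted href value out of a line of HTML with a global
# regex match, instead of matching (and printing) the whole line.
use strict;
use warnings;

sub extract_hrefs {
    my ($line) = @_;
    # /g in list context returns every capture; [^"]* stops at the closing quote
    return $line =~ /href\s*=\s*"([^"]*)"/gi;
}

my $sample = '<a href="http://example.com/a.htm">one</a> <a href="page2.html">two</a>';
print "$_\n" for extract_hrefs($sample);
```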
Jan 25 '08 #8

KevinADC
Expert 2.5K+
P: 4,059
Can't you just use something like FrontPage to do that? I am sure there are many third-party programs you can use to report broken site links.
Jan 26 '08 #9

P: 5
Ummm, yes, I guess you can use third-party software to check broken links, but you won't learn anything...

I have just finished a very basic script, although it needs a lot of improvement.

#!/usr/bin/perl
use LWP::UserAgent;

#Capturing .htm files
#Improvement required: what if there are multiple folders?
my @root_files = <*.htm>;
#Searching through the folder
for ($count = 0; $count < @root_files; $count++)
{
    #opening the files one by one
    my $FILENAME = $root_files[$count];
    open(FILEPOINTER, "< $FILENAME") or die "Can't open $FILENAME : $!";

    #Searching the file content line by line
    do
    {
        my $LINECONTENT = <FILEPOINTER>;
        $_ = $LINECONTENT;
        if (/href="?"/i)
        {
            #capturing the quoted string in the href
            #Improvement required: what if there are multiple hrefs on one line?
            my $beginstring = index($LINECONTENT, '"');
            my $endstring = index($LINECONTENT, '"', $beginstring + 1);
            my $finalstring = substr($LINECONTENT, $beginstring + 1, $endstring - $beginstring - 1);

            my $ua = LWP::UserAgent->new;
            $ua->timeout(3);

            #For each link found, check for a 404 error and log it to a text file
            #Improvement required: what if there is a relative URI?
            my $response = $ua->get($finalstring);
            if ($response->is_error)
            {
                #output the link that returned an error
                open(TEXT, ">>broken_link_output.txt");
                print TEXT $finalstring . "\n";
                close TEXT;
            }
        }
    }
    until eof;
    close FILEPOINTER;
}
Jan 26 '08 #10

KevinADC
Expert 2.5K+
P: 4,059
Ummm, yes, I guess you can use third-party software to check broken links, but you won't learn anything...
If the goal includes learning how to write Perl code, then by all means proceed. But you did not mention that in your opening post. ;)
Jan 26 '08 #11

KevinADC
Expert 2.5K+
P: 4,059
You will want to look into using File::Find to search all the subdirectories. But then you might not learn how to recurse through directories yourself. ;)


For internal links, you might consider just checking whether the file exists and is readable instead of trying to fetch it with get(). That should be much faster.

For links to external files, you have to use get().
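A sketch of that split between local and external links (the helper names and the https?:// test are my own choices; the thread's scripts don't make this distinction yet):

```perl
#!/usr/bin/perl
# Sketch: treat relative links as local files (checked with -e / -r) and
# only send absolute http:// links over the network.
use strict;
use warnings;

sub is_external {
    my ($url) = @_;
    return $url =~ m{^https?://}i;
}

sub local_link_ok {
    my ($path) = @_;
    return -e $path && -r _;   # file exists and is readable
}

for my $link ('index.htm', 'http://example.com/x.html') {
    if (is_external($link)) {
        print "$link: external, would fetch with get()\n";
    } else {
        print "$link: ", (local_link_ok($link) ? "ok" : "broken"), "\n";
    }
}
```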

There are also bound to be links to the same file more than once, so you may want to avoid checking the same link repeatedly. You can use a hash to keep track of which links have already been checked.
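A tiny sketch of that bookkeeping with a %seen hash (unique_links is a name I made up):

```perl
#!/usr/bin/perl
# Sketch: drop duplicate links with a hash so each URL is only checked once.
use strict;
use warnings;

sub unique_links {
    my %seen;
    # $seen{$_}++ is 0 (false) the first time a link appears, so grep keeps
    # only first occurrences, in their original order.
    return grep { !$seen{$_}++ } @_;
}

my @links = ('http://example.com/a.htm', 'page2.html', 'http://example.com/a.htm');
print "$_\n" for unique_links(@links);
```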

The "do{}until eof" construct is not necessary. You might want to look into HTML::LinkExtractor (I think that's the name) to find the links in the HTML documents.
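For the record, the module Kevin may be thinking of is HTML::LinkExtor, which ships with the HTML-Parser distribution (there is also a separate HTML::LinkExtractor on CPAN). Assuming HTML::LinkExtor is installed, a sketch:

```perl
#!/usr/bin/perl
# Sketch: let HTML::LinkExtor find the links instead of a hand-rolled regex.
# Assumes the HTML-Parser distribution is installed (it is not in core perl).
use strict;
use warnings;
use HTML::LinkExtor;

sub links_in_html {
    my ($html) = @_;
    my @links;
    # The callback receives the tag name and its link attributes.
    my $parser = HTML::LinkExtor->new(sub {
        my ($tag, %attr) = @_;
        push @links, $attr{href} if $tag eq 'a' && defined $attr{href};
    });
    $parser->parse($html);
    $parser->eof;
    return @links;
}

my $html = '<a href="http://example.com/a.htm">one</a> <img src="x.gif">';
print "$_\n" for links_in_html($html);
```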
Jan 26 '08 #12
