JRS: In article <11**********************@g44g2000cwa.googlegroups.com>,
dated Wed, 25 May 2005 00:33:29, seen in
news:comp.infosystems.www.authoring.html, Pa************@gmail.com posted :
> There are many good HTML validators, not least, of course, the W3C's.
> However, they only handle one page at a time.
> Is there a free web-based service, or PC program, which will crawl my
> site validating the HTML?
> Bonus features like CSS validation, link checking, spell checking, etc.
> are welcome of course, but most important for me at the moment is HTML
> validation.
Possible slight help :
CHEKLINX, via sig line 3, is a 16-bit DOS program which scans the local
master of a site, checking local links.
It does not otherwise validate, and its scanning is fast and pragmatic; it
does not parse. HREF & NAME must be the first thing after <A to be
recognised.
That version is limited in the size of site it can handle (runs out of
stack); the corresponding EXE via sig line 2 is the same source compiled
for 32-bit Windows; it has not yet met a site size limit.
It checks, on a PII/300, 139 pages in ten seconds, or four seconds for a
recheck when much is cached.
Summary: 139 files tested ; 2127 anchors seen ; 8008 relative cites seen ;
Relative citations: directories - missing 0, found 5 ;
Relative citations: filenames - missing 0, found 695 ;
Relative citations: anchors - missing 0, found 4220 ;
Local URLs 30 ; Odd A-refs 0 ; Make relatives 0 ; NotHTM 557 ;
Links over 8.3 format 0 ; Links with Upper Case 0 ; Links with "\" 0 ;
Repeated NAMEs 0 ; Unused NAMEs ? ; UnQuoted HREF/NAME/&c. 1 ;
Empty values 0 ; Far URLs 2265 ;
No guarantees.
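For the curious, a scan of that pragmatic sort can be sketched in a few
lines of Python. This is a minimal illustration, not the CHEKLINX source
(which, per sig line 2, is Pascal); the function names and the regexes
are my own assumptions, and like CHEKLINX it matches HREF/NAME directly
after <A rather than parsing the HTML :

```python
import os
import re

# Pragmatic, non-parsing scan: HREF or NAME must follow "<a" directly,
# in the spirit of the tool described above. No full HTML parse.
A_HREF = re.compile(r'<a\s+href="([^"#]*)(#[^"]*)?"', re.I)
A_NAME = re.compile(r'<a\s+name="([^"]+)"', re.I)

def scan_site(root):
    """Collect (source, target) relative citations and NAME anchors."""
    hrefs, names = [], set()
    for dirpath, _dirs, files in os.walk(root):
        for fn in files:
            if not fn.lower().endswith((".htm", ".html")):
                continue
            path = os.path.join(dirpath, fn)
            with open(path, encoding="utf-8", errors="replace") as f:
                text = f.read()
            for m in A_HREF.finditer(text):
                target = m.group(1)
                if target and "://" not in target:  # relative cite only
                    dest = os.path.normpath(os.path.join(dirpath, target))
                    hrefs.append((path, dest))
            for m in A_NAME.finditer(text):
                names.add((path, m.group(1)))
    return hrefs, names

def missing_files(root):
    """Relative citations whose target file does not exist on disk."""
    hrefs, _names = scan_site(root)
    return [(src, dst) for src, dst in hrefs if not os.path.exists(dst)]
```

Being regex-based, it shares the limits noted above: an HREF written
single-quoted, unquoted, or after other attributes will be missed, which
is the price of the speed.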
--
© John Stockton, Surrey, UK.  ?@merlyn.demon.co.uk  Turnpike v4.00  MIME. ©
Web <URL:http://www.merlyn.demon.co.uk/> - FAQqish topics, acronyms & links.
PAS EXE TXT ZIP via <URL:http://www.merlyn.demon.co.uk/programs/00index.htm>.
Do not Mail News to me. Before a reply, quote with ">" or "> " (SoRFC1036)