Bytes | Software Development & Data Engineering Community

Web Crawler - Python or Perl?

Hi all,
I am currently planning to write my own web crawler. I know Python but
not Perl, and I am interested in knowing which of these two is the
better choice given the following scenario:

1) I/O issues: my biggest constraint in terms of resources will be
bandwidth bottleneck.
2) Efficiency issues: the crawlers have to be fast, robust, and as
"memory efficient" as possible. I am running all of my crawlers on
cheap PCs with about 500 MB RAM and P3 to P4 processors.
3) Compatibility issues: most of these crawlers will run on Unix
(FreeBSD), so there should be a pretty good compiler that can
optimize my code under these environments.

What are your opinions?
Jun 27 '08 #1
On Jun 9, 11:48 pm, disappeare...@gmail.com wrote:
> I am currently planning to write my own web crawler. I know Python but
> not Perl, and I am interested in knowing which of these two is the
> better choice given the scenario above (1: I/O, 2: efficiency,
> 3: compatibility).
>
> What are your opinions?
It really doesn't matter whether you use Perl or Python for writing
web crawlers; I have used both. The scenarios you mention (I/O,
efficiency, compatibility) don't differ too much between the two
languages. Both have fast I/O. You can use the urllib2 module and/or
Beautiful Soup for developing a crawler in Python. For Perl you can
use the Mechanize or LWP modules. Both languages have good support for
regular expressions. I have heard Perl is slightly faster, though I
don't notice the difference myself. Both are compatible with *nix. For
writing a good crawler, the language is not important; it's the
technology that is important.
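As a rough illustration of the fetch-and-parse split (a minimal sketch only; in Python 2 the fetch side would be urllib2.urlopen(url).read(), and a crude regex stands in for a real parser):

```python
import re

# A crude link extractor: good enough for a sketch, not for arbitrary HTML.
LINK_RE = re.compile(r'<a\s[^>]*href=["\']([^"\']+)["\']', re.IGNORECASE)

def extract_links(html):
    """Return all href targets found in an HTML string."""
    return LINK_RE.findall(html)

page = ('<html><body><a href="/about">About</a> '
        '<A HREF="http://example.com/x">x</A></body></html>')
print(extract_links(page))  # -> ['/about', 'http://example.com/x']
```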

regards,
Subeen.
http://love-python.blogspot.com/
Jun 27 '08 #2
di***********@gmail.com wrote:
1) I/O issues: my biggest constraint in terms of resources will be
bandwidth bottleneck.
2) Efficiency issues: the crawlers have to be fast, robust, and as
"memory efficient" as possible. I am running all of my crawlers on
cheap PCs with about 500 MB RAM and P3 to P4 processors.
3) Compatibility issues: most of these crawlers will run on Unix
(FreeBSD), so there should be a pretty good compiler that can
optimize my code under these environments.
You should rethink your requirements. You expect to be I/O bound, so why do
you require a good "compiler"? Especially when asking about two interpreted
languages...

Consider using lxml (with Python), it has pretty much everything you need for
a web crawler, supports threaded parsing directly from HTTP URLs, and it's
plenty fast and pretty memory efficient.
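For flavor, this is roughly what the parsing side looks like; the sketch below uses the standard-library HTMLParser (module html.parser in Python 3) so it runs anywhere, while lxml.html gives you the same result in one call (e.g. iterlinks()) with far better speed and tolerance for broken markup:

```python
from html.parser import HTMLParser  # named HTMLParser in Python 2

class LinkCollector(HTMLParser):
    """Collect href attributes from <a> tags as the parser streams through."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

collector = LinkCollector()
collector.feed('<p>See <a href="/docs">docs</a> and '
               '<a href="http://example.com">home</a>.</p>')
print(collector.links)  # -> ['/docs', 'http://example.com']
```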

http://codespeak.net/lxml/

Stefan
Jun 27 '08 #3
subeen wrote:
can use urllib2 module and/or beautiful soup for developing crawler
Not if you care about a) speed and/or b) memory efficiency.

http://blog.ianbicking.org/2008/03/3...r-performance/

Stefan
Jun 27 '08 #4
On Jun 10, 12:15 am, Stefan Behnel <stefan...@behnel.de> wrote:
subeen wrote:
can use urllib2 module and/or beautiful soup for developing crawler

Not if you care about a) speed and/or b) memory efficiency.

http://blog.ianbicking.org/2008/03/3...r-performance/

Stefan
Ya, Beautiful Soup is slower, so it's better to use urllib2 for
fetching data and regular expressions for parsing it.
regards,
Subeen.
http://love-python.blogspot.com/
Jun 27 '08 #5
At 11:21 AM -0700 6/9/08, subeen wrote:
> ya, beautiful soup is slower. so it's better to use urllib2 for
> fetching data and regular expressions for parsing data.
Beautiful Soup is a bit slower, but it will actually parse some of
the bizarre HTML you'll download off the web. We've written a couple
of crawlers to run over specific clients' sites (I note, we did _not_
create the content on those sites).

Expect to find html code that looks like this:

<ul>
<li>
<form>
</li>
</form>
</ul>
[from a real example, and yes, it did indeed render in IE.]

I don't know if some of the quicker parsers discussed require
well-formed HTML since I've not used them. You may want to consider
using one of the quicker HTML parsers and, when they throw a fit on
the downloaded HTML, drop back to Beautiful Soup -- which usually
gets _something_ useful off the page.
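Ray's strict-parser-with-fallback idea can be sketched like this; here xml.etree stands in for a strict parser (it raises on bad nesting) and a crude regex for the lenient fallback, where in practice the fallback would be Beautiful Soup:

```python
import re
import xml.etree.ElementTree as ET

LINK_RE = re.compile(r'href=["\']([^"\']+)["\']', re.IGNORECASE)

def extract_links(html):
    """Try a strict parser first; fall back to a lenient one on failure."""
    try:
        root = ET.fromstring(html)  # strict: raises ParseError on bad nesting
        return [a.get("href") for a in root.iter("a")], "strict"
    except ET.ParseError:
        return LINK_RE.findall(html), "fallback"

good = '<ul><li><a href="/a">a</a></li></ul>'
bad = '<ul><li><form><a href="/b">b</a></li></form></ul>'  # tag soup as above
print(extract_links(good))  # -> (['/a'], 'strict')
print(extract_links(bad))   # -> (['/b'], 'fallback')
```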

--Ray

--

Raymond Cote
Appropriate Solutions, Inc.
PO Box 458 ~ Peterborough, NH 03458-0458
Phone: 603.924.6079 ~ Fax: 603.924.8668
rgacote(at)AppropriateSolutions.com
www.AppropriateSolutions.com
Jun 27 '08 #6
subeen <ta************@gmail.com> wrote on Monday, 09 June 2008 20:21:
> ya, beautiful soup is slower. so it's better to use urllib2 for
> fetching data and regular expressions for parsing data.
BeautifulSoup is implemented on top of regular expressions. I doubt
that you can achieve a great performance gain by using plain regular
expressions, and even if you could, the gain is certainly not worth the
effort. Parsing markup with regular expressions is hard, and the result
will most likely not be as fast and as memory-efficient as lxml.html.

I personally am absolutely happy with lxml.html. It's fast, memory
efficient, yet powerful and easy to use.

--
Freedom is always the freedom of dissenters.
(Rosa Luxemburg)
Jun 27 '08 #7
di***********@gmail.com <di***********@gmail.com> wrote:
> I am currently planning to write my own web crawler. I know Python but
> not Perl, and I am interested in knowing which of these two is the
> better choice given the scenario above (1: I/O, 2: efficiency,
> 3: compatibility).
>
> What are your opinions?
Use python with twisted.

With a friend I wrote a crawler. Our first attempt was standard
Python; our second used Twisted. Twisted absolutely blew the socks off
our first attempt, mainly because you can fetch hundreds or thousands
of pages simultaneously, without threads.

Python with twisted will satisfy 1-3. You'll have to get your head
around its asynchronous nature, but once you do you'll be writing a
killer crawler ;-)
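The non-blocking pattern Nick describes (many fetches in flight at once, no threads) looks like this in today's asyncio, with sleeps standing in for network I/O; Twisted expresses the same idea with Deferreds:

```python
import asyncio

async def fetch(url, delay):
    """Simulated page fetch: a real crawler would do non-blocking HTTP here."""
    await asyncio.sleep(delay)  # stands in for waiting on the network
    return (url, "<html>...</html>")

async def crawl(urls):
    # All fetches run concurrently; total time is roughly the slowest
    # single fetch, not the sum of all of them.
    tasks = [fetch(u, 0.01) for u in urls]
    return await asyncio.gather(*tasks)

results = asyncio.run(crawl(["http://example.com/%d" % i for i in range(100)]))
print(len(results))  # -> 100
```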

As for Perl - once upon a time I would have done this with perl, but I
wouldn't go back now!

--
Nick Craig-Wood <ni**@craig-wood.com-- http://www.craig-wood.com/nick
Jun 27 '08 #8
Ray Cote wrote:
Beautiful Soup is a bit slower, but it will actually parse some of the
bizarre HTML you'll download off the web.
[...]
I don't know if some of the quicker parsers discussed require
well-formed HTML since I've not used them. You may want to consider
using one of the quicker HTML parsers and, when they throw a fit on the
downloaded HTML, drop back to Beautiful Soup -- which usually gets
_something_ useful off the page.
So does lxml.html. And if you still feel like needing BS once in a while,
there's lxml.html.soupparser.

http://codespeak.net/lxml/elementsoup.html

Stefan
Jun 27 '08 #9
As to why as opposed to what: I am attempting to build a search engine
right now that will crawl not just HTML but other things too.

I am open to learning, but I don't want to learn anything that doesn't
really contribute to building my search engine for the moment. Hence I
want to see whether learning Perl will be helpful for the later parts
of my search engine.

Victor
Jun 27 '08 #10
di***********@gmail.com wrote:
As to why as opposed to what: I am attempting to build a search engine
right now that will crawl not just HTML but other things too.

I am open to learning, but I don't want to learn anything that doesn't
really contribute to building my search engine for the moment. Hence I
want to see whether learning Perl will be helpful for the later parts
of my search engine.
I honestly don't think there's anything useful in Perl that you can't
do in Python. There are tons of ugly ways to write unreadable code,
though, so if that's what you prefer, Python will make it harder.

Stefan
Jun 27 '08 #11
On Mon, 09 Jun 2008 10:48:03 -0700, disappearedng wrote:
I know Python but not Perl, and I am interested in knowing which of
these two is the better choice.
I'm partial to *Python*, but, the last time I looked, *urllib2* didn't
provide a time-out mechanism that worked under all circumstances. My
client-side scripts would usually hang when the server quit
responding, which happened a lot.

You can get around this by starting an *HTML* retrieval in its own
thread, giving it a deadline, and killing it if it doesn't finish
gracefully.
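Since a Python thread can't truly be killed, in practice the "deadline" is a join() with a timeout, abandoning the worker if it is still alive; a sketch with simulated fast and slow fetches:

```python
import threading
import time

def fetch_with_deadline(fetch_fn, deadline):
    """Run fetch_fn in a thread; return its result, or None past the deadline."""
    result = []
    worker = threading.Thread(target=lambda: result.append(fetch_fn()))
    worker.daemon = True  # don't let an abandoned fetch block interpreter exit
    worker.start()
    worker.join(timeout=deadline)
    if worker.is_alive():
        return None  # deadline passed; the thread is abandoned, not killed
    return result[0]

fast = lambda: "page body"
slow = lambda: (time.sleep(5), "late")[1]
print(fetch_with_deadline(fast, deadline=1.0))  # -> page body
print(fetch_with_deadline(slow, deadline=0.1))  # -> None
```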

A quicker and considerably grittier solution is to supply timeout
parameters to the *curl* command through the shell. Execute the
command and retrieve its output through the *subprocess* module.
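That can look like the following; the flags are real curl options (--max-time caps the whole transfer, --connect-timeout just the connection phase), and the sketch only attempts the fetch when curl is actually installed:

```python
import shutil
import subprocess

def curl_command(url, max_time=10, connect_timeout=5):
    """Build a curl invocation with hard timeouts on the transfer."""
    return ["curl", "--silent", "--max-time", str(max_time),
            "--connect-timeout", str(connect_timeout), url]

cmd = curl_command("http://example.com/")
print(cmd)

if shutil.which("curl"):  # only run the fetch when curl is available
    proc = subprocess.run(cmd, capture_output=True, text=True)
    print("curl exit status:", proc.returncode)
```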

--
... Chuck Rhode, Sheboygan, WI, USA
... 1979 Honda Goldwing GL1000 (Geraldine)
... Weather: http://LacusVeris.com/WX
... 64° — Wind SE 5 mph — Sky partly cloudy.
Jun 27 '08 #12
On Jun 13, 1:26 am, Chuck Rhode <CRh...@LacusVeris.com> wrote:
On Mon, 09 Jun 2008 10:48:03 -0700, disappearedng wrote:
I know Python but not Perl, and I am interested in knowing which of
these two is the better choice.

I'm partial to *Python*, but, the last time I looked, *urllib2* didn't
provide a time-out mechanism that worked under all circumstances. My
client-side scripts would usually hang when the server quit
responding, which happened a lot.
You can avoid the problem using the following code:
import socket

timeout = 300 # seconds
socket.setdefaulttimeout(timeout)

regards,
Subeen.
http://love-python.blogspot.com/
Jun 27 '08 #13
