Using Python for programming algorithms

Hello.

I am new to Python. It seems a very interesting language to me. Its
simplicity is very attractive.

However, it is usually said that Python is an interpreted rather than a
compiled programming language; I mean, it is not like C in that sense.

I am working on my PhD Thesis, which is about Operations Research,
heuristic algorithms, etc., and I am considering the possibility of
programming all my algorithms in Python.

The usual alternative is C, but I like Python more.

The main drawbacks I see to using Python are these:

* As far as I understand, the fact that Python is not a compiled
language makes it slower than C when performing huge amounts of
computation within an algorithm or program.

* I don't know how likely I am to find Python libraries related to
my research field.

* I know Python is a "serious" and mature programming language, of
course. But I do not know whether it is seen as "just a fun toy" in a
research context. Is Python considered a good programming language for
implementing Operations Research algorithms, such as heuristics and
other soft-computing algorithms?

Maybe this is not the right forum, but maybe you can give me some
hints or tips...

Thank you in advance.
Jun 27 '08
I reply to myself!
> Boost.Python is also well known (though I have never tested it myself).

http://www.boost.org/doc/libs/1_35_0...ml/index..html

Here is the example. I know it was made to simplify working with
CPython, and it is based on CPython.

Frédéric
Jun 27 '08 #51
On May 21, 12:01 pm, Bruno Desthuilliers
<bruno.42.desthuilli...@websiteburo.invalid> wrote:
> > C has proven very difficult to optimize, particularly because pointer
> > aliasing prevents efficient register allocation.
>
> Does this compare to optimizing something like Python? (serious
> question, but I think I already know part of the answer)
In Python, different references can alias the same object. But the
objects for which this matters in C are typically elementary types
like ints and floats. Pointer aliasing has the consequence that the
compiler often cannot keep an int or a float in a register inside a
tight loop, which can have serious consequences for numerical
computing. But these elementary types are immutable in Python, so
Python is actually somewhat immune to this. This is one of several
reasons why RPython can sometimes be "faster than C".

But there are other much more important issues if you are to generate
efficient machine code from Python, e.g. getting rid of redundant
attribute lookups.
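
To make the attribute-lookup point concrete, here is a hand-written
sketch (the function names are just illustrative): hoisting lookups
like math.sqrt and list.append out of a loop is exactly the kind of
redundancy an optimizing compiler would have to eliminate
automatically.

import math

def norms_naive(points):
    # math.sqrt is looked up on the math module on every iteration
    result = []
    for x, y in points:
        result.append(math.sqrt(x * x + y * y))
    return result

def norms_hoisted(points):
    # the attribute lookups are done once, outside the loop
    sqrt = math.sqrt
    result = []
    append = result.append
    for x, y in points:
        append(sqrt(x * x + y * y))
    return result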

Jun 27 '08 #52
On May 21, 11:59 am, Bruno Desthuilliers
<bruno.42.desthuilli...@websiteburo.invalid> wrote:
> Strangely enough, no one calls Java or C# 'interpreted languages', while
> they (or, to be more exact, their reference implementations) both use
> the same byte-code/VM scheme[1].
Java interprets the bytecode in a virtual machine by default. Only
code 'hotspots' are JIT compiled to native machine code.
Microsoft .NET compiles bytecode to native code on the first
invocation, and caches the machine code for later use. Nothing is
interpreted.

Java can benefit from more runtime information when generating machine
code, but incurs the penalty of running an interpreter most of the
time. MS .NET is more similar to a static compiler. They currently
perform about equally well, sometimes approaching traditional
compiled languages like C++.

But they do not use the same bytecode/VM scheme. In particular,
Microsoft .NET has no virtual machine.
> You know, Common Lisp is also a highly dynamic language, and there are
> now some optimizing native-code Common Lisp compilers that generate very
> efficient binary code. It only took about 30 years and way more
> resources than CPython ever had to get there...
The speed of Common Lisp with compilers like SBCL and CMUCL comes from
optional static typing. This is no more magical than what Cython and
Pyrex can already do. If we remove the interpreter once Cython or
Pyrex supports all features of the Python language, and instead rely
on "JIT compilation" by one of these compilers, we are already there.


Jun 27 '08 #53
In article <71**********************************@y21g2000hsf.googlegroups.com>,
sturlamolden <st**********@yahoo.no> wrote:
> On May 18, 5:46 am, "inhahe" <inh...@gmail.com> wrote:
> The numbers I heard are that Python is 10-100 times slower than C.

Only true if you use Python as if it were a dialect of Visual Basic. If
you use the right tool, like NumPy, Python can be fast enough. Also
note that Python is not slower than any other language (including C)
if the code is I/O-bound. As it turns out, most code is I/O-bound,
even many scientific programs.
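
As a rough sketch of what "using the right tool" means here (actual
speedups depend on the data and the machine), compare an explicit
Python loop with the equivalent NumPy expression, where the inner loop
runs in compiled code:

import numpy as np

def euclidean_pure_python(a, b):
    # every subtraction, multiplication and index goes through the interpreter
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def euclidean_numpy(a, b):
    # the loop over the elements runs inside NumPy's compiled C code
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))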

In scientific research, CPU time is cheap and time spent programming
is expensive. Instead of optimizing code that runs too slowly, it is
often less expensive to use fancier hardware, like parallel
computers. For Python we have, e.g., mpi4py, which gives us access to
MPI. It can be good advice to write scientific software to be
parallelizable from the start.
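
A minimal sketch of what that can look like with mpi4py (assuming an
MPI implementation and mpi4py are installed; evaluate() is just a
stand-in for whatever objective function your heuristic uses):

# run with e.g.: mpiexec -n 4 python search.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def evaluate(candidate):
    # stand-in for an expensive objective function
    return sum(x * x for x in candidate)

candidates = [[i, i + 1, i + 2] for i in range(100)]

# each rank evaluates its own slice of the candidate solutions
my_scores = [evaluate(c) for c in candidates[rank::size]]

# collect the partial results on rank 0
all_scores = comm.gather(my_scores, root=0)
if rank == 0:
    best = min(s for chunk in all_scores for s in chunk)
    print("best objective value:", best)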
Jun 27 '08 #54
