Why I love python.

There is an amazing article by Paul Graham about Python, and an even
better discussion about it on Slashdot. The reason I point this out
is that the more I read both articles, the more I realised how we would be
mutilating the language with that god-forsaken @ decorator.
I don't know about the rest of you, but I learned Python and fell in
love with its syntax and simplicity. Python just works. So please,
GvR, don't complicate it. Leave it as is. Work on making it faster,
not uglier. Work on - in some cases - better algorithms for certain
modules, not on making it even closely resemble C or Perl or god knows
whateverotherlanguagethereisoutthere. Am I the only one with a
visceral reaction to this thing???

Paul Graham article: http://www.paulgraham.com/pypar.html

Slashdot discussion:
http://developers.slashdot.org/devel...id=156&tid=218
Jul 18 '05 #1
"Michael Scarlett" <bi*******@yahoo.com> wrote in message
news:ce**************************@posting.google.c om...
There is an amazing article by Paul Graham about Python, and an even
better discussion about it on Slashdot. The reason I point this out
is that the more I read both articles, the more I realised how we would be
mutilating the language with that god-forsaken @ decorator.
I don't know about the rest of you, but I learned Python and fell in
love with its syntax and simplicity. Python just works. So please,
GvR, don't complicate it. Leave it as is. Work on making it faster,
not uglier. Work on - in some cases - better algorithms for certain
modules, not on making it even closely resemble C or Perl or god knows
whateverotherlanguagethereisoutthere. Am I the only one with a
visceral reaction to this thing???


Nope, I have the same reaction.
Jul 18 '05 #2
Michael Scarlett wrote:
I don't know about the rest of you, but I learned python and fell in
love with its syntax and simplicity.
That's the funny thing about Python. It really isn't simple, but it sure
seems like it is. There's tons of little niggling rules about scoping,
inheritance, and what have you--but you can be blissfully ignorant of
most of them and still get work done. That's pretty unique.

I think that's why people are so concerned about @pie: it has the
*potential* to be an obstacle early in the learning process, instead of
after someone is already hooked.
Am I the only one with a visceral reaction to this thing???


Goodness, no! Why do you think we've all been pissing and moaning so much?

-- Mark
Jul 18 '05 #3
On 2004-08-13, Michael Scarlett <bi*******@yahoo.com> wrote:
There is an amazing article by paul graham about python, and an even
better discussion about it on slashdot.
Yes, the article is very good.
Don't complicate it. Leave it as is. Work on making it faster, not
uglier.


Python needs a drastic performance improvement if it is to shake off the
"scripting language" stigma. The only way to get these improvements is
to make it possible for a Python implementation to produce *efficient*
*compiled* code. At the same time, the dynamic-typing nature of the
language is one of its most valuable characteristics, and this is one
of the hardest problems when trying to write a decent Python
compiler. If you define a function like:

def sum(a, b):
    return a + b

How can the compiler know what code to produce? It could trace all the
applications of sum(), and decide what types of arguments sum() is
actually applied on. But this is not easy, and sometimes it is
straight-out impossible.
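The ambiguity is easy to demonstrate: one and the same two-argument function works on any type that supports +, so no single compiled representation covers all calls (a minimal illustration; the trailing underscore just avoids shadowing the builtin):

```python
def sum_(a, b):
    # One definition, many runtime behaviours: the compiler cannot
    # pick a single machine-level representation ahead of time.
    return a + b

int_result = sum_(1, 2)            # integer addition
float_result = sum_(1.5, 2.5)      # float addition
str_result = sum_("py", "thon")    # string concatenation
list_result = sum_([1], [2])       # list concatenation
```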

A different approach would be for the programmer to *suggest* what
kind of types the function will *most probably* be applied on. The
programmer might suggest to the compiler that "a" and "b" will *most
probably* be integers or floats, so the compiler will have to produce
code for a function that handles these cases (or code for two
"functions", one for each case). One might say that the function could
be "decorated" by the programmer regarding the type of its
arguments. Notice that in such a case the decoration does not alter
the behavior of the function in any way! The function can still be
called with string arguments, in which case execution will be
dispatched to an "interpreted" version thereof.
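A rough sketch of such a type "decoration" in today's Python (the name `suggest_types` is hypothetical, not a real feature; both paths here just call the same Python body, with counters standing in for compiled versus interpreted dispatch):

```python
def suggest_types(*fast_types):
    """Hypothetical decorator: route calls whose arguments match the
    suggested types through a 'specialised' path, everything else
    through the generic one.  A real compiler would emit native code
    for the first path; here counters merely record which path ran."""
    def wrap(func):
        def dispatcher(a, b):
            if isinstance(a, fast_types) and isinstance(b, fast_types):
                dispatcher.fast_calls += 1      # would run compiled code
            else:
                dispatcher.generic_calls += 1   # interpreted fallback
            return func(a, b)
        dispatcher.fast_calls = 0
        dispatcher.generic_calls = 0
        return dispatcher
    return wrap

@suggest_types(int, float)
def sum_(a, b):
    return a + b
```

Note that, as described above, the decoration does not alter behaviour: sum_("foo", "bar") still concatenates strings; it merely takes the generic path.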

So I believe that "making it faster" requires some fundamental
work, and not simply devising "better algorithms for some
modules". Better algorithms for some modules will give you a
point-something improvement in performance. Being able to produce
efficient compiled code can give you an improvement of an order of
magnitude or more, depending on the type of program. The same is
true for "making it more secure" (e.g. by providing the programmer
a way to specify what types of arguments are *allowed* to be passed
to a function).

In general, python must break free from its perl-ish adolescence
(language for small things, which doesn't have to be very fast or very
safe), but without losing its agility. Decorators might be a step in
the right direction, or at least they might allow some experimentation
with such matters.

Because of this they are welcome.

Just my 2c
/npat

Jul 18 '05 #4

"Nick Patavalis" <np**@efault.net> wrote in message
news:sl*****************@gray.efault.net...
On 2004-08-13, Michael Scarlett <bi*******@yahoo.com> wrote:

Python needs a drastic performance improvement if it is to shake off the
"scripting language" stigma.
More performance would be helpful. There are a number
of projects that are working toward that end, of which
the most visible is the PyPy project. Jim Hugunin claims
that he's getting substantial improvements with his port
to the .NET framework, but see Fredrik Lundh's August
4 post on the subject.

As far as I'm aware, the biggest current performance
sink is function and method call overhead. Lookup for
module and built-in level variables is also a significant
time sink - and both module level and builtin identifiers
are used quite frequently.

Another thing to notice is the garbage collection
algorithm. Python uses reference counting as the
basic algorithm, which wasn't that bad a choice
a decade ago. Today real garbage collection
technology has outstripped it so that maintaining
the reference counts is another time sink.

The descriptor technology in new style classes
is a stunning technical achievement, but in the
worst case it requires a full scan of the class
hierarchy before the mechanism can decide if
it's appropriate to insert an attribute into an
instance or invoke a property.
The only way to get these improvements is
making it possible for a python implementation
to produce *efficient* *compiled* code.
I think there are lots of people that would dispute
you on that. Current Java environments run close
to C++ performance due to the JIT compilers
that are built into the runtimes. Current JIT
technology doesn't require pre-declaration of
variable types; it's perfectly happy to insert checks
at appropriate points so that it can reuse code when
the object types don't change (which they don't
most of the time.)

John Roth

Just my 2c
/npat

Jul 18 '05 #5
On Thu, Aug 12, 2004 at 08:22:01PM -0400, Mark Bottjer wrote:
Michael Scarlett wrote:
I don't know about the rest of you, but I learned python and fell in
love with its syntax and simplicity.


That's the funny thing about Python. It really isn't simple, but it sure
seems like it is. There's tons of little niggling rules about scoping,
inheritance, and what have you--but you can be blissfully ignorant of
most of them and still get work done. That's pretty unique.

I think that's why people are so concerned about @pie: it has the
*potential* to be an obstacle early in the learning process, instead of
after someone is already hooked.

Agreed, Python isn't simple, and those hidden things are actually useful for
getting real work done. I've been using Python industrially for three years
and I'm a big fan of decorators; decorators would help me get things done.
I liked the look of the [decorators]-before-colon option more, but the current
situation of

def foo(a, b, c):
    #
    # 60 lines of code here
    #
foo = mutate(foo)  # oh, and by the way, the 'def foo'
                   # signature might be misleading

'foo = mutate(foo)' is boilerplate, and Python is nice because it eschews
boilerplate.
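The two spellings are equivalent; a toy `mutate` (hypothetical, here just tagging the function it is applied to) makes the difference in placement visible:

```python
def mutate(func):
    # Toy transformation: tag the function so we can see it was applied.
    func.mutated = True
    return func

# Current spelling: the rebinding trails the (possibly long) body.
def foo(a, b, c):
    return a + b + c
foo = mutate(foo)

# Proposed spelling: the transformation sits right on the def.
@mutate
def bar(a, b, c):
    return a + b + c
```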

While the decorator syntax might not be obvious to newbies, they won't see
it in simple code. When they do see it, having @mutate right next to the
function def has to be more of a clue than a 'foo = mutate(foo)' line sitting
lines or screens away.

-Jack
Jul 18 '05 #6
On 2004-08-13, John Roth <ne********@jhrothjr.com> wrote:

"Nick Patavalis" <np**@efault.net> wrote in message
news:sl*****************@gray.efault.net...
On 2004-08-13, Michael Scarlett <bi*******@yahoo.com> wrote:

Python needs a drastic performance improvement if it is to shake off the
"scripting language" stigma.
More performance would be helpful. There are a number
of projects that are working toward that end, of which
the most visible is the PyPy project.


Yes, I know about PyPy, and I think what they are trying to do is
write Python itself in a Python-subset that can be efficiently
compiled, or something along these lines. This is interesting (to say
the least).

As far as I'm aware, the biggest current performance
sink is function and method call overhead [...]

Another thing to notice is the garbage collection
algorithm [...]
Both very true!
The only way to get these improvements is
making it possible for a python implementation
to produce *efficient* *compiled* code.
I think there are lots of people that would dispute
you on that. Current Java environments run close
to C++ performance due to the JIT compilers
that are built into the runtimes.


You're right, I was maybe a bit too dogmatic on my point. But you
must accept that JIT compilers are, nevertheless, compilers! They may
be more intelligent and more flexible than traditional "ahead of time"
compilers, but they are still fundamentally compilers. Furthermore,
in most cases it might be possible for an AOT compiler to produce a
"binary" that doesn't contain the compiler itself.
Current JIT technology doesn't require pre-declaration of variable
types; it's perfectly happy to insert checks at appropriate points
so that it can reuse code when the object types don't change (which
they don't most of the time.)


What you mean, I guess, is that the first time a function is applied,
it is compiled to native code, and a signature for the application is
generated. The next time, the application is checked against the
signature and if they match, the existing code is used; otherwise the
function is re-compiled (preserving the previously compiled one too,
in some sort of "cache"). Or am I getting it wrong? Even in such a
case, though, pre-declarations would help.
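That caching scheme can be mimicked in pure Python: keep one entry per tuple of argument types and "compile" on a cache miss (a sketch only; here compilation is simulated by creating a closure and counting it, where a real JIT would generate machine code):

```python
def jit_cache(func):
    """Toy model of a per-signature code cache.  Each distinct tuple
    of argument types triggers one 'compilation'; later calls with
    the same signature reuse the cached version."""
    cache = {}
    def call(*args):
        sig = tuple(type(a) for a in args)
        if sig not in cache:
            call.compiles += 1                # cache miss -> (re)compile
            cache[sig] = lambda *a: func(*a)  # stands in for native code
        return cache[sig](*args)
    call.compiles = 0
    return call

@jit_cache
def mul(a, b):
    return a * b
```

Calling mul with ints twice compiles once; switching to floats triggers a second "compilation" while the int version stays cached.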

Do you happen to know of any efforts to build such "AOT"/"JIT"
compilation/execution environments for Python?

Regards
/npat

Jul 18 '05 #7
On Fri, 13 Aug 2004 01:11:54 +0000 (UTC), Nick Patavalis
<np**@efault.net> wrote:
Don't complicate it. Leave it as is. Work on making it faster, not
uglier.


Python needs a drastic performance improvement if it is to shake off the
"scripting language" stigma.


I'm biased, having done a paper on this at the most recent PyCon, but
I firmly believe that many of the "Python is too slow" arguments can be
answered with "too slow for what?" See the PyCon proceedings, but I've
been doing VoIP in Python, complete with audio mixing, and it's been
more than fast enough.

Another large app is a database ETL tool - Python is more than
adequate for flinging around a very large number of rows of data.
Indeed, it could be 4-5 times as slow, and Oracle would still be the
bottleneck.

Sure, you're not going to get great performance for your numerical
computation in Python, but luckily, we have numarray for this.

Yes, additional performance would be a nice-to-have, but I've not
really found the existing interpreter's performance to be that much
of a problem. I suspect that one of the many new implementations
will provide us with some wins here.
Jul 18 '05 #8
On 12 Aug 2004 17:05:34 -0700, Michael Scarlett <bi*******@yahoo.com> wrote:
[ pie decorators ]
Am I the only one with a
visceral reaction to this thing???


So did you have a similar reaction on first hitting the indentation for
blocks? I know I dimly recall thinking that this was very strange and
horrible (dimly, because it was 1992 or 1993).
Jul 18 '05 #9
On Fri, 13 Aug 2004 16:35:58 +1000,
Anthony Baxter <an***********@gmail.com> wrote:
On Fri, 13 Aug 2004 01:11:54 +0000 (UTC), Nick Patavalis
<np**@efault.net> wrote:
> Don't complicate it. Leave it as is. Work on making it faster, not
> uglier.


Python needs a drastic performance improvement if it is to shake off the
"scripting language" stigma.


I'm biased, having done a paper on this at the most recent PyCon, but
I firmly believe that much of the "Python is too slow" arguments can be
answered with "too slow for what?" See the pycon proceedings, but I've
been doing VoIP in Python, complete with audio mixing, and it's been
more than fast enough.


I've also been doing rtp voice in python - on an iPAQ H3800... I'm using
your rtp.py code (I'm not doing sip), so I can't take credit for it
though :)

I don't know if it's "more than fast enough", but it's "fast enough".

--
Sam Holden
Jul 18 '05 #10
On 13 Aug 2004 06:54:11 GMT, Sam Holden <sh*****@flexal.cs.usyd.edu.au> wrote:
I've also been doing rtp voice in python - on an iPAQ H3800... I'm using
your rtp.py code (I'm not doing sip), so I can't take credit for it
though :)

I don't know if it's "more than fast enough", but it's "fast enough".


Neat! Well, on that sort of extremely limited hardware, it's not surprising
that it's more of a struggle. Someone else got the full Shtoom working on
WinCE or similar.
Jul 18 '05 #11
Nick Patavalis wrote:

<snip>
Python needs a drastic performance improvement if it is to shake off the
"scripting language" stigma. The only way to get these improvements is
making it possible for a python implementation to produce *efficient*
*compiled* code. At the same time the dynamic-typing nature of the
language is one of its most valuable characteristics. And this is one
of the hardest problems when trying to write a decent python
compiler. If you define a function like:

def sum(a, b):
    return a + b

How can the compiler know what code to produce?


I know of at least one language which has solved this problem, Ocaml

http://www.ocaml.org/

It's called type inference and, since there is at least one working
implementation, it can't be THAT hard.
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo no****@mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
Never argue with stupid people. They'll just drag you down to
their level and beat you with experience
Jul 18 '05 #12
Erik de Castro Lopo wrote:
def sum(a, b):
    return a + b

How can the compiler know what code to produce?

I know of at least one language which has solved this problem, Ocaml

http://www.ocaml.org/

It's called type inference and, since there is at least one working
implementation, it can't be THAT hard.


You are comparing apples and oranges. The programmer provides OCaml
with additional information that allows it to infer the type.
Looking at the example above, the OCaml equivalent would be:

let sum x y = x + y;;

But this function would only work for integers, because the + operator
only applies to integers. If you wanted to add floats then you
would write:

let sum x y = x +. y;;

So there is no magic in OCaml, just a different way of providing type
information.

Cheers,
Brian
Jul 18 '05 #13
Erik de Castro Lopo wrote:
Nick Patavalis wrote:


<snip>
Python needs a drastic performance improvement if it is to shake off the
"scripting language" stigma. The only way to get these improvements is
making it possible for a python implementation to produce *efficient*
*compiled* code. At the same time the dynamic-typing nature of the
language is one of its most valuable characteristics. And this is one
of the hardest problems when trying to write a decent python
compiler. If you define a function like:

def sum(a, b):
    return a + b

How can the compiler know what code to produce?


I know of at least one language which has solved this problem, Ocaml

http://www.ocaml.org/

It's called type inference and, since there is at least one working
implementation, it can't be THAT hard.


Refer to the task "Typed Python" somewhere in the past if you want more
information about Python and Type inferencing.

Reinhold

--
Wenn eine Linuxdistribution so wenig brauchbare Software wie Windows
mitbrächte, wäre das bedauerlich. Was bei Windows der Umfang eines
"kompletten Betriebssystems" ist, nennt man bei Linux eine Rescuedisk.
-- David Kastrup in de.comp.os.unix.linux.misc
Jul 18 '05 #14
Reinhold Birkenfeld wrote:
<snip>

Refer to the task "Typed Python" somewhere in the past if you want more
information about Python and Type inferencing.

s/task/thread/

Reinhold

Jul 18 '05 #15
Erik de Castro Lopo wrote:
I know of at least one language which has solved this problem, Ocaml

http://www.ocaml.org/

It's called type inference and, since there is at least one working
implementation, it can't be THAT hard.


That's actually the kind of thing that is planned for Python with
Starkiller, however silly a project name that might be.

--
__ Erik Max Francis && ma*@alcyone.com && http://www.alcyone.com/max/
/ \ San Jose, CA, USA && 37 20 N 121 53 W && AIM erikmaxfrancis
\__/ If love is the answer, could you rephrase the question?
-- Lily Tomlin
Jul 18 '05 #16

"Nick Patavalis" <np**@efault.net> wrote in message
news:sl*****************@gray.efault.net...
On 2004-08-13, John Roth <ne********@jhrothjr.com> wrote:

"Nick Patavalis" <np**@efault.net> wrote in message
news:sl*****************@gray.efault.net...
On 2004-08-13, Michael Scarlett <bi*******@yahoo.com> wrote:

Python needs a drastic performance improvement if it is to shake off the
"scripting language" stigma.
More performance would be helpful. There are a number
of projects that are working toward that end, of which
the most visible is the PyPy project.

The only way to get these improvements is
making it possible for a python implementation
to produce *efficient* *compiled* code.


I think there are lots of people that would dispute
you on that. Current Java environments run close
to C++ performance due to the JIT compilers
that are built into the runtimes.


You're right, I was maybe a bit too dogmatic on my point. But you
must accept that JIT compilers are, nevertheless, compilers! They may
be more intelligent and more flexible than traditional "ahead of time"
compilers, but they are still fundamentally compilers. Furthermore,
in most cases it might be possible for an AOT compiler to produce a
"binary" that doesn't contain the compiler itself.


It's generally regarded as not worth doing, simply because
JITs might compile different code each time through a
method if the signature changes dynamically.
Current JIT technology doesn't require pre-declaration of variable
types; it's perfectly happy to insert checks at appropriate points
so that it can reuse code when the object types don't change (which
they don't most of the time.)


What you mean, I guess, is that the first time a function is applied,
it is compiled to native code, and a signature for the application is
generated. The next time, the application is checked against the
signature and if they match, the existing code is used; otherwise the
function is re-compiled (preserving the previously compiled one too,
in some sort of "cache"). Or am I getting it wrong? Even in such a
case, though, pre-declarations would help.


Exactly, although the scope is smaller than a function - it has
to check other variables that the method might refer to.
Declarations don't help unless they can provide a solid
guarantee of the variable's type. If they can't, they're
useless because the JIT has to insert the type checking
code anyway.

Do you happen to know of any efforts to build such "AOT"/"JIT"
compilation/execution environments for Python?
That's part of the plan for PyPy.

John Roth
Regards
/npat

Jul 18 '05 #17
On 2004-08-13, John Roth <ne********@jhrothjr.com> wrote:

Nick Patavalis <np**@efault.net> wrote:

You're right, I was maybe a bit too dogmatic on my point. But you
must accept that JIT compilers are, nevertheless, compilers! They may
be more intelligent and more flexible than traditional "ahead of time"
compilers, but they are still fundamentally compilers. Furthermore,
in most cases it might be possible for an AOT compiler to produce a
"binary" that doesn't contain the compiler itself.
It's generally regarded as not worth doing, simply because
JITs might compile different code for each time through a
method if the signature changes dynamically.


What is regarded as not worth doing? I don't really understand this
remark.

Declarations don't help unless they can provide a solid
guarantee of the variable's type. If they can't, they're
useless because the JIT has to insert the type checking
code anyway.


Agreed! The only way to avoid type-checking at runtime is to have
static typing, but nobody wants that, do they? Declarations, though, can
help by indicating to the compiler which applications are worth
optimizing (i.e. do the best you can for strings, but for ints
and floats I do want this code to be fast).

/npat
Jul 18 '05 #18
On 2004-08-13, Erik Max Francis <ma*@alcyone.com> wrote:
Erik de Castro Lopo wrote:

It's called type inference and, since there is at least one working
implementation, it can't be THAT hard.


That's actually the kind of thing that is planned for Python with
Starkiller, however silly a project name that might be.


Correct me if I'm wrong, but I think that Starkiller produces
optimized code (i.e. native code) only if it can unambiguously
infer the types a priori, and there are cases (in a dynamically
typed language like Python) where this is impossible. In these cases,
I believe, Starkiller does nothing. Are there any plans for treating
such cases? And how?

/npat
Jul 18 '05 #19
On 2004-08-13, Anthony Baxter <an***********@gmail.com> wrote:

I'm biased, having done a paper on this at the most recent PyCon, but
I firmly believe that much of the "Python is too slow" arguments can be
answered with "too slow for what?" See the pycon proceedings, but I've
been doing VoIP in Python, complete with audio mixing, and it's been
more than fast enough.

Yes, but what parts of it were done in Python, and what parts were done
inside modules written in C?

Do you believe, for example, that a web-server written in python could
outperform apache? How about an H323 implementation, or a TCP/IP
stack? Or a font renderer? Or a ray-tracer? A gate-level circuit
simulator? A web-browser? A relational database?

Sure, you're not going to get great performance for your numerical
computation in Python, but luckily, we have numarray for this.


If numarray were written *in Python* I would be delighted. But even
with numarray, if you want to do an FFT, you do it in C, not in
Python. And if FFT is not good for you and you need a DCT, again in
C. And if the FFT of numarray is not sufficient (e.g. you want an
integer version with certain bit-exact properties), hmmm, sorry, you
have to do it in C.
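The gap is easy to see with a direct O(n^2) DFT in pure Python (illustrative only; a C FFT such as numarray's does the equivalent work orders of magnitude faster):

```python
import cmath

def dft(xs):
    """Naive discrete Fourier transform, O(n^2) pure Python.
    X[k] = sum over i of x[i] * exp(-2j*pi*k*i/n)."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(xs))
            for k in range(n)]
```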

At this moment Python is an excellent *glue* language for stuff written
in low-level languages. It is also an excellent prototyping language. It
has a long way to go before becoming a true "production" language (in
the sense outlined above). Most of this way has to do with Python
*implementations* and not with Python-the-language. But it seems that
there are some steps that must be taken by the language itself in
order to open the road to efficient implementations.

/npat
Jul 18 '05 #20
Brian Quinlan wrote:

You are comparing apples and oranges.
Yes, Ocaml and Python are very different languages but ...
The programmer provides OCaml
with additional information that allows it to interfer the type.
But Ocaml does have parametric polymorphism, ...
Looking at the example above, the OCaml equivalent would be:

let sum x,y = x + y;;

But this function would only work for integers because the * operator
only applies to integers. If you wanted to multipy floats then you
would write:

let sum x,y = x +. y;;

So there is no magic in OCaml, just a different way of providing type
information.


Using ints and floats is a bad example because OCaml has different
operators for float and int. A better example might be a function to
do the Python equivalent of

string.join(list_of_strings, ", ")

Ie:

(* The list version *)
let rec comma_join lst =
  match lst with
    [] -> ""
  | hd :: [] -> hd
  | hd :: tl -> hd ^ ", " ^ (comma_join tl)
;;

(* The array version, an example only. *)
let comma_join ary =
  comma_join (Array.to_list ary)
;;
OCaml has no problem with two functions sharing a name here: the second
definition simply shadows the first, while the compiler infers each
function's type from how its arguments are used, keeping the function
generic if insufficient information is available.

Erik
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo no****@mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
"One World, one Web, one Browser." - Microsoft promotion
"Ein Volk, ein Reich, ein Fuhrer." - Adolf Hitler
Jul 18 '05 #21
Nick Patavalis wrote:
On 2004-08-13, Anthony Baxter <an***********@gmail.com> wrote:
I'm biased, having done a paper on this at the most recent PyCon, but
I firmly believe that much of the "Python is too slow" arguments can be
answered with "too slow for what?" See the pycon proceedings, but I've
been doing VoIP in Python, complete with audio mixing, and it's been
more than fast enough.

Yes but what parts of it were done in python, and what parts were done
inside modules written in C?

Do you believe, for example, that a web-server written in python could
outperform apache?


Yes. Apache is not that fast, and web servers are often more network
bound than CPU bound.
How about an H323 implementation, or a TCP/IP
stack? Or a font renderer? Or a ray-tracer? A gate-level circuit
simulator? A web-browser? A relational database?


Nobody is arguing that Python is as fast as C. But being slower does not
imply that Python is unsuitable for those tasks. I'd consider your list
to be pretty atypical of normal development (how many TCP stacks and
relational databases need to get written each year?), but even so most
of the items on your above list _have_ been done in Python and have done
pretty well for a number of applications. I'd wager that the vast
majority of programs written have at their disposal more CPU than they
need, so using more of that spare CPU power (by using a higher level
language like Python) is a cost many people are ready to pay for many,
many applications.

Note also that all or most of those programs on your list at one time
had to be partially implemented in assembly language even if the main
language was C or C++, and yet that didn't make C or C++ unsuitable
development languages for the task (nor did it make them only "glue
languages"). The same can hold true for Python in many cases - if a
small portion needs to be developed in a lower-level language you can
still derive great benefit from doing the rest of the application in
Python.

In nearly all of the cases where I was sure I'd have to later recode a
portion in C for performance, that day never arrived. For some reason
additional performance is always welcome, but the lack thereof rarely
ends up becoming a big deal. (And this is not just in my own projects -
when other people/companies are driving the requirements they are pretty
much always more interested in getting it to market and adding new
features. On one project in particular I have on my todo list to go
rewrite the performance "critical" core in C and it's been on my todo
list for a couple of _years_ now because I'm the only one left who cares
that it could be faster - everyone else is focused on feature set. And
since the core is partially CPU bound its performance has more than
doubled during that time due to faster CPUs - here I am sitting still
and the problem is going away :) ).
Sure, you're not going to get great performance for your numerical
computation in Python, but luckily, we have numarray for this.

If numarray were written *in Python* I would be delighted. But even
with numarray, if you want to do an FFT, you do it in C, not in
Python. And if FFT is not good for you and you need a DCT, again in
C. And if the FFT of numarray is not sufficient (e.g. you want an
integer version with certain bit-exact properties), hmmm, sorry, you
have to do it in C.

At this moment Python is an excellent *glue* language for stuff written
in low-level languages. It is also an excellent prototyping language. It
has a long way to go before becoming a true "production" language (in
the sense outlined above).


I have to disagree - we use it as our main production language for so
many different things it's hard for me to sit still when it's
pigeonholed as just a glue language (*especially* when a lot of our
Python programs sit idle for large blocks of time waiting for e.g. the
database to get done or for the network pipe to become less saturated).

Maybe it all comes down to domain, but for me the cases you describe are
rare and oddball enough that if a little C is needed to get the job done
then it's no big deal because they make up such a tiny minority of all
the problems we're solving.

C/C++ are becoming less and less suitable for production use - their
main remaining advantage is performance and that becomes a smaller and
smaller issue each year. Everything from manual memory management to
hacky, primitive data structures (even _with_ C++/STL) make them more of
a liability than an asset - development in them is slow, error-prone,
and therefore too expensive. For personal projects I don't have enough
spare time to waste it coding in something so low-level as C, and for
professional projects the raw speed is generally valued but much less so
than time-to-market and cost of change so I can't justify C/C++/etc
there either.

99% of the time the tradeoff for using Python comes down to this:

Benefits: low cost, fast time to market, cheap addition of new features,
fewer bugs

Costs: use CPU cycles that were already idle anyway

Score!

-Dave
Jul 18 '05 #22
In article <sl*****************@gray.efault.net>,
Nick Patavalis <np**@efault.net> wrote:

At this moment Python is an excellent *glue* language for stuff written
in low-level languages. It is also an excellent prototyping language. It
has a long way to go before becoming a true "production" language (in
the sense outlined above). Most of this way has to do with Python
*implementations* and not with Python-the-language. But it seems that
there are some steps that must be taken by the language itself in
order to open the road to efficient implementations.


Let me play the Devil's advocate for a moment here.

Why is it important to write an entire program in a single language
(e.g., Python), versus using a hybrid approach? If you can use Python
at all, that means your platform already has good support for a C
compiler, so there is no reason not to use C extensions if you really
need performance.

Now, perhaps you'll argue that C extensions are not as portable as
Python ones. And yet, portability failures usually arise from
differences in how you access hardware (e.g., graphics cards, audio
hardware, input devices) or operating system API's, and those
differences are going to crop up in Python as well. If you are going to
have to write system-specific code anyway, and assuming you are very
concerned about "high performance," you might as well just provide
multiple C extensions to accommodate the difference, and let the Python
glue code remain the same.
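The hybrid approach described here can be sketched with ctypes (a third-party package at the time of this thread, part of the standard library since Python 2.5). A minimal example, assuming a Unix-like system where the C library's symbols are loadable from the running process:

```python
import ctypes

# On Unix, CDLL(None) loads the symbols of the running process, which
# include the C library; this sketch assumes a Unix-like system.
libc = ctypes.CDLL(None)

# Declare strlen's signature so ctypes converts the argument correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

def c_strlen(s):
    """Length of a byte string, computed by the C library's strlen."""
    return libc.strlen(s)

print(c_strlen(b"hello, world"))
```

The Python side stays identical no matter which platform-specific library ends up behind the call, which is exactly the "multiple C extensions, same glue code" arrangement argued for above.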

By this view, I would argue that Python is a much better "production"
language than many other languages currently being used in that role.
It is no harder to write extensions for Python than to write native
methods for Java (and, I would argue, easier for several common cases).
Furthermore, Python can be stripped down and embedded without too much
pain, so that the developer is not forced to maintain a single
monolithic code-base for their entire application in Python, simply to
take advantage of a few of its powerful features.

In short, I would argue that Python's ability to play nicely in a
multiple-language development project is actually a sign of its maturity
as a production tool. More cool languages are killed by their lack of
ability to interface nicely with other cool languages, than all other
reasons combined.

-M

--
Michael J. Fromberger | Lecturer, Dept. of Computer Science
http://www.dartmouth.edu/~sting/ | Dartmouth College, Hanover, NH, USA
Jul 18 '05 #23
I guess you are looking for type inference or something along these
lines.
There's a very ambitious project called "Starkiller" which is a
static type inferencer and a C++ compiler for Python.
It's being developed by Michael Salib, an MIT graduate, and as far as
I know it will be released very soon.

Preliminary results show speedups by a factor of 60.
http://www.python.org/pycon/dc2004/p...esentation.pdf
Jul 18 '05 #24
Nick Patavalis wrote:
On 2004-08-13, Erik Max Francis <ma*@alcyone.com> wrote:
Erik de Castro Lopo wrote:

Its called type inferencing and since there is at least one working
implementation, it can't be THAT hard.


That's actually the kind of thing that is planned for Python with
Starkiller, however silly a project name that might be.


Correct me if I'm wrong, but I think that Starkiller produces
optimized code (i.e. native code) only if it can unambiguously
infer the types a priori, and there are cases (in a dynamically
typed language like Python) where this is impossible. In these cases,
I believe, starkiller does nothing. Are there any plans for treating
such cases? And how?


I think the dynamic nature does make it impossible to do anything in
such cases in the first place. Consider:

klass = raw_input()
classobj = eval(klass + "()")
print classobj.whatami

A compiler can tell absolutely _nothing_ about the resulting class
object since typing information is not contained in the program.

One would have to tell the "compiler" explicitly which types the
variable will be allowed to hold, such as:

klass = raw_input()
classobj as (FooObject, BarObject, BazInterface) = eval(klass + "()")
print classobj.whatami

But that requires "typed Python" extensions, and as such isn't pure type
inferencing any more.
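One way to read the hypothetical `classobj as (...)` annotation is as a closed dispatch table plus a runtime membership check. A sketch, with the class names and the `whatami` attribute invented for illustration:

```python
# FooObject/BarObject and the `whatami` attribute are invented names,
# standing in for the classes in the hypothetical declaration above.
class FooObject:
    whatami = "a Foo"

class BarObject:
    whatami = "a Bar"

ALLOWED = (FooObject, BarObject)

def make_instance(name):
    # A closed dispatch table replaces the open-ended eval(); the tuple
    # ALLOWED is exactly the type information an inferencer would lack
    # when faced with raw_input() + eval().
    table = dict((cls.__name__, cls) for cls in ALLOWED)
    obj = table[name]()
    assert isinstance(obj, ALLOWED)
    return obj

print(make_instance("FooObject").whatami)
```

With the candidate set written down, a compiler could in principle specialize the code for each member; without it, the value's type is unknowable before runtime.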

Reinhold

--
If a Linux distribution shipped as little usable software as Windows
does, that would be regrettable. What counts as the scope of a
"complete operating system" on Windows is called a rescue disk on Linux.
-- David Kastrup in de.comp.os.unix.linux.misc
Jul 18 '05 #25
On 2004-08-13, Dave Brueck <da**@pythonapocrypha.com> wrote:

Yes. Apache is not that fast, and web servers are often more network
bound than CPU bound.

We're obviously interested in cases where the problem is
CPU-bound. In a network-bound server, there is no *meaning* in
speaking about performance (with respect to the implementation
language).
Nobody is arguing that Python is as fast as C. But being slower does
not imply that Python is unsuitable for those tasks. I'd consider
your list to be pretty atypical of normal development (how many TCP
stacks and relational databases need to get written each year?),
I also mentioned web-browsers, ray-tracers, circuit-simulators. I
could add word-processor, spreadsheets, video editing programs, and
GUI toolkits to the list. Are they still too exotic? To cut the thread
short, what I mean is that an application that has to do something
like:

for i in range(N):
    a[i] = b[i] + c[i]

is bound to be 10 to 100 times slower than the equivalent coded in
C. Which means that the cost of doing *computation* in Python is
prohibitively high! Have you ever seen, say, an AVL-tree
implementation in production Python code? Probably not. Have you ever
seen someone implementing some sort of string-lookup algorithm in
Python (instead of using the built-in dictionaries)? Again no. Is it
because Python has found the "one-size-fits-all",
"best-algorithm-ever-devised" solution? Or is it because the weight of
the language itself is such that even a suboptimal algorithm
implemented in C will never be matched by a python implementation?
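The point about the built-in dictionaries can be made concrete: a hand-rolled string lookup in pure Python against the C-implemented dict. A sketch; the timings are machine-dependent and only the ratio matters:

```python
import timeit

def linear_lookup(pairs, key):
    # the kind of hand-rolled lookup one might otherwise code in C
    for k, v in pairs:
        if k == key:
            return v
    raise KeyError(key)

pairs = [("key%d" % i, i) for i in range(1000)]
table = dict(pairs)

# Both find the same answer...
assert linear_lookup(pairs, "key999") == table["key999"] == 999

# ...but the C-implemented dict wins by orders of magnitude on lookups.
t_linear = timeit.timeit(lambda: linear_lookup(pairs, "key999"), number=1000)
t_dict = timeit.timeit(lambda: table["key999"], number=1000)
print("pure-Python scan: %.4fs, built-in dict: %.4fs" % (t_linear, t_dict))
```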

The very fact that the python interpreter itself in implemented in C
(and not in Python) is indicative.

Note also that all or most of those programs on your list at one time
had to be partially implemented in assembly language even if the main
language was C or C++, and yet that didn't make C or C++ unsuitable
development languages for the task (nor did it make them only "glue
languages").

No, but the performance difference between C and Assembly was
*small*. And at some point the C compilers became so good, that you
couldn't beat them by hand coding something (of considerable length)
in assembly. As for C++, one of its primary design-goals were "zero
unneeded overhead"; so it *is* possible to write a C++ program that is
as fast as a C program, if you want to do it.
The same can hold true for Python in many cases - if a small portion
needs to be developed in a lower-level language you can still derive
great benefit from doing the rest of the application in Python.


Of course you can! Nobody argued that Python is useless. Python is one
of the cleanest, most pleasant, and most productive languages one
could wish for. For me it would not be an exaggeration to say that
Python has brought a lot of fun back in programming (and in many
ways). The reason I'm writing this is that *I also* hate to see it
pigeon-holed as a "glue" or "scripting" language. Our difference, I
guess, is that I believe that there is *some* truth in such characterizations;
and this has to do with the current, immature, state of the Python
*environments*. So my point is that we should not relax in the cozy
feeling that "Python is great for most applications, even if it's a
little slow, but who cares". I want to be able to write signal
processing functions in Python, or implement that optimized
special-case search algorithm, and I want to be sure that---by guiding
the compiler properly---it will produce code that is as efficient as a
well-written C program, or hand-coded assembly (or at least close to
that). I want the next GUI toolkit I use to be written in Python
(instead of written in C++ and simply wrapped in Python). And I
believe that this *is* possible, provided that we don't ignore all the
years that have been spent advancing compiler technology, and that we
don't treat the current Python environments as the "end of the
line". CPython is a good proof that Python works, and that it is a
great language. For Python to become a "primary" language, there's
still much work to be done. Most of this work, as I said before, has
to do with the environments (interpreters, AOT/JIT compilers,
optimizers, runtime modules, etc). But some of it has to do with
ensuring---at the language level---that efficient environments are
possible. Considering CPython and Python one and the same leads
straight to a Perl-ish hell!

/npat
Jul 18 '05 #26
On 2004-08-13, Michael J. Fromberger
<Mi******************@Clothing.Dartmouth.EDU> wrote:

Why is it important to write an entire program in a single language
(e.g., Python), versus using a hybrid approach? If you can use Python
at all, that means your platform already has good support for a C
compiler, so there is no reason not to use C extensions if you really
need performance.


If this is the best thing possible then yes, this is a solution. But
you see, I believe that even for these performance-sensitive parts,
Python has a lot to offer in terms of expressive power. The question
is: is it possible to make a Python environment fast enough to be able
to produce code as efficient as a C compiler? I believe the answer is
yes, and we shouldn't be satisfied with the approach "do these glue
things in Python, and for the computationally expensive ones, well
code them in C".

/npat

Jul 18 '05 #27
On 2004-08-13, Reinhold Birkenfeld
<re************************@wolke7.net> wrote:

I think the dynamic nature does make it impossible to do anything in
such cases at the first place. Consider:

klass = raw_input()
classobj = eval(klass + "()")
print classobj.whatami


Yes, that's exactly what I meant. The only solution in such a case
would be for the environment to call the compiler at run time, and
compile classobj then. This means of course that in such cases the
compiler must be included in the "executable".

I believe this has been done in other dynamic languages.

Typed-extensions, as you mention, would also help.

/npat
Jul 18 '05 #28
On Friday 13 August 2004 12:33 pm, Nick Patavalis wrote:
Yes, that's exactly what I meant. The only solution in such a case
would be for the environment to call the compiler at run time, and
compile classobj then. This means of course that in such cases the
compiler must be included in the "executable".

Why is there a need for a stand alone executable? At least on all the unixes
whether something is executable is just determined by the executable bit on
the file. I can execute a python program just as transparently as one in
compiled c, c++, etc. I really don't see the point of that.

Overall I would rather that there was more reliance on runtimes and that psyco
was improved to the point that it was just part of Python and could save its
JITed versions of code for reuse later. That way I can upgrade libraries, the
runtime etc. and as long as the system is still source compatible the
application would still work, and it would speed up as it ran as things were
compiled to optimized code as needed.

Overall I think that standalone binaries are bad long term. I would prefer
source compatibility since that is more flexible long term. With a jit the
code should run just as fast but it would make things like security update
and updating pieces of the system simpler.
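The "JIT as part of the runtime" idea above existed in embryonic form as Psyco, a real third-party specializing compiler for CPython at the time. A hedged sketch, with the import guarded so the function still runs on interpreters without Psyco installed:

```python
def checksum(data):
    # a toy byte-wise checksum: the kind of tight numeric loop that a
    # specializing compiler like Psyco could turn into machine code
    total = 0
    for byte in data:
        total = (total + byte) & 0xFF
    return total

try:
    import psyco
    psyco.bind(checksum)   # ask Psyco to specialize this one function
except ImportError:
    pass                   # fall back to the ordinary interpreter

print(checksum([1, 2, 3, 250]))
```

The application code is identical either way, which is the appeal of putting the compiler in the runtime rather than in the build process.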
I believe this has been done in other dynamic languages.

Typed-extensions, as you mention, would also help.

/npat

Jul 18 '05 #29
Nick Patavalis wrote:
On 2004-08-13, Dave Brueck <da**@pythonapocrypha.com> wrote:
Nobody is arguing that Python is as fast as C. But being slower does
not imply that Python is unsuitable for those tasks. I'd consider
your list to be pretty atypical of normal development (how many TCP
stacks and relational databases need to get written each year?),

I also mentioned web-browsers, ray-tracers, circuit-simulators. I
could add word-processor, spreadsheets, video editing programs, and
GUI toolkits to the list. Are they still too exotic?


No - but I still don't think they reflect anywhere near a majority of
the development that goes on. I'm not at all saying that there aren't
applications where performance matters, just that (1) it tends to be far
less common than most people believe/realize (2) when it does matter, it
usually matters in a very tiny portion of the total code, and (3) rarely
in today's development do you need to build from scratch everything down
to the building blocks themselves anyway (so if you're e.g. building a
web browser, in many cases you won't even be dealing with lowest level
data anyway - you won't be handling individual pixels but will be
calling a library somewhere else to render an image for you - and as the
developer of the web browser, that's just peachy)

IOW, it'd be lovely to have a blazingly fast-as-C Python, but the lack
of fast-as-C performance is rarely the most important problem in practice.
To cut the thread
short, what I mean is that an application that has to do something
like:

for i in range(N):
    a[i] = b[i] + c[i]

is bound to be 10 to 100 times slower than the equivalent coded in
C. Which means that the cost of doing *computation* in Python is
prohibitively high!
Not necessarily, and that's the point. You're making the assumption that
10-100 times slower is too slow. In some cases it most definitely is.
In many cases it most definitely is not.
Have you ever seen, say, an AVL-tree
implementation in production Python code? Probably not. Have you ever
seen someone implementing some sort of string-lookup algorithm in
Python (instead of using the built-in dictionaries)? Again no. Is it
because Python has found the "one-size-fits-all",
"best-algorithm-ever-devised" solution?
Or is it because in 99% of the cases what is there works good enough, so
much so that obtaining the difference is not worth the opportunity cost
of working on something else?

If you have an overabundance of a particular resource, and you can gain
some advantage in exchange for some of that abundance, it's nearly
always worth the trade. Such is the case with CPU - for many, many
programs we have oodles of excess CPU time lying around, so it's a
worthwhile trade. And that's exactly why everybody would love a faster
Python but most people aren't willing to invest time working on it.

I welcome any speed boost we see from people working on improving
performance, but it's just not what's standing between me and most of my
development goals. Heck, right now I'm writing this message, I've got my
mail & IM messages going, a couple of code editors open, a virtual PC
instance running my database and webserver, and I'm building web pages
off content in the database and saving them to disk. CPU usage is
hovering around 5%. If I were to increase the speed of all of these
applications by a factor of 1000 I wouldn't even notice. Add a few
features to any of them, and I would.
Or is it because the weight of
the language itself is such that even a suboptimal algorithm
implemented in C will never be matched by a python implementation?
Practice has shown that, not only is this not true, but that a lot of
times working in a higher level language is also worth it because the
cost of discovering and implementing a better algorithm is cheaper, so
you could end up with better performance than going to C. Or, you'd
arrive at plenty-fast-enough sooner.
No, but the performance difference between C and Assembly was
*small*.
Over time, yes, but certainly not initially. I still remember how
appallingly slow my graphics routines were in C - way too much overhead
- while in assembly they had no problems at all.
And at some point the C compilers became so good, that you
couldn't beat them by hand coding something (of considerable length)
in assembly.
That happened later, at least on PCs. The real transition happened as
CPU speed grew so much that the difference between e.g. 4.77 MHz and 10
MHz was boring.
As for C++, one of its primary design-goals were "zero
unneeded overhead"; so it *is* possible to write a C++ program that is
as fast as a C program, if you want to do it.
It was a design goal, but (1) implementations didn't achieve it very
well initially and (2) in the end it didn't matter that they didn't
achieve it. More and more people migrated to C++ because the cost of
doing so (overhead) fell steadily over time - even more quickly than the
compilers improved.
and I want to be sure that---by guiding
the compiler properly---it will produce code that is as efficient as a
well-written C program, or hand-coded assembly (or at least close to
that).
Ugh - any time you spend guiding the compiler is time you could have
spent actually solving a problem. I find it interesting that when
programming in C we don't use the 'register' compiler hint anymore. Why?
It's a combination of smarter compilers and faster computers, but either
way its existence was awful IMO - don't distract the programmer like that.

I *love* it whenever I see that Pystone benchmarks are improving, or
that any of the various VM implementations are making headway, and I'll
gladly use them. But at the same time, I have to admit that they aren't
solving any problems I encounter on a daily basis.
For Python to become a "primary" language, there's
still much work to be done.
Couldn't disagree more. Yes, things like interpreters, compilers, etc.
could use more maturity and will continue to evolve over time, but even
_lacking_ those things it's still far enough ahead of other "primary"
languages to make the _net_ result a huge advantage. With the evolution
of those things the advantage will just become more pronounced.
But some of it has to do with
ensuring---at the language level---that efficient environments are
possible.


Again, I disagree. IMO one of the benefits of higher level languages is
that the underlying implementation technology takes care of details
that the developer need not be concerned with.

Overall, every little performance improvement in a Python implementation
extends the domain in which Python is a good tool for the job, and
that's great. But AFAICT it's already a terrific fit for a massive chunk
of real-world development, so much so that increasing its speed by, say,
a factor of 10 isn't going to even double its domain of usability.

-Dave
Jul 18 '05 #30
On Fri, 13 Aug 2004, kosh wrote:
Why is there a need for a stand alone executable? At least on all the unixes
whether something is executable is just determined by the executable bit on
the file.
Not if you don't have the interpreter installed.
I can execute a python program just as transparently as one in
compiled c, c++, etc. I really don't see the point of that.


Indeed, you can do that just as easily on Windows, too. The point of a
stand-alone executable is not to make running the script easier, but to
make distribution easier. Users don't need to install Python to run a
Python script if it's a stand-alone executable.
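On Windows, py2exe was one concrete route to such a stand-alone executable. A minimal build-script sketch; the script name is a placeholder, and this is a build-configuration fragment rather than something to run directly:

```python
# setup.py -- bundle a console script plus the interpreter into dist\
# "myscript.py" is a placeholder for the application's entry script.
from distutils.core import setup
import py2exe  # registers the "py2exe" distutils command

setup(console=["myscript.py"])
```

Running `python setup.py py2exe` then produces a `dist` directory containing the executable and the runtime DLLs to ship.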

Jul 18 '05 #31
Nick Patavalis <np**@efault.net> writes:

| To cut the thread short, what I mean is that an application that has
| to do something like:
|
| for i in range(N):
|     a[i] = b[i] + c[i]
|
| is bound to be 10 to 100 times slower than the equivalent coded in
| C.

Although note that (at least by my timing)

a = map( operator.add, b, c )

is 3 times as fast as your Python version, bringing the code up to
being only 3 to 33 times slower than C.

I'm sure that pyrex could bring the speed up a heck of a lot more, too.
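The comparison can be reproduced with timeit. A sketch; the absolute numbers, and even the ratio, vary by machine and interpreter version, so treat them as illustrative:

```python
import timeit

setup = "import operator; b = list(range(10000)); c = list(range(10000))"

t_loop = timeit.timeit(
    "a = [0] * 10000\n"
    "for i in range(10000): a[i] = b[i] + c[i]",
    setup=setup, number=100)

# list() is needed on modern Pythons, where map() returns an iterator;
# in the Python of this thread map() returned a list directly.
t_map = timeit.timeit("a = list(map(operator.add, b, c))",
                      setup=setup, number=100)

print("index loop: %.4fs" % t_loop)
print("map + add : %.4fs" % t_map)
```

The map version wins mainly because the per-element work happens inside C-implemented machinery instead of interpreted bytecode.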

--
http://www.dfan.org
Jul 18 '05 #32
On Friday 13 August 2004 1:34 pm, Christopher T King wrote:
On Fri, 13 Aug 2004, kosh wrote:
Why is there a need for a stand alone executable? At least on all the
unixes whether something is executable is just determined by the
executable bit on the file.


Not if you don't have the interpreter installed.


So install the runtime. If you want to run the .NET stuff you need to have
the .NET CLR or Mono installed. If you want to run Java apps you need the JVM
etc. Overall once a runtime is installed it makes distributing apps a lot
easier since the actual thing you need to send someone is tiny. Also at least
on unixes I have not run into a box in about 6 years or so that did not have
python and perl installed so in practice I have not run into that problem.
I can execute a python program just as transparently as one in
compiled c, c++, etc. I really don't see the point of that.


Indeed, you can do that just as easily on Windows, too. The point of a
stand-alone executable is not to make running the script easier, but to
make distribution easier. Users don't need to install Python to run a
Python script if it's a stand-alone executable.


Overall it would be better if there was an easy way on windows to get the
runtime installed since then you can send users far smaller files, smaller
updates and it makes it easier for people to patch their systems. I have seen
more than a few cases where a bug like temp file creation was found to be a
problem in python and in some c code. However the difference is that you can
update the python runtime and all affected python programs are fixed. The
same is not true of the c versions.

One of them I have run into which is a pain is stuff like openssl. When that
gets updated it seems a whole bunch of programs have to be compiled to work
with it again. The change is source compatible but for whatever reason the
bug fix breaks binary compatibility on a number of apps. Just update the
runtime though for things like Python, Java, etc. and all apps on those
runtimes just become fixed.
Jul 18 '05 #33
On Fri, 13 Aug 2004, kosh wrote:
On Friday 13 August 2004 1:34 pm, Christopher T King wrote:
On Fri, 13 Aug 2004, kosh wrote:
Why is there a need for a stand alone executable? At least on all the
unixes whether something is executable is just determined by the
executable bit on the file.
Not if you don't have the interpreter installed.


So install the runtime. If you want to run the .NET stuff you need to
have the the .NET CLR or Mono installed. If you want to run java apps
you need the jvm etc.


And if you want to run Python scripts you need the Python interpreter
installed.
Overall once a runtime is installed it makes distributing apps a lot
easier since the actual thing you need to send someone is tiny.
Oftentimes users will only have one Python app. They'd much rather
download a 2MB ZIP file and dump it somewhere, than download a 20MB Python
distribution and install it somewhere, download XMB of extension modules
needed by the app (e.g. PIL, numarray, to name a few), and then finally
download and install your script.
Also at least on unixes I have not run into a box in about 6 years or so
that did not have python and perl installed so in practice I have not
run into that problem.
And hence the lack of an executablization program for Unix.
Overall it would be better if there was an easy way on windows to get the
runtime installed since then you can send users far smaller files, smaller
updates and it makes it easier for people to patch their systems.
True. I don't see that happening anytime soon, though.
I have seen more than a few cases where a bug like temp file creation
was found to be a problem in python and in some c code. However the
difference is that you can update the python runtime and all affected
python programs are fixed. The same is not true of the c versions.
What? C programs use a runtime library, just the same as any other
language. Google for "libc.so" or "msvcrt.dll" if you don't believe me.
One of them I have run into which is a pain is stuff like openssl. When that
gets updated it seems a whole bunch of programs have to be compiled to work
with it again. The change is source compatible but for whatever reason the
bug fix breaks binary compatibility on a number of apps.


That sounds like an openssl-specific problem, perhaps relating to
configuration issues. Since C libraries are linked dynamically, source
compatibility inherently translates to binary compatibility, assuming
functions were not moved to different libraries or rewritten as macros (or
vice-versa).

In a perfect world, all OSes would use proper package management systems,
and single-executable programs would not be needed. Unfortunately, this
isn't true.

Jul 18 '05 #34
On 2004-08-13, kosh <ko**@aesaeion.com> wrote:

Why is there a need for a stand alone executable? At least on all the unixes
whether something is executable is just determined by the executable bit on
the file. I can execute a python program just as transparently as one in
compiled c, c++, etc. I really don't see the point of that.


Perhaps your target system has no Python environment installed. And
perhaps it has no resources to have a complete Python environment
installed (apart from the fact that it might not need one). Don't
think of your 2GHz / 512MB desktop. Think of your cell phone.
Jul 18 '05 #35
On Friday 13 August 2004 2:03 pm, Christopher T King wrote:
On Fri, 13 Aug 2004, kosh wrote:
I have seen more than a few cases where a bug like temp file creation
was found to be a problem in python and in some c code. However the
difference is that you can update the python runtime and all affected
python programs are fixed. The same is not true of the c versions.


What? C programs use a runtime library, just the same as any other
language. Google for "libc.so" or "msvcrt.dll" if you don't believe me.


I do know that c programs can be dynamically linked and most of the time on
unixes they seem to be. But it seems to be far too common that the library
changes in some way that requires that the program be recompiled. I have seen
it with both KDE and GNOME where just recompiling them with no changes to
their code at all fixed library problems that were being reported.

I have never seen that kind of thing in python.
One of them I have run into which is a pain is stuff like openssl. When
that gets updated it seems a whole bunch of programs have to be compiled
to work with it again. The change is source compatible but for whatever
reason the bug fix breaks binary compatibility on a number of apps.


That sounds like an openssl-specific problem, perhaps relating to
configuration issues. Since C libraries are linked dynamically, source
compatibility inherently translates to binary compatibility, assuming
functions were not moved to different libraries or rewritten as macros (or
vice-versa).


I don't know why they break. I know that they do.
In a perfect world, all OSes would use proper package management systems,
and single-executable programs would not be needed. Unfortunately, this
isn't true.


Well in my world everything is written for unixes and deployed on unixes and
most of them are linux boxes which have good package systems.
Jul 18 '05 #36
On Friday 13 August 2004 2:19 pm, Nick Patavalis wrote:
On 2004-08-13, kosh <ko**@aesaeion.com> wrote:
Why is there a need for a stand alone executable? At least on all the
unixes whether something is executable is just determined by the
executable bit on the file. I can execute a python program just as
transparently as one in compiled c, c++, etc. I really don't see the
point of that.


Perhaps your target system has no Python environment installed. And
perhaps it has no resources to have a complete Python environment
installed (apart from the fact that it might not need one). Don't
think of your 2GHz / 512MB desktop. Think of your cell phone.


Cell phones should be cell phones, not multifunction devices that can run all
kinds of apps, need virus scanners, etc. I don't want my cell phone to run
python, java, ruby, c# etc etc. I want it to just be a telephone and do that
job well. Most of the modern cell phones are crap if you want all of that
stuff get a pda and get a virus scanner for it.
Jul 18 '05 #37
Anthony Baxter <an***********@gmail.com> wrote in message news:<ma**************************************@python.org>...
On 12 Aug 2004 17:05:34 -0700, Michael Scarlett <bi*******@yahoo.com> wrote:
[ pie decorators ]
Am i the only one with a
visceral reaction to this thing???


So did you have a similar reaction on first hitting the indentation for
blocks? I know I dimly recall thinking that this was very strange and
horrible (dimly, because it was 1992 or 1993).

one of the first books I read on Python was Magnus Lie Hetland's
Practical Python.
http://hetland.org/writing/practical-python/
in the introduction he had a few quotes.
"A C program is like a fast dance on a newly waxed floor by people
carrying razors"

"C++: hard to learn and built to stay that way"

"Java is, in many ways, C++"

"And now for something completely different....."

the last was his intro to learning Python. When the TRS-80 was out
from Radio Shack, I used to code in BASIC on it. I lost interest in
computers and only picked it up a few years ago. I investigated Python
and fell in love with the language. Its elegance, its simplicity (at
least for the programmer) and its sheer delight to code in. I don't
work in the IT field, and programming isn't my bread and butter. But
just because it's so fun, Python brought me back into computers - I've
created a few websites with Python on the back end, manipulated
files, and done a few other projects out of intellectual curiosity and
to make my day-to-day work and home life easier. As a result of learning
and coding in Python, I wanted to learn more about it, and so I turned to
C, and am now actively learning it simply to learn how to integrate
with and build on Python. I tolerate C's ugliness because I know the end
result is a labour of love when I can work with it and Python. Silly
maybe, but I'm going on emotion here, not logic. Python is simply fun
to code in, and when it's fun you're more productive and excited to learn
and tackle new problems, because you're not bogged down in remembering how
the increment (++) operator works for pointers in a particular function
that's supposed to dynamically allocate memory, terminating every
statement with a ";", or manipulating fgets to discard the '\n'. Yada
yada yada.....
The point is Python just works.
Someone once said Python is runnable pseudocode. And it works, and
works well at that. I couldn't agree more. I think and then I code.
Simple.

Addendum: I've had this article bookmarked for some time, because once
you read it you have to wonder whether they are talking about Python -
you realise Python is there already. For those of you interested in
reading it and commenting on it:

http://archive.gamespy.com/legacy/ar...devweek_b.shtm

that's my $0.02
Jul 18 '05 #38
On 2004-08-13, Dave Brueck <da**@pythonapocrypha.com> wrote:
Nick Patavalis wrote:

I also mentioned web-browsers, ray-tracers, circuit-simulators. I
could add word-processor, spreadsheets, video editing programs, and
GUI toolkits to the list. Are they still too exotic?


No - but I still don't think they reflect anywhere near a majority of
the development that goes on.


Yes, the majority of development goes into little *glue programs* that
take data from a database and format it as XML/HTML, or aggregate and
analyze data stored in a database, and stuff like that. But for all
these to be possible a massive amount of *infrastructure* is
required. And this infrastructure cannot be created in Python. So you
don't say that Python isn't a glue language, but that the greatest
percentage of development that currently goes on *is* glue-stuff
development.

This of course presupposes that the infrastructure *is* available,
that it is stable, and that it doesn't have to be modified or
augmented. For me a "primary" language is not the language in which
you develop most of the software, but the language in which you
develop the current and future software
*infrastructure*. Quantitatively most of the software is glue-stuff
anyway!

Put yourself in this position: It's a few years ago (say 1998 or 1999),
and no graphical web-browser exists for Linux. You are planning to
develop the "iso-standard" web-browser for this operating
system. Would you do it in Python? Remember that no HTML parsers
exist, no decent HTML renderers, the GUI toolkit is more or less
primitive, and the low-end desktop runs at about 200-something
MHz. You might argue "this is not the case today", but how can you
preclude that *similar* challenges do not occur today, or will not
occur in the future? Are you saying that all the computationally hard
problems have already been solved? Or are you saying that, as a Python
programmer, you don't want to deal with them? Another example: It's
2004 again, and you decide to scrap and replace the age-old X11 window
system; do away with it and start from scratch. Build a modern
windowing system; 3D all over, fully network transparent, with widget
support on the server-side, fully object oriented interface, and so
on. How much of it would you be able to code in Python? How much
*more* would you rather be able to code in Python?

/npat
Jul 18 '05 #39
On 2004-08-13, kosh <ko**@aesaeion.com> wrote:

I don't want my cell phone to run python, java, ruby, c# etc etc. I
want it to just be a telephone and do that job well.


I understand. You want a nice analog cell-phone, with a large rotary
dial and a very long cord. Sorry, but resistors, capacitors, and diodes
can only go so far. For everything else you need large clusters of
transistors (integrated circuits, they are called by some) and a lot
of them need (God forbid!) "software".

/npat

P.S. I *have* to sign with this :)

--
I have always wished that my computer would be as easy to use as my
telephone. My wish has come true. I no longer know how to use my
telephone.
-- Bjarne Stroustrup
Jul 18 '05 #40
Nick Patavalis wrote:
On 2004-08-13, Dave Brueck <da**@pythonapocrypha.com> wrote:
Nick Patavalis wrote:
I also mentioned web-browsers, ray-tracers, circuit-simulators. I
could add word-processor, spreadsheets, video editing programs, and
GUI toolkits to the list. Are they still too exotic?
No - but I still don't think they reflect anywhere near a majority of
the development that goes on.

Yes, the majority of development goes into little *glue programs* that
take data from a database and format it as XML/HTML, or aggregate and
analyze data stored in a database, and stuff like that. But for all
these to be possible, a massive amount of *infrastructure* is
required. And this infrastructure cannot be created in Python. So you
don't say that Python isn't a glue language, but that the greatest
percentage of development that currently goes on *is* glue-stuff
development.


Well, at my current company we've implemented web and file servers,
cache log parsers, web applications, Windows desktop applications,
Windows COM objects including web browser plugins, video splicers, and a
host of tools, all in Python. These aren't "little glue programs" - they
are just programs; normal programs we'd have to write in _some_ language
and Python turned out to be the best fit. What's more, a whole heck of a
lot of the functionality implemented in Python would most definitely
fall under the category of infrastructure, not glue.
Put yourself in this position: It's a few years ago (say 1998 or 1999),
and no graphical web-browser exists for Linux. You are planning to
develop the "iso-standard" web-browser for this operating
system. Would you do it in Python? Remember that no HTML parsers
exist, no decent HTML renderers, the GUI toolkit is more or less
primitive, and low-end desktops run at about 200-something
MHz. You might argue "this is not the case today", but how can you
be sure that *similar* challenges do not occur today, or will not
occur in the future?
This is so far removed from what I'm trying to say that I don't even
know how to respond. I'm not arguing that at all.
2004 again, and you decide to scrap and replace the age-old X11 window
system; do away with it and start from scratch. Build a modern
windowing system: 3D all over, fully network transparent, with widget
support on the server side, a fully object-oriented interface, and so
on. How much of it would you be able to code in Python?
Python is not the right tool for every job. There do exist cases where
more performance is required than can be delivered.

Having said that, the cases where it is too slow make up a small (and
shrinking) portion of the total amount of development going on. As such,
better performance is always welcome, but it won't benefit as many
people and as much as other things can. IOW, anytime you say, "wouldn't
it be great if Python were faster?" you can easily get everybody to
respond, "sure!" - everybody's on board with you there. It's when you
try to fit it into the list of priorities that you'll find that there
are other things higher on the list. Higher because they provide more
benefit overall and/or to the people who are interested in helping out.

To your specific question, I actually _would_ use Python for most of an
X11 replacement system. Obviously the low-level stuff would be handled by
one of the existing 3D libraries out there (no need to reinvent the
wheel there - the goal is to build a better window system, and we'd want
a good hardware abstraction layer). I've seen a few proofs-of-concept with
Pygame that lead me to believe that it's doable - their performance is
already better than X11's was for the first decade or so of its
existence.
How much *more* would you rather be able to code in Python?


You can't consider the benefit in isolation. You have to take into
account the costs involved.

-Dave
Jul 18 '05 #41

"Michael Scarlett" <bi*******@yahoo.com> wrote in message
someone once said python is runable pseudocode.


or 'executable pseudocode' (which I originally misspelled :-( )

http://groups.google.com/groups?selm...0news.udel.edu

Terry J. Reedy

Jul 18 '05 #42

"Michael Scarlett" <bi*******@yahoo.com> wrote in message
news:ce**************************@posting.google.c om...
There is an amazing article by paul graham about python, and an even
better discussion about it on slashdot. The reason I point this out,
is the more I read both articles, the more I realised how we would be
mutilating the language with that god forsaken @ decorator.
I don't know about the rest of you, but I learned python and fell in
love with its syntax and simplicity. Python - just works. So please
GVR. Don't complicate it. Leave it as is. Work on making it faster,
not uglier. Work on - in some cases - better algorithms for certain
modules, not for it to even closely resemble C or perl or god knows
whateverotherlanguagethereisoutthere. Am i the only one with a
visceral reaction to this thing???

This was precisely the motive behind my starting the thread "Going the
PL1/1 way".
I also worry about Python's healthy simplicity.

Best regards,
Miklós


paul Graham article: http://www.paulgraham.com/pypar.html

Slashdot discussion:

http://developers.slashdot.org/devel...ml?tid=156&tid=218
Jul 18 '05 #43
On 2004-08-13 Dave Brueck <da**@pythonapocrypha.com> wrote:

Having said that, the cases where it is too slow make up a small
(and shrinking) portion of the total amount of development going
on. As such, better performance is always welcome, but it won't
benefit as many people and as much as other things can. IOW, anytime
you say, "wouldn't it be great if Python were faster?" you can
easily get everybody to respond, "sure!" - everybody's on board with
you there. It's when you try to fit it into the list of priorities
that you'll find that there are other things higher on the
list. Higher because they provide more benefit overall and/or to the
people who are interested in helping out.


I believe there is no point in continuing our little "fight", or we risk
becoming repetitive and boring. Our arguments have, I think, become
clear, and only time will tell who was right and who was wrong; if,
after all, there is a "right" and a "wrong" side in this argument.

So let's drop it here, and wish the best for the language we both enjoy
programming in.

/npat
Jul 18 '05 #44

:> Python needs drastic performance improvement if it is to shake off the
:> "scripting language" stigma. The only way to get these improvements is
:> making it possible for a python implementation to produce *efficient*
:> *compiled* code. At the same time the dynamic-typing nature of the
:> language is one of its most valuable characteristics. And this is one
:> of the hardest problems when trying to write a decent python
:> compiler. If you define a function like:
:>
:> def sum (a, b):
:> return a + b
:>
:> How can the compiler know what code to produce?

: I know of at least one language which has solved this problem, Ocaml
I'm not quite sure if this is true. In contrast to Python, a lot of
the type information in OCaml is attached to the operators. This is
to make the type-inferencing work efficiently. For example, OCaml's
addition operator is hardcoded to work with integers:

(******)
[dyoo@shoebox dyoo]$ ocaml
Objective Caml version 3.07+2

# (+) ;;
- : int -> int -> int = <fun>
(******)

and there's a separate operator for adding floats to floats:

(*** OCaml ***)
# ( +. );;
- : float -> float -> float = <fun>
(******)

Python's dynamic lookup of module-level symbols also makes things more
difficult than in OCaml. In OCaml, names that are bound stay bound:

(*** OCaml ***)
# let x = 42;;
# let say_x () = print_endline (string_of_int x);;
val say_x : unit -> unit = <fun>
# say_x ();;
42
- : unit = ()
# let x = "hello";;
val x : string = "hello"
# say_x ();;
42
- : unit = ()
(******)
Note, again, that the OCaml operators and functions are often
themselves typed to make type-inference work. This may make things
slightly verbose again. I could be wrong, but I couldn't find a
simple, generic, "print" function that could print any value in OCaml.
In this example, the say_x function keeps a record of all the name
bindings from before, which is why it remembers the original binding
for 'x'. This is just fundamentally different from the "late binding"
approach used in Python:

### Python ###
>>> x = 42
>>> def say_x():
...     print x
...
>>> say_x()
42
>>> x = "hello"
>>> say_x()
hello
###
So I'm not so sure that Python's current design makes type-inference
easy. I'm pretty sure it's a little harder than just yanking out
OCaml's type-inference engine and jury-rigging it into Python. *grin*
Jul 18 '05 #45
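The late-binding point made in the post above can be condensed into a
minimal, self-contained sketch (modern Python 3 syntax, so `return` is used
instead of `print x`; the `get_x` name is illustrative, not from the thread):

```python
# Python resolves the global name 'x' every time get_x runs, so the
# type of the value it returns can change between calls. A static,
# OCaml-style inference pass therefore has no single type to assign
# to get_x's result.
x = 42

def get_x():
    # 'x' is looked up in the module namespace at call time,
    # not frozen at definition time as in OCaml's 'let'.
    return x

first = get_x()    # an int
x = "hello"        # rebind the global to a different type
second = get_x()   # now a str -- same function object, new result type
```

Because the binding of `x` is not fixed until each call, the compiler cannot
specialize `get_x` for one type without guarding against rebinding.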
Nick Patavalis <np**@efault.net> writes:
If you define a function like:

def sum (a, b):
return a + b

How can the compiler know what code to produce? It could trace all the
applications of sum(), and decide what types of arguments sum() is
actually applied on. But this is not easy, and sometimes it is
straight-out impossible.


Compilers for languages like Lisp and Smalltalk have dealt with this
for decades. They can either generate code that switches on the type
tags, or have dispatch tables in the objects that point to code for
operations like "+", or take advice or declarations from the
programmer about the arg types, among other possibilities. Any of
these approaches generates code that runs much faster than interpreted code.
Jul 18 '05 #46
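The "switch on the type tags" strategy described above can be sketched in
Python itself for illustration (a real Lisp- or Smalltalk-style compiler
would emit the equivalent checks as machine code; `compiled_sum` is a
hypothetical name, not anything from the thread):

```python
# Sketch of type-tag dispatch for a dynamically typed 'a + b':
# test the runtime tags first and take a fast path for the common
# int/int case, falling back to generic dispatch otherwise.
def compiled_sum(a, b):
    if type(a) is int and type(b) is int:
        # fast path: in emitted machine code this would be a plain ADD
        return a + b
    # slow path: generic dispatch through the operands' own '+' methods
    result = a.__add__(b)
    if result is NotImplemented:
        # mirror Python's reflected-operand fallback
        result = b.__radd__(a)
    return result
```

The fast path is what makes the compiled code competitive: the tag check is
cheap, and only unusual operand combinations pay for full dynamic dispatch.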
On 2004-08-17, Paul Rubin <> wrote:

Compilers for languages like Lisp and Smalltalk have dealt with this
for decades. They can either generate code that switches on the type
tags, or have dispatch tables in the objects that point to code for
operations like "+", or take advice or declarations from the
programmer about the arg types, among other possibilities. Any of
these approaches generates code that runs much faster than
interpreted code.


Yes, I know. Something like this was what I was thinking about. I
would really love to see this technology brought to Python, or at
least a discussion as to what additions would be required in the
*language* in order for similar technologies to be easily applicable
to future Pythonic environments.

/npat
Jul 18 '05 #47
Nick Patavalis <np**@efault.net> writes:
On 2004-08-17, Paul Rubin <> wrote:

Compilers for languages like Lisp and Smalltalk have dealt with this
for decades. They can either generate code that switches on the type
tags, or have dispatch tables in the objects that point to code for
operations like "+", or take advice or declarations from the
programmer about the arg types, among other possibilities. Any of
these approaches generates code that runs much faster than
interpreted code.


Yes, I know. Something like this was what I was thinking about. I
would really love to see this technology brought to Python, or at
least a discussion as to what additions would be required in the
*language* in order for similar technologies to be easily applicable
to future Pythonic environments.


Those compilers are totally inflexible,
as they can't deal with data types and function code supplied at run time,
and they cache too much information, causing RAM bloat.

Klaus Schilling
Jul 18 '05 #48
