
"no variable or argument declarations are necessary."

I am contemplating getting into Python, which is used by engineers I
admire - Google and Bram Cohen - but was horrified to read

"no variable or argument declarations are necessary."

Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.

If you don't declare variables, you can inadvertently re-use a
variable used in an enclosing context when you don't intend to, or
inadvertently reference a new variable (a typo) when you intended to
reference an existing variable.

What can one do to swiftly detect this type of bug?
--
http://www.jim.com
Oct 2 '05 #1
D H
James A. Donald wrote:
I am contemplating getting into Python, which is used by engineers I
admire - Google and Bram Cohen - but was horrified to read

"no variable or argument declarations are necessary."

Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.

If you don't declare variables, you can inadvertently re-use a
variable used in an enclosing context when you don't intend to, or
inadvertently reference a new variable (a typo) when you intended to
reference an existing variable.

What can one do to swiftly detect this type of bug?


It's a fundamental part of Python, as well as many other scripting
languages.
If you're not comfortable with it, you might try a language that forces
you to declare every variable first, like Java or C++.
Otherwise, in Python, I'd recommend using variable names that you can
easily spell. Also do plenty of testing of your code. It's never been
an issue for me, although it would be nicer if Python were
case-insensitive, but that is never going to happen.
Oct 2 '05 #2
James A. Donald wrote:
I am contemplating getting into Python, which is used by engineers I
admire - Google and Bram Cohen - but was horrified to read

"no variable or argument declarations are necessary."

Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.

If you don't declare variables, you can inadvertently re-use a
variable used in an enclosing context when you don't intend to, or
inadvertently reference a new variable (a typo) when you intended to
reference an existing variable.

What can one do to swiftly detect this type of bug?


A variable has to be assigned to before it is used, otherwise a
NameError exception is raised:

>>> a + 1
Traceback (most recent call last):
  File "<interactive input>", line 1, in ?
NameError: name 'a' is not defined
>>> a = 1
>>> a + 1
2

Typos in variable names are easily discovered, unless the misspelled
name happens to already exist in the current context.
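
A minimal sketch of that dangerous case - where the misspelling
coincides with a name that already exists (all names here are
hypothetical):

total = 0
totol = 0                  # a second, unrelated accumulator
for x in range(5):
    totol = totol + x      # typo for 'total': both names exist, so no
                           # NameError is raised and the bug is silent
print total                # -> 0, not the expected 10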

Will McGugan
--
http://www.willmcgugan.com
"".join({'*':'@','^':'.'}.get(c,0) or chr(97+(ord(c)-84)%26) for c in
"jvyy*jvyyzpthtna^pbz")
Oct 2 '05 #3
Wow, it never even occurred to me that someone would have a problem with this!

But, this might help:

http://www.logilab.org/projects/pylint
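
As a tiny, hypothetical demonstration (the exact message text varies
between pylint versions), pylint flags the use of an undefined name
without ever running the code:

# buggy.py -- check it with: pylint buggy.py
myvar = 1
print myvra   # typo: pylint reports an undefined-variable error here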

In more detail:
Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.
No, the error message will be pretty clear actually :) You are
attempting to use a variable that doesn't exist! This would be the same
type of message you would get from a compiled language, just at a
different point in time (runtime vs. compile time).
If you don't declare variables, you can inadvertently re-use a
variable used in an enclosing context when you don't intend to,
Possible, though good design should always keep any such situation at
bay. Python is OO, hence scoping should rarely be a problem ... globals
are mostly evil, so the context at any given time should be the method;
you'd need a fairly big and complex method to start losing track of
what you called what ... Also, a good naming convention should keep this
at bay.

Also, because things are interpreted, you don't (normally) need to put
extensive forethought into things as you do with compiled languages. You
can run things quickly and easily on demand; a misnamed variable will be
clearly indicated and easily fixed in a matter of minutes.

Using a smart IDE might also help prevent such problems before they occur?

Hope you enjoy Python :)

J.F.

James A. Donald wrote:
I am contemplating getting into Python, which is used by engineers I
admire - Google and Bram Cohen - but was horrified to read

"no variable or argument declarations are necessary."

Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.

If you don't declare variables, you can inadvertently re-use a
variable used in an enclosing context when you don't intend to, or
inadvertently reference a new variable (a typo) when you intended to
reference an existing variable.

What can one do to swiftly detect this type of bug?
--
http://www.jim.com

Oct 2 '05 #4
The easiest way to avoid this problem (besides watching for NameError
exceptions) is to use an editor that has automatic name completion.
Eric3 is a good example. So, even though in theory it could be an
issue, I rarely run into this in practice.

-Don

Oct 2 '05 #5
James A. Donald:
> Surely that means that if I misspell a variable name, my program will
> mysteriously fail to work with no error message.

On Sun, 02 Oct 2005 17:11:13 -0400, Jean-François Doyon wrote:
> No, the error message will be pretty clear actually :)

Now why, I wonder, does this loop never end :-)

egold = 0
while egold < 10:
    ego1d = egold+1
--
http://www.jim.com
Oct 3 '05 #6
James A. Donald wrote:
James A. Donald:
> Surely that means that if I misspell a variable name, my program will
> mysteriously fail to work with no error message.

On Sun, 02 Oct 2005 17:11:13 -0400, Jean-François Doyon wrote:
No, the error message will be pretty clear actually :)

Now why, I wonder, does this loop never end :-)
egold = 0
while egold < 10:
    ego1d = egold+1


I know (hope! :-) that's a tongue-in-cheek question; however, the answer
as to why that's not a problem has more to do with development habits than
language enforcement. (Yes, with bad habits that can and will happen.)

Much Python development is test-driven. Either formally using testing
frameworks (I'm partial to unittest, but others like other ones), or
informally using a combination of iterative development and the
interactive shell. Or a mix of the two.

With a formal test framework you would have noticed the bug above
almost instantly - because your test would never finish (which would
presumably count as a failure for the test that exercises that code).
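
To make that concrete, here is a minimal sketch (the function and test
names are made up for illustration) of how a unittest-style test
surfaces the typo:

import unittest

def count_to_ten():
    egold = 0
    while egold < 10:
        ego1d = egold + 1   # the typo from above: egold never changes
    return egold

class CountTest(unittest.TestCase):
    def test_count(self):
        self.assertEqual(count_to_ten(), 10)

if __name__ == '__main__':
    unittest.main()   # this run never finishes -- an unmissable signal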

Whilst that might seem odd, what you're actually doing with type
declarations is saying "if names other than these are used, a bug
exists" and "certain operations on these names are valid". (as well
as a bunch of stuff that may or may not relate to memory allocation
etc)

With test-driven development you are specifically testing that the
functionality you want to exist *does* exist. TDD also provides a few
tricks that can help you get around writer's block, and catch bugs like
the one above easily and, more importantly, early.

Bruce Eckel (author of a fair few interesting C++ & Java books :-) has a
couple of interesting essays on this topic which I think also take this
idea a lot further than is probably suitable for here:

* Strong Typing vs. Strong Testing:
http://www.mindview.net/WebLog/log-0025
* How to Argue about Typing
http://www.mindview.net/WebLog/log-0052

For what it's worth, if you've not come across test driven development
before then I'd highly recommend Kent Beck's "Test Driven Development: By
Example". You'll either love it or hate it. IMO, it's invaluable though!
I suppose, though, the difference between static-type-based checking and
test-driven development is that static types only really help you find
bugs (in terms of aiding development), whereas TDD actually helps you
write your code. (Hopefully with fewer bugs!)

Best Regards,
Michael.

Oct 3 '05 #7
"Michael" <ms@cerenity.org> wrote:
James A. Donald wrote:
James A. Donald:
> Surely that means that if I misspell a variable name, my program will
> mysteriously fail to work with no error message.

On Sun, 02 Oct 2005 17:11:13 -0400, Jean-Francois Doyon wrote:
No, the error message will be pretty clear actually :)

Now why, I wonder, does this loop never end :-)
egold = 0
while egold < 10:
    ego1d = egold+1


I know (hope! :-) that's a tongue-in-cheek question; however, the answer
as to why that's not a problem has more to do with development habits than
language enforcement. (Yes, with bad habits that can and will happen.)

[snipped description of test-driven development culture]


As an aside, more to the point of the specific erroneous example is the
lack of the standard Python idiom for iteration:

for egold in xrange(10):
    pass

Learning and using standard idioms is an essential part of learning a
language; Python is no exception to this.

George
Oct 3 '05 #8
On 2005-10-03, George Sakkis <gs*****@rutgers.edu> wrote:
"Michael" <ms@cerenity.org> wrote:
James A. Donald wrote:
> James A. Donald:
>> Surely that means that if I misspell a variable name, my program will
>> mysteriously fail to work with no error message.
>
> On Sun, 02 Oct 2005 17:11:13 -0400, Jean-Francois Doyon wrote:
>> No, the error message will be pretty clear actually :)
> Now why, I wonder, does this loop never end :-)
> egold = 0
> while egold < 10:
>     ego1d = egold+1


I know (hope! :-) that's a tongue-in-cheek question; however, the answer
as to why that's not a problem has more to do with development habits than
language enforcement. (Yes, with bad habits that can and will happen.)

[snipped description of test-driven development culture]


As an aside, more to the point of the specific erroneous example is the
lack of the standard Python idiom for iteration:

for egold in xrange(10):
    pass

Learning and using standard idioms is an essential part of learning a
language; Python is no exception to this.


Well, I'm getting a bit sick of those references to standard idioms.
There are moments when those standard idioms don't work, while the
gist of the OP's remark still stands, as in:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1

--
Antoon Pardon
Oct 3 '05 #9
Antoon Pardon wrote:
Well, I'm getting a bit sick of those references to standard idioms.
There are moments when those standard idioms don't work, while the
gist of the OP's remark still stands, as in:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1


Oh come on. That is a completely contrived example, and besides you can
still rewrite it easily using the 'standard idiom', at which point it
becomes rather clearer that it is in danger of being an infinite loop even
without assigning to the wrong variable.

for egold in range(10):
    while test():
        pass

I find it very hard to believe that anyone would actually mistype ego1d
while intending to type egold (1 and l aren't exactly close on the
keyboard), and if they typed ego1d thinking that was the name of the loop
variable they would type it in both cases (or use 'ego1d += 1') which would
throw an exception.

The only remaining concern is the case where both ego1d and egold are
existing variables, or more realistically you increment the wrong existing
counter (j instead of i), and your statically typed language isn't going to
catch that either.
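
A hypothetical two-counter sketch of that remaining case - every name
already exists, so neither Python nor a declaration-checking compiler
objects:

i = 0
j = 0
while i < 10:
    j += 1      # meant 'i += 1'; both names are declared/existing, so
                # no NameError -- and no compile-time check complains
                # (and the loop never terminates, just as before)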

I'm trying to think back through code I've written over the past few years,
and I can remember cases where I've ended up with accidental infinite loops
in languages which force me to write loops with explicit increments, but
I really can't remember that happening in a Python program.

Having just grepped over a pile of Python code, I'm actually surprised to
see how often I use 'while' outside generators even in cases where a 'for'
loop would be sensible. In particular I have a lot of loops of the form:

while node:
    ... do something with node ...
    node = node.someAttribute

where someAttribute is parentNode or nextSibling or something. These would,
of course, be better written as for loops with appropriate iterators, e.g.

for node in node.iterAncestors():
    ... do something with node ...
Oct 3 '05 #10
On 2005-10-03, Duncan Booth <du**********@invalid.invalid> wrote:
Antoon Pardon wrote:
Well, I'm getting a bit sick of those references to standard idioms.
There are moments when those standard idioms don't work, while the
gist of the OP's remark still stands, as in:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1

Oh come on. That is a completely contrived example,


No, it is not. You may not have had any use for this
kind of code, but unfamiliarity with certain types
of problems doesn't make something contrived.
and besides you can
still rewrite it easily using the 'standard idiom' at which point it
becomes rather clearer that it is in danger of being an infinite loop even
without assigning to the wrong variable.

for egold in range(10):
    while test():
        pass
And trying to force this into the standard idiom is just silly.
When people write examples they try to get the essential
thing into the example in order to make things clear to
other people. The real code may be a lot more complicated.
That you can rework the example into the standard idiom doesn't mean
the real code someone is working with can be reworked in a like manner.
I find it very hard to believe that anyone would actually mistype ego1d
while intending to type egold (1 and l aren't exactly close on the
keyboard), and if they typed ego1d thinking that was the name of the loop
variable they would type it in both cases (or use 'ego1d += 1') which would
throw an exception.
Names do get misspelled and sometimes that misspelling is hard to spot.
That you find the specific misspelling used as an example contrived
doesn't change that.
The only remaining concern is the case where both ego1d and egold are
existing variables, or more realistically you increment the wrong existing
counter (j instead of i), and your statically typed language isn't going to
catch that either.
A language where variables have to be declared before use would allow
reporting all misspelled (undeclared) variables in one go, instead of
just crashing each time one is encountered.
I'm trying to think back through code I've written over the past few years,
and I can remember cases where I've ended up with accidental infinite loops
in languages which force me to write loops with explicit incremements, but
I really can't remember that happening in a Python program.
Good for you, but you shouldn't limit your view to your experience.
Having just grepped over a pile of Python code, I'm actually surprised to
see how often I use 'while' outside generators even in cases where a 'for'
loop would be sensible. In particular I have a lot of loops of the form:

while node:
    ... do something with node ...
    node = node.someAttribute

where someAttribute is parentNode or nextSibling or something. These would,
of course, be better written as for loops with appropriate iterators, e.g.

for node in node.iterAncestors():
    ... do something with node ...


That "of course" is unfounded. They may be better in your specific
code, but what you showed is insufficient to decide that. The first
code could for instance be reversing the sequence in the part that
is labeled ...do something with node ...

--
Antoon Pardon
Oct 3 '05 #11
Antoon Pardon wrote:
A language where variables have to be declared before use would allow
reporting all misspelled (undeclared) variables in one go, instead of
just crashing each time one is encountered.


Wrong. It would catch at compile-time those misspellings which do not
happen to coincide with another declared variable. It would give the
programmer a false sense of security since they 'know' all their
misspellings are caught by the compiler. It would not be a substitute for
run-time testing.

Moreover, it adds a burden on the programmer who has to write all those
declarations, and worse it adds a burden on everyone reading the code who
has more lines to read before understanding the code. Also there is
increased overhead when maintaining the code as all those declarations have
to be kept in line as the code changes over time.

It's a trade-off: there is a potential advantage, but lots of
disadvantages. I believe that the disadvantages outweigh the possible
benefit. Fortunately there are plenty of languages to choose from out
there, so those who disagree with me are free to use a language which does
insist on declarations.
Oct 3 '05 #12
James A. Donald wrote:
I am contemplating getting into Python, which is used by engineers I
admire - Google and Bram Cohen - but was horrified
"horrified" ???

Ok, so I'll give you more reasons to be 'horrified':
- no private/protected/public access restriction - it's just a matter of
conventions ('_myvar' -> protected, '__myvar' -> private)
- no constants (here again just a convention: a name in all uppercase
is considered a constant - but nothing will prevent anyone from modifying it)
- possibility to add/delete attributes to an object at runtime
- possibility to modify a class at runtime
- possibility to change the class of an object at runtime
- possibility to rebind a function name at runtime
.....

If you find all this horrifying too, then high-level dynamic languages are
not for you !-)
to read

"no variable or argument declarations are necessary."
No declarative static typing is necessary - which is not the same thing. In
Python, type information belongs to the object, not to the names that are
bound to it.

Of course you cannot use a variable that is not defined ('defining' a
variable in Python being just a matter of binding a value to a name).
Surely that means that if I misspell a variable name, my program will
mysteriously fail to work with no error message.
Depends. If you try to use an undefined variable, you'll get a NameError:

>>> print var1 # var1 is undefined at this stage
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'var1' is not defined

Now if the typo is on the LHS, you'll just create a new name in the
current namespace:

>>> myvra = 42 # should have been 'myvar' and not 'myvra'

But you'll usually discover it pretty soon:

>>> print myvar
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'myvar' is not defined

If you don't declare variables, you can inadvertently re-use a
variable used in an enclosing context when you don't intend to,
yes, but this is quite uncommon.

The 'enclosing context' is composed of the 'global' (which should be
named 'module') namespace and the local namespace. Using globals is bad
style, so it shouldn't be too much of a concern, but anyway trying to
*assign* to a var living in the global namespace without having
previously declared the name as global will not overwrite the global
variable - only create a local name that'll shadow the global one. Since
Python is very expressive, function code tends to be small, so the
chances of inadvertently reusing a local name are usually pretty low.
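
A small sketch of the point about assignment creating a local shadow
(all names below are hypothetical):

counter = 0              # module-level ('global') name

def bump():
    counter = 1          # creates a new *local* name; the module-level
                         # counter is untouched

def bump_global():
    global counter       # the one declaration Python does have
    counter = 1

bump()
print counter            # -> 0
bump_global()
print counter            # -> 1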

Now we have the problem of shadowing inherited attributes in OO. But
then the same problem exists in most statically typed OOPLs.
or
inadvertently reference a new variable (a typo) when you intended to
reference an existing variable.
Nope. Trying to 'reference' an undefined name raises a NameError.
What can one do to swiftly detect this type of bug?


1/ write small, well-decoupled code
2/ use pychecker or pylint
3/ write unit tests

You'll probably find - as I did - that this combination (dynamic typing
+ [pylint|pychecker] + unit tests) usually leads to fewer bugs than just
relying on declarative static typing.

What you fear can become reality with some (poorly designed IMHO)
scripting languages like PHP, but should not be a concern with Python.
Try working with it (and not fighting against it), and you'll see for
yourself if it fits you.

--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Oct 3 '05 #13
James A. Donald wrote:
James A. Donald:
> Surely that means that if I misspell a variable name, my program will
> mysteriously fail to work with no error message.

On Sun, 02 Oct 2005 17:11:13 -0400, Jean-François Doyon wrote:
No, the error message will be pretty clear actually :)

Now why, I wonder, does this loop never end :-)
egold = 0
while egold < 10:
    ego1d = egold+1


A more pythonic style would be:

egold = 0
while egold < 10:
    ego1d += 1

And that one raises a NameError !-)
--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Oct 3 '05 #14
dw******@gmail.com wrote:
The easiest way to avoid this problem (besides watching for NameError
exceptions) is to use an editor that has automatic name completion.
Eric3 is a good example. So, even though in theory it could be an
issue, I rarely run into this in practice.


I don't use emacs automatic completion, and I still rarely (read:
'never') run into this kind of problem in Python.
--
bruno desthuilliers
ruby -e "print 'o****@xiludom.gro'.split('@').collect{|p|
p.split('.').collect{|w| w.reverse}.join('.')}.join('@')"
Oct 3 '05 #15
On 2005-10-03, Duncan Booth <du**********@invalid.invalid> wrote:
Antoon Pardon wrote:
A language where variables have to be declared before use would allow
reporting all misspelled (undeclared) variables in one go, instead of
just crashing each time one is encountered.
Wrong. It would catch at compile-time those misspellings which do not
happen to coincide with another declared variable.


Fine, it is still better than Python, which will crash each time
one of these is encountered.
It would give the
programmer a false sense of security since they 'know' all their
misspellings are caught by the compiler. It would not be a substitute for
run-time testing.
I don't think anyone with a little bit of experience will be so naive.
Moreover, it adds a burden on the programmer who has to write all those
declarations,
So? He has to write all those lines of code too.

People often promote unit testing here. Writing all those unit tests is
an added burden too. But people think this burden is worth it.

I think writing declarations is also worth it. The gain is not as
much as with unit testing, but neither is the burden, so that
balances out IMO.
and worse it adds a burden on everyone reading the code who
has more lines to read before understanding the code.
Well, maybe we should remove all those comments from code too,
because all they do is add more lines for people to read.
Also there is
increased overhead when maintaining the code as all those declarations have
to be kept in line as the code changes over time.


Which is good. Just as you have to keep the unittests in line as code
changes over time.

--
Antoon Pardon
Oct 3 '05 #16
Antoon Pardon wrote:
and worse it adds a burden on everyone reading the code who
has more lines to read before understanding the code.


Well maybe we should remove all those comments from code too,
because all it does is add more lines for people to read.


You'll get no argument from me there. The vast majority of comments I come
across in code are a total waste of time.
Oct 3 '05 #17
On Mon, 03 Oct 2005 06:59:04 +0000, Antoon Pardon wrote:
Well, I'm getting a bit sick of those references to standard idioms.
There are moments when those standard idioms don't work, while the
gist of the OP's remark still stands, as in:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1


for item in [x for x in xrange(10) if test()]:
    pass

But it isn't about the idioms. It is about the trade-offs. Python allows
you to do things that you can't do in other languages because you
have much more flexibility than is possible with languages that
require you to declare variables before using them. The cost is, some
tiny subset of possible errors will not be caught by the compiler. But
since the compiler can't catch all errors anyway, you need to test for
errors and not rely on the compiler. No compiler will catch this error:

x = 12.0 # feet
# three pages of code
y = 15.0 # metres
# three more pages of code
distance = x + y
if distance < 27:
    fire_retro_rockets()

And lo, one multi-billion dollar Mars lander starts braking either too
early or too late. Result: a new crater on Mars, named after the NASA
employee who thought the compiler would catch errors.
Declared variables have considerable labour costs, and only marginal
gains. Since the steps you take to protect against other errors will also
protect against mistyping variables, declarations of variables are of
little practical benefit.
--
Steven.

Oct 3 '05 #18
Steven D'Aprano wrote:
On Mon, 03 Oct 2005 06:59:04 +0000, Antoon Pardon wrote:

Well, I'm getting a bit sick of those references to standard idioms.
There are moments when those standard idioms don't work, while the
gist of the OP's remark still stands, as in:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1

for item in [x for x in xrange(10) if test()]:
    pass

But it isn't about the idioms. It is about the trade-offs. Python allows
you to do things that you can't do in other languages because you
have much more flexibility than is possible with languages that
require you to declare variables before using them. The cost is, some
tiny subset of possible errors will not be caught by the compiler. But
since the compiler can't catch all errors anyway, you need to test for
errors and not rely on the compiler. No compiler will catch this error:

x = 12.0 # feet
# three pages of code
y = 15.0 # metres
# three more pages of code
distance = x + y
if distance < 27:
    fire_retro_rockets()

And lo, one multi-billion dollar Mars lander starts braking either too
early or too late. Result: a new crater on Mars, named after the NASA
employee who thought the compiler would catch errors.
Declared variables have considerable labour costs, and only marginal
gains. Since the steps you take to protect against other errors will also
protect against mistyping variables, declarations of variables are of
little practical benefit.


As a matter of fact, doing that one on an HP48 calculator with
unit-annotated values would have worked perfectly, except for the
distance < 27 check, which would have raised an error.
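
The same idea can be sketched in Python with a toy value-with-unit
class (everything below is hypothetical, for illustration only):

class Length(object):
    # Toy value-with-unit: everything is stored in metres internally.
    FACTORS = {'m': 1.0, 'ft': 0.3048}

    def __init__(self, value, unit):
        self.metres = value * self.FACTORS[unit]

    def __add__(self, other):
        # safe: both operands have been normalised to metres
        total = Length(0, 'm')
        total.metres = self.metres + other.metres
        return total

x = Length(12.0, 'ft')
y = Length(15.0, 'm')
distance = x + y            # units reconciled automatically
# distance.metres < 27      # the programmer must still say what '27'
                            # means -- the check the HP48 flagged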
Oct 3 '05 #19
On Mon, 03 Oct 2005 13:58:33 +0000, Antoon Pardon wrote:
On 2005-10-03, Duncan Booth <du**********@invalid.invalid> wrote:
Antoon Pardon wrote:
A language where variables have to be declared before use would allow
reporting all misspelled (undeclared) variables in one go, instead of
just crashing each time one is encountered.
Wrong. It would catch at compile-time those misspellings which do not
happen to coincide with another declared variable.


Fine, it is still better than Python, which will crash each time
one of these is encountered.


Python doesn't crash when it meets an undeclared variable. It raises an
exception.

This lets you do things like:

try:
    False
except NameError:
    print "bools are not defined, making fake bools..."

    False = 0
    True = not False

    def bool(obj):
        if obj: return True
        else: return False

# not identical to real bools, but close enough to fake it (usually)

Moreover, it adds a burden on the programmer who has to write all those
declarations,


So? He has to write all those lines of code too.

People often promote unit testing here. Writing all those unit tests is
an added burden too. But people think this burden is worth it.


Yes, but there is no evidence that pre-declaration of variables is a
burden worth carrying. It doesn't catch any errors that your testing
wouldn't catch anyway.

I think writing declarations is also worth it. The gain is not as
much as with unit testing, but neither is the burden, so that
balances out IMO


Speaking as somebody who spent a long time programming in Pascal, I got
heartily sick and tired of having to jump backwards and forwards from
where I was coding to the start of the function to define variables.

It got to the stage that sometimes I'd pre-define variables I thought I
might need, intending to go back afterwards and delete the ones I didn't
need. When the programmer is having to jump through hoops to satisfy
the compiler, there is something wrong.

and worse it adds a burden on everyone reading the code who
has more lines to read before understanding the code.


Well, maybe we should remove all those comments from code too,
because all they do is add more lines for people to read.


Well-written comments should give the reader information which is not in
the code. If the comment gives you nothing that wasn't obvious from the
code, it is pointless and should be removed.

Variable declarations give the reader nothing that isn't in the code. If I
write x = 15, then both I and the compiler know that there is a variable
called x. It is blindingly obvious. Why do I need to say "define x" first?

Pre-defining x protects me from one class of error, where I typed x
instead of (say) n. That's fine as far as it goes, but that's not
necessarily an _error_. If the typo causes an error, e.g.:

def spam(n):
    return "spam " * x # oops, typo

then testing will catch it, and many other errors as well. Declaring the
variable doesn't get me anything I wouldn't already get.

But if it doesn't cause an error, e.g.:

def spam(n):
    if n:
        return "spam " * n
    else:
        x = 0 # oops, typo
        return "spam " * n

This may never cause a failure, since n is always an integer. Since my
other testing guarantees that n is always an integer, it doesn't matter
that I've created a variable x that doesn't get used. Yes, it would be
nice for the compiler to flag this, but if the cost of that niceness is to
have to define every single variable, I can live without it.

Also there is
increased overhead when maintaining the code as all those declarations have
to be kept in line as the code changes over time.


Which is good. Just as you have to keep the unittests in line as code
changes over time.


That is not the same at all. Changing variable declarations needs to be
done every time you modify the internal implementation of a function.
Changing the unittests, or any other testing for that matter, only needs
to be done when you change the interface.

In principle, if you have an interface designed up front before you write
any code, you could write all your tests at the start of the project and
never change them again. You can't do that with variable declarations,
since every time you change the implementation you have to change the
declarations.
--
Steven.

Oct 3 '05 #20
On 3 Oct 2005 13:58:33 GMT
Antoon Pardon wrote:
People often promote unit testing here. Writing all those unit tests is
an added burden too. But people think this burden is worth it.

I think writing declarations is also worth it. The gain is not as
much as with unit testing, but neither is the burden, so that
balances out IMO


+1

Some people just don't get the simple fact that declarations are
essentially a kind of unit test you get for free (almost), and the
compiler is a testing framework for them.

--
jk
Oct 3 '05 #21
On Mon, 03 Oct 2005 20:30:35 +0400, en.karpachov wrote:
Some people just don't get the simple fact that declarations are
essentially a kind of unit test you get for free (almost), and the
compiler is a testing framework for them.


No. Some people just don't get it that declarations aren't almost
free, because they cost a lot in human labour, and that they give you
practically nothing that your unit testing wouldn't give you anyway.
--
Steven.

Oct 3 '05 #22
en**********@ospaz.ru wrote:
On 3 Oct 2005 13:58:33 GMT
Antoon Pardon wrote:

People often promote unit testing here. Writing all those unit tests is
an added burden too. But people think this burden is worth it.

I think writing declarations is also worth it. The gain is not as
much as with unit testing, but neither is the burden, so that
balances out IMO

+1

Some people just don't get the simple fact that declarations are
essentially a kind of unit test you get for free (almost), and the
compiler is a testing framework for them.

Hmm. Presumably introspection via getattr() is way too dangerous, then?
Might as well throw the function away ...

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Oct 3 '05 #23
On Tue, 04 Oct 2005 01:46:49 +1000
Steven D'Aprano wrote:
errors and not rely on the compiler. No compiler will catch this error:

x = 12.0 # feet
# three pages of code
y = 15.0 # metres
# three more pages of code
distance = x + y
if distance < 27:
    fire_retro_rockets()

And lo, one multi-billion dollar Mars lander starts braking either too
early or too late. Result: a new crater on Mars, named after the NASA
employee who thought the compiler would catch errors.


So, I guess, you have a spare Mars lander especially for unit-testing? :)

--
jk
Oct 3 '05 #24
On Mon, 03 Oct 2005 17:43:35 +0100
Steve Holden wrote:
Hmm. Presumably introspection via getattr() is way too dangerous, then?


Sure, it is dangerous. Not a showstopper, though.

I mean, absolute address access in C is dangerous too, yes, but it
doesn't make declarations in C any less useful.

--
jk
Oct 3 '05 #25
"Steven D'Aprano" <st***@REMOVETHIScyber.com.au> wrote
[snipped]
No compiler will catch this error:

x = 12.0 # feet
# three pages of code
y = 15.0 # metres
# three more pages of code
distance = x + y
if distance < 27:
    fire_retro_rockets()


Actually modern compilers can (http://www.boost.org/libs/mpl/doc/tu...-analysis.html)
at the expense of the programmer's eye health...

George
Oct 3 '05 #26
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
On Mon, 03 Oct 2005 06:59:04 +0000, Antoon Pardon wrote:
Declared variables have considerable labour costs, and only marginal
gains. Since the steps you take to protect against other errors will also
protect against mistyping variables, declarations of variables are of
little practical benefit.


As far as I can tell, this is as much hearsay and personal experience
as the alternate claim that not having them costs you lots of
debugging time and errors. If anyone has pointers to real research
into this area (I've heard the TRAK folks did some, but haven't been
able to turn any up), I'd love to hear it.

My gut reaction is that it's a wash. The time taken to declare
variables in well-written code in a well-designed language - meaning
the declarations and use will be close together - isn't all that
great, but neither are the savings.

The other win from declaring variables is that if you compile the code
you can make assumptions about the types of variables and thus save
doing (some of) the type determination at run time. But there are type
inferencing languages and systems - and they've been around since the
70s - that can do that for you without having to declare the
variables, so that doesn't buy you much.

If I'm going to get compiler support for semantic checking like this,
I want it taken to serious levels. I want function pre/post conditions
checked. I want loop and class invariants checked. I want subsumption
in my inheritance tree. Nuts - I want a complete, well-designed
inheritance tree. Duck typing is great stuff, but if I'm going to be
doing the work to declare everything, I want *everything* that can be
checked checked.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Oct 3 '05 #27
Steven D'Aprano wrote:
On Mon, 03 Oct 2005 06:59:04 +0000, Antoon Pardon wrote:
x = 12.0 # feet
# three pages of code
y = 15.0 # metres
# three more pages of code
distance = x + y
if distance < 27:
    fire_retro_rockets()

And lo, one multi-billion dollar Mars lander starts braking either too
early or too late. Result: a new crater on Mars, named after the NASA
employee who thought the compiler would catch errors.
Yes, and a reserved position in the unemployment line as well, I would bet.
Declared variables have considerable labour costs, and only marginal
gains. Since the steps you take to protect against other errors will also
protect against mistyping variables, declarations of variables are of
little practical benefit.


Also checking types is not the same as checking values.

In most cases where critical code is used you really want value testing,
not type checking. This is where self-validating objects are useful, and
there is nothing preventing anyone from using them in Python.
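
A minimal sketch of the idea (the class and its valid range are
hypothetical):

class Percentage(object):
    # Toy self-validating value object: checks the value, not just
    # the type, at the moment of creation.
    def __init__(self, value):
        value = float(value)            # rejects non-numeric input
        if not 0.0 <= value <= 100.0:
            raise ValueError("percentage out of range: %r" % value)
        self.value = value

p = Percentage(42)       # fine
try:
    p = Percentage(142)  # bad value rejected right where it arises
except ValueError, e:
    print e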

Cheers,
Ron
Oct 3 '05 #28
egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1


Both pylint and pychecker pick this up. I wrapped the code in a
function (to prevent importing from running in an infinite loop) and ran
both pylint and pychecker:

pylint: W: 5:myfunc: Unused variable 'ego1d'
pychecker: test.py:4: Local variable (ego1d) not used

I make a habit of running pylint or pychecker on my code often. They pick
up a lot of stuff like unused variables, etc.

But you can also do this:

/* initialize variables i'm gonna use */
int vara = 0;
int varb = 0;
while (vara < 10) {
    varb = vara + 1;
}

So we can make a similar mistake in C by typing the wrong (declared)
variable name. Moreover, "gcc -Wall" did not report the "unused"
variable, so it might be even more difficult to track down the problem.
Oct 4 '05 #29
On 2005-10-03, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Mon, 03 Oct 2005 13:58:33 +0000, Antoon Pardon wrote:
On 2005-10-03, Duncan Booth <du**********@invalid.invalid> wrote:
Antoon Pardon wrote:

A language where variables have to be declared before use would allow
reporting all misspelled (undeclared) variables in one go, instead of
just crashing each time one is encountered.

Wrong. It would catch at compile-time those misspellings which do not
happen to coincide with another declared variable.
Fine, it is still better than Python, which will crash each time
one of these is encountered.


Python doesn't crash when it meets an undeclared variable. It raises an
exception.


You're nit-picking. For the sake of finding misspelled variables the
difference is irrelevant.
Moreover, it adds a burden on the programmer who has to write all those
declarations,


So? He has to write all those lines of code too.

People often promote unit testing here. Writing all those unit tests is
an added burden too. But people think this burden is worth it.


Yes, but there is no evidence that pre-declaration of variables is a
burden worth carrying. It doesn't catch any errors that your testing
wouldn't catch anyway.


Maybe not, but it may catch them earlier and may make them easier
to recognize.

Declarations also allow easier writable closures. Since the declaration
happens at a certain scope, the run time can easily find the correct
scope when a variable is rebound.
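
For readers who haven't hit the problem being alluded to: the Python of
this era has no way to rebind a name in an enclosing function's scope,
so writable closures need a workaround. A minimal sketch (names are
hypothetical):

def make_counter():
    count = [0]              # wrap the value in a mutable container;
                             # a bare 'count = 0' with 'count += 1' in
                             # increment() would raise UnboundLocalError
    def increment():
        count[0] += 1
        return count[0]
    return increment

counter = make_counter()
counter()   # -> 1
counter()   # -> 2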

They also relieve a burden from the run-time: since all variables
are declared, the runtime doesn't have to check whether or not
a variable is accessible, it knows it is.

And if you provide type information with the declaration, more
efficient code can be produced.
I think writing declarations is also worth it. The gain is not as
much as with unit testing, but neither is the burden, so that
balances out IMO


Speaking as somebody who spent a long time programming in Pascal, I got
heartily sick and tired of having to jump backwards and forwards from
where I was coding to the start of the function to define variables.


I have programmed a lot in Pascal and Modula-2 and other languages too.
I never found declarations that much of a burden. That you got heartily
sick of having to use declarations says very little about declarations
and more about you.

I think language matters shouldn't be settled by personal preferences.
It got to the stage that sometimes I'd pre-define variables I thought I
might need, intending to go back afterwards and delete the ones I didn't
need. When the programmer is having to jump through hoops to satisfy
the compiler, there is something wrong.
Maybe it was your way of working. I never thought I had to go through
hoops to satisfy the compiler. You have to satisfy the compiler anyway;
at the moment I have more problems with the colons that have to be put
after an "if", "else", etc. than I ever had with declarations.
and worse it adds a burden on everyone reading the code who
has more lines to read before understanding the code.


Well, maybe we should remove all those comments from code too,
because all they do is add more lines for people to read.


Well-written comments should give the reader information which is not in
the code. If the comment gives you nothing that wasn't obvious from the
code, it is pointless and should be removed.

Variable declarations give the reader nothing that isn't in the code. If I
write x = 15, then both I and the compiler know that there is a variable
called x. It is blindingly obvious. Why do I need to say "define x" first?


Because it isn't at all obvious at which scope that x lives. Sure, you can
define your language so that rebinding is always at local scope, but that
you need to define it so means it isn't that obvious.

Also, the question is not whether or not there is a variable x; the
question is whether or not there should be a variable x. That is not
at all obvious when you write x = 15.
Pre-defining x protects me from one class of error, where I typed x
instead of (say) n. That's fine as far as it goes, but that's not
necessarily an _error_. If the typo causes an error, e.g.:

def spam(n):
    return "spam " * x # oops, typo

then testing will catch it, and many other errors as well. Declaring the
variable doesn't get me anything I wouldn't already get.
Yes, it would have caught this error even before you began
testing.
But if it doesn't cause an error, e.g.:

def spam(n):
    if n:
        return "spam " * n
    else:
        x = 0 # oops, typo
        return "spam " * n

This may never cause a failure, since n is always an integer. Since my
other testing guarantees that n is always an integer,
This is naive. Testing doesn't guarantee anything. If this is what you
think about testing, then testing gives you a false impression of
security. Maybe we should drop testing.
it doesn't matter
that I've created a variable x that doesn't get used. Yes, it would be
nice for the compiler to flag this, but if the cost of that niceness is to
have to define every single variable, I can live without it.
But we shouldn't decide which language features are useful and which are
not by what some individual can or can't live without.
Also there is
increased overhead when maintaining the code as all those declarations have
to be kept in line as the code changes over time.


Which is good. Just as you have to keep the unittests in line as code
changes over time.


That is not the same at all. Changing variable declarations needs to be
done every time you modify the internal implementation of a function.


Well, now I'll nitpick too. No, you don't. You only have to do so
if the modification needs other variables.
Changing the unittests, or any other testing for that matter, only needs
to be done when you change the interface.
Which will be often enough.
In principle, if you have an interface designed up front before you write
any code, you could write all your tests at the start of the project and
never change them again. You can't do that with variable declarations,
since every time you change the implementation you have to change the
declarations.


This argument has very little value, since all you are saying is that
if you were smart enough to choose the right interface to begin with,
you won't have to change interface-related stuff like unit tests, while
if you were not that smart in choosing your implementation, you will
have to change implementation-related stuff like declarations.

--
Antoon Pardon
Oct 4 '05 #30
On 2005-10-03, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Mon, 03 Oct 2005 06:59:04 +0000, Antoon Pardon wrote:
Well, I'm getting a bit sick of those references to standard idioms.
There are moments when those standard idioms don't work, while the
gist of the OP's remark still stands, as in:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1


for item in [x for x in xrange(10) if test()]:
    pass

But it isn't about the idioms. It is about the trade-offs. Python allows
you to do things that you can't do in other languages because you
have much more flexibility than is possible with languages that
require you to declare variables before using them. The cost is, some
tiny subset of possible errors will not be caught by the compiler. But
since the compiler can't catch all errors anyway, you need to test for
errors and not rely on the compiler. No compiler will catch this error:

x = 12.0 # feet
# three pages of code
y = 15.0 # metres
# three more pages of code
distance = x + y
if distance < 27:
    fire_retro_rockets()

And lo, one multi-billion dollar Mars lander starts braking either too
early or too late. Result: a new crater on Mars, named after the NASA
employee who thought the compiler would catch errors.


Using (unit) tests will not guarantee that your program is error-free.

So if sooner or later a (unit-)tested program causes a problem, will you
then argue that we should abandon tests, because tests won't catch
all errors?

--
Antoon Pardon
Oct 4 '05 #31
Mike Meyer wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
Declared variables have considerable labour costs, and only marginal
gains. Since the steps you take to protect against other errors will also
protect against mistyping variables, declarations of variables are of
little practical benefit.

As far as I can tell, this is as much hearsay and personal experience
as the alternate claim that not having them costs you lots of
debugging time and errors. If anyone has pointers to real research
into this area (I've heard the TRAK folks did some, but haven't been
able to turn any up), I'd love to hear it.


Sorry, I have no hard research to point to. If I did, I
would have referenced it.
My gut reaction is that it's a wash. The time taken to declare
variables in well-written code in a well-designed language - meaning
the declarations and use will be close together - isn't all that
great, but neither are the savings.
It isn't the typing time to declare variables, it is
the context switching. You're focused on implementing
an algorithm, realise you need to declare another
variable, your brain does a mini-context switch, you
scroll up to the declaration section, you declare it,
you scroll back to where you were, and now you have to
context switch again.

You've gone from thinking about the implementation of
the algorithm to thinking about how to satisfy the
requirements of the compiler. As context switches go,
it isn't as big as the edit-compile-make-run method of
testing, but it is still a context switch.

Or you just code without declaring, intending to go
back and do it later, and invariably forget.

[snip]
If I'm going to get compiler support for semantic checking like this,
I want it to serious levels. I want function pre/post conditions
checked. I want loop and class invariant checked. I want subsumption
in my inheritance tree. Nuts - I want a complete, well-designed
inheritance tree. Duck typing is great stuff, but if I'm going to be
doing the work to declare everything, I want *everything* that can be
checked checked.


We like to think of programming as a branch of
engineering. It isn't, not yet.

I've worked for architects who were involved in some
major engineering projects. One of the things I learnt
is that there are times when you need to specify
objects strictly. When only a 5% titanium stainless
steel alloy will do, then *only* a 5% titanium
stainless steel alloy will do. That's when you want
strict type-checking.

But the rest of the time, just about any stainless
steel will do, and sometimes you don't even care if it
is steel -- iron will do the job just as well. If the
nail is hard enough to be hammered into the wood, and
long enough to hold it in place, it is good enough for
the job. That's the real-world equivalent of duck typing.

Statically typed languages that do all those checks are the
equivalent of building the space shuttle, where
everything must be specified in advance to the nth
degree: it must not just be a three inch screw, but a
three inch Phillips head anti-spark non-magnetic
corrosion-resistant screw rated to a torque of
such-and-such and capable of resisting vacuum welding,
extreme temperatures in both directions, and exposure
to UV radiation. For that, you need a compiler that
will do what Mike asks for: check *everything*.

I don't know whether there are languages that will
check everything in that way. If there aren't, then
perhaps there should be. But Python shouldn't be one of
them. Specifying every last detail about the objects
making up the space shuttle is one of the reasons why
it costs umpty-bazillion dollars to build one, and
almost as much to maintain it -- and it still has a
safety record worse than most $10,000 cars.

That level of specification is overkill for most
engineering projects, and it's overkill for most
programming projects too. It is all well and good to
tear your hair and rip your clothes over the
possibility of the language allowing some hidden bug in
the program, but get over it: there is always a trade
off to be made between cost, time and risk of bugs.
Python is a language that makes the trade off one way
(fast development, low cost, high flexibility, moderate
risk) rather than another (slow development, high cost,
low flexibility, low risk).

We make the same trade offs in the real world too: the
chair you sit on is not built to the same level of
quality as the space shuttle, otherwise it would cost
$100,000 instead of $100. Consequently, sometimes
chairs break. Deal with it.
--
Steven.

Oct 4 '05 #32

"bruno modulix" <on***@xiludom.gro> wrote in message
news:43***********************@news.free.fr...
James A. Donald wrote:
I am contemplating getting into Python, which is used by engineers I
admire - Google and Bram Cohen - but was horrified


"horrified" ???

Ok, so I'll give you more reasons to be 'horrified':
- no private/protected/public access restriction - it's just a matter of
conventions ('_myvar' -> protected, '__myvar' -> private)
- no constants (here again just a convention : a name in all uppercase
is considered a constant - but nothing will prevent anyone to modify it)
- possibility to add/delete attributes to an object at runtime
- possibility to modify a class at runtime
- possibility to change the class of an object at runtime
- possibility to rebind a function name at runtime
....

If you find all this horrifying too, then hi-level dynamic languages are
not for you !-)


Not to mention that since the O.P. seems to assume that the compiler will
protect against deliberate subversion by evil programmers, he must be
further "horrified" to learn that, although it is harder to do the above in
e.g. C++, it is not at all impossible; a carefully crafted pointer or a
little devious sub-classing goes a long way.

If all else fails, The humble Linker holds the Word of Power!

Tampering with linking is both the easiest way to subvert code reviews,
language checks and boundaries, and also the hardest to discover, because
the tampering will be buried somewhere deep inside the build process - the
part that never, ever gets reviewed because it is automated anyway and
entirely too complex, so nobody sane will actually mess with it once it
"works", i.e. produces runnable code!

Finally, given proper permissions, one can of course re-link the binary
executable, should the occasion merit. Like when one needs HIP in Telnet
which is an absolute b****rd to build on a modern Linux box. (Somebody built
that *once* in maybe 1978, I think ;-) One can replace classes in Jar
archives too - possibly one can even get the Java runtime to load the "new
version" of a jar archive in preference to the shipped one ...
I.O.W:

Superficially, the compile-time checks of Java and C++ provide some checks
& boundaries, but they come at the expense of much more machinery, with many
more intricate movable parts that can *also* be interfered with (or broken).

Python is simple and self-contained, thus it is pretty obvious - or at least
not too difficult - to check what *actually* goes on with an application.

If there is no trust, nothing can be done safely. If there is trust, then
most of the percieved safety just get in the way of work.
Oct 4 '05 #33
On 2005-10-04, Steven D'Aprano <st***@REMOVEMEcyber.com.au> wrote:
Mike Meyer wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
Declared variables have considerable labour costs, and only marginal
gains. Since the steps you take to protect against other errors will also
protect against mistyping variables, declarations of variables are of
little practical benefit.

As far as I can tell, this is as much hearsay and personal experience
as the alternate claim that not having them costs you lots of
debugging time and errors. If anyone has pointers to real research
into this area (I've heard the TRAK folks did some, but haven't been
able to turn any up), I'd love to hear it.


Sorry, I have no hard research to point to. If I did, I
would have referenced it.
My gut reaction is that it's a wash. The time taken to declare
variables in well-written code in a well-designed language - meaning
the declarations and use will be close together - isn't all that
great, but neither are the savings.


It isn't the typing time to declare variables, it is
the context switching. You're focused on implementing
an algorithm, realise you need to declare another
variable, your brain does a mini-context switch, you
scroll up to the declaration section, you declare it,
you scroll back to where you were, and now you have to
context switch again.

You've gone from thinking about the implementation of
the algorithm to thinking about how to satisfy the
requirements of the compiler. As context switches go,
it isn't as big as the edit-compile-make-run method of
testing, but it is still a context switch.


Nobody forces you to work this way. You can just finish
your algorithm and declare your variables afterwards.

Besides, similar things can happen in Python. If you
suddenly realise your class needs an instance variable,
chances are you will have to add initialisation code
for that instance variable in the __init__ method.
So either you do a context switch from implementing
whatever method you were working on to the initialisation
in __init__ and back, or you just code without the
initialisation, intending to go back and do it later.
Or you just code without declaring, intending to go
back and do it later, and invariably forget.


What's the problem? The compiler will alert you
to your forgetfulness and you can then correct
them all at once.

--
Antoon Pardon
Oct 4 '05 #34
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
Or you just code without declaring, intending to go
back and do it later, and invariably forget.


What's the problem? The compiler will alert you
to your forgetfulness and you can then correct
them all at once.


That in fact happens to me all the time and is an annoying aspect of
Python. If I forget to declare several variables in C, the compiler
gives me several warning messages and I fix them in one edit. If I
forget to initialize several variables in Python, I need a separate
test-edit cycle to hit the runtime error for each one.
Oct 4 '05 #35
Paul Rubin wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
Or you just code without declaring, intending to go
back and do it later, and invariably forget.


What's the problem? The compiler will alert you
to your forgetfulness and you can then correct
them all at once.

That in fact happens to me all the time and is an annoying aspect of
Python. If I forget to declare several variables in C, the compiler
gives me several warning messages and I fix them in one edit. If I
forget to initialize several variables in Python, I need a separate
test-edit cycle to hit the runtime error for each one.


Well, I hope you aren't suggesting that declaring variables makes it
impossible to forget to initialise them. So I don't really see the
relevance of this remark, since you simply add an extra run to fix up
the "forgot to declare" problem. After that you get precisely one
runtime error per "forgot to initialize".

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/

Oct 4 '05 #36
On 2005-10-04, Steve Holden <st***@holdenweb.com> wrote:
Paul Rubin wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
Or you just code without declaring, intending to go
back and do it later, and invariably forget.

What's the problem? The compiler will alert you
to your forgetfulness and you can then correct
them all at once.

That in fact happens to me all the time and is an annoying aspect of
Python. If I forget to declare several variables in C, the compiler
gives me several warning messages and I fix them in one edit. If I
forget to initialize several variables in Python, I need a separate
test-edit cycle to hit the runtime error for each one.


Well, I hope you aren't suggesting that declaring variables makes it
impossible to forget to initialise them. So I don't really see the
relevance of this remark, since you simply add an extra run to fix up
the "forgot to declare" problem. After that you get precisely one
runtime error per "forgot to initialize".


Declaration and initialisation often go together. So the fixup
to declare is often enough a fixup for the initialisation too.

--
Antoon Pardon
Oct 4 '05 #37
> What can one do to swiftly detect this type of bug?

While I can only speak from my own experience, I can't remember a
single instance where this type of bug caused any kind of serious
problem. IMHO these are very trivial errors that get caught
immediately, and I would not even qualify them as bugs, more like
typos, spelling mistakes, etc.

Real bugs are a lot more insidious than that, and they might even occur
more frequently if there were type checking ... since it might even
lead to longer code

just my $0.01

Istvan.

Oct 4 '05 #38
Have those of you who think that the lack of required declarations in
Python is a huge weakness given any thought to the impact that adding
them would have on the rest of the language? I can't imagine how any
language with required declarations could even remotely resemble Python.

And if you want to use such a different language, wouldn't a different
existing language better fit your needs...?

Cheers,
Brian
Oct 4 '05 #39
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
On 2005-10-03, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Mon, 03 Oct 2005 13:58:33 +0000, Antoon Pardon wrote:
Declarations also allow easier writable closures. Since the declaration
happens at a certain scope, the run time can easily find the correct
scope when a variable is rebound.


If it happens at runtime, then you can do it without declarations:
they're gone by then. Come to think of it, most functional languages -
which are the languages that make the heaviest use of closures - don't
require variable declarations.
They also relieve a burden from the run-time: since all variables
are declared, the runtime doesn't have to check whether or not
a variable is accessible; it knows it is.
Not in a dynamic language. Python lets you delete variables at run
time, so the only way to know if a variable exists at a specific
point during the execution of an arbitrary program is to execute the
program to that point.
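
A quick session illustrating that point:

>>> x = 1
>>> del x              # the name is unbound again at run time
>>> x
Traceback (most recent call last):
  File "<interactive input>", line 1, in ?
NameError: name 'x' is not defined

Whether 'x' exists at any given line depends on which statements actually executed before it.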
And if you provide type information with the declaration, more
efficient code can be produced.
Only in a few cases. Type inferencing is a well-understood
technology, and will produce code as efficient as a statically typed
language in most cases.
I think language matters shouldn't be settled by personal preferences.
I have to agree with that. For whether or not a feature should be
included, there should either be a solid reason dealing with the
functionality of the language - meaning you should have a set of use
cases showing what a feature enables in the language that couldn't be
done at all, or could only be done clumsily, without the feature.

Except declarations don't add functionality to the language. They
affect the programming process. And we have conflicting claims about
whether that's a good effect or not, all apparently based on nothing
solider than personal experience. Which means the arguments are just
personal preferences.

Until someone does the research to provide hard evidence one way or
another, that's all we've got to work with. Which means that languages
should exist both with and without those features, and if one side's
experiences generalize to the population at large, the alternative
languages will die out. Which hasn't happened yet.
But we should decide what language features are useful and which are
not by what some individual can or can't live without.


Um - that's just personal preference (though I may have misparsed your
sentence). What one person can't live without, another may not be able
to live with. All that means is that they aren't likely to be happy
with the same programming language. Which is fine - just as no
programming language can do everything, no programming language can
please everyone.

Antoon, at a guess I'd say that Python is the first time you've
encountered a dynamic language. Being "horrified" at not having
variable declarations (standard behaviour in such languages
dating back to the 1950s) is one such indication.

Dynamic languages tend to express a much wider range of programming
paradigms than languages that are designed to be statically
compiled. Some of these paradigms do away with - or relegate to the
level of "ugly performance hack" - features that someone only
experienced with something like Pascal would consider
essential. Assignment statements are a good example of that.

Given these kinds of differences, prior experience is *not* a valid
reason for thinking that some difference must be wrong. Until you have
experience with the language in question, you can't really decide that
some feature being missing is intolerable. You're in the same position
as the guy who told me that a language without a goto would be
unusable based on his experience with old BASIC, FORTRAN IV and
assembler.

Pick one of the many languages that don't require declarations. Try
writing code in them, and see how much of a problem it really is in
practice, rather than trying to predict that without any
information. Be warned that there are *lots* of variations on how
undeclared variables are treated when referenced. Python raises
exceptions. Rexx gives them their print name as a value. Other
languages do other things.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Oct 4 '05 #40
Steven D'Aprano <st***@REMOVEMEcyber.com.au> writes:
My gut reaction is that it's a wash. The time taken to declare
variables in well-written code in a well-designed language - meaning
the declarations and use will be close together - isn't all that
great, but neither are the savings.

You've gone from thinking about the implementation of the algorithm to
thinking about how to satisfy the requirements of the compiler. As
context switches go, it isn't as big as the edit-compile-make-run
method of testing, but it is still a context switch.


I'm making context switches all the time when programming. I go from
thinking about the problem in terms of the problem, to thinking about
in terms of programming language objects, to thinking about the syntax
for expressing those objects. Adding another one does have a cost -
but it's no big deal.
I don't know whether there are languages that will check everything in
that way. If there aren't, then perhaps there should be. But Python
shouldn't be one of them.
Right. You're doing different things when you program in a language
like Python, vs. Eiffel (which is where I drew most of my checks
from). Each does what it does well - but they don't do the same
things.
Specifying every last detail about the objects making up the space
shuttle is one of the reasons why it costs umpty-bazillion dollars
to build one, and almost as much to maintain it -- and it still has
a safety record worse than most $10,000 cars.


As if something that's designed to literally blow you off the face
of the earth could reasonably be compared with an internal combustion
engine that never leaves the ground for safety. The shuttle has one of
the best safety records around for vehicles that share its
purpose. It's doing something that's inherently very dangerous, that
we are still learning how to do. Overengineering is the only way to
get any measure of safety.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Oct 4 '05 #41
In article <ma*************************************@python.org>,
Steve Holden <st***@holdenweb.com> wrote:
Paul Rubin wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
Or you just code without declaring, intending to go
back and do it later, and invariably forget.

What's the problem? The compiler will alert you
to your forgetfulness and you can then correct
them all at once.

That in fact happens to me all the time and is an annoying aspect of
Python. If I forget to declare several variables in C, the compiler
gives me several warning messages and I fix them in one edit. If I
forget to initialize several variables in Python, I need a separate
test-edit cycle to hit the runtime error for each one.


Well, I hope you aren't suggesting that declaring variables makes it
impossible to forget to initialise them. So I don't really see the
relevance of this remark, since you simply add an extra run to fix up
the "forgot to declare" problem. After that you get precisely one
runtime error per "forgot to initialize".


It's hard to say what anyone's suggesting, unless some recent
utterance from GvR has hinted at a possible declaration syntax
in future Pythons. Short of that, it's ... a universe of
possibilities, none of them likely enough to be very interesting.

In the functional language approach I'm familiar with, you
introduce a variable into a scope with a bind -

let a = expr in
... do something with a

and initialization is part of the package. Type is usually
inferred. The kicker though is that the variable is never
reassigned. In the ideal case it's essentially an alias for
the initializing expression. That's one possibility we can
probably not find in Python's universe.

Donn Cave, do**@u.washington.edu
Oct 4 '05 #42
On Tue, 04 Oct 2005 10:18:24 -0700, Donn Cave <do**@u.washington.edu> wrote:
[...]
In the functional language approach I'm familiar with, you
introduce a variable into a scope with a bind -

let a = expr in
... do something with a

and initialization is part of the package. Type is usually
inferred. The kicker though is that the variable is never
reassigned. In the ideal case it's essentially an alias for
the initializing expression. That's one possibility we can
probably not find in Python's universe.

how would you compare that with
lambda a=expr: ... do something (limited to expression) with a
?
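
For concreteness, a session sketch of that default-argument idiom (the expression is made up):

>>> f = lambda a=6*7: a + 1   # the default expression 6*7 is evaluated once, at definition time
>>> f()
43

Like the let-binding, 'a' gets its value at the point of introduction, but the body is restricted to a single expression.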

Regards,
Bengt Richter
Oct 4 '05 #43
In article <43*****************@news.oz.net>,
bo**@oz.net (Bengt Richter) wrote:
On Tue, 04 Oct 2005 10:18:24 -0700, Donn Cave <do**@u.washington.edu> wrote:
[...]
In the functional language approach I'm familiar with, you
introduce a variable into a scope with a bind -

let a = expr in
... do something with a

and initialization is part of the package. Type is usually
inferred. The kicker though is that the variable is never
reassigned. In the ideal case it's essentially an alias for
the initializing expression. That's one possibility we can
probably not find in Python's universe.

how would you compare that with
lambda a=expr: ... do something (limited to expression) with a
?


OK, the limitations of a Python lambda body do have this effect.

But compare programming in a language like that, to programming
with Python lambdas? Maybe it would be like living in a Zen
Monastery, vs. living in your car.

Donn Cave, do**@u.washington.edu
Oct 4 '05 #44
Antoon Pardon wrote:
On 2005-10-03, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Mon, 03 Oct 2005 06:59:04 +0000, Antoon Pardon wrote:

Well, I'm getting a bit sick of those references to standard idioms.
There are moments those standard idioms don't work, while the
gist of the OP's remark still stands, like:

egold = 0
while egold < 10:
    if test():
        ego1d = egold + 1


for item in [x for x in xrange(10) if test()]:

But it isn't about the idioms. It is about the trade-offs. Python allows
you to do things that you can't do in other languages because you
have much more flexibility than is possible with languages that
require you to declare variables before using them. The cost is, some
tiny subset of possible errors will not be caught by the compiler. But
since the compiler can't catch all errors anyway, you need to test for
errors and not rely on the compiler. No compiler will catch this error:

x = 12.0 # feet
# three pages of code
y = 15.0 # metres
# three more pages of code
distance = x + y
if distance < 27:
    fire_retro_rockets()

And lo, one multi-billion dollar Mars lander starts braking either too
early or too late. Result: a new crater on Mars, named after the NASA
employee who thought the compiler would catch errors.

Using (unit)tests will not guarantee that your program is error free.

So if sooner or later a (unit)tested program causes a problem, will you
then argue that we should abandon tests, because tests won't catch
all errors?


Maybe you need to specify what kind of errors you want to catch.
Different types of errors require different approaches.

* Errors that interrupt program execution.

These are type errors and/or illegal-instruction errors such as divide
by zero. Try-excepts, and checking attributes where that is possible,
should be used to handle them (see the sketch after this list).

* Human 'user' input errors.

Value testing is what is needed for these.

* Programming errors...

Nothing will replace testing here.
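
A minimal sketch of handling the first category (function and values invented for illustration):

>>> def safe_ratio(a, b):
...     try:
...         return float(a) / b
...     except ZeroDivisionError:    # the interrupting error is handled where it can be
...         return None
...
>>> safe_ratio(1, 0) is None
True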
I think what you want is optional name and object locking in order to
prevent certain types of errors and increase reliability and dependability.

Name locking - This would let you depend on a specific name
referring to a specific object. But that object can still be
modified if it's mutable.

Object locking - This would make an immutable object from a mutable
object. A function could work here for lists. This probably isn't
possible with many complex objects; names need to be rebound in many
objects for them to work. I think this may be much harder to do than it
seems.

An example (but probably not possible to do):

Const = {}
Const['pi'] = 3.1415926535897931

... add more keys/value pairs ...

lockobject Const # prevent object from being changed
lockname Const # prevent name 'Const' from being rebound

... many pages of code ...

print Const['pi'] # dependable result?
Is this the type of control you want?
Would it make your programs more dependable or reliable?

Name locking might be implemented with additional name spaces, but they
would need to be checked prior to other name spaces, so it could slow
everything down.

And there would probably be ways to unlock objects. But maybe that's
not a problem as I think what you want to prevent is erroneous results
due to unintentional name changes or object changes.

I think both of these would have unexpected side effects in many cases,
so their use would be limited.
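
Something close to the object-locking half can be approximated in plain Python; a minimal sketch (the class name and error message are invented, and only item assignment is guarded):

>>> class LockedDict(dict):
...     def __setitem__(self, key, value):
...         raise TypeError("locked object: cannot rebind %r" % (key,))
...
>>> Const = LockedDict(pi=3.1415926535897931)
>>> Const['pi']
3.1415926535897931
>>> Const['pi'] = 3
Traceback (most recent call last):
  File "<interactive input>", line 1, in ?
TypeError: locked object: cannot rebind 'pi'

Name locking has no comparable emulation, which matches the point that it would need interpreter support.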

Cheers,
Ron






Oct 4 '05 #45
Brian Quinlan <br***@sweetapp.com> writes:
Have those of you who think that the lack of required declarations in
Python is a huge weakness given any thought to the impact that adding
them would have on the rest of the language? I can't imagine how any
language with required declarations could even remotely resemble
Python.


What's the big deal? Perl has an option for flagging undeclared
variables with warnings ("perl -w") or errors ("use strict") and Perl
docs I've seen advise using at least "perl -w" routinely. Those
didn't have much impact. Python already has a "global" declaration;
how does it de-Pythonize the language if there's also a "local"
declaration and an option to flag any variable that's not declared as
one or the other?

There's been a proposal from none other than GvR to add optional
static declarations to Python:

http://www.artima.com/weblogs/viewpost.jsp?thread=85551
Oct 4 '05 #46
On Tue, 2005-10-04 at 11:43 -0700, Paul Rubin wrote:
What's the big deal? Perl has an option for flagging undeclared
variables with warnings ("perl -w") or errors ("use strict") and Perl
docs I've seen advise using at least "perl -w" routinely. Those
didn't have much impact. Python already has a "global" declaration;
how does it de-Pythonize the language if there's also a "local"
declaration and an option to flag any variable that's not declared as
one or the other?


I would be happy with a "local" option. e.g.

def myfunc():
    local spam = ...
    local eggs = ...
    global juice

    breakfast = juice + spam + eggs  # raises an exception (undeclared breakfast)
What I'm *afraid* of is:

def myfunc(MyClass myparam):
    int spam = 6
    str eggs

    # etc

i.e. typed declarations and type checking. This would annoy the heck
out of me.

Oct 4 '05 #47
marduk <us****@marduk.letterboxes.org> writes:
def myfunc(MyClass myparam):
    int spam = 6
    str eggs

    # etc

i.e. typed declarations and type checking. This would annoy the heck
out of me.


It could be like Lisp, which has optional type declarations. If you
use the type declarations, the compiler can enforce them through
either static analysis or runtime checks, or alternatively to runtime
checks it can generate optimized code that crashes if the data has the
wrong type at runtime. If you don't use the declarations, you get the
usual dynamically typed data.
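
A rough Python analogue of the run-time-check half (purely illustrative; Python has no such declarations, so an assert stands in for one):

>>> def area(w, h):
...     assert isinstance(w, float) and isinstance(h, float)   # 'declaration', enforced at run time
...     return w * h
...
>>> area(2.0, 3.0)
6.0
>>> area(2, 3)
Traceback (most recent call last):
  ...
AssertionError

Drop the assert and you are back to the usual dynamically typed behaviour.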
Oct 4 '05 #48
C++ and C# are converging with implicitly typed languages to the
extent that many declarations will be able to omit types. In the next
C++ standard and in C# 3.0 it may be possible to write, where Fn is a
function returning any particular type:

auto spam = Fn(); // C++0x
var spam = Fn(); // C# 3.0

http://www.research.att.com/~bs/rules.pdf
http://msdn.microsoft.com/vcsharp/future/
(most useful is the C# 3.0 language spec Word document)

Neil
Oct 5 '05 #49
On 2005-10-04, Mike Meyer <mw*@mired.org> wrote:
Antoon Pardon <ap*****@forel.vub.ac.be> writes:
On 2005-10-03, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Mon, 03 Oct 2005 13:58:33 +0000, Antoon Pardon wrote:
Declarations also allow easier writable closures. Since the declaration
happens at a certain scope, the run time can easily find the correct
scope when a variable is rebound.


If it happens at runtime, then you can do it without declarations:
they're gone by then.


That depends on how they are implemented. "declarations" can be
executable statements.

It is not about whether we can do without them. It is about whether
they are helpful or not. Python would be a whole different language
if it never adopted something it could do without.
Come to think of it, most functional languages -
which are the languages that make the heaviest use of closures - don't
require variable declarations.
But AFAIK they don't work like Python, which makes any variable
that is assigned to in a function local. Which is a problem
if you want a writable closure.
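
For readers who haven't hit this: a minimal sketch of the usual workaround, wrapping the shared value in a mutable container (names invented):

>>> def counter():
...     count = [0]                   # a list, so the closure can mutate without rebinding
...     def increment():
...         count[0] = count[0] + 1   # mutation, not assignment to 'count' itself
...         return count[0]
...     return increment
...
>>> inc = counter()
>>> inc(), inc()
(1, 2)

Writing count = count + 1 inside increment would instead make count local there and fail with UnboundLocalError; that is the problem a declaration would solve.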
They also relieve a burden from the run-time: since all variables
are declared, the runtime doesn't have to check whether or not
a variable is accessible; it knows it is.


Not in a dynamic language. Python lets you delete variables at run
time, so the only way to know if a variable exists at a specific
point during the execution of an arbitrary program is to execute the
program to that point.


It is not perfect, but that doesn't mean it can't help. How much code
deletes variables?
And if you provide type information with the declaration, more
efficient code can be produced.


Only in a few cases. Type inferencing is a well-understood
technology, and will produce code as efficient as a statically typed
language in most cases.


I thought it was more than a few. Without some type information
from the coder, I don't see how you can infer types from library
code.
I think language matters shouldn't be settled by personal preferences.


I have to agree with that. For whether or not a feature should be
included, there should either be a solid reason dealing with the
functionality of the language - meaning you should have a set of use
cases showing what a feature enables in the language that couldn't be
done at all, or could only be done clumsily, without the feature.


I think this is too strict. Decorators would IMO never have made it.
The old way to do it was certainly not clumsy IME.

I think that a feature that could be helpful in reducing
errors should be a candidate even if it has no other merits.
Except declarations don't add functionality to the language. They
affect the programming process.
It would be one way to get writable closures in the language.
That is added functionality.
And we have conflicting claims about
whether that's a good effect or not, all apparently based on nothing
solider than personal experience. Which means the arguments are just
personal preferences.
Whether the good effect is good enough is certainly open for debate.
But the opponents seem to argue that since it is no absolute guarantee,
it is next to useless. Well I can't agree with that kind of argument
and will argue against it.
Antoon, at a guess I'd say that Python is the first time you've
encountered a dynamic language. Being "horrified" at not having
variable declarations (standard behaviour in such languages
dating back to the 1950s) is one such indication.
No, I'm not horrified at not having variable declarations. I'm in
general very practical with regard to programming, and use whatever
features a language offers me. However, that doesn't stop me from
thinking: hey, if language X had feature F from language Y,
that could be helpful.

Now if the developers think such a feature is not important enough,
fine by me. It is however something different if people start
arguing that feature F is totally useless. My impression is
that a number of people regard Python, or at least some aspects
of it, as holy, and that suggesting that some specific features
could be useful is considered sacrilege.
Dynamic languages tend to express a much wider range of programming
paradigms than languages that are designed to be statically
compiled. Some of these paradigms do away with - or relegate to the
level of "ugly performance hack" - features that someone only
experienced with something like Pascal would consider
essential. Assignment statements are a good example of that.
I think we should get rid of thinking about a language as
static or dynamic. It is not the language which should determine
a static or dynamic approach, it is the problem you are trying
to solve. And if the coder thinks that a static approach is
best for his problem, why shouldn't he solve it that way?

That a language allows a static approach too doesn't contradict
that it can work dynamically. Every time a static feature is
suggested, some dynamic folks react as if the dynamic aspect
of Python is in peril.
Given these kinds of differences, prior experience is *not* a valid
reason for thinking that some difference must be wrong. Until you have
experience with the language in question, you can't really decide that
some feature being missing is intolerable. You're in the same position
as the guy who told me that a language without a goto would be
unusable based on his experience with old BASIC, FORTRAN IV and
assembler.


There seems to be some misunderstanding. I don't remember stating that
missing declarations are intolerable; I certainly don't think so. I
wouldn't have been programming in Python for over five years now if I
thought so. But that doesn't mean having the possibility to
declare is useless.

--
Antoon Pardon
Oct 5 '05 #50

