Bytes IT Community

OO in Python? ^^

Hi,

sorry for my ignorance, but after reading the Python tutorial on
python.org, I'm sort of, well surprised about the lack of OOP
capabilities in python. Honestly, I don't even see the point at all of
how OO actually works in Python.

For one, is there any good reason why I should ever inherit from a
class? ^^ There is no functionality to check if a subclass correctly
implements an inherited interface and polymorphism seems to be missing
in Python as well. I kind of can't imagine in which circumstances
inheritance in Python helps. For example:

class Base:
    def foo(self): # I'd like to say that children must implement foo
        pass

class Child(Base):
    pass # works

Does inheritance in Python boil down to a mere code sharing?

And how do I formulate polymorphism in Python? Example:

class D1(Base):
    def foo(self):
        print "D1"

class D2(Base):
    def foo(self):
        print "D2"

obj = Base() # I want a base class reference which is polymorphic
if (<need D1>):
    obj = D1()
else:
    obj = D2()

I could as well leave the whole inheritance stuff out and the program
would still work (?).

Please give me hope that Python is still worth learning :-/

Regards,
Matthias
Dec 10 '05 #1
86 Replies


Matthias Kaeppler wrote:
<snip a whole lot of talk of someone still thinking in terms of C>


Let this enlighten your way, young padawan:

modelnine@phoenix ~/gtk-gnutella-downloads $ python
Python 2.4.2 (#1, Oct 31 2005, 17:45:13)
[GCC 3.4.4 (Gentoo 3.4.4-r1, ssp-3.4.4-1.0, pie-8.7.8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!


And, _do_ (don't only read) the tutorial, and you'll understand why the
short example code you posted isn't pythonic, to say the least:
http://www.python.org/doc/2.4.2/tut/tut.html
and why inheritance in Python is necessary, but on a whole different level
from what you're thinking.

Oh, and on a last note, if you're german, you might as well join
de.comp.lang.python.

--- Heiko.
Dec 10 '05 #2

Matthias Kaeppler wrote:
polymorphism seems to be missing in Python


QOTW!

</F>

Dec 10 '05 #3

Matthias Kaeppler wrote:
class Base:
    def foo(self): # I'd like to say that children must implement foo
        pass

def foo(self):
    raise NotImplementedError("Subclasses must implement foo")

Now calling foo on a child instance will fail if it hasn't implemented foo.
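Putting Brian's suggestion next to a forgetful subclass shows when the error actually surfaces (Good and Forgetful are invented names for illustration):

```python
class Base:
    def foo(self):
        raise NotImplementedError("Subclasses must implement foo")

class Good(Base):
    def foo(self):
        return "Good.foo"

class Forgetful(Base):
    pass  # never implements foo

# The missing method is not detected when the class is defined...
assert Good().foo() == "Good.foo"

# ...but only when foo is actually called on an instance:
try:
    Forgetful().foo()
except NotImplementedError as exc:
    print("caught at call time:", exc)
```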
And how do I formulate polymorphism in Python? Example:

class D1(Base):
    def foo(self):
        print "D1"

class D2(Base):
    def foo(self):
        print "D2"

obj = Base() # I want a base class reference which is polymorphic
if (<need D1>):
    obj = D1()
else:
    obj = D2()


I have no idea what you're trying to do here and how it relates to
polymorphism.

--
Brian Beck
Adventurer of the First Order
Dec 10 '05 #4

Fredrik Lundh wrote:
Matthias Kaeppler wrote:
polymorphism seems to be missing in Python


QOTW!


Let's have some UQOTW: the un-quote of the week! ;-)

--- Heiko.
Dec 10 '05 #5

Brian Beck wrote:

class D1(Base):
    def foo(self):
        print "D1"

class D2(Base):
    def foo(self):
        print "D2"

obj = Base() # I want a base class reference which is polymorphic
if (<need D1>):
    obj = D1()
else:
    obj = D2()


I have no idea what you're trying to do here and how it relates to
polymorphism.


He's translating C++ code directly to Python. obj = Base() creates a
variable of type Base, to which you can assign different object types (D1(),
D2()) which implement the Base interface (are derived from Base).
Err... At least I think it's what this code is supposed to mean...

In C++ you'd do:

Base *baseob;

if( <i want d1> ) {
    baseob = (Base*)new D1();
} else {
    baseob = (Base*)new D2();
}

baseob->foo();

(should, if foo is declared virtual in Base, produce "d1" for D1, and "d2"
for D2)

At least IIRC, it's been quite some time since I programmed C++... ;-)
*shudder*

--- Heiko.
Dec 10 '05 #6

In article <dn*************@news.t-online.com>,
Matthias Kaeppler <vo**@void.com> wrote:
...
obj = Base() # I want a base class reference which is polymorphic
obj now refers to an instance of Base.
if (<need D1>):
    obj = D1()

obj now refers to an instance of D1(). The Base instance is
unreferenced.

else:
    obj = D2()

obj now refers to an instance of D2(). The Base instance is
unreferenced.

Note that there is no code path that results in obj still referring to
an instance of Base. Unless making a Base had side effects, there is no
use in the first line.

I could as well leave the whole inheritance stuff out and the program
would still work (?).
That program might.

Please give me hope that Python is still worth learning :-/


Python has inheritance and polymorphism, implemented via dictionaries.
Python's various types of namespace are implemented with dictionaries.

Type this in to the Python interpreter:

class Base:
    def foo(self):
        print 'in Base.foo'

class D1(Base):
    def foo(self):
        print 'in D1.foo'
        Base.foo(self)

class D2(Base):
    def foo(self):
        print 'in D2.foo'
        Base.foo(self)

def makeObj():
    return needD1 and D1() or D2()

needD1 = True
makeObj().foo()

needD1 = False
makeObj().foo()
________________________________________________________________________
TonyN.:' *firstname*nlsnews@georgea*lastname*.com
' <http://www.georgeanelson.com/>
Dec 10 '05 #7

> "Matthias" == Matthias Kaeppler <vo**@void.com> writes:

Matthias> sorry for my ignorance, but after reading the Python
Matthias> tutorial on python.org, I'm sort of, well surprised about
Matthias> the lack of OOP capabilities in python. Honestly, I don't
Matthias> even see the point at all of how OO actually works in
Matthias> Python.

It's very common for Python newbies, especially those with backgrounds
in languages such as C++, Java etc. to not really 'get' the Python way
of handling types until they've had a fair amount of experience with
Python. If you want to program Pythonically, you must first unlearn a
number of things.

For instance, in e.g. the Java tradition, if a function needs a
triangle object, it'll take a triangle object as an argument. If it
can handle any type of shape, it'll either take a shape base class
instance as an argument or there'll be some kind of shape interface that
it can take. Argument types are strictly controlled. Not so with
Python. A Python solution will typically take any type of object as an
argument so long as it behaves as expected, and if it doesn't, we deal
with the resulting exception (or don't, depending on what we're trying
to accomplish). For instance, if the function from before that wants a
shape really only needs to call an area method, anything with an area
method can be used successfully as an argument.
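Martin's shape example, sketched concretely (all class and function names here are illustrative, not from any real library):

```python
import math

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):
    # No base class or interface declaration required: anything
    # that responds to .area() can be passed in.
    return sum(shape.area() for shape in shapes)

print(total_area([Circle(1), Square(2)]))
```

If an object without an area method slips in, the call fails with an AttributeError at that point, which is the exception-based handling Martin describes.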

Some have dubbed this kind of type check 'duck typing': if it walks
like a duck and quacks like a duck, chances are it'll be a duck. To
those who are used to (more or less) strong, static type checks, this
will seem a reckless approach, but it really works rather well, and
subtle type errors are, in my experience, as rare in Python as in any
other language. In my opinion, the tricks the C*/Java people
occasionally do to get around the type system, such as casting to the
fundamental object type, are worse because they're seldom expected and
resulting errors thus typically more subtle.

In my very first post on this news group a number of years ago, I
asked for an equivalent of Java's interfaces. The only reply I got was
that I didn't need them. While the reason was very obvious, even with
what I knew about Python, it still took a while to sink in. From what
I can tell, you're in somewhat the same situation, and the two of us
are far from unique. As I said in the beginning, Python newbies with a
background in statically typed languages typically have a lot to
unlearn, but in my opinion, it's well worth it.
Martin
Dec 11 '05 #8

Heiko Wundram wrote:
Matthias Kaeppler wrote:
<snip a whole lot of talk of someone still thinking in terms of C>

Well, unless you are (or he is) in with the GNOME crowd, C probably
isn't really the object-oriented language acting as inspiration here.

[Zen of Python]

Of course the ZoP (Zen of Python) is deep guidance for those
languishing in some design dilemma or other, but not exactly helpful,
concrete advice in this context. (I'm also getting pretty jaded with
the recent trend of the ZoP being quoted almost once per thread on
comp.lang.python, mostly as a substitute for any real justification of
Python's design or any discussion of the motivations behind its
design.) That said, the questioner does appear to be thinking of
object-oriented programming from a statically-typed perspective, and
I'd agree that, ZoP or otherwise, a change in perspective and a
willingness to accept other, equally legitimate approaches to
object-orientation will lead to a deeper understanding and appreciation
of the Python language.

Anyway, it appears that the questioner is confusing declarations with
instantiation, amongst other things:
And how do I formulate polymorphism in Python? Example:

class D1(Base):
    def foo(self):
        print "D1"

class D2(Base):
    def foo(self):
        print "D2"

obj = Base() # I want a base class reference which is polymorphic
Well, here one actually gets a reference to a Base object. I know that
in C++ or Java, you'd say, "I don't care exactly what kind of Base-like
object I have right now, but I want to be able to hold a reference to
one." But in Python, this statement is redundant: names/variables
potentially refer to objects of any type; one doesn't need to declare
what type of objects a name will refer to.
if (<need D1>):
    obj = D1()
else:
    obj = D2()
Without the above "declaration", this will just work. If one needs an
instance of D1, one will assign a new D1 object to obj; otherwise, one
will assign a new D2 object to obj. Now, when one calls the foo method
on obj, Python will just find whichever implementation of that method
exists on obj and call it. In fact, when one does call the method, some
time later in the program, the object held by obj doesn't even need to
be instantiated from a related class: as long as the foo method exists,
Python will attempt to invoke it, and this will even succeed if the
arguments are compatible.

All this is quite different to various other object-oriented languages
because many of them use other mechanisms to find out whether such a
method exists for any object referred to by the obj variable. With such
languages, defining a base class with the foo method and defining
subclasses with that method all helps the compiler to determine whether
it is possible to find such a method on an object referred to by obj.
Python bypasses most of that by doing a run-time check and actually
looking at what methods are available just at the point in time a
method is being called.
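A small sketch of that run-time lookup, adapted from the thread's example with return values instead of prints so the results are easy to check (Unrelated is an invented class):

```python
class Base:
    def foo(self):
        return "Base"

class D1(Base):
    def foo(self):
        return "D1"

class Unrelated:
    # No relationship to Base whatsoever.
    def foo(self):
        return "Unrelated"

for obj in (D1(), Unrelated()):
    # The lookup happens at call time: Python simply asks the object
    # for a 'foo' attribute, so inheriting from Base is optional.
    print(obj.foo())
```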
I could as well leave the whole inheritance stuff out and the program would still work
(?).
Correct. Rewinding...
Does inheritance in Python boil down to a mere code sharing?


In Python, inheritance is arguably most useful for "code sharing", yes.
That said, things like mix-in classes show that this isn't as
uninteresting as one might think.
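A mix-in is a small class meant only to be inherited for its methods; a minimal sketch with invented names:

```python
class ComparableMixin:
    # Derives the other rich comparisons from the host class's own __lt__.
    def __le__(self, other):
        return self < other or not (other < self)
    def __gt__(self, other):
        return other < self
    def __ge__(self, other):
        return not (self < other)

class Version(ComparableMixin):
    def __init__(self, num):
        self.num = num
    def __lt__(self, other):
        return self.num < other.num

# Version only defines __lt__; the mix-in supplies the rest.
assert Version(1) < Version(2)
assert Version(3) > Version(2)
assert Version(2) >= Version(2)
```

The inheritance here shares behaviour, not a type contract, which is exactly the "code sharing" sense discussed above.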

Paul

Dec 11 '05 #9

Heiko Wundram wrote:
Fredrik Lundh wrote:
Matthias Kaeppler wrote:
polymorphism seems to be missing in Python


QOTW!


Let's have some UQOTW: the un-quote of the week! ;-)


+1
Dec 11 '05 #10

Paul Boddie wrote:
Heiko Wundram wrote:
Matthias Kaeppler wrote:
> <snip a whole lot of talk of someone still thinking in terms of C>

Well, unless you are (or he is) in with the GNOME crowd, C probably
isn't really the object-oriented language acting as inspiration here.


Pardon this glitch, I corrected it in a followup-post somewhere along the
line, it's been some time since I've last used C/C++ for more than just
Python module programming and as such the term C has come to be synonymous
to "everything not Python" for me. ;-)
[Zen of Python]


I find pointing to the ZoP pretty important, especially for people who start
to use the language. I know the hurdle that you have to overcome when you
grew up with a language which forces static typing on you (I learnt Pascal
as my first language, then used C, C++ and Java extensively for quite some
time before moving on to Perl and finally to Python), and when I started
using Python I had just the same feeling of "now why doesn't Python do this
like C++ does it, I lose all my security?" or something similar.

What got me thinking was reading the ZoP and seeing the design criteria for
the language. That's what actually made me realize why Python is the way it
is, and since that day I am at ease with the design decisions because I can
rationally understand and grip them and use the design productively. Look
at namespaces: I always understood what a namespace was (basically a
dictionary), but it took Tim Peters three lines

"""
Simple is better than complex.
Flat is better than nested.
Namespaces are one honking great idea -- let's do more of those!
"""

to actually get a hint at what the designer thought about when he
implemented namespaces as they are now, with the simplicity that they
actually have. It's always better to follow the designer's thoughts about
something he implemented than to just learn that something is the way it is
in a certain language.

I still have that awkward feeling for Perl. TIMTOWTDI just doesn't cut it
when it's yelled at me, I still can't see a single coherent vision which
Larry Wall followed when he designed the language. That's why I decided to
drop it. ;-)

Maybe I'm assuming things by thinking that others also follow my line of
thought, but I've actually had very positive responses so far when telling
people that a certain feature is a certain way and then pointing them to
the ZoP, they all pretty much told me after a certain time of thought that
"the decision made sense now."

--- Heiko.
Dec 11 '05 #11

Heiko Wundram wrote:
Brian Beck wrote:
class D1(Base):
    def foo(self):
        print "D1"

class D2(Base):
    def foo(self):
        print "D2"

obj = Base() # I want a base class reference which is polymorphic
if (<need D1>):
    obj = D1()
else:
    obj = D2()


I have no idea what you're trying to do here and how it relates to
polymorphism.

He's translating C++ code directly to Python. obj = Base() creates a
variable of type Base, to which you can assign different object types (D1(),
D2()) which implement the Base interface (are derived from Base).
Err... At least I think it's what this code is supposed to mean...

In C++ you'd do:

Base *baseob;

if( <i want d1> ) {
    baseob = (Base*)new D1();
} else {
    baseob = (Base*)new D2();
}

baseob->foo();

(should, if foo is declared virtual in Base, produce "d1" for D1, and "d2"
for D2)

At least IIRC, it's been quite some time since I programmed C++... ;-)
*shudder*


Yes, that's what I tried to express (the cast to Base* is redundant here
by the way, since D1/D2 are also of type Base; you can always hold a
base class pointer to derived types without type conversion).

I have also read the other answers to my question, and I am really sorry
if I have sounded ignorant in my post, but it's harder than I thought to
move to a language where all these techniques one had learned in years
of hard work suddenly become redundant :)
I'm so used to statically typed languages that the shift is very
confusing. Looks as if it isn't as easy to learn Python after all, for
the mere reason of unlearning rules which don't apply in the world of
Python anymore (which seem to be quite a lot!).

Regards,
Matthias

Dec 11 '05 #12

Brian Beck wrote:
def foo(self):
    raise NotImplementedError("Subclasses must implement foo")


That's actually a good idea, though not as nice as a check at
"compile-time" (jesus, I'm probably talking in C++ speech again, is
there such a thing as compile-time in Python at all?!)

Another thing which is really bugging me about this whole dynamic
typing thing is that it seems very error prone to me:

foo = "some string!"

# ...

if (something_fubar):
    fo = "another string"

Oops, the last 'o' slipped, now we have a different object and the
interpreter will happily continue executing the flawed program.

I really see issues with this, can anyone comment on this who has been
working with Python more than just a day (like me)?

Regards,
Matthias
Dec 11 '05 #13

That was quite insightful Martin, thanks.

Regards,
Matthias
Dec 11 '05 #14

Hallöchen!

Matthias Kaeppler <vo**@void.com> writes:
[...]

Another thing which is really bugging me about this whole
dynamically typing thing is that it seems very error prone to me:

foo = "some string!"

# ...

if (something_fubar):
    fo = "another string"

Oops, the last 'o' slipped, now we have a different object and the
interpreter will happily continue executing the flawed program.

I really see issues with this, can anyone comment on this who has
been working with Python more than just a day (like me)?


There are even a couple of further checks which don't happen
(explicitly) in a dynamic language like Python, and which do happen
in most statically typed languages like C++. And yes, they are a
source of programming mistakes.

However, in everyday programming you don't feel this. I don't make
more difficult-to-find mistakes in Python than I used to make in my
C++ code. But what you do feel is the additional freedom that the
dynamic approach gives to you.

Basically it's a matter of taste and purpose whether you want to be
controlled heavily or not. Python is particularly liberal, which I
appreciate very much.

Tschö,
Torsten.

--
Torsten Bronger, aquisgrana, europa vetus ICQ 264-296-646
Dec 11 '05 #15

Matthias Kaeppler wrote:
Brian Beck wrote:
def foo(self):
    raise NotImplementedError("Subclasses must implement foo")

That's actually a good idea, though not as nice as a check at
"compile-time" (jesus, I'm probably talking in C++ speech again, is
there such a thing as compile-time in Python at all?!)

Another thing which is really bugging me about this whole dynamically
typing thing is that it seems very error prone to me:

foo = "some string!"

# ...

if (something_fubar):
    fo = "another string"

Oops, the last 'o' slipped, now we have a different object and the
interpreter will happily continue executing the flawed program.

I really see issues with this, can anyone comment on this who has been
working with Python more than just a day (like me)?

Regards,
Matthias


Matthias,

maybe this article is of interest for you:
http://www.mindview.net/WebLog/log-0025

Regards,
Ernst
Dec 11 '05 #16

On Sun, 11 Dec 2005 10:02:31 +0100, Matthias Kaeppler wrote:
Brian Beck wrote:
def foo(self):
    raise NotImplementedError("Subclasses must implement foo")
That's actually a good idea, though not as nice as a check at
"compile-time" (jesus, I'm probably talking in C++ speech again, is
there such a thing as compile-time in Python at all?!)


Yes. Python, like Java, compiles your code into byte code which is then
executed by the Python runtime engine.

Back in the early days of computing the distinction between compiled
languages and interpreted languages might have been meaningful, but not so
much today. There is significant fuzzy overlap between the two.
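That compile step can even be observed directly with the built-in compile() function:

```python
# Python does have a compile step: source text is compiled to a code
# object (byte code) before anything executes.
code = compile("x = 1 + 2", "<example>", "exec")
ns = {}
exec(code, ns)
assert ns["x"] == 3

# Syntax errors are caught at this compile step, before any execution:
try:
    compile("def broken(:", "<example>", "exec")
except SyntaxError:
    print("caught at compile time")
```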
Another thing which is really bugging me about this whole dynamically
typing thing is that it seems very error prone to me:

foo = "some string!"

# ...

if (something_fubar):
    fo = "another string"

Oops, the last 'o' slipped, now we have a different object and the
interpreter will happily continue executing the flawed program.
Yes, this is one specific error which the Python compiler won't pick up
for you. However, it will pick up the related error:

foo = "some string!"
# ...
if (something_fubar):
    bar = fo # should be foo

assuming there isn't some other name fo which already exists.
I really see issues with this, can anyone comment on this who has been
working with Python more than just a day (like me)?


Python works well with test-driven development. Test-driven development
will pick up this sort of error, and many other errors too, with less
effort and more certainty than compile-time checking. The problem with
static typed languages is that they make the programmer do a lot of the
work, just so the compiler can pick up a few extra errors at compile time
rather than at run time.

But since the compiler can't save you from run time errors, you still
should be doing test-driven development. But if you are doing test-driven
development, the value of the static type checking is rather low.

After all, "does it compile?" is only one test out of many, and really the
least important. Lots of code compiles that doesn't work; however no code
that works can possibly fail to compile.
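A tiny sketch of that point, using the typo example from earlier in the thread (make_label is an invented wrapper function):

```python
def make_label(something_fubar):
    foo = "some string!"
    if something_fubar:
        fo = "another string"  # typo: 'fo' should be 'foo'
    return foo

# This "compiles" and runs without complaint, but even one
# behavioural test exposes the bug immediately:
try:
    assert make_label(True) == "another string"
except AssertionError:
    print("test failed: the typo left 'foo' unchanged")
```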

Having said that, static type checking is not utterly useless, and rumour
has it that Python may some day include some version of it.

Oh, if you are tempted to fill your Python code with manual type checks,
using type() or isinstance(), I suggest that you learn about "duck typing"
first. Otherwise known as "latent typing".
--
Steven.

Dec 11 '05 #18


Steven D'Aprano wrote:
Python works well with test-driven development. Test-driven development
will pick up this sort of error, and many other errors too, with less
effort and more certainty than compile-time checking. The problem with
static typed languages is that they make the programmer do a lot of the
work, just so the compiler can pick up a few extra errors at compile time
rather than at run time.

Any language would benefit from test-driven development; Python
needs it because of its dynamic nature.

And I don't think Haskell makes the programmer do a lot of work (just
because of its static type checking at compile time).

Dec 11 '05 #19

On 12/11/05, Matthias Kaeppler <vo**@void.com> wrote:
Brian Beck wrote:
def foo(self):
    raise NotImplementedError("Subclasses must implement foo")
That's actually a good idea, though not as nice as a check at
"compile-time" (jesus, I'm probably talking in C++ speech again, is
there such a thing as compile-time in Python at all?!)

Another thing which is really bugging me about this whole dynamically
typing thing is that it seems very error prone to me:

foo = "some string!"

# ...

if (something_fubar):
    fo = "another string"

Oops, the last 'o' slipped, now we have a different object and the
interpreter will happily continue executing the flawed program.

I really see issues with this, can anyone comment on this who has been
working with Python more than just a day (like me)?


You are totally correct and this does cause errors. However, I'd like
you to take a few minutes and go back over all your C and C++ and Java
code and make note of how many lines of code and how many complicated
constructs you've used over the years to subvert the type system in
those languages. In my experience, and in many other people's
experience, dynamic ducky typing saves far more work than it costs,
even when you factor in "typo bugs" which would be prevented by static
typing. That's not even counting bugs in the extra code you've written
to work around the compiler. In fact, I'd say that a significant
portion of my time in writing C++ code is spent convincing the
compiler to do what I want, and not figuring out what I want. In
Python, the opposite is true. My experience in that regard is pretty
typical - you'll find scores of stories on the web of C++ programmers
who switched to Python and breathed a sigh of relief.

All that said, there are automated checkers that can assist in
avoiding these kind of
bugs (as well as several others). Look into PyChecker and PyLint.


Dec 11 '05 #20

Matthias Kaeppler wrote:
I really see issues with this, can anyone comment on this who has been
working with Python more than just a day (like me)?


Maybe you should work with Python more than one day before you
start looking for potential problems? ;-)

(I suggest reimplementing some portion of some C++ program you've
worked on recently, to get a feel for the language.)

For any potential problem you can think of, you'll find people here
who've never ever had any problems with it, and you'll find people
who think that this specific problem is what prevents Python from
going "mainstream" (and who love when someone else seems to
support their view, whether they really do it or not).

FWIW, having worked full time in and on Python for over 10 years, I
can assure you that I don't have a problem with:

- mistyped variable names
- indentation ("SUCKS BIG TIME")
- how to handle import statements
- finding things in the library reference
- the blue color on python.org
- the size of the python DLL on windows
- tabs vs. spaces
- unnecessary colons
- lambdas being removed in python 3.0
- lambdas not being removed in python 3.0
- limited support for GIF animation in PIL
- the noise level on comp.lang.python
- the global interpreter lock
- the unsuitability of notepad as a programmer editor
- the number of web frameworks available

or any of the other "major" problems that you'll hear about on c.l.python
from time to time. But that's me. Your mileage may vary. The only way
to find out is to use the language. Write code, not usenet posts.

</F>

Dec 11 '05 #21

In article <dn*************@news.t-online.com>,
Matthias Kaeppler <"matthias at finitestate dot org"> wrote:

Another thing which is really bugging me about this whole dynamically
typing thing is that it seems very error prone to me:

foo = "some string!"

# ...

if (something_fubar):
    fo = "another string"

Oops, the last 'o' slipped, now we have a different object and the
interpreter will happily continue executing the flawed program.


pychecker (or pylint, but I haven't tried that)
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

"Don't listen to schmucks on USENET when making legal decisions. Hire
yourself a competent schmuck." --USENET schmuck (aka Robert Kern)
Dec 11 '05 #22

On Sun, 11 Dec 2005 05:48:00 -0800, bonono wrote:

Steven D'Aprano wrote:
Python works well with test-driven development. Test-driven development
will pick up this sort of error, and many other errors too, with less
effort and more certainty than compile-time checking. The problem with
static typed languages is that they make the programmer do a lot of the
work, just so the compiler can pick up a few extra errors at compile time
rather than at run time.
Any language would benefit from test-driven development; Python
needs it because of its dynamic nature.


We can use "need" in the strict sense, as in "the language won't work
without it". I think we can reject that as too strong, because clearly
Python can work without unittests or any other sort of testing.

In the looser sense of "this will benefit you", I think it is fair to say
that *all* languages need test-driven development. If you want your code
to do some non-trivial task X, and you don't actually test to see if
it really does do X, then all the compiler tests in the world won't tell
you that your code is doing X.

Of course, the IT world is full of people writing code and not testing
it, or at least not testing it correctly. That's why there are frequent
updates or upgrades to software that break features that worked in the
older version. That would be impossible in a test-driven methodology, at
least impossible to do by accident.
And I don't think Haskell makes the programmer do a lot of work (just
because of its static type checking at compile time).


I could be wrong, but I think Haskell is *strongly* typed (just like
Python), not *statically* typed. At least the "What Is Haskell?" page at
haskell.org describes the language as strongly typed, non-strict, and
allowing polymorphic typing.

--
Steven.

Dec 11 '05 #23


Steven D'Aprano wrote:
And I don't think Haskell makes the programmer do a lot of work (just
because of its static type checking at compile time).


I could be wrong, but I think Haskell is *strongly* typed (just like
Python), not *statically* typed. At least the "What Is Haskell?" page at
haskell.org describes the language as strongly typed, non-strict, and
allowing polymorphic typing.

What is your definition of statically typed? "Non-strict", as far as
I know, is not referring to type checking. Haskell does check types at
compile time, though its polymorphic typing is quite different from
languages like C or Java.

Dec 11 '05 #24

D H
Fredrik Lundh wrote:
Write code, not usenet posts.


QOTW!
Dec 11 '05 #25


Ernst Noch wrote:
Matthias Kaeppler wrote:
Brian Beck wrote:
def foo(self):
    raise NotImplementedError("Subclasses must implement foo")

That's actually a good idea, though not as nice as a check at
"compile-time" (jesus, I'm probably talking in C++ speech again, is
there such a thing as compile-time in Python at all?!)

Another thing which is really bugging me about this whole dynamically
typing thing is that it seems very error prone to me:

foo = "some string!"

# ...

if (something_fubar):
    fo = "another string"

Oops, the last 'o' slipped, now we have a different object and the
interpreter will happily continue executing the flawed program.

I really see issues with this, can anyone comment on this who has been
working with Python more than just a day (like me)?

Regards,
Matthias


Matthias,

maybe this article is of interest for you:
http://www.mindview.net/WebLog/log-0025

And two related ones.

http://www.algorithm.com.au/mt/progr...ic_typing.html

Just follow the links.

Dec 11 '05 #26

gene tani wrote:
http://naeblis.cx/rtomayko/2004/12/1...c-method-thing
http://dirtsimple.org/2004/12/java-i...on-either.html
http://dirtsimple.org/2004/12/python-is-not-java.html

http://idevnews.com/PrintVersion_Cas...&Go2=Go&ID=118
http://www.idevnews.com/PrintVersion...cks.asp?ID=107
http://www.eros-os.org/pipermail/e-l...ne/005337.html


First of all, thanks everybody for posting all these answers and links,
much appreciated. What Bruce Eckel wrote about dynamic typing was quite
convincing and reasonable.

I stumbled over this paragraph in "Python is not Java", can anyone
elaborate on it:

"In Java, you have to use getters and setters because using public
fields gives you no opportunity to go back and change your mind later to
using getters and setters. So in Java, you might as well get the chore
out of the way up front. In Python, this is silly, because you can start
with a normal attribute and change your mind at any time, without
affecting any clients of the class. So, don't write getters and setters."
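In code, the article's advice looks roughly like this; a minimal sketch with invented names, using the @property setter syntax of Python versions later than the 2.4 discussed here:

```python
class Thermostat:
    def __init__(self, celsius):
        self.celsius = celsius  # starts life as a plain attribute

thermo = Thermostat(20)
thermo.celsius = 25             # clients use plain attribute access

# Later, if validation becomes necessary, swap in a property --
# client code like the lines above keeps working unchanged:
class CheckedThermostat:
    def __init__(self, celsius):
        self.celsius = celsius

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._celsius = value

checked = CheckedThermostat(20)
checked.celsius = 25            # same syntax as the plain attribute
```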

Why would I want to use an attribute in Python, where I would use
getters and setters in Java? I know that encapsulation is actually just
a hack in Python (come on, "hiding" an implementation detail by prefixing
it with the classname so you can't access it by its name anymore? Gimme
a break...), but is that a reason to only write white box classes? ^^

- Matthias
Dec 11 '05 #27

bo****@gmail.com wrote:
Just follow the links.


I'll try ;-)

Dec 11 '05 #28

On Sun, 11 Dec 2005 07:10:27 -0800, bonono wrote:

Steven D'Aprano wrote:
> And I don't think Haskell make the programmer do a lot of work(just
> because of its static type checking at compile time).


I could be wrong, but I think Haskell is *strongly* typed (just like
Python), not *statically* typed. At least the "What Is Haskell?" page at
haskell.org describes the language as strongly typed, non-strict, and
allowing polymorphic typing.

What is your definition of statically typed? The "non-strict", as far as
I know, is not referring to type checking. Haskell does check types at
compile time, though quite differently from languages like C or Java,
given its polymorphic typing.


Strongly typed means that objects have a type. All objects in Python have
a type.

Strongly typed languages like Python forbid you from performing operations
on mismatched types, e.g. 1 + "1" does not work. In order to perform
operations on mismatched types, you must explicitly perform a conversion,
e.g. 1 + int("1").

Weakly typed languages do not prevent you performing operations on
mismatched types, e.g. something like 1 + "1" is allowed in languages like
Basic and Perl.

Untyped languages do not have any type information at all -- everything
is just bytes. The most obvious example is assembly language.

It should be noted that strong and weak typing is a matter of degree:
despite being mostly strongly typed, Python does do automatic coercion of
ints and floats, and although it is (arguably) weakly typed, Perl won't
allow you to treat scalars as arrays or vice versa.

Dynamic typing means that variables can be dynamically set to objects of
wildly different types. For Python, we would say that any name can be
bound to any object of any type.

Static typing is the opposite of dynamic typing. Once a variable or
name is defined as a certain type (either by a declaration, or
implicitly the first time it is used), it can only be assigned to values
of that same type.
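A quick sketch of the distinction just described, in plain Python (the names here are illustrative):

```python
# Names rebind freely (dynamic typing)...
x = 1          # x bound to an int
x = "one"      # ...and rebound to a str without complaint

# ...but operations on mismatched types are rejected (strong typing).
try:
    result = 1 + "1"           # raises TypeError
except TypeError:
    result = 1 + int("1")      # an explicit conversion is required

assert result == 2
```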

These two articles may be helpful:

http://www.voidspace.org.uk/python/a...k_typing.shtml
http://www.artima.com/forums/flat.js...06&thread=7590
A thoughtful defence of static typing is here:

http://www.xoltar.org/misc/static_typing_eckel.html

The fact that it is sub-titled "How Java/C++/C# Ruin Static Typing for the
Rest of Us" should give some idea what it is about.
--
Steven.

Dec 11 '05 #29

Heiko Wundram wrote:
Maybe I'm assuming things by thinking that others also follow my line of
thought, but I've actually had very positive responses so far when telling
people that a certain feature is a certain way and then pointing them to
the ZoP, they all pretty much told me after a certain time of thought that
"the decision made sense now."


Sorry to come across all harsh on the subject. Perhaps you know people
who are more meditative than I do, but I notice that you served up some
concrete advice elsewhere in the thread, so if the ZoP doesn't provide
any guidance to the questioner as it is, at least there's something
else to read through and grasp.

Paul

Dec 11 '05 #30


Steven D'Aprano wrote:
On Sun, 11 Dec 2005 07:10:27 -0800, bonono wrote:

Steven D'Aprano wrote:
> And I don't think Haskell make the programmer do a lot of work(just
> because of its static type checking at compile time).

I could be wrong, but I think Haskell is *strongly* typed (just like
Python), not *statically* typed. At least the "What Is Haskell?" page at
haskell.org describes the language as strongly typed, non-strict, and
allowing polymorphic typing.
What is your definition of statically typed ? The non-strict as far as
I know is not referring to type checking. It does check type at compile
time though it is quite different from language like C, Java, the
polymorphic typing.


Strongly typed means that objects have a type. All objects in Python have
a type.

Strongly typed languages like Python forbid you from performing operations
on mismatched types, e.g. 1 + "1" does not work. In order to perform
operations on mismatched types, you must explicitly perform a conversion,
e.g. 1 + int("1").

Weakly typed languages do not prevent you performing operations on
mismatched types, e.g. something like 1 + "1" is allowed in languages like
Basic and Perl.


This much I know but it was not what we are talking about.

Untyped languages do not have any type information at all -- everything
is just bytes. The most obvious example is assembly language.

It should be noted that strong and weak typing is a matter of degree:
despite being mostly strongly typed, Python does do automatic coercion of
ints and floats, and although it is (arguably) weakly typed, Perl won't
allow you to treat scalars as arrays or vice versa.

Dynamic typing means that variables can be dynamically set to objects of
wildly different types. For Python, we would say that any name can be
bound to any object of any type.

Static typing is the opposite of dynamic typing. Once a variable or
name is defined as a certain type (either by a declaration, or
implicitly the first time it is used), it can only be assigned to values
of that same type.

These two articles may be helpful:

http://www.voidspace.org.uk/python/a...k_typing.shtml
http://www.artima.com/forums/flat.js...06&thread=7590
A thoughtful defence of static typing is here:

http://www.xoltar.org/misc/static_typing_eckel.html

The fact that it is sub-titled "How Java/C++/C# Ruin Static Typing for the
Rest of Us" should give some idea what it is about.

And you would see in the xoltar link that Haskell is a statically typed
language: you cannot call a function with arguments of incompatible
types, and this is checked at compile time.

Dec 11 '05 #31

Steven D'Aprano wrote:
Weakly typed languages do not prevent you performing operations on
mismatched types, e.g. something like 1 + "1" is allowed in languages like
Basic and Perl.


Actually, Perl and at least the version of BASIC that I previously used
are not weakly-typed languages either. The addition operator in Perl
uses coercion much as Python does/supports for some operand types, and
classic microcomputer BASICs generally suffixed variable names in order
to impose some kind of static typing. One classic example of a
weakly-typed language is BCPL, apparently, but hardly anyone has any
familiarity with it any more.

Paul

Dec 11 '05 #32

On Sun, 11 Dec 2005 17:05:16 +0100, Matthias Kaeppler wrote:
Why would I want to use an attribute in Python, where I would use
getters and setters in Java?
Oh boy! I've just come out of a rather long thread about that very issue.
If you care enough to read a bunch of people arguing past each other,
check the thread "Another newbie question", especially the posts about the
so-called Law of Demeter.

But for the short summary: suppose I write a class:

class Parrot:
    def __init__(self, x):
        self._x = x
        print "please use instance.getx() and setx()"
    def getx(self):
        return self._x
    def setx(self, x):
        self._x = x
What if I want to export methods of x other than get and set? Perhaps x
is a numeric value: now I have to export a whole series of arithmetic
operators: addx, subx, mulx, etc. And there may be other attributes I need
to export as well...

When I write getter and setter methods for an attribute, I make myself
responsible for maintaining everything about that attribute. I create
extra functions that need to be tested, debugged and documented. I expect
my class users to learn how to use my class, and every method is another
method they have to learn about. Writing getter and setter methods have
costs -- I pay some of those costs, and my users pay some of those costs.

Then I get fed up and write this class instead:

class NorwegianBlue:
    def __init__(self, x):
        self.x = x
        print "please use public attribute instance.x"

NorwegianBlue is a much smaller class. That means less development time
for me, less test code to write, less documentation, less time needed for
my users to learn the API -- they already know what the API for x is,
because they supplied it. The costs to both class designer and class
user are much less, the flexibility is greater, and I finish writing the
class quicker.

Is there a disadvantage to NorwegianBlue? Yes -- I am now locked in to one
particular interface. I can't even do something as trivial as change the
name of attribute x. But then I can't change the name of Parrot.getx
either, not without breaking people's code.

But still, sometimes I know that I will want to -- or even that I might
want to -- radically change the implementation of my class. Perhaps x is a
list now, but later it will be a binary tree. How much effort will be
needed to fix the breakage if NorwegianBlue changes? If the expected
effort to fix the breakage is more than the effort to write setters and
getters, then it makes sense to write setters and getters.

But contrariwise, if it will take less effort to fix the problem than to
defend against it, then why defend against it? Getters and setters are
insurance against you changing the private interface. You wouldn't pay
$20,000 a year to protect a $20,000 car. You might not even pay $1,000 a
year.

So do your trade-offs, and make your decision.
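For instance (a hedged sketch reusing the class name above, with an invented internal representation): if NorwegianBlue's implementation later changes, a property keeps every client that wrote `instance.x` working unmodified:

```python
class NorwegianBlue(object):
    def __init__(self, x):
        # implementation changed: x is now stored as two halves
        self._halves = (x // 2, x - x // 2)

    def _get_x(self):
        return sum(self._halves)

    def _set_x(self, value):
        self._halves = (value // 2, value - value // 2)

    # old clients still just read and write instance.x
    x = property(_get_x, _set_x)

blue = NorwegianBlue(10)
blue.x = 7
assert blue.x == 7
```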
I know that encapsulation is actually just
a hack in Python (common, "hiding" an implementation detail by prefixing
it with the classname so you can't access it by its name anymore? Gimme
a break...),
I'll give you a much better break in C++:

#define private public

So much for encapsulation, hey?

Java makes it harder, but not that much more:

http://www.informit.com/articles/art...&seqNum=3&rl=1
but is that a reason to only write white box classes? ^^


Why would you want to write black box classes?

Python is not a bondage and discipline language. Remember kids, whips and
chains are only for mummies and daddies who love each other very much.

Python includes tools for preventing people *accidentally* shooting
themselves in the foot. But the philosophy of the language is to allow,
even to encourage, people to tinker under the hood. If that means that
somebody *deliberately* shoots themselves in the foot, well, that's the
price you pay for freedom: some people make stupid decisions.

I look at it this way: as the class designer, I have ZERO idea what
attributes and methods of my class will be just the perfect thing to solve
somebody's problem, so it is rude of me to lock them up as private --
especially since they *will* find a way to hack my class and access my
private attributes and methods anyway. All I'm doing is making their life
miserable by making them go out and buy books like "Hacking Private
Attributes In Java" or something.

But as a class designer, I know perfectly well which of my class members
are free for anyone to touch (instance.public), which should be considered
private by convention (instance._private) and which need a great big sign
on the door saying "Enter At Own Risk! Don't touch this unless you know
what you are doing!!!" (instance.__mangled).

--
Steven.

Dec 11 '05 #33

In article <dn*************@news.t-online.com>,
Matthias Kaeppler <"matthias at finitestate dot org"> wrote:

Why would I want to use an attribute in Python, where I would use
getters and setters in Java? I know that encapsulation is actually just
a hack in Python (common, "hiding" an implementation detail by prefixing
it with the classname so you can't access it by its name anymore? Gimme
a break...), but is that a reason to only write white box classes? ^^


"Simple is better than complex."
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

"Don't listen to schmucks on USENET when making legal decisions. Hire
yourself a competent schmuck." --USENET schmuck (aka Robert Kern)
Dec 11 '05 #34

Matthias Kaeppler <vo**@void.com> wrote:
...
I stumbled over this paragraph in "Python is not Java", can anyone
elaborate on it:

"In Java, you have to use getters and setters because using public
fields gives you no opportunity to go back and change your mind later to
using getters and setters. So in Java, you might as well get the chore
out of the way up front. In Python, this is silly, because you can start
with a normal attribute and change your mind at any time, without
affecting any clients of the class. So, don't write getters and setters."

Why would I want to use an attribute in Python, where I would use
getters and setters in Java? I know that encapsulation is actually just
a hack in Python (common, "hiding" an implementation detail by prefixing
it with the classname so you can't access it by its name anymore? Gimme
a break...), but is that a reason to only write white box classes? ^^


Consider the difference between asking a user of your class to write:

a.setFoo(1 + a.getFoo())

versus allowing him or her to write:

a.foo += 1

I hope you can see how much more elegant and handy the second
alternative is. The point is, in Python, you can easily make the
semantics of the second alternative identical to those of the first: all
you need to do is, within a's class, add ONE line:
foo = property(getFoo, setFoo)
and every access of a.foo will become a call to a.getFoo(), every
setting of a.foo=bar a call to a.setFoo(bar).

The ability to do this IF AND WHEN getFoo and setFoo really need to
exist (i.e., they do something meaningful, not just boilerplate setting
and getting of plain attributes) empowers you to always allow the users
of your class to access attributes -- you will change an attribute to a
property only in future versions of your class that do need some
meaningful action upon the getting or setting of that attribute.
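A minimal sketch of that single-line change (the class and attribute names here are illustrative, not from the thread):

```python
class A(object):
    def __init__(self):
        self._foo = 0

    def getFoo(self):
        return self._foo

    def setFoo(self, value):
        self._foo = value

    # the one line that routes attribute access through the methods:
    foo = property(getFoo, setFoo)

a = A()
a.foo += 1          # semantically identical to a.setFoo(1 + a.getFoo())
assert a.foo == 1
```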
Alex
Dec 11 '05 #35

Matthias Kaeppler <vo**@void.com> wrote:
...
I'm so used to statically typed languages that the shift is very
confusing. Looks as if it isn't as easy to learn Python afterall, for
the mere reason of unlearning rules which don't apply in the world of
Python anymore (which seem to be quite a lot!).


If you start your reasoning with Java for most aspects (and with C++ for
just a few) it may get easier -- for example, garbage collection, the
semantics of assignment and argument passing, as well as the
immutability of strings, are very close in Java and Python, and they're
such a "bedrock layer" parts of the languages that IMHO they're more
likely to be stumbling blocks if you "think C++" rather than "think
Java", while learning Python.

If you can imagine a Java program where everything is declared to be
Object, all method calls have an implicit cast to "some interface
supplying this method", and all class declarations implicitly add many
"implements IMethodFoo" for automatic interfaces supplying each of their
methods foo, you're a good part of the way;-). Such a style would be
unusual (to say the least) in Java, but it's clearly POSSIBLE... it's
actually not that unusual in Objective C (except you say ID rather than
Object, and implicit interfaces ARE sort of there) -- the main caveat i
would give to an Objective C person learning Python (besides the garbage
collection issues) is "in Python, you cannot call arbitrary methods on
None and expect them to be innocuous noops" (like in Java or C++,
calling anything on None, aka a NULL pointer aka null, is a runtime
error in Python too; the language without this restriction in this case
is Objective C!-).
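A one-liner confirms the Python behaviour described (a run-time error, not an innocuous noop):

```python
# Calling any method on None raises AttributeError at run time.
try:
    None.upper()
    reached = False
except AttributeError:
    reached = True

assert reached
```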
Alex
Dec 11 '05 #36

Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Sun, 11 Dec 2005 17:05:16 +0100, Matthias Kaeppler wrote:
Why would I want to use an attribute in Python, where I would use
getters and setters in Java?


Oh boy! I've just come out of a rather long thread about that very issue.
If you care enough to read a bunch of people arguing past each other,
check the thread "Another newbie question", especially the posts about the
so-called Law of Demeter.

But for the short summary: suppose I write a class:

class Parrot:
    def __init__(self, x):
        self._x = x
        print "please use instance.getx() and setx()"
    def getx(self):
        return self._x
    def setx(self, x):
        self._x = x

What if I want to export methods of x other than get and set? Perhaps x
is a numeric value: now I have to export a whole series of arithmetic
operators: addx, subx, mulx, etc. And there may be other attributes I need
to export as well...


Sorry, I don't see this.

a.setx( b.getx() + c.getx() - d.getx() )

will work just as well as (in a better-designed class) would

a.x = b.x + c.x - d.x

It's just that the former style you force syntactic cruft and overhead
which you may save in the latter. "Exporting a series of operators",
which was an issue in the LOD thread, is not one here: once you have
setter and getter, by whatever syntax, it's not germane.
Alex
Dec 11 '05 #37

Matthias Kaeppler wrote:
Why would I want to use an attribute in Python, where I would use
getters and setters in Java? I know that encapsulation is actually just
a hack in Python (common, "hiding" an implementation detail by prefixing
it with the classname so you can't access it by its name anymore? Gimme
a break...), but is that a reason to only write white box classes? ^^

- Matthias


If you've ever written non-trivial code in Java, you can't deny that
most of the classes are littered by pages and pages of

--
protected FooClass foo;

FooClass getFoo() {
    return foo;
}

void setFoo(FooClass foo) {
    this.foo = foo;
}
--

This is more or less equivalent to a simple
--
public FooClass foo;
--
but allows you to change the implementation details of the class without
having to bork your whole interface later.

Now, you have no reason to do this in python, you can use a regular
"real" attribute, and if you need to change the implementation details,
you can remove the real attribute and replace it with a virtual
attribute through properties without changing the class' interface.
--
class Bar(object):
    def __init__(self):
        self.foo = 5

bar = Bar()
bar.foo        # -> 5

# now let's change the Bar class implementation and split foo in boo and far
class Bar(object):
    def __init__(self):
        self.boo = 2
        self.far = 3
    def _getfoo(self):
        return self.boo + self.far
    foo = property(_getfoo)

bar = Bar()
bar.foo        # -> 5
--
And this is completely transparent for anyone using the Bar class, they
don't give a damn about what happens inside.

You can also use it to add checks on the allowed values, for example
--
class Bar(object):
    def __init__(self):
        self._foo = 5
    def _getfoo(self):
        return self._foo
    def _setfoo(self, foo):
        if not 0 <= foo <= 10:
            raise ValueError("foo's value must be between 0 and 10 (included)")
        self._foo = foo
    foo = property(_getfoo, _setfoo)

bar = Bar()
bar.foo        # -> 5
bar.foo = 2
bar.foo        # -> 2
bar.foo = 20
Traceback (most recent call last):
  File "<pyshell#42>", line 1, in -toplevel-
    bar.foo = 20
  File "<pyshell#37>", line 8, in _setfoo
    raise ValueError("foo's value must be between 0 and 10 (included)")
ValueError: foo's value must be between 0 and 10 (included)

--
or anything else you'd use Java's getters/setters for, but without
having to over-engineer your classes and wrap everything just because
the language doesn't allow you to change your mind if you ever realize
you made mistakes in the previous implementations.
Dec 11 '05 #38

Steven D'Aprano wrote:
I look at it this way: as the class designer, I have ZERO idea what
attributes and methods of my class will be just the perfect thing to solve
somebody's problem, so it is rude of me to lock them up as private --
especially since they *will* find a way to hack my class and access my
private attributes and methods anyway.


Actually, one good aspect of "attribute privacy" is the lowered risk of
instance attribute name clashes. When subclassing from some partly
abstract class in order to specialise its behaviour
(sgmllib.SGMLParser, for example), one has to make sure that one
doesn't reuse some instance attribute name by accident - doing so could
potentially cause behavioural issues in the superclass's mechanisms.
That said, Python does at least let you look at which attributes are
already defined in an instance due to the lack of privacy, so it does
offer some kind of solution to that problem. Moreover, Python also lets
you define double-underscore attribute names which give instance
attributes privacy in all respects, being invisible to users of the
instances concerned, accessible only within methods belonging to the
defining class, and safely overloadable in subclasses without incurring
conflicts.

Generally, I'm in agreement with you, though: private, protected and
final are all things which provide easy distractions from more serious
and demanding issues. They can quite often prove to be "the bikeshed"
in system design and implementation.

Paul

Dec 11 '05 #39

Matthias Kaeppler wrote:
Hi,

sorry for my ignorance, but after reading the Python tutorial on
python.org, I'm sort of, well surprised about the lack of OOP
capabilities in python.
I beg your pardon ???
Honestly, I don't even see the point at all of
how OO actually works in Python. For one, is there any good reason why I should ever inherit from a
class?
To specialize it (subtyping), or to add functionnalities (code reuse,
factoring)
^^ There is no functionality to check if a subclass correctly
implements an inherited interface
I don't know of any language that provides such a thing. At least for my
definition of "correctly".
and polymorphism seems to be missing
in Python as well.
Could you share your definition of polymorphism ?
I kind of can't imagine in which circumstances
inheritance in Python helps. For example:

class Base:
    def foo(self): # I'd like to say that children must implement foo
        pass

class Base(object):
    def foo(self):
        raise NotImplementedError, "please implement foo()"

class Child(Base):
    pass # works

Does inheritance in Python boil down to a mere code sharing?
Yes. Inheritance was initially made for code sharing (cf. Smalltalk). The
use of inheritance for subtyping comes from the restrictions of statically
typed languages [1]. BTW, you'll notice that the GoF (which is still one
of the best references about OO) strongly advises programming to
interfaces, not to implementations. And you'll notice that some patterns
only exist as workarounds for the restrictions enforced by statically
typed languages.
[1] should say: for *a certain class of* statically typed languages.
There are also languages like OCaml that rely on type inference.
And how do I formulate polymorphism in Python?
In OO, polymorphism is the ability for objects of different classes to
answer the same message. It doesn't imply that these objects should
inherit from a common base class. Statically typed languages like C++ or
Java *restrict* polymorphism.

Example:
(snip)

You don't need any of this.

class Foo:
    def walk(self):
        print "%s walk" % self.__class__.__name__

class Bar:
    def walk(self):
        print "I'm singing in the rain"

def letsgoforawalk(walker):
    walker.walk()

f = Foo()
b = Bar()

letsgoforawalk(f)
letsgoforawalk(b)

Here, the function letsgoforawalk (BTW, did you know that in Python,
functions are objects too?) expects an object that has the type 'object
that understands the message walk()'. Any object of this type will do -
no need to have a common base class.
I could as well leave the whole inheritance stuff out and the program
would still work (?).
Of course. Why should polymorphism need anything more ?
Please give me hope that Python is still worth learning :-/


It is, once you've unlearned C++/Java/ADA/whatever.
Dec 11 '05 #40

Paul Boddie <pa**@boddie.org.uk> wrote:
...
offer some kind of solution to that problem. Moreover, Python also lets
you define double-underscore attribute names which give instance
attributes privacy in all respects, being invisible to users of the
instances concerned, accessible only within methods belonging to the
defining class, and safely overloadable in subclasses without incurring
conflicts.


Unfortunately, that depends on the subclasses' naming:

in module bah.py:
class Foo(object) ...

in module zot.py:
import bah
class Foo(bah.Foo) ...

and alas, the "privacy in all respects" breaks down. Now, the idea of
"class Foo(bah.Foo)" may look silly with such artificial names, but it
IS somewhat likely to happen in the real world when the classes' names
are meaningful. I guess pychecker or the like might check for this and
issue a warning if needed... but I do wish we had better ways to prevent
accidental naming conflicts (not that I can easily think of any).
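The caveat can be made concrete in a single file; this sketch fakes the two modules with a rebinding (module and class names are illustrative):

```python
# Stand-in for bah.py: a class keeping "private" per-instance state.
class Foo(object):
    def __init__(self):
        self.__state = "base"          # mangled to self._Foo__state
    def base_state(self):
        return self.__state

BahFoo = Foo   # pretend this is "import bah; bah.Foo"

# Stand-in for zot.py: a subclass that happens to reuse the name Foo.
class Foo(BahFoo):
    def __init__(self):
        BahFoo.__init__(self)
        self.__state = "sub"           # ALSO mangles to self._Foo__state!

f = Foo()
# The subclass silently clobbered the base class's "private" attribute:
assert f.base_state() == "sub"
```

Because mangling uses only the textual class name, two classes that share a name share a mangled namespace, and the "privacy in all respects" breaks down exactly as described.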
Alex
Dec 11 '05 #41

Matthias Kaeppler wrote:
(snip)
I stumbled over this paragraph in "Python is not Java", can anyone
elaborate on it:

"In Java, you have to use getters and setters because using public
fields gives you no opportunity to go back and change your mind later to
using getters and setters. So in Java, you might as well get the chore
out of the way up front. In Python, this is silly, because you can start
with a normal attribute and change your mind at any time, without
affecting any clients of the class. So, don't write getters and setters."

Why would I want to use an attribute in Python, where I would use
getters and setters in Java?
Because you don't *need* getters/setters - you already got'em for free
(more on this latter).
I know that encapsulation is actually just
a hack in Python (common, "hiding" an implementation detail by prefixing
it with the classname so you can't access it by its name anymore? Gimme
a break...),
You're confusing encapsulation with information hiding. The mechanism
you're refering to is not meant to 'hide' anything, only to prevent
accidental shadowing of some attributes. The common idiom for
"information hiding" in Python is to prefix 'protected' attributes with
a single underscore. This warns developers using your code that this is
an implementation detail, and that they're on their own if they start
messing with it. And - as incredible as this can be - that's enough.

True, this won't stop stupid programmers from doing stupid thing - but
no language is 'idiot-proof' anyway (know the old '#define private
public' C++ trick ?), so why worry ?
but is that a reason to only write white box classes? ^^


First, what is a 'white box' class ?

public class WhiteBox {
    protected int foo;
    protected int bar;

    public int getFoo() {
        return foo;
    }
    public void setFoo(int newfoo) {
        foo = newfoo;
    }
    public int getBar() {
        return bar;
    }
    public void setBar(int newbar) {
        bar = newbar;
    }
}

Does this really qualify as a 'blackbox' ? Of course not, everything is
publicly exposed. You could have the exact same result (and much less
code) with public attributes.

Now what is the reason to write getters and setters ? Answer : so you
can change the implementation without breaking the API, right ?

Python has something named 'descriptors'. This is a mechanism that is
used for attribute lookup (notice that everything being an object,
methods are attributes too). You don't usually need to worry about it
(you should read about if you really want to understand Python's object
model), but you can still use it when you need to take control over
attribute access. One of the simplest application is 'properties' (aka
computed attributes), and here's an exemple :

class Foo(object):
    def __init__(self, bar, baaz):
        self.bar = bar
        self._baaz = baaz

    def _getbaaz(self):
        return self._baaz

    baaz = property(fget=_getbaaz)

This is why you don't need explicit getters/setters in Python : they're
already there ! And you can of course change how they are implemented
without breaking the interface. Now *this* is encapsulation - and it
doesn't need much information hiding...
Dec 11 '05 #42

"Paul Boddie" <pa**@boddie.org.uk> writes:
One classic example of a
weakly-typed language is BCPL, apparently, but hardly anyone has any
familiarity with it any more.


Actually, BCPL is what Steven D'Aprano called "untyped". Except his
definition is only suitable for the time after everyone followed IBM's
footsteps in building general-purpose byte-addressable machines.

In BCPL, everything is a word. Given a word, you can dereference it,
add it to another word (as either a floating point value or an integer
value), or call it as a function.

A classic example of a weakly-typed language would be a grandchild of
BCPL, v6 C. Since then, C has gotten steadily more strongly typed. A
standard complaint as people tried to move code from a v6 C compiler
(even the photo7 compiler) to the v7 compiler was "What do you mean I
can't ....". Of course, hardly anyone has familiarity with that any
more, either.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Dec 11 '05 #43

Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
Of course, the IT world is full of people writing code and not testing
it, or at least not testing it correctly. That's why there are frequent
updates or upgrades to software that break features that worked in the
older version. That would be impossible in a test-driven methodology, at
least impossible to do by accident.


That sentence is only true if your tests are bug-free. If not, it's
possible to make a change that introduces a bug that passes testing
because of a bug in the tests. Since tests are code, they're never
bug-free. I will agree that the frequency of upgrades/updates breaking
things means testing isn't being done properly.

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Dec 11 '05 #44

Bruno Desthuilliers <bd*****************@free.quelquepart.fr> writes:
^^ There is no functionality to check if a subclass correctly
implements an inherited interface


I don't know of any language that provide such a thing. At least for
my definition of "correctly".


Well, since your definition of "correctly" is unknown, I won't use
it. I will point out that the stated goal is impossible for some
reasonable definitions of "correctly".

My definition of "correctly" is "meets the published contracts for the
methods." Languages with good support for design by contract will
insure that subclasses either correctly implement the published
contracts, or raise an exception when they fail to do so. They do that
by checking the contracts for the super classes in an appropriate
logical relationship, and raising an exception if the contract isn't
met.
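Python has no built-in contract support, but the idea can be sketched by hand (a hypothetical example, not from any library): the base class publishes the contract and checks it around whatever the subclass does.

```python
class Account(object):
    def withdraw(self, amount):
        before = self.balance
        self._do_withdraw(amount)          # subclass hook
        # published contract: balance drops by exactly `amount`,
        # and never goes negative
        assert self.balance == before - amount and self.balance >= 0

class GoodAccount(Account):
    def __init__(self, balance):
        self.balance = balance
    def _do_withdraw(self, amount):
        self.balance -= amount

acct = GoodAccount(10)
acct.withdraw(3)
assert acct.balance == 7
```

A subclass whose `_do_withdraw` violated the contract would trip the assertion at the call site, which is roughly what DbC languages enforce automatically.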

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Dec 11 '05 #45

On Mon, 12 Dec 2005, Steven D'Aprano wrote:
On Sun, 11 Dec 2005 05:48:00 -0800, bonono wrote:
And I don't think Haskell make the programmer do a lot of work(just
because of its static type checking at compile time).
I could be wrong, but I think Haskell is *strongly* typed (just like
Python), not *statically* typed.


Haskell is strongly and statically typed - very strongly and very
statically!

However, what it's not is manifestly typed - you don't have to put the
types in yourself; rather, the compiler works it out. For example, if i
wrote code like this (using python syntax):

def f(x):
    return 1 + x

The compiler would think "well, he takes some value x, and he adds it to 1
and 1 is an integer, and the only thing you can add to an integer is
another integer, so x must be an integer; he returns whatever 1 + x works
out to, and 1 and x are both integers, and adding two integers makes an
integer, so the return type must be integer", and concludes that you meant
(using Guido's notation):

def f(x: int) -> int:
    return 1 + x

Note that this still buys you type safety:

def g(a, b):
    c = "{" + a + "}"
    d = 1 + b
    return c + d

The compiler works out that c must be a string and d must be an int, then,
when it gets to the last line, finds an expression that must be wrong, and
refuses to accept the code.
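[By way of contrast, here is the same g in ordinary Python: the mistake
is still there, but nothing objects until the bad line actually
executes. A small sketch:

```python
def g(a, b):
    c = "{" + a + "}"   # c must be a str
    d = 1 + b           # d must be an int
    return c + d        # str + int: the same type error, caught only at run time

try:
    g("x", 2)
except TypeError as e:
    print("caught at run time:", e)
```

Haskell's compiler rejects this program outright; Python accepts it and
only complains on the call. -ed.]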

This sounds like it wouldn't work for complex code, but somehow, it does.
And somehow, it works for:

def f(x):
    return x + 1

Too. I think this is due to the lack of polymorphic operator overloading.

A key thing is that Haskell supports, and makes enormous use of, a
powerful system of generic types; with:

def h(a):
    return a + a

There's no way to infer concrete types for h or a, so Haskell gets
generic; it says "okay, so i don't know what type a is, but it's got to be
something, so let's call it alpha; we're adding two alphas, and one thing
i know about adding is that adding two things of some type makes a new
thing of that type, so the type of some-alpha + some-alpha is alpha, so
this function returns an alpha". ISTR that alpha gets written 'a, so this
function is:

def h(a: 'a) -> 'a:
    return a + a

Although that syntax might be from ML. This extends to more complex
cases, like:

def i(a, b):
    return [a, b]

In Haskell, you can only make lists of a homogeneous type, so the compiler
deduces that, although it doesn't know what type a and b are, they must be
the same type, and the return value is a list of that type:

def i(a: 'a, b: 'a) -> ['a]:
    return [a, b]

And so on. I don't know Haskell, but i've had long conversations with a
friend who does, which is where i've got this from. IANACS, and this could
all be entirely wrong!
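[The Python counterpart of that generic i is worth seeing: it accepts
values of any type, and it even lets the two types differ, precisely
because Python lists are heterogeneous - which is exactly what the
Haskell checker described above would refuse. A sketch for comparison:

```python
def i(a, b):
    # Pairs up values of any type; no type variable forces a and b
    # to match, unlike the inferred Haskell signature ('a, 'a) -> ['a].
    return [a, b]

print(i(1, 2))        # [1, 2]      - both ints, fine in Haskell too
print(i("x", 3.5))    # ['x', 3.5]  - mixed types: rejected by Haskell
```

-ed.]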
At least the "What Is Haskell?" page at haskell.org describes the
language as strongly typed, non-strict, and allowing polymorphic typing.


When applied to functional languages, 'strict' (or 'eager') means that
expressions are evaluated as soon as they are formed; 'non-strict' (or
'lazy') means that expressions can hang around as expressions for a while,
or even not be evaluated all in one go. Laziness is really a property of
the implementation, not the language - in an idealised pure functional
language, i believe that a program can't actually tell whether the
implementation is eager or lazy. However, it matters in practice, since a
lazy language can do things like manipulate infinite lists.
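[Python's closest everyday analogue to that laziness is the generator:
an expression describing an infinite sequence that only produces values
on demand. A sketch using the standard itertools module:

```python
from itertools import count, islice

# An 'infinite list' of squares, evaluated lazily: nothing is computed
# until a consumer actually asks for elements.
squares = (n * n for n in count(1))

# Take the first five on demand, like Haskell's `take 5`.
print(list(islice(squares, 5)))   # [1, 4, 9, 16, 25]
```

-ed.]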

tom

--
`,,,,`,,,,`,,,,`
Dec 12 '05 #46

Mike Meyer <mw*@mired.org> wrote:
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
Of course, the IT world is full of people writing code and not testing
it, or at least not testing it correctly. That's why there are frequent
updates or upgrades to software that break features that worked in the
older version. That would be impossible in a test-driven methodology, at
least impossible to do by accident.


That sentence is only true if your tests are bug-free. If not, it's
possible to make a change that introduces a bug that passes testing
because of a bug in the tests. Since tests are code, they're never
bug-free. I will agree that the frequency of upgrades/updates breaking
things means testing isn't being done properly.


Yours is a good point: let's be careful not to oversell or overhype TDD,
which (while great) is not a silver bullet. Specifically, TDD is prone
to a "common-mode failure" between tests and code: misunderstanding of
the specs (generally underspecified specs); since the writer of the test
and of the code is the same person, if that person has such a
misunderstanding it will be reflected equally in both code and test.

Which is (part of) why code developed by TDD, while more robust against
many failure modes than code developed more traditionally, STILL needs
code inspection (or pair programming), integration tests, system tests,
and customer acceptance tests (not to mention regression tests, once
bugs are caught and fixed;-), just as much as code developed otherwise.
Alex
Dec 12 '05 #47

Tom Anderson <tw**@urchin.earth.li> wrote:
...
Haskell is strongly and statically typed - very strongly and very
statically!
Sure.

However, what it's not is manifestly typed - you don't have to put the
types in yourself; rather, the compiler works it out. For example, if i
wrote code like this (using python syntax):

def f(x):
return 1 + x

The compiler would think "well, he takes some value x, and he adds it to 1
and 1 is an integer, and the only thing you can add to an integer is
another integer, so x must be an integer; he returns whatever 1 + x works
out to, and 1 and x are both integers, and adding two integers makes an
integer, so the return type must be integer", and concludes that you meant


hmmm, not exactly -- Haskell's not QUITE as strongly/rigidly typed as
this... you may have in mind CAML, which AFAIK in all of its variations
(O'CAML being the best-known one) *does* constrain + so that "the only
thing you can add to an integer is another integer". In Haskell, + can
sum any two instances of types which meet typeclass Num -- including at
least floats, as well as integers (you can add more types to a typeclass
by writing the required functions for them, too). Therefore (after
loading in ghci a file with
f x = x + 1
), we can verify...:

*Main> :type f
f :: (Num a) => a -> a
A very minor point, but since the need to use +. and the resulting lack
of polymorphism are part of what keeps me away from O'CAML and makes me
stick to Haskell, I still wanted to make it;-).
Alex
Dec 12 '05 #48

On Mon, 12 Dec 2005 01:12:26 +0000, Tom Anderson <tw**@urchin.earth.li> wrote:
tom

--
`,,,,`,,,,`,,,,`


[OT} (just taking liberties with your sig ;-)
,<@><
,,,,`,,,,P`,,y,,t`,,h, ,o`,,n,,
Regards,
Bengt Richter
Dec 12 '05 #49

Mike Meyer wrote:
Bruno Desthuilliers <bd*****************@free.quelquepart.fr> writes:
^^ There is no functionality to check if a subclass correctly
implements an inherited interface


I don't know of any language that provide such a thing. At least for
my definition of "correctly".

Well, since your definition of "correctly" is unknown, I won't use
it.


!-)

My own definition of 'correctly' in this context would be about ensuring
that the implementation respects a given semantic.

But honestly, this was a somewhat trollish assertion, and I'm afraid I
forgot to add a smiley here.
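[For the record, the closest common Python idiom - a sketch, not a real
semantic check - is simply to make the base method fail loudly, so a
subclass that forgets to implement it is caught the first time the
method is called:

```python
class Base:
    def foo(self):
        # Children must override this; forgetting is caught at call time.
        raise NotImplementedError("subclasses must implement foo()")

class Good(Base):
    def foo(self):
        return "Good.foo"

class Forgot(Base):
    pass                     # no foo(): slips through until called

print(Good().foo())          # Good.foo
try:
    Forgot().foo()
except NotImplementedError as e:
    print("caught:", e)
```

This checks only that foo exists, of course - nothing about whether its
behaviour respects the intended semantics. -ed.]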
--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Dec 12 '05 #50
