Bytes | Developer Community
OO in Python? ^^

Hi,

sorry for my ignorance, but after reading the Python tutorial on
python.org, I'm sort of, well, surprised about the apparent lack
of OOP capabilities in Python. Honestly, I don't even see the
point of how OO actually works in Python.

For one, is there any good reason why I should ever inherit from a
class? ^^ There is no functionality to check whether a subclass
correctly implements an inherited interface, and polymorphism seems
to be missing in Python as well. I can't really imagine any
circumstances in which inheritance in Python helps. For example:

class Base:
    def foo(self): # I'd like to say that children must implement foo
        pass

class Child(Base):
    pass # works

Does inheritance in Python boil down to mere code sharing?

And how do I formulate polymorphism in Python? Example:

class D1(Base):
    def foo(self):
        print "D1"

class D2(Base):
    def foo(self):
        print "D2"

obj = Base() # I want a base class reference which is polymorphic
if (<need D1>):
    obj = D1()
else:
    obj = D2()

I could just as well leave the whole inheritance stuff out, and the
program would still work (?).
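A minimal runnable version of the example above (raising
NotImplementedError in Base.foo is one common convention for saying
"children must implement this" -- an assumption on my part, not
something from the tutorial):

```python
class Base:
    def foo(self):
        # children are expected to override this
        raise NotImplementedError

class D1(Base):
    def foo(self):
        return "D1"

class D2(Base):
    def foo(self):
        return "D2"

def make(need_d1):
    # the same name can hold either subclass; obj.foo() picks the
    # implementation from the object's actual class at runtime
    return D1() if need_d1 else D2()

obj = make(True)
print(obj.foo())
```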

Please give me hope that Python is still worth learning :-/

Regards,
Matthias
Dec 10 '05
In article <1h**************************@mail.comcast.net>,
al***@mail.comcast.net (Alex Martelli) wrote:
Tom Anderson <tw**@urchin.earth.li> wrote:
...
Haskell is strongly and statically typed - very strongly and very
statically!


Sure.

However, what it's not is manifestly typed - you don't have to put the
types in yourself; rather, the compiler works it out. For example, if i
wrote code like this (using python syntax):

def f(x):
    return 1 + x

The compiler would think "well, he takes some value x, and he adds it to 1
and 1 is an integer, and the only thing you can add to an integer is
another integer, so x must be an integer; he returns whatever 1 + x works
out to, and 1 and x are both integers, and adding two integers makes an
integer, so the return type must be integer", and concludes that you meant


hmmm, not exactly -- Haskell's not QUITE as strongly/rigidly typed as
this... you may have in mind CAML, which AFAIK in all of its variations
(O'CAML being the best-known one) *does* constrain + so that "the only
thing you can add to an integer is another integer". In Haskell, + can
sum any two instances of types which meet typeclass Num -- including at
least floats, as well as integers (you can add more types to a typeclass
by writing the required functions for them, too). Therefore (after
loading in ghci a file with
f x = x + 1
), we can verify...:

*Main> :type f
f :: (Num a) => a -> a
A very minor point, but since the need to use +. and the resulting lack
of polymorphism are part of what keeps me away from O'CAML and makes me
stick to Haskell, I still wanted to make it;-).


But if you try
f x = x + 1.0

it's
f :: (Fractional a) => a -> a

I asserted something like this some time ago here, and was
set straight, I believe by a gentleman from Chalmers. You're
right that addition is polymorphic, but that doesn't mean
that it can be performed on any two instances of Num. I had
constructed a test something like that to check my thinking,
but it turns out that Haskell was able to interpret "1" as
Double, for example -- basically, 1's type is Num too.
If you type the constant (f x = x + (1 :: Int)), the function
type would be (f :: Int -> Int). Basically, it seems (+) has
to resolve to a (single) instance of Num.

Donn Cave, do**@u.washington.edu
Dec 12 '05 #51
On Mon, 12 Dec 2005, Bengt Richter wrote:
On Mon, 12 Dec 2005 01:12:26 +0000, Tom Anderson <tw**@urchin.earth.li> wrote:
--
`,,,,`,,,,`,,,,`


[OT} (just taking liberties with your sig ;-)
,<@><
,,,,`,,,,P`,,y,,t`,,h, ,o`,,n,,


The irony is that with my current news-reading setup, i see my own sig as
a row of question marks, seasoned with backticks and commas. Your
modification looks like it's adding a fish; maybe the question marks are a
kelp bed, which the fish is exploring for food.

Hmm. Maybe if i look at it through Google Groups ...

Aaah! Very good!

However, given the context, i think it should be:

,<OO><
,,,,`,,,,P`,,y,,t`,,h, ,o`,,n,,

!

tom

--
limited to concepts that are meta, generic, abstract and philosophical --
IEEE SUO WG
Dec 13 '05 #52
On Mon, 12 Dec 2005, Donn Cave wrote:
In article <1h**************************@mail.comcast.net>,
al***@mail.comcast.net (Alex Martelli) wrote:
Tom Anderson <tw**@urchin.earth.li> wrote:
...

For example, if i wrote code like this (using python syntax):

def f(x):
    return 1 + x

The compiler would think "well, he takes some value x, and he adds it to 1
and 1 is an integer, and the only thing you can add to an integer is
another integer, so x must be an integer; he returns whatever 1 + x works
out to, and 1 and x are both integers, and adding two integers makes an
integer, so the return type must be integer"


hmmm, not exactly -- Haskell's not QUITE as strongly/rigidly typed as
this... you may have in mind CAML, which AFAIK in all of its variations
(O'CAML being the best-known one) *does* constrain + so that "the only
thing you can add to an integer is another integer". In Haskell, + can
sum any two instances of types which meet typeclass Num -- including at
least floats, as well as integers (you can add more types to a typeclass
by writing the required functions for them, too). Therefore (after
loading in ghci a file with
f x = x + 1
), we can verify...:

*Main> :type f
f :: (Num a) => a -> a


But if you try
f x = x + 1.0

it's
f :: (Fractional a) => a -> a

I asserted something like this some time ago here, and was set straight,
I believe by a gentleman from Chalmers. You're right that addition is
polymorphic, but that doesn't mean that it can be performed on any two
instances of Num.


That's what i understand. What it comes down to, i think, is that the
Standard Prelude defines an overloaded + operator:

def __add__(x: int, y: int) -> int:
    <primitive operation to add two ints>

def __add__(x: float, y: float) -> float:
    <primitive operation to add two floats>

def __add__(x: str, y: str) -> str:
    <primitive operation to add two strings>

# etc

So that when the compiler hits the expression "x + 1", it has a finite set
of possible interpretations for '+', of which only one is legal - addition
of two integers to yield an integer. Or rather, given that "1" can be an
int or a float, it decides that x could be either, and so calls it "alpha,
where alpha is a number". Or something.
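In much later Python terms (an anachronistic sketch, assuming
functools.singledispatch, which resolves among a finite set of
registered implementations by the runtime type of the first argument
only -- weaker than Haskell's compile-time resolution):

```python
from functools import singledispatch

@singledispatch
def add(x, y):
    # no registered interpretation for this type
    raise TypeError("no overload for %r" % type(x))

@add.register(int)
def _(x, y):
    return x + y  # integer addition

@add.register(float)
def _(x, y):
    return x + y  # float addition

@add.register(str)
def _(x, y):
    return x + y  # string concatenation

print(add(1, 2))      # 3
print(add("a", "b"))  # ab
```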

While we're on the subject of Haskell - if you think python's
syntactically significant whitespace is icky, have a look at Haskell's
'layout' - i almost wet myself in terror when i saw that!

tom

--
limited to concepts that are meta, generic, abstract and philosophical --
IEEE SUO WG
Dec 13 '05 #53

Tom Anderson wrote:
While we're on the subject of Haskell - if you think python's
syntactically significant whitespace is icky, have a look at Haskell's
'layout' - i almost wet myself in terror when i saw that!

Though one doesn't have to use indentation in Haskell; you can
write everything with explicit braces and semicolons instead.

Dec 13 '05 #54
Welcome to Python Matthias. I hope you will enjoy it!

Matthias Kaeppler wrote:
Another thing which is really bugging me about this whole dynamically
typing thing is that it seems very error prone to me:

foo = "some string!"

# ...

if (something_fubar):
    fo = "another string"

Oops, the last 'o' slipped, now we have a different object and the
interpreter will happily continue executing the flawed program.


As an old hardware designer from the space industry, I'm well
acquainted with the idea of adding redundancy to make things
more reliable. I also know that this doesn't come without a
price. All this stuff you add to detect possible errors might
also introduce new errors, and it takes a lot of time and
effort to implement--time that could be spent on better things.

In fact, the typical solutions that are used to increase the
odds that hardware doesn't fail before its Mean Time To
Failure (MTTF), will significantly lower the chance that it
works much longer than its MTTF! More isn't always better.

While not the same thing, software development is similar.
All that redundancy in typical C++ programs has a high
development cost. Most of the stuff in C++ include files
are repeated in the source code, and the splitting of
code between include and source files mean that a lot of
declarations are far from the definitions. We know that
this is a problem: That's why C++ departed from C's concept
of putting all local declarations in the beginning of
functions. Things that are closely related should be as
close as possible in the code!

The static typing means that you either have to make several
implementations of many algorithms, or you need to work with
those convoluted templates that were added to the language as
an afterthought.

Generally, the more you type, the more you will mistype.
I even suspect that the bug rate grows faster than the
size of the code. If you have to type five times as much,
you will probably make five times as many typos, but you
also have the problem that the larger amount of code is
more difficult to grasp. It's less likely that all
relevant things are visible on the screen at the same
time, etc. You'll make more errors that aren't typos.

Python is designed to allow you to easily write short and
clear programs. Its dynamic typing is a very important
part of that. The important thing isn't that we are
relieved from the boring task of typing type declarations,
but rather that the code we write can be much more generic,
and the coupling between function definitions and function
callers can be looser. This means that we get faster
development and easier maintenance if we learn to use this
right.

Sure, a C++ or Java compiler will discover some mistakes
that would pass through the Python compiler. This is not
a design flaw in Python, it's a direct consequence of its
dynamic nature. Compile-time type limitations go against
the very nature of Python. It's not the checks we try to
avoid--it's the premature restrictions in functionality.

Anyway, I'm sure you know that a successful build with
C++ or Java doesn't imply correct behaviour of your
program.

All software needs to be tested, and if we want to work
effectively and be confident that we don't break things
as we add features or tidy up our code, we need to make
automated tests. There are good tools, such as unittest,
doctest, py.test and TextTest that can help us with that.

If you have proper automated tests, those tests will
capture your mistypings, whether they would have been
caught by a C++ or Java compiler or not. (Well, not if
they are in dead code, but C++/Java won't give you any
intelligent help with that either...)

I've certainly lost time due to mistyped variables now
and then. It's not uncommon that I've actually mistyped
in a way that Java/C++ would never notice (e.g. typed i
instead of j in some nested for loop etc) but sometimes
compile time type checking would have saved time for me.

On the other hand, I'm sure that type declarations on
variables would bring a rigidity to Python that would
cost me much more than I would gain, and with typeless
declarations as in Perl (local x) I would probably
waste more time on adding forgotten declarations (or
removing redundant ones) than I would save time on
noticing the variable mistypings a few seconds before
my unittests catch them. Besides, there are a number
of lint-like tools for Python if you want static code
checks.

As I wrote, the lack of type checking in Python is a
consequence of the very dynamic nature of the language.
A function should assume as little as possible about
its parameters, to be able to function in the broadest
possible scope. Don't add complexity to make your code
support things you don't yet know you need, but take
the chance Python gives you of assuming as little as
possible about your callers and the code you call.

This leads to more flexible and maintainable software.
A design change in your software will probably lead to
much more code changes if you write in C++ than if you
write in Python.

While feature-by-feature comparisons of different
programming languages might have some merit, the only
thing that counts in the end is how the total package
works... I think you'll find that Python is a useful
package, and a good tool in a bigger tool chest.
Dec 13 '05 #55

Magnus Lycka wrote:
The static typing means that you either have to make several
implementations of many algorithms, or you need to work with
those convoluted templates that were added to the language as
an afterthought.

I don't see this in Haskell.

While feature-by-feature comparisons of different
programming languages might have some merit, the only
thing that counts in the end is how the total package
works... I think you'll find that Python is a useful
package, and a good tool in a bigger tool chest.

That is very true.

Dec 13 '05 #56
bo****@gmail.com wrote:
Magnus Lycka wrote:
The static typing means that you either have to make several
implementations of many algorithms, or you need to work with
those convoluted templates that were added to the language as
an afterthought.


I don't see this in Haskell.


No, I was referring to C++ when I talked about templates.

I don't really know Haskell, so I can't really compare it
to Python. A smarter compiler can certainly infer types from
the code and assemble several implementations of an
algorithm, but unless I'm confused, this makes it difficult
to do the kind of dynamic linking / late binding that we do in
Python. How do you compile a dynamic library without locking
library users to specific types?

I don't doubt that it's possible to make a statically typed
language much less assembly-like than C++...
Dec 14 '05 #57

Magnus Lycka wrote:
I don't really know Haskell, so I can't really compare it
to Python. A smarter compiler can certainly infer types from
the code and assemble several implementations of an
algorithm, but unless I'm confused, this makes it difficult
to do the kind of dynamic linking / late binding that we do in
Python. How do you compile a dynamic library without locking
library users to specific types?

I don't know. I am learning Haskell (and Python too); long way to go
before I would get into the usage you mentioned, if ever, be it
Haskell or Python.

Dec 14 '05 #58
bo****@gmail.com wrote:
Magnus Lycka wrote:
I don't really know Haskell, so I can't really compare it
to Python. A smarter compiler can certainly infer types from
the code and assemble several implementations of an
algorithm, but unless I'm confused, this makes it difficult
to do the kind of dynamic linking / late binding that we do in
Python. How do you compile a dynamic library without locking
library users to specific types?


I don't know. I am learning Haskell (and Python too); long way to go
before I would get into the usage you mentioned, if ever, be it
Haskell or Python.


Huh? I must have expressed my thoughts badly. This is trivial to
use in Python. You could for instance write a module like this:

### my_module.py ###
import copy

def sum(*args):
    result = copy.copy(args[0])
    for arg in args[1:]:
        result += arg
    return result

### end my_module.py ###

Then you can do:
>>> from my_module import sum
>>> sum(1,2,3)
6
>>> sum('a','b','c')
'abc'
>>> sum([1,2,3],[4,4,4])
[1, 2, 3, 4, 4, 4]


Assume that you didn't use Python, but rather something with
static typing. How could you make a module such as my_module.py,
which is capable of working with any type that supports some
standard copy functionality and the +-operator?
Dec 14 '05 #59

Magnus Lycka wrote:
bo****@gmail.com wrote:
Magnus Lycka wrote:
I don't really know Haskell, so I can't really compare it
to Python. A smarter compiler can certainly infer types from
the code and assemble several implementations of an
algorithm, but unless I'm confused, this makes it difficult
to do the kind of dynamic linking / late binding that we do in
Python. How do you compile a dynamic library without locking
library users to specific types?


I don't know. I am learning Haskell(and Python too), long way to go
before I would get into the the usage you mentioned, if ever, be it
Haskell or Python.


Huh? I must have expressed my thoughts badly. This is trivial to
use in Python. You could for instance write a module like this:

### my_module.py ###
import copy

def sum(*args):
    result = copy.copy(args[0])
    for arg in args[1:]:
        result += arg
    return result

### end my_module.py ###

Then you can do:
>>> from my_module import sum
>>> sum(1,2,3)
6
>>> sum('a','b','c')
'abc'
>>> sum([1,2,3],[4,4,4])
[1, 2, 3, 4, 4, 4]
>>>


Assume that you didn't use Python, but rather something with
static typing. How could you make a module such as my_module.py,
which is capable of working with any type that supports some
standard copy functionality and the +-operator?

Ah, I thought you were talking about DLLs or some external library
stuff. In Haskell, it uses the concept of type classes, conceptually
similar to the "duck typing" thing in Python/Ruby. You just declare
the data type, then add an implementation as an instance of a type
class that knows about +/- or copy. The inference engine then does
its work.

I would assume that even in Python, there are different
implementations of +/- and copy for different object types.

Dec 14 '05 #60

Magnus Lycka wrote:
Assume that you didn't use Python, but rather something with
static typing. How could you make a module such as my_module.py,
which is capable of working with any type that supports some
standard copy functionality and the +-operator?


The following is a very short Haskell function I defined which I hope
can give you some idea.

What it does is just take a generic list of things (I want to use it
on strings) and break it up into a tuple, using any object in token
as separator (sort of like C's strtok).

breakKeyword token xs =
    case break (flip elem token) xs of
        (_,[]) -> (xs,[])
        (ys,z:zs) -> (ys, zs)

This is the function type derived by Haskell (I haven't specified
anything above about the types):

*MyList> :type breakKeyword
breakKeyword :: (Eq a) => [a] -> [a] -> ([a], [a])

What it means is that breakKeyword can take any list of objects of
type "a", so long as it belongs to the class Eq. It can be char,
number or whatever, so long as it is an instance of Eq.

All that is needed for my custom data type (whatever it is) is that
it must implement the equality function that "elem" would use to
check if a given object is in the list of tokens.

*MyList> :type elem
elem :: (Eq a) => a -> [a] -> Bool
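For comparison, a rough Python counterpart of breakKeyword (the name
and behaviour are carried over from the Haskell version above; in
Python the "supports ==" constraint stays implicit):

```python
def break_keyword(token, xs):
    # split xs at the first element that appears in token,
    # dropping the separator (a rough strtok)
    for i, x in enumerate(xs):
        if x in token:
            return xs[:i], xs[i + 1:]
    return xs, xs[:0]  # no separator found: empty tail of the same type

print(break_keyword(",;", "ab,cd"))  # ('ab', 'cd')
```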

Dec 14 '05 #61
Op 2005-12-14, Magnus Lycka schreef <ly***@carmen.se>:
bo****@gmail.com wrote:
Magnus Lycka wrote:
I don't really know Haskell, so I can't really compare it
to Python. A smarter compiler can certainly infer types from
the code and assemble several implementations of an
algorithm, but unless I'm confused, this makes it difficult
to do the kind of dynamic linking / late binding that we do in
Python. How do you compile a dynamic library without locking
library users to specific types?
I don't know. I am learning Haskell (and Python too); long way to go
before I would get into the usage you mentioned, if ever, be it
Haskell or Python.


Huh? I must have expressed my thoughts badly. This is trivial to
use in Python. You could for instance write a module like this:

### my_module.py ###
import copy

def sum(*args):
    result = copy.copy(args[0])
    for arg in args[1:]:
        result += arg
    return result

### end my_module.py ###

Then you can do:
>>> from my_module import sum
>>> sum(1,2,3)
6
>>> sum('a','b','c')
'abc'
>>> sum([1,2,3],[4,4,4])
[1, 2, 3, 4, 4, 4]


Assume that you didn't use Python, but rather something with
static typing.


That depends on what you would call static typing.

Suppose we would add type declarations in Python,
so we could do things like:

int: a
object: b

Some people seem to think that this would introduce static
typing, but the only effect those statements need to have
is that each time a variable is rebound, an assert statement
would implicitly be executed, checking whether the variable is
still an instance of the declared type.

(Assuming for simplicity that all classes are subclasses
of object so that all objects are instances of object.)
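That implicit-assert-on-rebinding idea can be sketched in plain
Python without any language change (TypedNamespace is a hypothetical
helper of my own; attribute rebinding stands in for variable
rebinding):

```python
class TypedNamespace(object):
    """Runs an implicit assert on every rebinding (a sketch)."""
    def __init__(self, **declared):
        object.__setattr__(self, '_declared', declared)

    def __setattr__(self, name, value):
        expected = self._declared.get(name)
        if expected is not None:
            # the "implicit assert" on each rebinding
            assert isinstance(value, expected), \
                "%s must be an instance of %s" % (name, expected.__name__)
        object.__setattr__(self, name, value)

ns = TypedNamespace(a=int, b=object)
ns.a = 3        # passes the check
ns.b = "text"   # passes too: everything is an instance of object
```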
How could you make a module such as my_module.py,


In the above scenario, in just the same way as in Python.

--
Antoon Pardon
Dec 14 '05 #62
Antoon Pardon wrote:
Suppose we would add type declarations in Python,
so we could do things like:

int: a
object: b

Some people seem to think that this would introduce static
typing, but the only effect those statements need to have
is that each time a variable is rebound, an assert statement
would implicitly be executed, checking whether the variable is
still an instance of the declared type.


Doesn't work; duck typing is emphatically not subclass-typing. For this
system to still work and be as general as Python is now (without having
to make all variables 'object's), we'd need true interface checking.
That is, we'd have to be able to say:

implements + like int: a

or somesuch. This is a Hard problem, and not worth solving for the
simple benefit of checking type errors in code.

It might be worth solving for dynamic code optimization, but that's
still a ways off.
Dec 14 '05 #63
Christopher Subich wrote:
Doesn't work; duck typing is emphatically not subclass-typing. For this
system to still work and be as general as Python is now (without having
to make all variables 'object's), we'd need true interface checking.
That is, we'd have to be able to say:

implements + like int: a

or somesuch. This is a Hard problem, and not worth solving for the
simple benefit of checking type errors in code.

It might be worth solving for dynamic code optimization, but that's
still a ways off.


Correct, but he's just trolling, you know. What he suggests isn't
static typing, and he knows it. It gives all the rigidity of static
typing with only a tiny fraction of the claimed benefits, and it
would incur a hefty performance penalty.
Dec 14 '05 #64
Magnus Lycka <ly***@carmen.se> writes:
Huh? I must have expressed my thoughts badly. This is trivial to
use in Python. You could for instance write a module like this:

### my_module.py ###
import copy

def sum(*args):
result = copy.copy(args[0])
for arg in args[1:]:
result += arg
return result

### end my_module.py ###

Then you can do:
>>> from my_module import sum
>>> sum(1,2,3) 6 >>> sum('a','b','c') 'abc' >>> sum([1,2,3],[4,4,4]) [1, 2, 3, 4, 4, 4] >>>


Assume that you didn't use Python, but rather something with
static typing. How could you make a module such as my_module.py,
which is capable of working with any type that supports some
standard copy functionality and the +-operator?


CLU had this decades ago. You'd write something like:

def sum(*args) args has +=:
    ...

Basically, it did duck typing, checked at compile time instead of
dynamically.
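Much later, Python grew a statically checkable version of the same
idea in typing.Protocol (an anachronism relative to this thread,
sketched here under that assumption): a checker such as mypy verifies
call sites against the structural constraint, without any
subclassing:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsAdd(Protocol):
    # structural constraint: "has a + operator"
    def __add__(self, other): ...

def total(*args: SupportsAdd):
    # a static checker verifies each argument structurally;
    # @runtime_checkable also allows an isinstance probe
    result = args[0]
    for arg in args[1:]:
        result = result + arg
    return result

print(total(1, 2, 3))  # 6
```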

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Dec 14 '05 #65
In article <11**********************@g47g2000cwa.googlegroups.com>,
bo****@gmail.com wrote:
Magnus Lycka wrote:
Assume that you didn't use Python, but rather something with
static typing. How could you make a module such as my_module.py,
which is capable of working with any type that supports some
standard copy functionality and the +-operator?
The following is a very short Haskell function I defined which I hope
can give you some idea.

....
*MyList> :type breakKeyword
breakKeyword :: (Eq a) => [a] -> [a] -> ([a], [a])

What it means is that breakKeyword can take any list of objects of
type "a", so long as it belongs to the class Eq. It can be char,
number or whatever, so long as it is an instance of Eq.

All that is needed for my custom data type (whatever it is) is that
it must implement the equality function that "elem" would use to
check if a given object is in the list of tokens.

*MyList> :type elem
elem :: (Eq a) => a -> [a] -> Bool


Moreover, 1) this compare implementation is (as I understand it)
made available via runtime information, so a typeclass instance
may be implemented after the function that encounters it was compiled,
and 2) implementation often requires no more than "deriving Eq"
after the data type declaration -- assuming the data type is
composed of other types of data, you can infer the functional
composition of a typeclass instance.

The sum function in my_module.py was, in a very approximate sense,
like the standard "foldr" function. Of course foldr doesn't need to
copy, nor does it use any += operator. foldr is generic over any
function of type (a -> b -> b) (i.e., it takes an element and an
accumulator, which may be of different types, and returns a new
accumulator). Lists don't support a (+) operation (that's numeric
only), but they support (++) concatenation and (:) construction.
foldr also takes an initial accumulator of type b.

so,
foldr (:) "" ['a', 'b', 'c']
foldr (++) [] [[1, 2, 3], [4, 4, 4]]
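Those two folds come out roughly as follows in Python via
functools.reduce (which folds from the left, not the right, but
concatenation is associative so the result matches here):

```python
from functools import reduce

# foldr (++) [] [[1, 2, 3], [4, 4, 4]] -- concatenating lists
flat = reduce(lambda acc, xs: acc + xs, [[1, 2, 3], [4, 4, 4]], [])

# foldr (:) "" "abc" -- rebuilding a string element by element
s = reduce(lambda acc, c: acc + c, "abc", "")

print(flat)  # [1, 2, 3, 4, 4, 4]
print(s)     # abc
```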

Really, this kind of abstraction of data types is not only well
supported in Haskell, it can be almost a curse, at least for
someone like myself who has fairly superficial experience with
this kind of programming. After all the functions have been
zealously scrubbed clean of any trace of concrete data types and
rendered maximally abstract, they can be a little hard to understand.

Donn Cave, do**@u.washington.edu
Dec 14 '05 #66

Donn Cave wrote:
Really, this kind of abstraction of data types is not only well
supported in Haskell, it can be almost a curse, at least for
someone like myself who has fairly superficial experience with
this kind of programming. After all the functions have been
zealously scrubbed clean of any trace of concrete data types and
rendered maximally abstract, they can be a little hard to understand.

My experience too. An interesting thing I experienced with Haskell is
that even though it is strongly and statically typed, I seldom think
about data types when writing programs, kind of typeless to me.

Dec 15 '05 #67
<bo****@gmail.com> wrote:
those convoluted templates that were added to the language as
an afterthought.

I don't see this in Haskell.


Well, historically templates HAVE been added to Haskell "as an
afterthought" (well after the rest of the language was done), and
judging mostly from
<http://research.microsoft.com/~simon...l/meta-haskell.ps>
it doesn't seem unfair to call them "convoluted"...
Alex

Dec 15 '05 #68

Alex Martelli wrote:
<bo****@gmail.com> wrote:
those convoluted templates that were added to the language as
an afterthought.

I don't see this in Haskell.


Well, historically templates HAVE been added to Haskell "as an
afterthought" (well after the rest of the language was done), and
judging mostly from
<http://research.microsoft.com/~simon...l/meta-haskell
.ps> it doesn't seem unfair to call them "convoluted"...

I think I was talking about the need to add templates in order to
write generic functions, as was mentioned (see the example given
about sum), not in the context you are talking about. You seem to
have skipped the other half of the text I quoted.

Dec 15 '05 #69
<bo****@gmail.com> wrote:
Alex Martelli wrote:
<bo****@gmail.com> wrote:
> those convoluted templates that were added to the language as
> an afterthought.
I don't see this in Haskell.


Well, historically templates HAVE been added to Haskell "as an
afterthought" (well after the rest of the language was done), and
judging mostly from
<http://research.microsoft.com/~simon...l/meta-haskell
.ps> it doesn't seem unfair to call them "convoluted"...

I think I was talking about the need to add templates in order for
writing generic functions that was mentioned(see the example given
about sum), not in the context you are talking about. You seem to have
skipped the other half of the text I quoted.


Right, you can get good genericity with Haskell's typeclasses (I've
posted about that often in the past, and desperately and so far
unsuccessfully tried to convince Guido to use something close to
typeclasses rather than "interfaces" for such purposes as PEP 246
[protocol adaptation]); it's the state of _templates_ in Haskell,
specifically, which I was rather dubious about (it may be that I just
haven't dug into them deep enough yet, but they do seem not a little
"convoluted" to me, so far).
Alex
Dec 15 '05 #70

Alex Martelli wrote:
<bo****@gmail.com> wrote:
Alex Martelli wrote:
<bo****@gmail.com> wrote:

> > those convoluted templates that were added to the language as
> > an afterthought.
> I don't see this in Haskell.

Well, historically templates HAVE been added to Haskell "as an
afterthought" (well after the rest of the language was done), and
judging mostly from
<http://research.microsoft.com/~simon...l/meta-haskell
.ps> it doesn't seem unfair to call them "convoluted"...

I think I was talking about the need to add templates in order for
writing generic functions that was mentioned(see the example given
about sum), not in the context you are talking about. You seem to have
skipped the other half of the text I quoted.


Right, you can get good genericity with Haskell's typeclasses (I've
posted about that often in the past, and desperately and so far
unsuccessfully tried to convince Guido to use something close to
typeclasses rather than "interfaces" for such purposes as PEP 246
[protocol adaptation]); it's the state of _templates_ in Haskell,
specifically, which I was rather dubious about (it may be that I just
haven't dug into them deep enough yet, but they do seem not a little
"convoluted" to me, so far).

Yup, templates are an afterthought, and a point of discussion by
Lispers(?) too. I have no idea what they are intended for; there
must be some need for them, but definitely beyond what I can handle.

Dec 15 '05 #71
Op 2005-12-14, Christopher Subich schreef <cs****************@spam.subich.block.com>:
Antoon Pardon wrote:
Suppose we would add type declarations in Python,
so we could do things like:

int: a
object: b

Some people seem to think that this would introduce static
typing, but the only effect those statements need to have
is that each time a variable is rebound, an assert statement
would implicitly be executed, checking whether the variable is
still an instance of the declared type.
Doesn't work; duck typing is emphatically not subclass-typing.


I don't see how that is relevant.
For this
system to still work and be as general as Python is now (without having
to make all variables 'object's),


But the way Guido wants Python to evolve would make all variables
objects. This is what PEP 3000 states.

Support only new-style classes; classic classes will be gone.

As far as I understand this would imply that all classes are subclasses
of object and thus that isinstance(var, object) would be true for all variables.

--
Antoon Pardon
Dec 15 '05 #72
Op 2005-12-14, Magnus Lycka schreef <ly***@carmen.se>:
Christopher Subich wrote:
Doesn't work; duck typing is emphatically not subclass-typing. For this
system to still work and be as general as Python is now (without having
to make all variables 'object's), we'd need true interface checking.
That is, we'd have to be able to say:

implements + like int: a

or somesuch. This is a Hard problem, and not worth solving for the
simple benefit of checking type errors in code.

It might be worth solving for dynamic code optimization, but that's
still a ways off.
Correct, but he's just trolling you know. What he suggests isn't
static typing, and he knows it.


What I know or not isn't the point. My impression is that different
people have different ideas on what is static typing and what is not.
I can't read minds about what each individual person thinks. So
I don't know whether you (or someone else) considered this static
typing or not.
It gives all the rigidity of static
typing with only a tiny fraction of the claimed benefits, but it would
give a hefty performance penalty.


Yes, it would give a performance penalty. That is irrelevant to the
question asked. Nothing stops the language designers from using
this type information, to produce more efficient code where possible.
But whether or not the designers would do this, would make no difference
on what would be possible to do.

--
Antoon Pardon
Dec 15 '05 #73

Antoon Pardon wrote:
On 2005-12-14, Christopher Subich wrote
Doesn't work; duck typing is emphatically not subclass-typing.


I don't see how that is relevant.
For this
system to still work and be as general as Python is now (without having
to make all variables 'object's),


But the way Guido wants python to evolve would make all variables
objects. This is what PEP 3000 states.

Support only new-style classes; classic classes will be gone.

As far as I understand this would imply that all classes are subclasses
of object and thus that isinstance(var, object) would be true for all variables.


But that's still useless for your purposes. Everything will be derived
from object but it doesn't mean everything file-like will be derived
from file or everything dictionary-like will be derived from
dictionary. Duck-typing means that code told to 'expect' certain types
will break unnecessarily when a different-yet-equivalent type is later
passed to it.
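A small illustration of that unnecessary breakage (both helper functions and the `FakeFile` class are invented for the example):

```python
import io

class FakeFile(object):
    """File-like in behavior only: iterable over lines, unrelated to file."""
    def __init__(self, text):
        self.lines = text.splitlines(True)
    def __iter__(self):
        return iter(self.lines)

def count_lines_duck(f):
    # duck typing: any iterable of lines will do
    return sum(1 for _ in f)

def count_lines_strict(f):
    # code 'told to expect' a concrete file type
    assert isinstance(f, io.IOBase)
    return sum(1 for _ in f)

fake = FakeFile("one\ntwo\nthree\n")
print(count_lines_duck(fake))   # works fine
try:
    count_lines_strict(fake)    # breaks, although fake behaves correctly
except AssertionError:
    print("equivalent type rejected")
```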

--
Ben Sizer

Dec 15 '05 #74
On 2005-12-15, Ben Sizer wrote <ky*****@gmail.com>:

Antoon Pardon wrote:
On 2005-12-14, Christopher Subich wrote
> Doesn't work; duck typing is emphatically not subclass-typing.
I don't see how that is relevant.
> For this
> system to still work and be as general as Python is now (without having
> to make all variables 'object's),


But the way Guido wants python to evolve would make all variables
objects. This is what PEP 3000 states.

Support only new-style classes; classic classes will be gone.

As far as I understand this would imply that all classes are subclasses
of object and thus that isinstance(var, object) would be true for all variables.


But that's still useless for your purposes.


What purpose would that be? Maybe you can tell me, so I can
know too.
Everything will be derived from object but it doesn't mean
everything file-like will be derived from file or everything
dictionary-like will be derived from dictionary.
So? I answered a question. That my answer is not useful for
a specific purpose is very well possible, but is AFAIC irrelevant.
I didn't notice a specific purpose behind the question
and didn't answer the question with a specific purpose in mind.
Duck-typing means that code told to 'expect' certain types
will break unnecessarily when a different-yet-equivalent type is later
passed to it.


I think you mixed things up.

--
Antoon Pardon
Dec 15 '05 #75

Antoon Pardon wrote:
On 2005-12-15, Ben Sizer wrote <ky*****@gmail.com>:
So? I answered a question. That my answer is not useful for
a specific purpose is very well possible, but is AFAIC irrelevant.


The point being made was that your declarations such as these:

int: a
object: b

would break the original idea (a module containing a sum() function
that can take any object that has an addition operator). Inheritance
isn't good enough in this situation. I apologise if that isn't what you
were answering, but that seems to have been the thread context.
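To make that concrete, here is such a duck-typed sum (hypothetical `my_sum`, sketched for illustration): it accepts any objects with an addition operator, which a declaration like `int: total` would forbid:

```python
def my_sum(items, start):
    # works for any objects supporting +, not just ints
    total = start
    for item in items:
        total = total + item
    return total

print(my_sum([1, 2, 3], 0))      # integers: 6
print(my_sum(["a", "b"], ""))    # strings: 'ab'
print(my_sum([[1], [2]], []))    # lists: [1, 2]
# declaring 'int: total' inside my_sum would break the last two calls
```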

--
Ben Sizer

Dec 15 '05 #76
On 2005-12-15, Ben Sizer wrote <ky*****@gmail.com>:

Antoon Pardon wrote:
On 2005-12-15, Ben Sizer wrote <ky*****@gmail.com>:
So? I answered a question. That my answer is not useful for
a specific purpose is very well possible, but is AFAIC irrelevant.


The point being made was that your declarations such as these:

int: a
object: b

would break the original idea (a module containing a sum() function
that can take any object that has an addition operator).


1) a declaration as

object: b

Wouldn't break the original idea, since b would be basically a python
object as it is now.

2) Sure a declaration as

int: a

would break the original idea, but that was just given as an example
of what kind of declarations one might possibly use. You are not obligated
to use declarations that limit you so much.
Inheritance
isn't good enough in this situation. I apologise if that isn't what you
were answering, but that seems to have been the thread context.


No I wasn't answering that. I was just trying to give an idea from
a different angle. People seem to think that one uses static typing
or inheritance typing or duck typing. IMO the possibility of inheritance
typing doesn't have to prevent duck typing.

--
Antoon Pardon
Dec 15 '05 #77
In article <1h**************************@mail.comcast.net>,
Alex Martelli <al***@mail.comcast.net> wrote:

Right, you can get good genericity with Haskell's typeclasses (I've
posted about that often in the past, and desperately and so far
unsuccessfully tried to convince Guido to use something close to
typeclasses rather than "interfaces" for such purposes as PEP 246
[protocol adaptation]); it's the state of _templates_ in Haskell,
specifically, which I was rather dubious about (it may be that I just
haven't dug into them deep enough yet, but they do seem not a little
"convoluted" to me, so far).


Hrm. I don't recall anything about typeclasses, so my suspicion is that
you were writing something lengthy and above my head. Can you write
something reasonably short about it? (I'm asking partly for your
benefit, because if it makes sense to me, that will likely go a long way
toward making sense to Guido -- we seem to think similarly in certain
ways.)
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

"Don't listen to schmucks on USENET when making legal decisions. Hire
yourself a competent schmuck." --USENET schmuck (aka Robert Kern)
Dec 15 '05 #78
<bo****@gmail.com> wrote:
...
[protocol adaptation]); it's the state of _templates_ in Haskell,
specifically, which I was rather dubious about (it may be that I just
haven't dug into them deep enough yet, but they do seem not a little
"convoluted" to me, so far).

Yup, templates are an afterthought and a point of discussion among
Lispers(?) too. I have no idea what they are intended for; there must be
some need for them, but it's definitely beyond what I can handle.


I believe that the basic idea is to make available a powerful "compile
time language" in parallel to the "runtime language" -- so in a sense
the motivation is related to that for macros in Lisp, at least if I grok
it correctly... but with more anchoring in the typesystem rather than in
syntactic aspects of the base language. As it turns out that C++'s
templates also make up a Turing-complete compile-time language anchored
in the (admittedly weaker/less elegant) typesystem of the base language,
it's not all that different "philosophically" (if my understanding is
correct). I gather that the next C++ standard will have "concepts" (in
the generic programming sense of the word) as a first-class construct,
rather than just as an abstraction to help you think of templates, so it
may be that the current distinctions will blur even further...
Alex
Dec 15 '05 #79
Aahz <aa**@pythoncraft.com> wrote:
...
Hrm. I don't recall anything about typeclasses, so my suspicion is that
you were writing something lengthy and above my head. Can you write
something reasonably short about it? (I'm asking partly for your
benefit, because if it makes sense to me, that will likely go a long way
toward making sense to Guido -- we seem to think similarly in certain
ways.)


Think of a typeclass as something "like an interface, more than an
interface" to which a type can easily be "adapted" by a third programmer
even if the two programmers who wrote the type and typeclass were
working separately and with no knowledge of each other -- not too far
from the vaguer idea of "protocol" I support in PEP 246 (which focuses
on the adaptation), except that in Haskell things happen at compile time
while in Python we prefer to avoid the strong distinction between
compile time and runtime.

You may think of a typeclass as akin to an abstract baseclass, because
it's not constrained to only giving the signatures of methods, it can
also supply some default implementations of some methods in terms of
others. Guido blogged in August about interfaces versus ABCs, not
remembering why he had once Pronounced about preferring ABCs, and in his
critique of ABCs he mentions that one weakness of their ability to
provide default implementations is that you have to decide about what is
the most fundamental subset, in whose terms the rest is implemented.
But *typeclasses do away with that need*. Consider (arbitrary
pythonesquoid syntax):

typeclass mapping:
    def __getitem__(self, key):
        _notthere = []
        result = self.get(key, _notthere)
        if result is _notthere: raise KeyError
        return result
    def get(self, key, default):
        try: return self[key]
        except KeyError: return default
    # etc etc

this LOOKS like mutual recursion, but since it's a typeclass it doesn't
mean that: it means __getitem__ may be defined (and then get uses the
provided default implementation unless overridden) OR get may be defined
(and then it's __getitem__ that may use the default implementation
supplied by the typeclass, or else override it).
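A rough Python analogue of that pattern is a mixin providing each default in terms of the other (names here are invented; note that, unlike a typeclass, nothing verifies that a subclass actually breaks the cycle):

```python
class MappingDefaults(object):
    """Each method has a default written in terms of the other."""
    _missing = object()
    def __getitem__(self, key):
        result = self.get(key, self._missing)
        if result is self._missing:
            raise KeyError(key)
        return result
    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

class ListBacked(MappingDefaults):
    """Supplying one method breaks the cycle; the other comes for free."""
    def __init__(self, pairs):
        self.pairs = list(pairs)
    def __getitem__(self, key):
        for k, v in self.pairs:
            if k == key:
                return v
        raise KeyError(key)

m = ListBacked([("x", 1)])
print(m.get("x"))      # 1, via the inherited default
print(m.get("y", 0))   # 0
```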

When you compile a typeclass you build a directed graph of dependencies
of methods on each other, which may include cycles; when you show how a
type adapts to a typeclass, you build a copy of that graph removing the
dependencies of those methods which do get explicitly implemented (in
the type or in the adapter) -- if the copy at the end of these
compilations still has cycles, or leaves (methods that the typeclass
requires and neither the type nor the adapter supply), then this raises
an exception (incomplete adaptation).
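That check might be sketched as a small graph algorithm (the `check_adaptation` helper is purely illustrative, not an actual implementation):

```python
def check_adaptation(dependencies, implemented):
    """dependencies: maps each typeclass method to the set of methods its
    default implementation relies on; implemented: methods supplied
    directly by the type or the adapter."""
    # drop methods that are explicitly implemented
    graph = dict((m, deps) for m, deps in dependencies.items()
                 if m not in implemented)
    satisfied = set(implemented)
    changed = True
    while changed:
        changed = False
        for m, deps in list(graph.items()):
            if deps <= satisfied:     # all dependencies already available
                satisfied.add(m)
                del graph[m]
                changed = True
    return not graph  # leftovers are unbroken cycles or missing leaves

deps = {"__getitem__": set(["get"]), "get": set(["__getitem__"])}
print(check_adaptation(deps, set(["get"])))  # True: the cycle is broken
print(check_adaptation(deps, set()))         # False: cycle remains
```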

Thus, a typeclass clearly shows the semantics intended for methods that
depend on each other, and conveniently lets you, the adapter's author,
choose what to implement -- the typeclass's author has not been forced
to pick the "most fundamental" methods. ABCs, or extremely handy mixins
such as UserDict.DictMixin, do force a programmer who knows nothing
about the internals of your class (the author of the ABC or mixin) to
pick "most fundamental" methods. Thus, typeclasses are more useful than
ABCs by as much as ABCs are more useful than (simply "syntactical")
interfaces -- coupled with adaptation mechanisms, the overall result can
be extremely handy (as any Haskell programmer might confirm).
Alex
Dec 15 '05 #80
> sorry for my ignorance, but after reading the Python tutorial on
python.org, I'm sort of, well surprised about the lack of OOP
capabilities in python. Honestly, I don't even see the point at all of
how OO actually works in Python.

For one, is there any good reason why I should ever inherit from a
class? ^^ There is no functionality to check if a subclass correctly
implements an inherited interface and polymorphism seems to be missing
in Python as well. I kind of can't imagine in which circumstances
inheritance in Python helps. For example:
Python IS Object Oriented, since everything is an object in Python,
even functions, strings, modules, classes and class instances.

But Python is also dynamically typed, so inheritance and polymorphism,
ideas coming from other languages, are not as important.
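A quick demonstration of the claim that everything is an object (the `shout` function is just an example):

```python
import math

# functions, modules and classes are themselves objects
print(isinstance(len, object))    # True
print(isinstance(math, object))   # True
print(isinstance(int, object))    # True

def shout(text):
    return text.upper()

shout.language = "en"             # objects can carry attributes
print(shout("hello"), shout.language)
```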
Please give me hope that Python is still worth learning


Python is different from C/C++, Java and co.
If you can get past that, you'll see for yourself whether it's worth learning.

Dec 15 '05 #81
In article <dn**********@panix1.panix.com>, aa**@pythoncraft.com (Aahz)
wrote:
In article <1h**************************@mail.comcast.net>,
Alex Martelli <al***@mail.comcast.net> wrote:

Right, you can get good genericity with Haskell's typeclasses (I've
posted about that often in the past, and desperately and so far
unsuccessfully tried to convince Guido to use something close to
typeclasses rather than "interfaces" for such purposes as PEP 246
[protocol adaptation]); it's the state of _templates_ in Haskell,
specifically, which I was rather dubious about (it may be that I just
haven't dug into them deep enough yet, but they do seem not a little
"convoluted" to me, so far).


Hrm. I don't recall anything about typeclasses, so my suspicion is that
you were writing something lengthy and above my head. Can you write
something reasonably short about it? (I'm asking partly for your
benefit, because if it makes sense to me, that will likely go a long way
toward making sense to Guido -- we seem to think similarly in certain
ways.)


Couple of points to hopefully augment Alex's response

- A type class may very well provide no implementation at all.
They certainly can, but they are still useful when they don't, and
indeed I think it's fair to say that isn't their main purpose.

- Doesn't hurt to recall that Haskell is a functional language.
Type classes have a superficial and maybe useful similarity to
object classes, but in Haskell, if data objects have methods,
they aren't aware of them, so to speak. If I define a new
type class for some reason, I may make instances for every
data type I can think of -- Int, Bool, (), whatever. Now do
these types suddenly sprout new methods? Not really! I just
have a set of functions that are now defined over all these
types. Outside the scope of these declarations, of course these
functions are unknown, and the data types are the same in either
case.

- If type classes are very commonly written in Haskell applications,
it's news to me. The need to define functions over a set of
data types in this way is relatively unusual - extremely useful
in the core language where you get Eq, Ord, Show and such basic
properties, but I think less needed higher up in the application
hierarchy. That might be an interesting philosophical question,
as a contrast between the basic world views of FP versus OOP, but
of course you'd want to check it with someone with a lot more
Haskell than I have.

Donn Cave, do**@u.washington.edu
Dec 15 '05 #82
Alex Martelli wrote:
Aahz <aa**@pythoncraft.com> wrote:
...
Hrm. I don't recall anything about typeclasses, so my suspicion is that
you were writing something lengthy and above my head. Can you write
something reasonably short about it? (I'm asking partly for your
benefit, because if it makes sense to me, that will likely go a long way
toward making sense to Guido -- we seem to think similarly in certain
ways.)


Think of a typeclass as something "like an interface, more than an
interface" to which a type can easily be "adapted" by a third programmer
even if the two programmers who wrote the type and typeclass were
working separately and with no knowledge of each other -- not too far
from the vaguer idea of "protocol" I support in PEP 246 (which focuses
on the adaptation), except that in Haskell things happen at compile time
while in Python we prefer to avoid the strong distinction between
compile time and runtime.

You may think of a typeclass as akin to an abstract baseclass, because
it's not constrained to only giving the signatures of methods, it can
also supply some default implementations of some methods in terms of
others. Guido blogged in August about interfaces versus ABCs, not
remembering why he had once Pronounced about preferring ABCs, and in his
critique of ABCs he mentions that one weakness of their ability to
provide default implementations is that you have to decide about what is
the most fundamental subset, in whose terms the rest is implemented.
But *typeclasses do away with that need*. Consider (arbitrary
pythonesquoid syntax):

typeclass mapping:
    def __getitem__(self, key):
        _notthere = []
        result = self.get(key, _notthere)
        if result is _notthere: raise KeyError
        return result
    def get(self, key, default):
        try: return self[key]
        except KeyError: return default
    # etc etc

this LOOKS like mutual recursion, but since it's a typeclass it doesn't
mean that: it means __getitem__ may be defined (and then get uses the
provided default implementation unless overridden) OR get may be defined
(and then it's __getitem__ that may use the default implementation
supplied by the typeclass, or else override it).

When you compile a typeclass you build a directed graph of dependencies
of methods on each other, which may include cycles; when you show how a
type adapts to a typeclass, you build a copy of that graph removing the
dependencies of those methods which do get explicitly implemented (in
the type or in the adapter) -- if the copy at the end of these
compilations still has cycles, or leaves (methods that the typeclass
requires and neither the type nor the adapter supply), then this raises
an exception (incomplete adaptation).

Thus, a typeclass clearly shows the semantics intended for methods that
depend on each other, and conveniently lets you, the adapter's author,
choose what to implement -- the typeclass's author has not been forced
to pick the "most fundamental" methods. ABCs, or extremely handy mixins
such as UserDict.DictMixin, do force a programmer who knows nothing
about the internals of your class (the author of the ABC or mixin) to
pick "most fundamental" methods. Thus, typeclasses are more useful than
ABCs by as much as ABCs are more useful than (simply "syntactical")
interfaces -- coupled with adaptation mechanisms, the overall result can
be extremely handy (as any Haskell programmer might confirm).
Alex


I don't see why your typeclass illustration does not apply to ABCs as
well? The sanity checks for typeclasses you describe seem to be a
particular feature of Haskell or other pure functional languages
without destructive updates and messy aliasing problems. It's hardly
believable that this would be possible in Python and runtime checks
won't help much in this case.

By the way, I also don't understand Guido's concerns. Maybe he is against
the whole idea of creating frameworks and partial implementations? It's
not very clear for what reasons he likes interfaces.

Kay

Dec 15 '05 #83
Kay Schluehr <ka**********@gmx.net> wrote:
...
typeclass mapping:
    def __getitem__(self, key):
        _notthere = []
        result = self.get(key, _notthere)
        if result is _notthere: raise KeyError
        return result
    def get(self, key, default):
        try: return self[key]
        except KeyError: return default
    # etc etc
... I don't see why your typeclass illustration does not apply to ABCs as well?
Because in a class, A or B or not, this code WOULD mean mutual recursion
(and it can't be checked whether recursion terminates, in general). In
a typeclass, it means something very different -- a dependency loop.
It's easy to check that all loops are broken (just as easy to check that
all purely abstract methods are implemented) in an adaptation.
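Concretely, transplanting the typeclass body into an ordinary class gives genuine, unbounded mutual recursion (sketch; `BrokenMapping` is an invented name):

```python
class BrokenMapping(object):
    """Both defaults defined in terms of each other; nothing breaks the cycle."""
    _missing = object()
    def __getitem__(self, key):
        result = self.get(key, self._missing)
        if result is self._missing:
            raise KeyError(key)
        return result
    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default

try:
    BrokenMapping()["x"]    # neither method overridden: recurses forever
except RuntimeError:        # RecursionError in modern Python
    print("mutual recursion never terminates")
```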
well? The sanity checks for typeclasses you describe seem to be a
particular feature of Haskell or other pure functional languages
without destructive updates and messy aliasing problems. It's hardly
believable that this would be possible in Python and runtime checks
won't help much in this case.
Checks would happen at adapter-registration time, which is runtime: but
so is the time at which a 'class' or 'def' statement executes, of
course. Little but syntax-checks happens at compile-time in Python.

Nevertheless the checks would help enormously, and I don't see why you
would think otherwise. Prototypes of this (a metaclass performing the
checks upon inheritance, i.e. non-abstract-subclass creation, but that's
the same algorithm that could be run at adapter registration time if we
got rid of the requirement of inheritance in favor of adaptation) have
been posted to this group years ago. If a programmer does weird dynamic
things to a type or an adapter after the checks, fine, they'll get
exceptions if they later try to call (directly or indirectly) some
method that's disappeared, or the like, just like they would with ABC's,
interfaces, inheritance, or whatever else -- very few types are
dynamically altered in this way in most production programs, anyway, so
I won't lose any sleep over that possibility.
By the way I also don't understand Guidos concerns. Maybe he is against
the whole idea of creating frameworks and partial implementations? It's
not very clear for what reasons he likes interfaces.


Just read his blog on artima -- and maybe, if something there is unclear
to you, comment about it to see if you can get him interested in
explaining better.
Alex
Dec 16 '05 #84
Alex Martelli wrote:
I don't see why your typeclass illustration does not apply to ABCs as well?
Because in a class, A or B or not, this code WOULD mean mutual recursion
(and it can't be checked whether recursion terminates, in general). In
a typeclass, it means something very different -- a dependency loop.
It's easy to check that all loops are broken (just as easy to check that
all purely abstract methods are implemented) in an adaptation.


I see. The problem is that a class *should* permit mutual recursion in
general while the semantics of a typeclass is different and a typeclass
instance must resolve the dependency loop introduced by the typeclass.
well? The sanity checks for typeclasses you describe seem to be a
particular feature of Haskell or other pure functional languages
without destructive updates and messy aliasing problems. It's hardly
believable that this would be possible in Python and runtime checks
won't help much in this case.
Checks would happen at adapter-registration time, which is runtime: but
so is the time at which a 'class' or 'def' statement executes, of
course. Little but syntax-checks happens at compile-time in Python.

Nevertheless the checks would help enormously, and I don't see why you
would think otherwise.


I don't think otherwise. I was just asking whether they are feasible.
The typeclass concept itself is very beautiful but it is structural and
I don't see how Python can deal with callgraph structures.
Prototypes of this (a metaclass performing the
checks upon inheritance, i.e. non-abstract-subclass creation, but that's
the same algorithm that could be run at adapter registration time if we
got rid of the requirement of inheritance in favor of adaptation) have
been posted to this group years ago. If a programmer does weird dynamic
things to a type or an adapter after the checks, fine, they'll get
exceptions if they later try to call (directly or indirectly) some
method that's disappeared, or the like, just like they would with ABC's,
interfaces, inheritance, or whatever else -- very few types are
dynamically altered in this way in most production programs, anyway, so
I won't lose any sleep over that possibility.


In the case of class instantiation we have inheritance hierarchies
resulting from the actions of metaclasses. But the hierarchy is still a
structural property of a collection of classes, i.e. an order on classes
that is exterior to any of the classes. The MRO can always be made
explicit. In the case of a function call graph that has to be analyzed for
dependencies, I don't see that such a simple distinctive scheme could
work. Or maybe it works and I just have no clue how completely opaque
call graphs as they appear in functions like this

def f(x):
    h = g(x)
    h()

can be tracked sufficiently at registration time in order to capture
dependencies. It is not clear what h can be unless g is called, and it
might be as hard as type inference, or completely impossible, to figure
this out without executing g. If, on the other hand, the checker can only
guarantee detection of a few obvious cycles whose dependencies can also
be tracked syntactically, at least in principle as in your example, what
exactly is ensured by the checker, and how can the definition of a
typeclass be adapted accordingly?

Kay

Dec 16 '05 #85
Kay Schluehr <ka**********@gmx.net> wrote:
...
work. Or maybe it works and I just have no clue how completely opaque
call graphs as they appear in functions like this

def f(x):
    h = g(x)
    h()

can be tracked sufficiently at registration time in order to capture
dependencies. It is not clear what h can be unless g is called, and it
might be as hard as type inference, or completely impossible, to figure
this out without executing g. If, on the other hand, the checker can only
guarantee detection of a few obvious cycles whose dependencies can also
be tracked syntactically, at least in principle as in your example, what
exactly is ensured by the checker, and how can the definition of a
typeclass be adapted accordingly?


My preference would be to have the dependency tracker mark explicit
dependencies only; here, if this was in a typeclass with the appropriate
additions of self., I'd mark f as dependent on g only. No real purpose
is served by allowing dependencies to be specified in a "cloaked" form,
anyway -- nor in going out of one's way to impede vigorous attempts by a
programmer to shoot himself or herself in the foot.
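Such explicit marking might look like a decorator (an entirely hypothetical sketch, not a real proposal):

```python
def depends_on(*names):
    """Hypothetical marker: the programmer states dependencies explicitly,
    instead of a checker inferring them from opaque call graphs."""
    def mark(func):
        func._depends_on = frozenset(names)
        return func
    return mark

class Example(object):
    @depends_on("g")
    def f(self, x):
        h = self.g(x)   # only the declared dependency on g is tracked
        return h()      # whatever g returned is not analyzed further
    def g(self, x):
        return lambda: x * 2

print(Example().f(21))                  # 42
print(sorted(Example.f._depends_on))    # ['g']
```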
Alex
Dec 16 '05 #86
Matthias Kaeppler wrote:
Hi,

sorry for my ignorance, but after reading the Python tutorial on
python.org, I'm sort of, well surprised about the lack of OOP
capabilities in python. Honestly, I don't even see the point at all of
how OO actually works in Python.

For one, is there any good reason why I should ever inherit from a
class? ^^ There is no functionality to check if a subclass correctly
implements an inherited interface and polymorphism seems to be missing
in Python as well. I kind of can't imagine in which circumstances
inheritance in Python helps. For example:

class Base:
    def foo(self): # I'd like to say that children must implement foo
        pass

class Child(Base):
    pass # works

Does inheritance in Python boil down to a mere code sharing?

And how do I formulate polymorphism in Python? Example:

class D1(Base):
    def foo(self):
        print "D1"

class D2(Base):
    def foo(self):
        print "D2"

obj = Base() # I want a base class reference which is polymorphic

This line is redundant. You don't appear to want to actually
create a Base object here, and the following code will ensure
that you end up having an 'obj' variable anyway.
if (<need D1>):
    obj = D1()
else:
    obj = D2()

OK. So now you have 'obj' referencing either a D1 or a D2. Both
D1 and D2 objects have a foo() method, so here is polymorphism.

There is no evidence of inheritance here though, because you
chose to override the only method that they could have inherited
from class Base. Now, if Base had also had a bar(self) method,
both would have inherited that.

Try this:

class Base:
    def __init__(self):
        self.value = 0
    def foo(self):
        print "Base", self.value
    def bar(self):
        self.value += 1

class D1(Base):
    def foo(self):
        print "D1", self.value

class D2(Base):
    def foo(self):
        print "D2", self.value

want1 = False
if want1:
    obj = D1()
else:
    obj = D2()

obj.foo()
obj.bar()
obj.foo()
Steve
Dec 16 '05 #87
