Bytes IT Community

which foo(x)

I was trying to write some polymorphic application code and found that
a superclass method implementation gets invoked when I expect a
subclass implementation. Here's how I have abstracted the problem.

I have a base class - ClassA. I have a subclass of ClassA - ClassB.
Both classes implement foo() - exact same method signature.

My application code instantiates ClassB (and its siblings) via a
marshaler, and only knows that the object it references is a
"kind of" ClassA. I simulate this as shown below.

ClassA a = (ClassA) Class.forName("ClassB").newInstance();

Now, if I send a.foo(), I see that the implementation overridden in
ClassB is invoked - as I had hoped and expected.
But really what I am aiming for is something with a different twist.

I have another class hierarchy with a base class - ClassX - and a
subclass - ClassY.

I have implemented ClassA.foo(ClassX) and ClassB.foo(ClassY).

Instances of ClassX and ClassY are created outside the context of the
code that invokes foo(...). All this code knows is that it's got a
reference to an object which is a "kind of" ClassX. I simulate this as
shown below.

ClassX x = (ClassX) Class.forName("ClassY").newInstance();

If, in this scenario, I send a.foo(x), I see that it is the super
implementation of foo(...) that is invoked, rather than the subclass
implementation of foo(...).

This was a big surprise to me. And it may seem as puzzling to some
that I found it surprising. After all, you note, the Java compiler
chose exactly the right implementation based on the information that I
provided.

I can more or less accept this, although I would argue that this
behavior makes polymorphic code fragile and difficult to debug, since
runtime behavior differs from what the language semantics seem to
promise due to ... "compiler optimization"? (Coming to Java from a
Smalltalk background, as I do, I find the complexity here a bit
bizarre.)

But wait. By this reasoning, shouldn't I have gotten the superclass
implementation of foo() in the first scenario?

Just by way of exploring this puzzle a little further, I added another
scenario. Suppose I have a variable declared as "ClassB b" and send
"b.foo(x)" (where, as above, 'x' is an instance of ClassY). This is at
least consistent with scenario #2 - I get the super implementation
because the compiler doesn't know that 'x' is actually a reference to
an instance of ClassY.

So far, then, it seems like scenario #1 is a bug, since - although
it's what I wanted - it's inconsistent with the other two scenarios.
Is there another explanation?

-rht

public class Testcase5 {

    public static void main(String argv[]) {
        ClassA a;
        ClassB b;
        ClassX x;
        try {
            a = (ClassA) Class.forName("ClassB").newInstance();
            b = (ClassB) Class.forName("ClassB").newInstance();
            x = (ClassX) Class.forName("ClassY").newInstance();
        } catch (Exception e) {
            System.err.println(e.getMessage());
            return;
        }

        b.foo();  // out: "B.foo()"
        a.foo();  // out: "B.foo()"

        b.foo(x); // out: "A.foo(x)"
        a.foo(x); // out: "A.foo(x)"
    }
}
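The class definitions themselves are not included in the post; a minimal reconstruction consistent with the "out:" comments in Testcase5 (the names and method bodies are my guesses, not the original poster's code) could be:

```java
// Reconstructed hierarchy (an assumption, not from the original post);
// the printed strings are chosen to match the comments in Testcase5.
class ClassX { }
class ClassY extends ClassX { }

class ClassA {
    public void foo()         { System.out.println("A.foo()"); }
    public void foo(ClassX x) { System.out.println("A.foo(x)"); }
}

class ClassB extends ClassA {
    // Same signature as ClassA.foo(): a true override.
    public void foo()         { System.out.println("B.foo()"); }
    // Narrower parameter than ClassA.foo(ClassX): an overload, not an override.
    public void foo(ClassY y) { System.out.println("B.foo(y)"); }
}
```

With these definitions, both b.foo(x) and a.foo(x) print "A.foo(x)", because the only applicable method for a static ClassX argument is ClassA.foo(ClassX).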
Jul 17 '05 #1
2 Replies



Hi,

(Sorry for this late answer, I'm not reading the forum regularly)
I was trying to write some polymorphic application code and found that
a superclass method implementation gets invoked when I expect a
subclass implementation. Here's how I have abstracted the problem.


You did a good analysis of the situation in Java. The key you seem to
be missing in order to understand the "logic" behind Java's behaviour is
the following. In Java, a method call has the form
x.meth(a);
meth is a method name, and x is called the receiver of the call. There
could be more than one argument a, but that does not matter here.
The code to execute for this call is chosen like this:
1) during compilation, the compiler looks at the (static) type of a,
and searches the class C which is the (static) type of x for a method
named "meth" that matches the type of a (not directly relevant, but for
the sake of completeness: it also considers methods defined in
superclasses of C, and if more than one method is possible, it chooses
the most specific). Let's call the result of this search method M.
2) during execution, there is a mechanism to "refine" this choice by
looking at the actual class of the object that x refers to. That class
is guaranteed to be either C or a subclass of C. If it is a subclass D,
then a method with the same signature as M might have been defined in D,
or somewhere between C and D. In that case, that method will be called,
not M. However, if a method has a more precise signature than M because
it accepts only a subclass of the original argument class, it is not
considered, even when that more precise method would be applicable to
the runtime argument of the call. It is this last situation that
surprised you in your example.

So you can see that there is a fundamental asymmetry in a method call:
the type of the receiver is checked during execution to find a more
precise method, but the types of the arguments are not.
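Daniel's two steps can be sketched concretely. In this sketch the class and method names (Receiver, SubReceiver, Arg, SubArg, meth) are mine, chosen only to mirror the description above:

```java
// Sketch of Java's single dispatch; all names here are illustrative.
class Arg { }
class SubArg extends Arg { }

class Receiver {
    String meth(Arg a) { return "Receiver.meth(Arg)"; }
}

class SubReceiver extends Receiver {
    // Same signature as Receiver.meth(Arg): a true override,
    // found by the runtime "refinement" step.
    @Override
    String meth(Arg a) { return "SubReceiver.meth(Arg)"; }

    // Narrower argument type: an overload, NOT an override,
    // so the runtime step never considers it.
    String meth(SubArg a) { return "SubReceiver.meth(SubArg)"; }
}
```

With `Receiver x = new SubReceiver(); Arg a = new SubArg();`, the call `x.meth(a)` runs `SubReceiver.meth(Arg)`: the receiver's runtime class is consulted, but `meth(SubArg)` is skipped because the argument's static type is `Arg`.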

The behaviour of Java, C++, Eiffel and most object-oriented languages is
called "single dispatch" because they dispatch (choose the method to
execute) based on the type of a single argument. A few other OO
languages use "multiple dispatch", because, you guessed it, they use the
type of multiple arguments to decide what method to execute. To name a
few: Cecil, Dylan, CLOS. I am myself the main designer of a language
called Nice, which is a variation on Java that supports multiple
dispatch. For instance, thanks to multiple dispatch, you can write in Nice:

class Point
{
    private int x;

    equals(Point that) {
        return this.x == that.x;
    }
}

while Java programmers need to write:

class Point
{
    private int x;

    public boolean equals(Object that) {
        if (that instanceof Point) {
            Point thatPoint = (Point) that;
            return this.x == thatPoint.x;
        } else {
            return super.equals(that);
        }
    }
}

Both versions are strictly equivalent, but I believe the Nice version is
easier to write and to read ;-)

If you want to find out more, Nice's homepage is http://nice.sf.net

Daniel

Jul 17 '05 #2

Daniel,

Thanks for your response.

With the help of the responses to my posting, I have worked my way
towards a rationalization of this behavior.

At compile time I indicated that I wanted the method with signature
foo(ClassX) to be invoked. In implementing ClassB.foo(ClassY) I was
thinking that I was overriding ClassA.foo(ClassX).

Sloppy thinking on my part. Conceptually, the latter cannot be an
override of the former precisely because it is more restrictive. (I
suppose I was expecting the runtime to treat it as an override when it
"could" be treated as such.) Moreover, the latter has no special
relationship in Java to the former at all; it is simply a different
method with the same name.

The fact that Java allows me to reuse the name in this scenario (where
ClassY extends ClassX) is what I would call a "pitfall", because my
mistake seems to me like an easy mistake to make.

So I need to train myself to do something like what's done in your
Point example: implement a method with precisely the same signature,
test the argument, and cast or delegate to super as appropriate.
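Applied to the thread's hierarchy, that pattern might look roughly like this (a sketch; the return strings and the ClassY-specific branch are invented for illustration):

```java
// Sketch of the fix: override with the SAME signature as the superclass
// method, then branch on the runtime type of the argument.
class ClassX { }
class ClassY extends ClassX { }

class ClassA {
    String foo(ClassX x) { return "A.foo(ClassX)"; }
}

class ClassB extends ClassA {
    // Identical signature to ClassA.foo(ClassX), so this IS an override
    // and is reached even through a ClassA reference.
    @Override
    String foo(ClassX x) {
        if (x instanceof ClassY) {
            ClassY y = (ClassY) x;
            return "B.foo(ClassY)";  // hypothetical ClassY-specific behaviour
        }
        return super.foo(x);         // everything else: defer to the superclass
    }
}
```

Now a.foo(x) reaches the ClassY-specific branch at run time, even though the compiler only ever selected the foo(ClassX) signature.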

Thanks,

-Rick

Jul 17 '05 #3
