I'm on a team building some class libraries to be used by many other
projects.
Some members of our team insist that "All public methods should be virtual"
just in case "anything needs to be changed". This is very much against my
instincts. Can anyone offer some solid design guidelines for me?
Thanks in advance....
> > Implementation and interface though, be it for a single class or for any code library, are logically one and the same. Separating them physically does not bring any gain as far as maintainability is concerned.
It does. It makes clients independent of the implementation. They only depend on the interface.
It is not the physical separation of interface and implementation that makes
clients independent of a class's implementation, it is you not messing with
the interface that makes clients independent of the implementation.
I rarely ever change a header. (I'd be dead within a week if I changed interfaces half as often as I change their implementations. People would be queueing at the table tennis room while their machines were locked up with the compiler running, and they'd have a _lot_ of time to consider what bad things to do to me. :o> )
And your point is? Whether you change interfaces a lot or not highly depends
on the phase of the project and on your role in it. If your interfaces are
established, good for you.
Working on just the interface without touching the implementation is pointless and if you are changing
the implementation you don't want anyone else to mess with the interface. If anyone did want to mess with the interface he would most certainly want
to mess with the implementation as well.
Although I think it is possible, it is uncommon to lock files using CVS. I have never done it and haven't heard of anyone doing it here.
I am not sure what you mean by CVS. When I say "lock" I am referring to a source control system like SourceSafe, MKS or PVCS.
You should lock both interface and implementation to make sure no one else will change the interface while you are working on the implementation. They are one. I know that it is unlikely someone will, I was just making the point that interface and implementation are logically one (we are still talking about the source code for a class). Mind that the idea of .c and .h files stems from the pre-OO days. People mainly extended that to .cpp and .h because they were used to working that way. For class definitions there is no point in doing so as far as maintainability is concerned.
You're kidding, aren't you? Suppose I have a class 'X', used in just about every part of a big project. Now I found a bug in 'X::f()' and need to fix that. Why would I need to lock/change the interface?
Just lock, not change. With C#, everything being in one file now is not just because there is no longer a practical need for separation; it is a design decision based on the notion that a class definition is a self-contained unit. Think of the problems that would arise for the namespace system if it were different.
I don't know C# and I don't know its namespace system. What I know is this: If I cause everybody here to recompile just because I fix some implementation, I'd be in serious trouble.
We don't have a disagreement, you just misunderstood what I was saying.
Joining interface and implementation in one file does not break the
interface when you update the implementation.
Martin.
> I like a separate file with an overview of what your interface for a class is (speaking about implementation of classes). Now that I code Java sometimes, I really miss the headers, and the feeling doesn't go away.
The problem is that such an overview cannot be generated from just a source file, because the source isn't stable and could contain bugs that prevent it from being generated. A sidebar with an alphabetically sorted list of members and such is, to my feeling, not enough. A header file that groups functionality and members gives me a nicer overview.
I haven't done any substantial C# project yet, so I haven't felt it yet, but I may feel the same in the near future. I hope I will just get used to it; the syntax has become clearer in the way that things are declared locally, with fewer surprises or pitfalls. But I agree the overall code image seems to have suffered. For one thing, I prefer this:
TMyThingy = class(TObject)
private
  Field1: Integer;
  Field2: string;
  procedure method1();
  procedure method2();
protected
  procedure method3();
public
  constructor Create;
  destructor Destroy;
  procedure method4();
end;
over this:
TMyThingy = class(TObject)
  private Field1: Integer;
  private Field2: string;
  private procedure method1();
  private procedure method2();
  protected procedure method3();
  public constructor Create;
  public destructor Destroy;
  public procedure method4();
end;
The sections are lost, it's not pretty! :-(
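I suppose the closest thing in C# would be grouping members under #region blocks, which at least gives the sections back visually. Just a sketch, with the class made up for the example:

    public class MyThingy
    {
        #region Private
        private int field1;
        private string field2;

        private void Method1() { }
        private void Method2() { }
        #endregion

        #region Protected
        protected void Method3() { }
        #endregion

        #region Public
        public MyThingy() { }
        public void Method4() { }
        #endregion
    }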
Martin.
Martin Maat [EBL] <du***@somewhere.nl> wrote: [...] It makes clients independent of the implementation. They only depend on the interface. It is not the physical separation of interface and implementation that makes clients independent of a class's implementation, it is you not messing with the interface that makes clients independent of the implementation.
So when you check in a C# file that you
have changed without changing anything
of the interface, does everybody have
to recompile the dependent modules? I rarely ever change a header. [...]
And your point is? [...]
Why should I lock a header if I won't
change it??? Working on just the interface without touching the implementation is pointless and if you are changing the implementation you don't want anyone else to mess with the interface. If anyone did want to mess with the interface he would most certainly want to mess with the implementation as well.
Although I think it is possible, it is uncommon to lock files using CVS. I have never done it and haven't heard of anyone doing it here.
I am not sure what you mean by CVS. When I say "lock" I am referring to a source control system like SourceSafe, MKS or PVCS.
CVS, the version control system.
(www.cvshome.org)
After years of trouble with VSS, we
switched to CVS. I was not happy about
that decision back then (although I
wanted to get rid of VSS as soon as
possible). But I learned to like CVS.
You should lock both interface and implementation to make sure no one else will change the interface while you are working on the implementation. They are one. I know that it is unlikely someone will, I was just making the point that interface and implementation are logically one (we are still talking source code for a class).
As I said, I never lock anything. If
anyone changed anything I was working
on, I merge this into my local version
(and test it) before I check in the
file(s).
[...] Suppose I have a class 'X', used in just about every part of a big project. Now I found a bug in 'X::f()' and need to fix that. Why would I need to lock/change the interface? Just lock, not change.
Why would I need to lock the interface?
[...] Joining interface and implementation in one file does not break the interface when you update the implementation.
I don't know if C# prevents recompilation
of dependend modules if you change a module
without changing any declarations. Maybe it
does so.
However, with the interface/implementation
separated, you have a nice check to make
sure you didn't change anything accidentally,
as your implementation most often won't
compile if you did.
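I don't really know C#, but I would
guess the closest equivalent of that
check is putting the contract into an
interface in its own file; the class
then stops compiling if its public
surface drifts. A rough sketch (the
type names are invented for the example):

    // IParser.cs -- the published contract, touched rarely
    public interface IParser
    {
        int Parse(string text);
    }

    // Parser.cs -- the implementation, free to change internally.
    // If someone accidentally renames Parse or changes its signature,
    // the class no longer implements IParser and the build breaks here,
    // much like an implementation going out of sync with its header.
    public class Parser : IParser
    {
        public int Parse(string text)
        {
            return int.Parse(text.Trim());
        }
    }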
Martin.
Schobi
-- Sp******@gmx.de is never read
I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people."
Scott Meyers
> > It is not the physical separation of interface and implementation that
makes clients independent of a class's implementation, it is you not messing
with the interface that makes clients independent of the implementation.
So when you check in a C# file that you have changed without changing anything of the interface, does everybody have to recompile the dependent modules?
If they care for the update they may want to recompile. If they don't, they
will not recompile. I'm not sure I understand what you are worried about.
Why should I lock a header if I won't change it???
To make sure that once you check in your updated implementation and unlock
your interface, the result will be consistent. You will be sure the
interface that matches the implementation has not been changed.
CVS, the version control system. (www.cvshome.org)
Are you dyslexic by any chance? Ah, CVS stands for "Concurrent Versions
System". What was so bad about Visual SourceSafe?
As I said, I never lock anything. If anyone changed anything I was
working on, I merge this into my local version (and test it) before I checkin
the file(s).
Oh, that's nice. Why not keep everything on your local machine and ship that
version once it is time to release?
You are proving my point. If you do lock the stuff that belongs together no
one can change anything while you're working on it. Suppose I have a class 'X', used in just about every part of a big project. Now I found a bug in 'X::f()' and need to fix that. Why would I need to lock/change the interface?
Just lock, not change.
Why would I need to lock the interface?
I give up, you win.
I don't know if C# prevents recompilation of dependent modules if you change a module without changing any declarations. Maybe it does so.
You seem awfully afraid of recompilation. Could you be working on one of those big monolithic applications in which everything depends on everything, where any change causes a chain reaction in the C++ compiler, rendering your machine useless for 20 minutes? Could it be someone influential in your company declared "Build All" the only way to compile and had the compile option removed from all of your Visual Studio installations? I don't see the problem; recompilation of one class should not be an issue.
Martin.
Martin Maat [EBL] <du***@somewhere.nl> wrote: [...] So when you check in a C# file that you have changed without changing anything of the interface, does everybody have to recompile the dependent modules? If they care for the update they may want to recompile. If they don't, they will not recompile. I'm not sure I understand what you are worried about.
I am worried about having to recompile
files that depend on the interface of a
module when only its implementation
changed. Why should I lock a header if I won't change it???
To make sure that once you check in your updated implementation and unlock your interface, the result will be consistent. You will be sure the interface that matches the implementation has not been changed.
I am sure, since I merge any changes
into my code before I check it in. CVS, the version control system. (www.cvshome.org)
Are you dyslexic by any chance?
???
Ah, CVS stands for "Concurrent Versions System". What was so bad about Visual SourceSafe?
We needed to run analyze every night and
still lost changes since it didn't repair
everything the clients messed up. Branching
is a PITA in VSS. Remote access, too.
AFAIK, MS doesn't use VSS for their own
software. And I can understand this. As I said, I never lock anything. If anyone changed anything I was working on, I merge this into my local version (and test it) before I checkin the file(s).
Oh, that's nice. Why not keep everything on your local machine and ship that version once it is time to release?
Because that would hurt.
We have dedicated build machines for
this. If I need a release, a script
will do a clean check out on a label,
build this version, run some tests,
and build the installer.
The result will eventually be shipped.
You are proving my point. If you do lock the stuff that belongs together no one can change anything while you're working on it.
Since you haven't even heard of CVS,
how do you think you can judge the way
people are used to working with it?
The way I described, that is.
(Have you ever looked at SourceForge?
Can you imagine this being done using
VSS? With the developers spread all
over the world?) > Suppose I have a class 'X', used in just > about every part of a big project. Now I > found a bug in 'X::f()' and need to fix > that. Why would I need to lock/change the > interface? Just lock, not change. Why would I need to lock the interface?
I give up, you win.
No, it was a serious question.
I don't know if C# prevents recompilation of dependent modules if you change a module without changing any declarations. Maybe it does so.
You seem awfully afraid of recompilation. Could you be working on one of those big monolithic applications in which everything depends on everything, where any change causes a chain reaction in the C++ compiler, rendering your machine useless for 20 minutes? Could it be someone influential in your company declared "Build All" the only way to compile and had the compile option removed from all of your Visual Studio installations? I don't see the problem; recompilation of one class should not be an issue.
Currently I work on a 700kLOC+ app. Of
this, 500kLOC+ were written in-house. And
that's not old, longish C code. All C++,
the oldest code ~5 years old, much of it
done one year ago; a lot of care went into
preventing redundancy and dependencies,
and into clean modularization. (We test
modules individually, after all). Yet, a
full rebuild of all the projects involved
takes ~60min. If I ever wanted to rebuild
all the test projects, too, it would take
another few hours.
This is one of those projects, where, if
you are going to do some major changes to
some part of it, you first write a test
program for this, then do your changes,
and test them in the main app only after
you are sure the (re-)design is done and
you found most of the mistakes you made.
It simply is faster this way.
Recompilation of a class' implementation
is no issue. OTOH, changing the interface
of a class is something not done without
consideration in such a project, as it
forces all dependent modules to be re-
compiled.
I definitely wouldn't want to do such
an app using a language where a change in
the implementation of one class forces a
recompilation of all modules depending
on its interface.
Martin.
Schobi
-- Sp******@gmx.de is never read
I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people."
Scott Meyers
> First, virtual methods do not come free, they perform worse than non-virtual methods.
This is a generalized statement regarding an *IMPLEMENTATION* of the language,
not the language itself.
Another thing. If something is declared virtual, that is a statement on the part of the designer. It implies some generic behavior that may need to be altered somehow for any derived class in order to obtain the desired...
It doesn't/shouldn't imply anything. If the user relies on implications, they
are going to have problems with their code regardless of the construct at hand.
Remember, that virtual doesn't mean that the overriding method has to do
*anything*, it may be simply tracking, monitoring, or triggering a response.
These are all *VALID* uses of virtual, and have absolutely NOTHING to do w/
changing behavior.
Imagine every public method of a fairly complex class being virtual. Most of them will implement fixed behavior that is not supposed to be overridden.
If they MUST NOT be overridden for **ANY** reason, then they shouldn't be
virtual. If they shouldn't be overridden in such a way as to change behavior,
they should be virtual and that fact should be in the documentation.
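For what it's worth, here is roughly what that would look like in C#, with the contract spelled out in XML documentation comments (the class and wording are invented for the example):

    using System;

    /// <summary>Writes one record to the log.</summary>
    public class LogWriter
    {
        /// <summary>
        /// Formats and writes <paramref name="message"/>.
        /// Overrides may add monitoring or tracing, but must call the base
        /// implementation and must not change what gets written.
        /// </summary>
        public virtual void Write(string message)
        {
            Console.WriteLine(DateTime.Now + " " + message);
        }
    }

    public class CountingLogWriter : LogWriter
    {
        public int Calls;   // how often Write was invoked

        public override void Write(string message)
        {
            Calls++;                // monitoring only...
            base.Write(message);    // ...the documented behaviour is preserved
        }
    }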
It would only invite developers to screw things up and they would not understand what is expected of them.
Then document it better. Like I said, if they have to rely on implications or
assumptions, they are going to screw it up, regardless of virtuality.
** Don't protect me from myself **
Martin Maat wrote: Ken,
I'm on a team building some class libraries to be used by many other projects.
Some members of our team insist that "All public methods should be virtual" just in case "anything needs to be changed". This is very much against my instincts. Can anyone offer some solid design guidelines for me?
They are missing the point of object orientation. The first and foremost benefit is not "ultimate flexibility", nor is it re-use. The main benefits are control of complexity, a 1:1 mapping of the real world to a model, and comprehensibility.
First, virtual methods do not come free, they perform worse than non-virtual methods. Now this may be an issue and it may not, depending on the kind of application and the way the methods are used.
Another thing. If something is declared virtual, that is a statement on the part of the designer. It implies some generic behavior that may need to be altered somehow for a derived class in order to obtain the desired behavior. It helps the developer understand the problem domain. Declaring everything virtual is bad because the developer will wonder how he should deal with the method in a derived class. Must he override it? Must he call the inherited implementation? Before or after his own actions? Can he leave it as it is? If I were that developer and had been thinking this over, trying to understand the purpose of a particular virtual method without figuring out how to deal with it, and I finally went to the designer of the base class to ask why, and he said "For no particular reason, I just couldn't be bothered thinking too hard about the consequences of sealing it, so I left it all open for you, providing ultimate flexibility, so you can do the thinking I could not be bothered with, aren't you happy?", then I would not be happy.
Imagine every public method of a fairly complex class being virtual. Most of them will implement fixed behavior that is not supposed to be overridden. It would only invite developers to screw things up and they would not understand what is expected of them.
Finally, if at some point "something needs to be changed" and polymorphism would be the answer, then that would be the right moment to open the base class's source and change the declaration of the particular method to protected virtual (not public, heaven forbid).
I read the discussion on private virtual methods too. While some languages may technically allow them, they don't make sense. In Delphi, for instance, you can declare both the base class's virtual methods and the overriding derived class's methods private, but that only compiles as long as the base class and the derived class are in the same source file. Once the derived class is in a different source file, all the base class's private methods are invisible and the project won't compile. Needless to say, only little projects have all their classes declared in the same source file. Since it only works if you put everything together or make everything a friend of everything else, it is absolutely pointless: in those situations you can access anything in your base class from anywhere anyway, so it is as good as putting everything in the same class right away and not deriving at all.
So the boys are wrong, you are right. Rub it in.
Martin.
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
Bret Pehrson <br**@infowest.com> wrote: First, virtual methods do not come free, they perform worse than non-virtual methods. This is a generalized statement regarding an *IMPLEMENTATION* of the language, not the language itself.
Can you tell us about any implementation
where this isn't true? Or even only
describe how such an implementation would
work? Another thing. If something is declared virtual, that is a statement on the part of the designer. It implies some generic behavior that may need to be altered somehow for any derived.class in order to obtain the desired...
It doesn't/shouldn't imply anything. If the user relies in implications, they are going to have problems with their code regardless of the construct at hand.
Remember, that virtual doesn't mean that the overriding method has to do *anything*, it may be simply tracking, monitoring, or triggering a response. These are all *VALID* uses of virtual, and have absolutely NOTHING to do w/ changing behavior.
The only reason to make a function virtual
is to allow it to be overridden. Overriding
a function is changing behaviour. Imagine every public method of a fairly complex class being virtual. Most of them will implement fixed behavior that is not supposed to be overridden.
If they MUST NOT be overridden for **ANY** reason, then they shouldn't be virtual. If they shouldn't be overridden in such a way as to change behavior, they should be virtual and that fact should be in the documentation.
see above It would only invite developers to screw things up and they would not understand what is expected of them.
Then document it better. Like I said, if they have to rely on implications or assumptions, they are going to screw it up, regardless of virtuality.
If a design expresses its intention, that's
a lot better than having to read a lot of
documentation in order to understand the
intention.
** Don't protect me from myself **
Don't expect us to carefully read documents
that contradict your code.
[...]
Schobi
-- Sp******@gmx.de is never read
I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people."
Scott Meyers
> Can you tell us about any implementation where this isn't true? Or even only describe how such an implementation would work?
I'm not a compiler writer, nor a hardware design engineer. The fact remains, the original statement is about an implementation (whether or not it applies to all current implementations is irrelevant).
With hardware advanced as it is, especially w/ predictive processing/branching,
you can't assume anything about performance.
The only reason to make a function virtual is to allow it to be overridden. Overriding a function is changing behaviour.
Not true.
class A
{
public:
    virtual void a()
    {
        // do something
    }
};

class B : public A
{
public:
    virtual void a()
    {
        A::a();                // keep the base behaviour
        trace("processing a"); // then just log the call
    }
};
This doesn't change behavior, but is a very valid and real-world case of
virtual overrides.
Don't expect us to carefully read douments that contradict your code.
???
I do expect you to carefully read documents that define the behavior, usage,
and intent of my interfaces.
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>> di********@discussion.microsoft.com wrote: And what language do you think will be used most in the CLI world and where the jobs are :D C#
You don't get it, do you? There are classes of applications for which
C#/Java is either too expensive in terms of ROM/RAM footprint (systems where
the hardware costs are considerably higher than those for the software) or
simply the wrong choice (hard realtime systems, device drivers). For these
applications C++ will be *the* language for *at least* another decade. C#
and Java are here to stay and that's a good thing but these languages will
not be able to take over everything from C++.
*Every* language has its pros and cons and language wars are thus just
plain pointless, especially when you argue with beliefs instead of technical
facts...
Regards,
Andreas
No but hey, tell that to the employers out there advertising for C# skills
:D
What do I know eh.
Sure, with hard real-time you don't want dynamic memory allocation, obviously, captain obvious. Those are specialized cases; for the more run-of-the-mill applications C# is perfect, with short project cycles (which are more and more common), manageability of bugs, and the current drive for more security via a managed environment.
Of course C++ is still used, but watch C# take the mainstream applications and C++ the specific interop and time/memory critical applications. C# will be the dominant language in the managed world and C++/CLI will cover specific needs, however this will be kept and should be kept to a minimum as the manageability of that code is too tangled and messy.
A lot of the redesigns of applications I have worked on are all going C#. They are moving away from proprietary languages like VB and Java towards more standardized ones, for level playing fields and less lock-in from the likes of MS and Sun.
"Andreas Huber" <ah****@gmx.net> wrote in message
news:40********@news.bluewin.ch... di********@discussion.microsoft.com wrote: And what language do you think will be used most in the CLI world and where the jobs are :D C# You don't get it, do you? There are classes of applications for which C#/Java is either too expensive in terms of ROM/RAM footprint (systems
where the hardware costs are considerably higher than the one for software) or simply the wrong choice (hard realtime systems, device drivers). For these applications C++ will be *the* language for *at least* another decade. C# and Java are here to stay and that's a good thing but these languages will not be able to to take over everything from C++.
*Every* language has it's pros and cons and language wars are thus just plain pointless, especially when you argue with beliefs instead of
technical facts...
Regards,
Andreas
> > Can you tell us about any implementation where this isn't true? Or
even only describe how such an implementation would work?
I'm not a compiler writer, nor a hardware design engineer. The fact
remains, the original statement is about an implementation (whether or not it
applies to all current implementations or not is irrelevant).
Ah, "I am ignorant so you can't touch me". The trouble with that philosophy
is that there will be other ignorant but nonetheless interested and eager to
learn people taking note of your blunt and uninformed statements. So please
be a little more cautious.
With hardware advanced as it is, especially w/ predictive
processing/branching, you can't assume anything about performance.
Polymorphism costs, no matter what the technology will be. There are more
entities involved (memory, lookup tables) and more steps to be taken
(processing). You might argue "for my application this is insignificant" or
"I don't care" but no technology is going to equalize the difference.
The only reason to make a function virtual is to allow it to be overridden. Overriding a function is changing behaviour.
Not true.
And then you provide an example demonstrating just what you are denying.
Don't expect us to carefully read documents that contradict your code.
???
I do expect you to carefully read documents that define the behavior, usage, and intent of my interfaces.
Documentation should be the second line of support. Your code is the first.
The point made is that it isn't bad if your code leaves one wondering so one
will fall back onto the documentation. What is bad though is when your code
suggests something that isn't true so one will think one understands and one
will proceed with the wrong idea. Sort of like reading one of your posts and
not reading the responses to it because the post was so sure and
self-confident that the guy obviously knew what he was talking about.
Martin.
I just developed a process control system in C# in a matter of months from
design to final shipping.
The performance was on par with unmanaged code; it was half of what our requirements allowed and more than acceptable, actually more than we expected, and we haven't even done a performance optimization pass on it. This is a very real-world, time-critical application (cycle times in an automated environment with lots of variables like lighting and continual movement of items).
There is no need for the high risk of unmanaged C++ with its long development times for this application. C# is more than adequate. This is a time-critical application: automation, robotics and vision. Cycle times are very, very important. This just strengthens my confidence in C# as a real alternative for high-performing real-world automation.
Bret Pehrson <br**@infowest.com> wrote: [...] The only reason to make a function virtual is to allw it to be overridden. Overriding a function is changing behaviour. Not true.
class A { public: virtual void a() { // do something } };
class B : public A { public: virtual void a() { A::a(); trace("processing a"); } }
This doesn't change behavior, but is a very valid and real-world case of virtual overrides.
This does change behaviour. (And I don't
think I want/need to tell you how.) Don't expect us to carefully read documents that contradict your code.
???
I do expect you to carefully read documents that define the behavior, usage, and intent of my interfaces.
I expect to read your headers, see and
recognize common patterns, understand
your identifiers, and use this interface
as it is with as little need for looking
it up in the docs as possible. If you
don't provide that, then that's one darn
good reason to look for another provider.
[...]
Schobi
-- Sp******@gmx.de is never read
I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people."
Scott Meyers
Excellent post. Ken, you can go ahead and follow your instincts.
Bruno.
"Martin Maat" <du***@somewhere.nl> a écrit dans le message de
news:10*************@corp.supernews.com... Ken,
> Ah, "I am ignorant so you can't touch me".
Try re-reading. My original point was that you can't assume anything about
performance, because it is strictly tied to the implementation (and underlying
hardware).
I have yet to read *anything* in either the C or C++ spec that deals w/
performance. The thread is about the positives and negatives of MI, not about implementation specifics or performance. The only reason to make a function virtual is to allow it to be overridden. Overriding a function is changing behaviour. Not true. And then you provide an example demonstrating just what you are denying.
Nay, my good friend. My example does NOT change the behavior of the
**CLASS**. Perhaps you would care to elaborate as to why you think it does...
Documentation should be the second line of support. Your code is the first.
Your code is the first line if you are working *in the code*. Although not explicitly defined as such, this thread has been primarily limited to the presumption that only the interface is available (i.e. header files). My
documentation comments are based on that.
Honestly, with the exception of the performance issue(s) (which I feel don't
belong in this discussion), the only reasons that I've heard against MI are
'poor style' (with absolutely no justification) and 'bad documentation' --
funny thing is that neither of these reasons have anything specifically to do
w/ MI, but are really effects of deeper problems and/or a lack of discipline,
schooling, etc.
Excluding the 'dreaded diamond' issue of MI, can someone substantively say why
MI should *NOT* be part of a language???
"Martin Maat [EBL]" wrote: Can you tell us about any implementation where this isn't true? Or even only describe how such an implementation would work? I'm not a compiler writer, nor a hardware design engineer. The fact remains, the original statement is about an implementation (whether or not it applies to all current implementations or not is irrelevant).
Ah, "I am ignorant so you can't touch me". The trouble with that philosophy is that there will be other ignorant but nontheless interested and eager to learn people taking note of your blunt and uninformed statements. So please be a little more cautious.
With hardware advanced as it is, especially w/ predictive processing/branching, you can't assume anything about performance.
Polymorphism costs, no matter what the technology will be. There are more entities involved (memory, lookup tables) and more steps to be taken (processing). You might argue "for my application this is insignificant" or "I don't care" but no technology is going to equalize the difference. The only reason to make a function virtual is to allow it to be overridden. Overriding a function is changing behaviour. Not true.
And then you provide an example demonstrating just what you are denying. Don't expect us to carefully read douments that contradict your code.
???
I do expect you to carefully read documents that define the behavior, usage, and intent of my interfaces.
Documentation should be the second line of support. Your code is the first. The point made is that it isn't bad if your code leaves one wondering so one will fall back onto the documentation. What is bad though is when your code suggests something that isn't true so one will think one understands and one will proceed with the wrong idea. Sort of like reading one of your posts and not reading the responses to it because the post was so sure and self-confident that the guy obviously knew what he was talking about.
Martin.
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
> This does change behaviour. (And I don't think I want/need to tell you how.)
Then don't post a response! We are (presumably) all here for constructive and
educational reasons, so your statement provides nothing and actually confuses
the issue.
Please elaborate on why you think this changes class behavior. I'll probably
learn something.
I expect to read your headers, see and recognize common patterns, understand your identifiers, and use this interface as it is with as little need for looking it up in the docs as possible. If you don't provide that, then that's one darn good reason to look for another provider.
I'm not following. Maybe my statements weren't clear, but my intention is
this: any well-meaning programmer that produces code potentially for anyone
else (either directly or indirectly), should include complete and correct
documentation.
It is extremely difficult (at best) to determine expected behavior from
prototypes and definitions alone (meaning, no documentation, *NO* comments).
If *you* can take naked prototypes and definitions and understand the usage,
behavior, and characteristics of the interface, then you are in a definite
minority. Personally, I rely heavily on the documentation.
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
"Bret Pehrson" <br**@infowest.com> wrote in message
news:40***************@infowest.com... First, virtual methods do not come free, they perform worse than
non-virtual methods. This is a generalized statement regarding an *IMPLEMENTATION* of the
language, not the language itself.
Another thing. If something is declared virtual, that is a statement on
the part of the designer. It implies some generic behavior that may need to
be altered somehow for any derived.class in order to obtain the desired... It doesn't/shouldn't imply anything. If the user relies in implications,
they are going to have problems with their code regardless of the construct at
hand. Remember, that virtual doesn't mean that the overriding method has to do *anything*, it may be simply tracking, monitoring, or triggering a
response. These are all *VALID* uses of virtual, and have absolutely NOTHING to do
w/ changing behavior.
Imagine every public method of a fairly complex class being virtual.
Most of them will implement fixed behavior that is not supposed to be
overridden. If they MUST NOT be overridden for **ANY** reason, then they shouldn't be virtual. If they shouldn't be overridden in such a way as to change
behavior, they should be virtual and that fact should be in the documentation.
In my opinion they should be nonvirtual and internally call a protected (private in C++) virtual method that may be overridden. Look up "Template Method" and "Design by Contract". Making a public method virtual means that you give your client no guarantee whatsoever as to what will happen when they invoke the method on a subclass. It also means that you've made a statement that when I inherit your class I may replace the functionality of the method. If what you mean can be expressed clearly in code, you should do so. You shouldn't write code that says one thing and then say another in the documentation.
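A minimal sketch of what I mean, in C# (the class and method names are invented for the example): the public method stays nonvirtual and guards the contract, and the protected virtual hook is the only thing a subclass is invited to replace.

    using System;

    public class ReportGenerator
    {
        // Nonvirtual: the contract the client sees never changes.
        public string Generate(string title)
        {
            if (title == null)
                throw new ArgumentNullException("title"); // precondition

            string body = BuildBody();   // the single, named extension point
            if (body == null)
                body = String.Empty;     // postcondition guard

            return title + Environment.NewLine + body;
        }

        // Protected virtual hook: this, and only this, may be overridden.
        protected virtual string BuildBody()
        {
            return "(empty report)";
        }
    }

    public class SalesReportGenerator : ReportGenerator
    {
        protected override string BuildBody()
        {
            return "Sales figures go here.";
        }
    }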
/Magnus Lidbom
"Bret Pehrson" <br**@infowest.com> wrote in message
news:40***************@infowest.com... This does change behaviour. (And I don't think I want/need to tell you how.) Then don't post a response! We are (presumably) all here for constructive
and educational reasons, so you statement provides nothing and actually
confuses the issue.
Please elaborate on why you think this changes class behavior. I'll
probably learn something.
Simple, you do *anything* and what your class does changes. If that call throws an exception you are adding a point of failure, if that call instantiates objects you are using memory (and perhaps creating a memory leak), if that call deletes random files you are probably angering users, if that call changes an object or variable the underlying method uses in another class you could introduce unexpected bugs, and it all may rain down on someone else's head because of it. Simply because your call leaves the instance alone does *NOT* mean that its behaviour isn't changed; its state simply isn't. Behaviour is considerably more than simply what Method A does to Field B. Your example likely doesn't change the object but potentially changes the entire universe the object exists in.
> Simple, you do *anything* and what your class does changes.
No, no, NO!
Come on, my example doesn't change the behavior of the class -- period. It may
(and very well does) change the state or behavior of the *SYSTEM*, but we are
*not* talking about that.
The point was, is, and will continue to be: virtual methods can be used to
change behavior ***OR*** for other reasons that *DO NOT CHANGE BEHAVIOR*, which
is what I've illustrated.
"Daniel O'Connell [C# MVP]" wrote: "Bret Pehrson" <br**@infowest.com> wrote in message news:40***************@infowest.com... This does change behaviour. (And I don't think I want/need to tell you how.)
Then don't post a response! We are (presumably) all here for constructive
and educational reasons, so you statement provides nothing and actually confuses the issue.
Please elaborate on why you think this changes class behavior. I'll probably learn something. Simple, you do *anything* and what your class does changes. If that call throws an exception you are adding a point of failure, if that call instantiates objects you are using memory(and perhaps creating a memory leak), if that call deletes random files you are probably angering users, if that call changes an object or variable the underlying method uses in another class you could introduce unexpected bugs and it all may rain down on someone elses head because of it. Simply because your call leaves the instance alone does *NOT* mean that its behaviour isn't changed, its state simply isn't. Behaviour is considerably more than simply what Method A does to Field B. Your example likely doesn't change the object but potentially changes the entire universe the object exists in. I expect to read your headers, see and recognize common patterns, understand your identifiers, and use this interface as it is with as little need for looking it up in the docs as possible. If you don't provide that, then that's one darn good reason to look for another provider.
I'm not following. Maybe my statements weren't clear, but my intention is this: any well-meaning programmer that produces code potentially for
anyone else (either directly or indirectly), should include complete and correct documentation.
It is extremely difficult (at best) to determine expected behavior from prototypes and definitions alone (meaning, no documentation, *NO* comments). If *you* can take naked prototypes and definitions and understand the usage, behavior, and characteristics of the interface, then you are in a definite minority. Personally, I rely heavily on the documentation.
Hendrik Schober wrote: Bret Pehrson <br**@infowest.com> wrote: > [...] > > The only reason to make a function virtual > > is to allw it to be overridden. Overriding > > a function is changing behaviour. > > Not true. > > class A > { > public: > virtual void a() > { > // do something > } > }; > > class B : public A > { > public: > virtual void a() > { > A::a(); > trace("processing a"); > } > } > > This doesn't change behavior, but is a very valid and real-world case of > virtual overrides.
This does change behaviour. (And I don't think I want/need to tell you how.)
> > Don't expect us to carefully read douments > > that contradict your code. > > ??? > > I do expect you to carefully read documents that define the behavior, usage, > and intent of my interfaces.
I expect to read your headers, see and recognize common patterns, understand your identifiers, and use this interface as it is with as little need for looking it up in the docs as possible. If you don't provide that, then that's one darn good reason to look for another provider.
> [...]
Schobi
-- Sp******@gmx.de is never read I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people." Scott Meyers
-- Bret Pehrson mailto:br**@infowest.com NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
> In my opinion they should be nonvirtual and internally call a protected(private in C++) virtual method that may be overridden. Look up "Template Method" and "Design by Contract". Making a public method virtual means that you give your client no guarantee what so ever as to what will happen when they invoke the method on a subclass. It also means that you've
[snip]
You make it sound like the norm is for a programmer to use an existing class
library, add a couple of overridden methods, and then repackage that and
hand/sell it off to someone else.
In most cases, anyone that is overriding a method to an established interface
is ultimately the end user, and (provided they are informed and careful
programmers), has no need to adhere to the original 'design by contract', per
se.
We really need to make sure that we don't confuse a designer w/ a user -- I
understand the potential problems w/ a designer that overrides a previous
designer's work, but my discussions have been limited to the end user of an
interface that has the desire to monitor or change behavior (that will in
most/all cases not be subsequently used by someone else).
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
"Bret Pehrson" <br**@infowest.com> wrote in message
news:40***************@infowest.com... Magnus Lidbom wrote: In my opinion they should be nonvirtual and internally call a protected(private in C++) virtual method that may be overridden. Look up "Template Method" and "Design by Contract". Making a public method
virtual means that you give your client no guarantee what so ever as to what
will happen when they invoke the method on a subclass. It also means that
you've [snip]
You make it sound like the norm is for a programmer to use an existing
class library, add a couple of overridden methods, and then repackage that and hand/sell it off to someone else.
In most cases, anyone that is overriding a method to an established
interface is ultimately the end user, and (provided they are informed and careful programmers), has no need to adhere to the original 'design by contract',
per se.
We really need to make sure that we don't confuse a designer w/ a user --
I understand the potential problems w/ a designer that overrides a previous designer's work, but my discussions have been limited to the end user of
an interface that has the desire to monitor or change behavior (that will in most/all cases not be subsequently used by someone else).
So you feel that if at the present moment you don't expect anyone else to become a client of your code, let's not get into the odds of that, you should
write code that would be unacceptable if anyone else needs to use it?
You said:
"If they MUST NOT be overridden for **ANY** reason, then they shouldn't be
virtual. If they shouldn't be overridden in such a way as to change
behavior, they should be virtual and that fact should be in the
documentation."
So you feel that one should write code that doesn't express your intent and
then document the intent when you don't expect it to be used by others? My
opinion is that you should always write code that expresses your intent. In
this case it's also very simple to do so:
protected virtual void Extension() {}   // hook for subclasses; default does nothing

public void DoStuff()
{
    // do stuff (the fixed, non-overridable part)
    Extension();                        // extension point invoked at a well-defined moment
}
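(A usage sketch of the above, assuming the snippet lives in some hypothetical base class called Processor: a subclass hooks in through the protected virtual, but it cannot bypass or replace the non-virtual DoStuff.)

public class TracingProcessor : Processor
{
    protected override void Extension()
    {
        // observe or extend here; DoStuff's fixed steps still run unchanged
        System.Console.WriteLine("DoStuff reached its extension point");
    }
}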
/Magnus Lidbom
> I have yet to read *anything* in either the C or C++ spec that deals w/ performance. The thread is about the positives and negatives of MI, not implementation-specific or performance.
Aren't you confusing different news threads here? This is about polymorphism
through virtual methods and whether it is appropriate to apply it as a means
of implementing a plug-in mechanism.
My example does NOT change the behavior of the **CLASS**. Perhaps you would care to elaborate as to why you think it does...
The behavior of the class is changed by sub-classing it. What is the point
of stating that a base class is not changed by the descendant? The way we
look at it, there is no point in creating a descendant class unless either
behavior or data members are changed or extended.
Your code is the first line if you are working *in the code*. Although not explicitly defined as such, this thread has been primarily limited to the presumption that only the interface is available (i.e. header files). My documentation comments are based on that.
For interface definitions it is even more important that they are
self-explanatory and clear on their intended use than it is for
implementation code.
the only reasons that I've heard against MI are 'poor style' (with absolutely no justification) and 'bad documentation' --
What do you mean with MI ? "MI" has not been part of the discussion I have
been participating in.
Martin.
> It is extremely difficult (at best) to determine expected behavior from prototypes and definitions alone (meaning, no documentation, *NO*
comments). If *you* can take naked prototypes and definitions and understand the
usage, behavior, and characteristics of the interface, then you are in a definite minority. Personally, I rely heavily on the documentation.
I think you missed the point of the discussion. We are not against
documentation, neither does any of us feel strongly it should be made
redundant by code at all times. It was about particular design patterns,
whether they could be labeled "misleading code constructs" and whether it was a
good thing to use them or not. To what extent should we assume coders
recognize these patterns and understand the intent, and should we perhaps not
use them anymore in C# now that it has more natural solutions to the
initially targeted problem, like delegates.
So when someone says "If the code does not tell the whole story, there's
always the documentation to make up for that" we said that's no good, because
once the code has given us the wrong impression, yet the feeling that we
understand, we are not likely to read the documentation.
Martin.
> What do you mean with MI ? "MI" has not been part of the discussion I have been participating in.
Yep, my mistake. MI discussions are in a different thread, and I confused
them.
"Martin Maat [EBL]" wrote: I have yet to read *anything* in either the C or C++ spec that deals w/ performance. The thread is about the positives and negatives of MI, not implementation-specific or performance.
Aren't you confusing different news threads here? This is about polymorphism through virtual methods and whether it is appropriate to apply it as a means of implementing a plug-in mechanism.
My example does NOT change the behavior of the **CLASS**. Perhaps you would care to elaborate as to why you think it does...
The behavior of the class is changed by sub-classing it. What is the point of stating that a base class is not changed by the descendant? The way we look at it there is no point in creating a descendent class if not either behavior or data members are changed./ extended.
You code is the first line if you are working *in the code*. Although not explicitly defined as such, this thread has been primarily limited to the presumption that only the interface is avaialble (i.e. header files). My documentation comments are based on that.
For interface definitions it is even more important that they are self-explanatory and clear on their intended use than it is for implementation code.
the only reasons that I've heard against MI are 'poor style' (with absolutely no justification) and 'bad documentation' --
What do you mean with MI ? "MI" has not been part of the discussion I have been participating in.
Martin.
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
Ya, I've lost interest in the thread.
Thanks for the discussions/comments/interactions. I've learned some new
things, and will apply them to my coding practices.
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
> I'm on a team building some class libraries to be used by many other projects.
Some members of our team insist that "All public methods should be
virtual" just in case "anything needs to be changed". This is very much against my instincts. Can anyone offer some solid design guidelines for me?
Thanks in advance....
It's too big to be a space station.
n!
> The behavior of the class is changed by sub-classing it. What is the point of stating that a base class is not changed by the descendant? The way we look at it there is no point in creating a descendent class if not either behavior or data members are changed./ extended.
I agree that overriding to change behavior is half of the story. The other
half is overriding to create/monitor/track an object's methods. I tried giving an
example of such a case before, just to point out that you can't limit 'virtual'
discussions just to the modification of behavior, since it is perfectly
legitimate to override a method and *not* change any behavior.
"Martin Maat [EBL]" wrote: I have yet to read *anything* in either the C or C++ spec that deals w/ performance. The thread is about the positives and negatives of MI, not implementation-specific or performance.
Aren't you confusing different news threads here? This is about polymorphism through virtual methods and whether it is appropriate to apply it as a means of implementing a plug-in mechanism.
My example does NOT change the behavior of the **CLASS**. Perhaps you would care to elaborate as to why you think it does...
The behavior of the class is changed by sub-classing it. What is the point of stating that a base class is not changed by the descendant? The way we look at it there is no point in creating a descendent class if not either behavior or data members are changed./ extended.
You code is the first line if you are working *in the code*. Although not explicitly defined as such, this thread has been primarily limited to the presumption that only the interface is avaialble (i.e. header files). My documentation comments are based on that.
For interface definitions it is even more important that they are self-explanatory and clear on their intended use than it is for implementation code.
the only reasons that I've heard against MI are 'poor style' (with absolutely no justification) and 'bad documentation' --
What do you mean with MI ? "MI" has not been part of the discussion I have been participating in.
Martin.
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
"Bret Pehrson" <br**@infowest.com> wrote in message
news:40***************@infowest.com...
Simple, you do *anything* and what your class does changes.
No, no, NO!
Come on, my example doesn't change the behavior of the class -- period. It
may (and very well does) change the state or behavior of the *SYSTEM*, but we
are *not* talking about that.
If your method changes the system, then the behaviour of your class now
includes that change to the system; a class's behaviour is *EVERYTHING*, not
just the class state. Even if it doesn't change the result of the call it
still changes behaviour (and even in your example it *could* change the
result of the call by throwing an exception). It is likely behaviour you
*must* document, and treat as a behaviour change.
Of course, if you don't consider adding the chance of new exceptions, new
ways to fail, or new possible constraints on parameters a change in
behaviour, I think our definitions of the word differ.
In everything but an idealized world, an override should be automatically
considered a change in behaviour. Even if you can guarantee ironclad that it
works without any behaviour change, you still leave the chance that a change
*elsewhere* could change behaviour elsewhere (which is part of the problem: a
code change *anywhere* is a change in behaviour, potentially in the entire
system, even if it's not entirely noticeable or anything that actually effects
change).
The point was, is, and will continue to be: virtual methods can be used
to change behavior ***OR*** for other reasons that *DO NOT CHANGE BEHAVIOR*,
which is what I've illustrated.
"Daniel O'Connell [C# MVP]" wrote: "Bret Pehrson" <br**@infowest.com> wrote in message news:40***************@infowest.com... > This does change behaviour. (And I don't > think I want/need to tell you how.)
Then don't post a response! We are (presumably) all here for
constructive and educational reasons, so you statement provides nothing and actually confuses the issue.
Please elaborate on why you think this changes class behavior. I'll probably learn something. Simple, you do *anything* and what your class does changes. If that call throws an exception you are adding a point of failure, if that call instantiates objects you are using memory(and perhaps creating a memory leak), if that call deletes random files you are probably angering
users, if that call changes an object or variable the underlying method uses in another class you could introduce unexpected bugs and it all may rain
down on someone elses head because of it. Simply because your call leaves the instance alone does *NOT* mean that its behaviour isn't changed, its
state simply isn't. Behaviour is considerably more than simply what Method A
does to Field B. Your example likely doesn't change the object but
potentially changes the entire universe the object exists in. > I expect to read your headers, see and > recognize common patterns, understand > your identifiers, and use this interface > as it is with as little need for looking > it up in the docs as possible. If you > don't provide that, then that's one darn > good reason to look for another provider.
I'm not following. Maybe my statements weren't clear, but my
intention is this: any well-meaning programmer that produces code potentially for anyone else (either directly or indirectly), should include complete and
correct documentation.
It is extremely difficult (at best) to determine expected behavior
from prototypes and definitions alone (meaning, no documentation, *NO* comments). If *you* can take naked prototypes and definitions and understand the usage, behavior, and characteristics of the interface, then you are in a
definite minority. Personally, I rely heavily on the documentation.
Hendrik Schober wrote:
> Bret Pehrson <br**@infowest.com> wrote:
> > [...]
> > > The only reason to make a function virtual
> > > is to allow it to be overridden. Overriding
> > > a function is changing behaviour.
> >
> > Not true.
> >
> > class A
> > {
> > public:
> >   virtual void a()
> >   {
> >     // do something
> >   }
> > };
> >
> > class B : public A
> > {
> > public:
> >   virtual void a()
> >   {
> >     A::a();
> >     trace("processing a");
> >   }
> > }
> >
> > This doesn't change behavior, but is a very valid and real-world case of
> > virtual overrides.
>
> This does change behaviour. (And I don't
> think I want/need to tell you how.)
>
> > > Don't expect us to carefully read documents
> > > that contradict your code.
> >
> > ???
> >
> > I do expect you to carefully read documents that define the behavior, usage,
> > and intent of my interfaces.
>
> I expect to read your headers, see and
> recognize common patterns, understand
> your identifiers, and use this interface
> as it is with as little need for looking
> it up in the docs as possible. If you
> don't provide that, then that's one darn
> good reason to look for another provider.
>
> [...]
>
> Schobi
>
> --
> Sp******@gmx.de is never read
> I'm Schobi at suespammers dot org
>
> "Sometimes compilers are so much more reasonable than people."
> Scott Meyers
"Bret Pehrson" <br**@infowest.com> wrote in message
news:40***************@infowest.com... The behavior of the class is changed by sub-classing it. What is the
point of stating that a base class is not changed by the descendant? The way
we look at it there is no point in creating a descendent class if not
either behavior or data members are changed./ extended.
I agree that overriding to change behavior is half of the story. The
other half overriding to create/monitor/track an object's methods. I tried
giving an example of such a case before, just to point out that you can't limit
'virtual' discussions just to the modification of behavior since it is perfectly legitimate to override a method and *not* change any behavior.
Okay, I understand your point now. You are saying it is not a crime to use
polymorphism as an event mechanism. For debugging purposes as in your
example I do not see any harm either. As a design pattern though I say it
should not be done if a more dedicated events mechanism is available in the
particular development environment.
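(For comparison, the kind of dedicated mechanism Martin means would in C# be an event; the names here are made up. Subscribers observe the call without having to subclass at all.)

public class Connection
{
    public event System.EventHandler Opened;   // clients subscribe to observe

    public void Open()
    {
        // ... establish the connection ...
        if (Opened != null)
            Opened(this, System.EventArgs.Empty);   // notify observers
    }
}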
That summarizes most of the noise we all generated on the subject. :-)
Martin.
.. wrote: I just developed a process control system in C# in a matter of months from design to final shipping.
The performance was on par with unmanaged code; it was half of what our requirements were and more than acceptable, actually more than we expected, and we haven't even done a performance optimization run on it. This is a very real-world time-critical application (cycle times in an automated environment with lots of variables like lighting and continual movement of items).
There is no need for the high risk of unmanaged C++ with the long development times for this application. C# is more than adequate. This is a time critical application: automation, robotics and vision. Cycle times are very, very important. This just hardens my confidence in C# as a real alternative for high-performing real-world automation.
I mostly share the views you express in your last two posts. However, I
guess the automation bit did not include any hard realtime stuff in the
millisecond range; otherwise I'd be interested in how you managed to guarantee
that the steering is always on time.
Regards,
Andreas
> In everything but an idealized world, an override should be automatically considered a change in behaviour.
You have missed my point completely. I'll summarize (again), and then I'm
done.
There is more than one reason to create virtual methods:
1 - to allow a subsequent user to change behavior
2 - to allow a subsequent user to track/monitor methods or state data *without*
modifying behavior
There are probably other reasons as well...
Forget the example, it was merely to illustrate my point #2 above, NOT to be a
point of endless debate on what possible ramifications trace() has on the
system, etc. ad nauseam.
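(For reference, a minimal sketch of the monitoring kind of override described above; the class names are made up, and the override only observes before delegating to the base implementation.)

public class Connection
{
    public virtual void Open()
    {
        // establish the connection
    }
}

public class MonitoredConnection : Connection
{
    public override void Open()
    {
        System.Diagnostics.Trace.WriteLine("Open() called");  // tracking only
        base.Open();                                          // original behavior left intact
    }
}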
"Daniel O'Connell [C# MVP]" wrote: "Bret Pehrson" <br**@infowest.com> wrote in message news:40***************@infowest.com... Simple, you do *anything* and what your class does changes.
No, no, NO!
Come on, my example doesn't change the behaior of the class -- period. It
may (and very well does) change the state or behavior of the *SYSTEM*, but we are *not* talking about that. If your method changes the system, then the behaviour of your class now includes that change to the system, a classes behaviour is *EVERYTHING*, not just the class state. Even if it doesn't change the result of the call it still changes behaviour(and even in your example it *could* change the result of the call by throwing an exception). Its likely behaviour you *must* document, and treat as a behaviour change.
Of course, if you don't consider adding the chance of new exceptions, new ways to fail, or new possible constraints on parameters a change in behaviour, I think our definitions of the word differ.
In everything but an idealized world, an override should be automatically considered a change in behaviour. Even if you can guarentee ironclad that it works without any behaviour change, you still leave the chance that a change *elsewhere* could change behaviour elsewhere(which is part of the problem, a code change *anywhere* is a change in behaviour, potentially in the entire system, even if its not entirely noticable or anything that actually effects change). The point was, is, and will continue to be: virtual methods can be used
to change behavior ***OR*** for other reasons that *DO NOT CHANGE BEHAVIOR*, which is what I've illustrated.
"Daniel O'Connell [C# MVP]" wrote: "Bret Pehrson" <br**@infowest.com> wrote in message news:40***************@infowest.com... > > This does change behaviour. (And I don't > > think I want/need to tell you how.) > > Then don't post a response! We are (presumably) all here for constructive and > educational reasons, so you statement provides nothing and actually confuses > the issue. > > Please elaborate on why you think this changes class behavior. I'll probably > learn something. Simple, you do *anything* and what your class does changes. If that call throws an exception you are adding a point of failure, if that call instantiates objects you are using memory(and perhaps creating a memory leak), if that call deletes random files you are probably angering users, if that call changes an object or variable the underlying method uses in another class you could introduce unexpected bugs and it all may rain down on someone elses head because of it. Simply because your call leaves the instance alone does *NOT* mean that its behaviour isn't changed, its state simply isn't. Behaviour is considerably more than simply what Method A does to Field B. Your example likely doesn't change the object but potentially changes the entire universe the object exists in. > > > I expect to read your headers, see and > > recognize common patterns, understand > > your identifiers, and use this interface > > as it is with as little need for looking > > it up in the docs as possible. If you > > don't provide that, then that's one darn > > good reason to look for another provider. > > I'm not following. Maybe my statements weren't clear, but my intention is > this: any well-meaning programmer that produces code potentially for anyone > else (either directly or indirectly), should include complete and correct > documentation. > > It is extremely difficult (at best) to determine expected behavior from > prototypes and definitions alone (meaning, no documentation, *NO* comments). > If *you* can take naked prototypes and definitions and understand the usage, > behavior, and characteristics of the interface, then you are in a definite > minority. Personally, I rely heavily on the documentation. > > Hendrik Schober wrote: > > > > Bret Pehrson <br**@infowest.com> wrote: > > > [...] > > > > The only reason to make a function virtual > > > > is to allw it to be overridden. Overriding > > > > a function is changing behaviour. > > > > > > Not true. > > > > > > class A > > > { > > > public: > > > virtual void a() > > > { > > > // do something > > > } > > > }; > > > > > > class B : public A > > > { > > > public: > > > virtual void a() > > > { > > > A::a(); > > > trace("processing a"); > > > } > > > } > > > > > > This doesn't change behavior, but is a very valid and real-world case of > > > virtual overrides. > > > > This does change behaviour. (And I don't > > think I want/need to tell you how.) > > > > > > Don't expect us to carefully read douments > > > > that contradict your code. > > > > > > ??? > > > > > > I do expect you to carefully read documents that define the behavior, usage, > > > and intent of my interfaces. > > > > I expect to read your headers, see and > > recognize common patterns, understand > > your identifiers, and use this interface > > as it is with as little need for looking > > it up in the docs as possible. If you > > don't provide that, then that's one darn > > good reason to look for another provider. > > > > > [...] 
> > > > Schobi > > > > -- > > Sp******@gmx.de is never read > > I'm Schobi at suespammers dot org > > > > "Sometimes compilers are so much more reasonable than people." > > Scott Meyers > > -- > Bret Pehrson > mailto:br**@infowest.com > NOSPAM - Include this key in all e-mail correspondence
<<38952rglkwdsl>> -- Bret Pehrson mailto:br**@infowest.com NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
This looks as if it has been going for a while now, but I'd say, if you want all
public methods to be virtual by default (I'm guessing you want C# to do this
by default rather than making you do it explicitly), then change your language.
JScript .NET allows for the feature, but that is because the language is
feature oriented and not performance oriented. C# is performance oriented, and
every single virtual method lookup incurs a performance hit over a non-virtual
method. The performance increase is enough that many of the CLR classes were
locked down to gain the performance rather than offer the ability to override
the base behavior. In other words, a choice was made that performance actually
outweighed flexibility.
So I'd have to say public methods should not be virtual by default, because it
would change the performance characteristics and the security characteristics
of all of my current code should I recompile it.
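(A rough illustration of the trade-off described above, using a hypothetical class rather than CLR code: a non-virtual or sealed member gives the JIT a fixed call target, while a virtual member is dispatched through the object's method table at run time.)

public class Widget
{
    public void Fixed() {}            // non-virtual: call target known up front, eligible for inlining
    public virtual void Flexible() {} // virtual: resolved per object at run time, so each call costs an indirection
}

public sealed class LockedWidget : Widget
{
    // sealed: no further derivation, the "locked down" choice mentioned above
}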
--
Justin Rogers
DigiTec Web Consultants, LLC.
Blog: http://weblogs.asp.net/justin_rogers
"Bret Pehrson" <br**@infowest.com> wrote in message
news:40***************@infowest.com... In everything but an idealized world, an override should be automatically considered a change in behaviour. You have missed my point completely. I'll summarize (again), and then I'm done.
There is more than one reason to create virtual methods:
1 - to allow a subsequent user to change behavior
2 - to allow a subsequent user to track/monitor methods or state data
*without* modifying behavior
There are probably other reasons as well...
Forget the example, it was merely to illustrate my point #2 above, NOT to be a point of endless debate on what possible ramifications trace() has on the system, etc. ad naseum.
"Daniel O'Connell [C# MVP]" wrote: "Bret Pehrson" <br**@infowest.com> wrote in message news:40***************@infowest.com... > Simple, you do *anything* and what your class does changes.
No, no, NO!
Come on, my example doesn't change the behaior of the class -- period. It may (and very well does) change the state or behavior of the *SYSTEM*, but we are *not* talking about that. If your method changes the system, then the behaviour of your class now includes that change to the system, a classes behaviour is *EVERYTHING*, not just the class state. Even if it doesn't change the result of the call it still changes behaviour(and even in your example it *could* change the result of the call by throwing an exception). Its likely behaviour you *must* document, and treat as a behaviour change.
Of course, if you don't consider adding the chance of new exceptions, new ways to fail, or new possible constraints on parameters a change in behaviour, I think our definitions of the word differ.
In everything but an idealized world, an override should be automatically considered a change in behaviour. Even if you can guarentee ironclad that it works without any behaviour change, you still leave the chance that a change *elsewhere* could change behaviour elsewhere(which is part of the problem, a code change *anywhere* is a change in behaviour, potentially in the entire system, even if its not entirely noticable or anything that actually effects change). The point was, is, and will continue to be: virtual methods can be used
to change behavior ***OR*** for other reasons that *DO NOT CHANGE BEHAVIOR*, which is what I've illustrated.
"Daniel O'Connell [C# MVP]" wrote: > > "Bret Pehrson" <br**@infowest.com> wrote in message > news:40***************@infowest.com... > > > This does change behaviour. (And I don't > > > think I want/need to tell you how.) > > > > Then don't post a response! We are (presumably) all here for constructive > and > > educational reasons, so you statement provides nothing and actually > confuses > > the issue. > > > > Please elaborate on why you think this changes class behavior. I'll > probably > > learn something. > Simple, you do *anything* and what your class does changes. If that call > throws an exception you are adding a point of failure, if that call > instantiates objects you are using memory(and perhaps creating a memory > leak), if that call deletes random files you are probably angering users, if > that call changes an object or variable the underlying method uses in > another class you could introduce unexpected bugs and it all may rain down > on someone elses head because of it. Simply because your call leaves the > instance alone does *NOT* mean that its behaviour isn't changed, its state > simply isn't. Behaviour is considerably more than simply what Method A does > to Field B. Your example likely doesn't change the object but potentially > changes the entire universe the object exists in. > > > > > I expect to read your headers, see and > > > recognize common patterns, understand > > > your identifiers, and use this interface > > > as it is with as little need for looking > > > it up in the docs as possible. If you > > > don't provide that, then that's one darn > > > good reason to look for another provider. > > > > I'm not following. Maybe my statements weren't clear, but my intention is > > this: any well-meaning programmer that produces code potentially for > anyone > > else (either directly or indirectly), should include complete and correct > > documentation. > > > > It is extremely difficult (at best) to determine expected behavior from > > prototypes and definitions alone (meaning, no documentation, *NO* > comments). > > If *you* can take naked prototypes and definitions and understand the > usage, > > behavior, and characteristics of the interface, then you are in a definite > > minority. Personally, I rely heavily on the documentation. > > > > Hendrik Schober wrote: > > > > > > Bret Pehrson <br**@infowest.com> wrote: > > > > [...] > > > > > The only reason to make a function virtual > > > > > is to allw it to be overridden. Overriding > > > > > a function is changing behaviour. > > > > > > > > Not true. > > > > > > > > class A > > > > { > > > > public: > > > > virtual void a() > > > > { > > > > // do something > > > > } > > > > }; > > > > > > > > class B : public A > > > > { > > > > public: > > > > virtual void a() > > > > { > > > > A::a(); > > > > trace("processing a"); > > > > } > > > > } > > > > > > > > This doesn't change behavior, but is a very valid and real-world case > of > > > > virtual overrides. > > > > > > This does change behaviour. (And I don't > > > think I want/need to tell you how.) > > > > > > > > Don't expect us to carefully read douments > > > > > that contradict your code. > > > > > > > > ??? > > > > > > > > I do expect you to carefully read documents that define the behavior, > usage, > > > > and intent of my interfaces. > > > > > > I expect to read your headers, see and > > > recognize common patterns, understand > > > your identifiers, and use this interface > > > as it is with as little need for looking > > > it up in the docs as possible. 
If you > > > don't provide that, then that's one darn > > > good reason to look for another provider. > > > > > > > [...] > > > > > > Schobi > > > > > > -- > > > Sp******@gmx.de is never read > > > I'm Schobi at suespammers dot org > > > > > > "Sometimes compilers are so much more reasonable than people." > > > Scott Meyers > > > > -- > > Bret Pehrson > > mailto:br**@infowest.com > > NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>> -- Bret Pehrson mailto:br**@infowest.com NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
-- Bret Pehrson mailto:br**@infowest.com NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>
"Bret Pehrson" <br**@infowest.com> wrote in message
news:40***************@infowest.com... In everything but an idealized world, an override should be
automatically considered a change in behaviour. You have missed my point completely. I'll summarize (again), and then I'm done.
There is more than one reason to create virtual methods:
1 - to allow a subsequent user to change behavior
2 - to allow a subsequent user to track/monitor methods or state data
*without* modifying behavior
The point I'm trying to make is you *CANNOT* override a method without
changing behaviour, so your second point is moot; it is simply a more
specific version of the first. Overriding is a behaviour change, be it at
the method level, the class level, or the system level. Even if your method
looks like

public override void MyMethod()
{
    base.MyMethod();
}

I would still argue that it changes the behaviour of the class, although not
significantly. I know what you intended to point out, I simply believe it's a
false premise. If you write code with ramifications you are writing code
that changes behaviour, period (in effect, that reduces to "If you write
code, you are changing behaviour"). I simply find it silly and potentially
dangerous to decide otherwise.
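(To make this concrete, a hypothetical sketch, not anyone's real code: an override written purely for monitoring can still introduce a failure mode the base class never had, which callers can observe.)

public class Reader
{
    public virtual string ReadLine() { return "data"; }
}

public class LoggingReader : Reader
{
    public override string ReadLine()
    {
        // Meant as "pure monitoring", but opening the log file can throw
        // (missing directory, disk full), so callers of ReadLine now face
        // an IOException the base class could never produce.
        using (System.IO.StreamWriter w = System.IO.File.AppendText(@"C:\logs\reader.log"))
        {
            w.WriteLine("ReadLine called");
        }
        return base.ReadLine();
    }
}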
In the millisecond range, yes: how about 200 millisecond cycle times, and it's
not even optimised yet.
"Andreas Huber" <ah****@gmx.net> wrote in message
news:40**********@news.bluewin.ch... . wrote: I just developed a process control system in C# in a matter of months from design to final shipping.
The performance was way on par with unmanaged code, it was half of what our requirments was and perfectly more than acceptable performance , actualy it was more than what we expected and we havnt even done an performance optimization run on it. This is a very real world time critical appliation (cycle times in an automated environment with lots of variables like lighting and continual movement of items).
There is no need for the high risk of unmanaged C++ with the long development times for this appliation. C# is more than adequate. This is a time critical application. Automation robotics and vision. cycle times are very very important. This just hardens my confidence in C# as a real alternative to high performing real world automation. I mostly share the views you express in your last two posts. However, I guess the automation bit did not include any hard realtime stuff in the millisecond range, otherwise I'd be interested how you managed to
guarantee the steering to be on time always.
Regards,
Andreas
.. wrote: In the millisecond range, yes how about 200 millisecond cycle times and its not even optimised yet.
So you have hard realtime requirements in the 200 millisecond range? How do
you guarantee that the GC does not interfere? The worst-case collect time
obviously depends on a multitude of factors. MS says that a gen 0 collection
should never take longer than milliseconds. For higher generation
collections I don't really have any numbers but I can imagine that in rare
cases you might get over the 200 milliseconds. How do you guarantee that
this does not happen?
Regards,
Andreas
> > > I guess the automation bit did not include any hard realtime stuff in
the millisecond range, otherwise I'd be interested how you managed to
guarantee the steering to be on time always.
In the millisecond range, yes how about 200 millisecond cycle times and its not even optimised yet.
200 milliseconds is quite a long time for process automation.
This is a very real world time critical appliation (cycle times in an
automated environment with lots of variables like lighting and continual movement of items).
Slow moving items, I hope. I am aware of the fact that neither "real time"
nor "time critical" are well defined qualifications, but at bandwidths of
about 5 Hz one doesn't usually speak of "real time".
Regardless of the speed, it is important to have some kind of safeguard in case
the controlling system fails.
So you have hard realtime requirements in the 200 millisecond range? How
do you guarantee that the GC does not interfere? The worst-case collect time obviously depends on a multitude of factors. MS says that a gen 0
collection should never take longer than milliseconds. For higher generation collections I don't really have any numbers but I can imagine that in rare cases you might get over the 200 milliseconds. How do you guarantee that this does not happen?
The GC is on its own thread. So even if a GC round took more than a second,
it should not block your application longer than a couple of time slices.
Setting your own thread's priority a bit higher than normal should do it.
In a robotics application you will typically have a dedicated watchdog
computer whose sole task it is to verify that the controlling computer is
paying attention, that is, whether it is updating the servo controller signal
frequently enough. If the watchdog finds the controlling computer has not
been controlling for too long a period of time, power will be cut and brakes
will stop the mechanics.
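(A rough sketch of the controlling side of such a setup; the names and the I/O call are made up, since the real thing would talk to dedicated watchdog hardware.)

using System.Threading;

class ControlLoop
{
    // Hypothetical: pulse whatever line the watchdog computer monitors.
    static void KickWatchdog() { /* e.g. toggle a digital output */ }

    static void Main()
    {
        while (true)
        {
            // ... read sensors, compute servo setpoints, write outputs ...

            KickWatchdog();    // prove we are still in control this cycle
            Thread.Sleep(50);  // 50 ms cycle; the watchdog trips well above this
        }
    }
}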
Controlling a robot with your regular desktop PC running Windows XP
could work fine for days or forever, but if the robot were capable of
damaging equipment, damaging the product or killing a human (and most
industrial robots can easily do all in one swing) I would not lightly hook
up any robot directly to an office computer.
Dot's application may be less deadly but I wonder about the fail-safety of
his system. What happens if your control system stops responding, Dot?
Martin.
Martin Maat [EBL] wrote: Slow moving items, I hope. I am aware of the fact that neither "real time" nor "time critical" are well defined qualifications, but at bandwidths of about 5 Hz one doesn't usually speak of "real time".
The definition I've seen (I think from Doing Hard Time by Douglass) says
that hard realtime has nothing to do with speed but with deadlines. Missing
a deadline always means *uncorrectable* trouble. E.g. consider a printer
spraying the best-before date onto eggs being transported on a conveyor belt
that is moving independently of the spraying. If the computer controlling
the printer misses a deadline, an egg ends up having only half a date or no date at all.
Of course, the time window during which spraying can be successful depends
on the speed of the conveyor. For a slow conveyor this could be several
hundred milliseconds, for a fast one it could be well below one
millisecond. Therefore, for some hard realtime systems Windows could be the
right choice; for others I wouldn't even dare to consider it.
BTW, if missing a deadline means that things are only slowed down without
doing any further harm, then we have a soft rather than a hard realtime
system.
Regardless the speed it is important to have some kind of safeguard in case the controlling system fails.
So you have hard realtime requirements in the 200 millisecond range? How do you guarantee that the GC does not interfere? The worst-case collect time obviously depends on a multitude of factors. MS says that a gen 0 collection should never take longer than milliseconds. For higher generation collections I don't really have any numbers but I can imagine that in rare cases you might get over the 200 milliseconds. How do you guarantee that this does not happen? The GC is on its own thread. So even if a GC round took more than a second, it should not block your application longer than a couple of time slices.
I don't think so. To do a collection in a uniprocessor system the GC has to
suspend *all* threads that are currently running managed code. Only the
finalizers can be run in a separate thread while the other threads keep
minding their business.
The situation is a bit better in multiprocessor systems where - under
certain circumstances - collections can be done on one CPU while the others
keep running.
Setting your own thread's priority a bit higher than normal should do it.
No, that won't help you during the collection itself (see above).
In a robotics application you will typically have a dedicated watchdog computer whose sole task it is to verify that the controlling computer is paying attention, that is if it is updating the servo controller signal freqently enough. If the watchdog finds the controlling computer has not been controlling for too long a period of time, power will be cut and brakes will stop the mechanics.
Right.
Controlling a robot with your regular desktop PC machine running Windows XP could work fine for days or for ever but if the robot were capable of damaging equipment, damaging the product or killing a human (and most industrial robots can easily do all in one swing) I would not lightly hook up any robot directly to an office computer.
That's also my impression.
Regards,
Andreas
> The definition I've seen (I think from Doing Hard Time by Douglass) sais that hard realtime has nothing to do with speed but with deadlines. [...] BTW, if missing a deadline means that things are only slowed down without doing any further harm then we have a soft rather than a hard realtime system.
Sounds like solid theory. Interesting.
The GC is on its own thread. So even if a GC round took more than a second, it should not block your application longer than a couple of time slices.
I don't think so. To do a collection in a uniprocessor system the GC has
to suspend *all* threads that are currently running managed code.
Yes, I understand, but I expect the GC to not bluntly collect garbage until
there's no more garbage to be found with its own thread set to time
critical. I expect it to take care that it will not be in the app's way by
doing as much as it possibly can in idle time and by, if it really needs to
interfere, collecting some garbage for a minimum period of time and then
returning control. The aggression applied is likely to be proportional to the
need. If I (the application programmer) were to generate garbage
relentlessly, the GC will probably start fighting me for processing time at
some point. While I am being a good boy however, not giving it a reason to
pull my leash, I expect it to be very, very gentle with me, only collecting
garbage for very short periods.
That is the way it should work in my opinion, and I have confidence in the
smart people that designed the GC, but I have to say that this is purely
common sense, speculation and wishful thinking on my part, so I would be
most interested to learn how it is really done from some insider.
Martin.
> Yes, I understand, but I expect the GC to not bluntly collect garbage until there's no more garbage to be found with its own thread set to time critical. I expect it to take caution that it will not be in the apps way by doing as much as it possibly can in idle time and by, if it really needs to interfere, collecting some garbage for a minimum period of time and then return control. The aggression applied is likely to be proportional to the need. If I (the application programmer) were to generate garbage relentlessly, the GC will probably start fighting me for processing time at some point. While I am being a good boy however, not giving it a reason to pull my leash, I expect it to be very very gentle with me, only collecting garbage for very short periods.
Then you are expecting far too much from the GC. If the GC goes into a
collection state it can and will hang indefinitely if given the chance. And
yes, it does kick the priority on the thread up, giving itself a higher
priority than any other code. I've identified an issue where finalizer code
can pretty much lock your entire machine.
http://weblogs.asp.net/justin_rogers.../01/65802.aspx
That is the way it should work in my opinion and I have confidence in the smart people that designed the GC but I have to say that this is purely common sense, speculation and wishfull thinking on my part so I would be most interested to learn how it is really done from some insider.
The GC is a system that has no knobs that you can turn. You can't tell it
that it should be gentle. In the most common programming case, a managed
application simply needs more memory (the GC normally runs because memory
has been exhausted), so the GC collects as much memory as it can. While
there may be some minor load-balancing, the original post on Gen 2
collections often taking many hundreds of milliseconds and sometimes many
seconds is the true real world case. That is simply how it works.
--
Justin Rogers
DigiTec Web Consultants, LLC.
Blog: http://weblogs.asp.net/justin_rogers
Martin Maat [EBL] <du***@somewhere.nl> wrote: Yes, I understand, but I expect the GC to not bluntly collect garbage until there's no more garbage to be found with its own thread set to time critical. I expect it to take caution that it will not be in the apps way by doing as much as it possibly can in idle time and by, if it really needs to interfere, collecting some garbage for a minimum period of time and then return control.
No, that's not what happens.
Just for a bit of fun, I wrote a little program to give the GC some
exercise. It's not actually terribly hard work for the GC, as there
aren't many traversals to do - just a single big array with some
elements in, which sometimes get overwritten, leaving garbage around. The
program keeps track of the longest amount of time it's taken to
allocate a new byte array. Here it is:
using System;

public class Test
{
    static void Main()
    {
        TimeSpan maxTime = new TimeSpan();
        // Keep up to 100,000 byte arrays alive at any one time.
        byte[][] array = new byte[100000][];
        Random r = new Random();

        for (int i=0; i < 100000000; i++)
        {
            int index = r.Next(array.Length);
            int size = r.Next(1000);

            // Time a single allocation; overwriting a slot turns the old
            // array into garbage for the GC to reclaim.
            DateTime start = DateTime.Now;
            array[index] = new byte[size];
            DateTime end = DateTime.Now;

            TimeSpan time = end-start;
            if (time > maxTime)
            {
                maxTime = time;
                Console.WriteLine (time.TotalMilliseconds);
            }
        }
    }
}
It *rapidly* goes over 200ms on my laptop (which is a fairly nippy
box).
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
On Sun, 1 Feb 2004 02:53:28 -0600, "Daniel O'Connell [C# MVP]"
<onyxkirx@--NOSPAM--comcast.net> wrote:
<snip> The point I'm trying to make is you *CANNOT* override a method without changing behaviour, so your second point is moot, it is simply a more specific version of the first. Overriding is a behaviour change, be it at the method level, the class level, or the system level. Even if your method looks like public override void MyMethod() { base.MyMethod(); }
I would still argue that it changes the behaviour of the class, although not significantly.
<snip>
How?
Aside from rather mundane things like a few spare clock cycles to
perform the extra method call, if a client/calling method cannot
tell, from the *result* of invoking the overridden method, that the
method was, in fact, overridden, exactly in what way has the
/behaviour/ been changed?
Oz
Daniel O'Connell [C# MVP] <onyxkirx@--NOSPAM--comcast.net> wrote: [...] In everything but an idealized world, an override should be automatically considered a change in behaviour. [...]
I'd go further: There is no reason one
would want to override a virtual function
except if one wants to change behaviour.
(Mhmm. OK, if you want to be nitpicking,
the case of overriding pure virtuals
remains to be discussed.)
[...]
Schobi
-- Sp******@gmx.de is never read
I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people."
Scott Meyers
ozbear <oz*****@yahoo.com> wrote: On Sun, 1 Feb 2004 02:53:28 -0600, "Daniel O'Connell [C# MVP]" <onyxkirx@--NOSPAM--comcast.net> wrote: [...] Even if your method looks like public override void MyMethod() { base.MyMethod(); }
I would still argue that it changes the behaviour of the class, although not significantly. <snip>
How?
Aside from rather mundane things like a few spare clock cycles to perform the extra method call, if a client/calling method cannot tell, from the *result* of invoking the overriden method, that the method was, in fact, overriden, exactly in what way has the /behaviour/ been changed?
Don't you consider performance to be
observable behaviour? I find it rather
easy to imagine a case where the above
change could render a whole app useless
due to the performance loss. And while
I can't imagine it, my experience with
that Murphy guy insists that this can
indeed happen where I'm sure it never
would. And I learned to trust this
Murphy experience over the years more
than my own judgements. :)
Schobi
-- Sp******@gmx.de is never read
I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people."
Scott Meyers
Bret Pehrson <br**@infowest.com> wrote: [...] Please elaborate on why you think this changes class behavior. I'll probably learn something.
Daniel already tried to explain this. I expect to read your headers, see and recognize common patterns, understand your identifiers, and use this interface as it is with as little need for looking it up in the docs as possible. If you don't provide that, then that's one darn good reason to look for another provider.
I'm not following. Maybe my statements weren't clear, but my intention is this: any well-meaning programmer that produces code potentially for anyone else (either directly or indirectly), should include complete and correct documentation.
Of course. However, if my IDE tells me that
the function whose name I'm just typing is
virtual, I expect the class it belongs to
to be designed so that this function can be
overridden, because that's what virtual
functions are for. If I found out
through the documentation that this isn't
true, I would push really hard that we throw
the damn thing out the window and get a more
sensibly designed lib.
It is extremely difficult (at best) to determine expected behavior from prototypes and definitions alone (meaning, no documentation, *NO* comments). If *you* can take naked prototypes and definitions and understand the usage, behavior, and characteristics of the interface, then you are in a definite minority. Personally, I rely heavily on the documentation.
Let's see... I just now have time to read
(and post in) this ng since I have changed
a header and need to re-build. I am working
on a kind of universal config file reader
that you can register functions with, which
will get called when specific strings are
found in a config file. What I just did was
to add this function:
void readConfigFile(std::istream& is);
Would you look up the docs on this?
And what if I tell you that what I actually
inserted into the header is this
/*! \brief This reads a config file and calls the appropriate
** functions with the converted arguments for each
** feature string found
** \param is <==> config file to read
*/
void readConfigFile(std::istream& is);
Would you still want to look it up??
And now suppose this function would do
everything as described, but pass the
arguments for the functions it is supposed
to call in the wrong order. Would you
still insist that, if the docs say so,
that would be fine???
No no.
I expect to read your headers, see and recognize common patterns, understand your identifiers, and use this interface as it is with as little need for looking it up in the docs as possible. [...]
This still holds.
[...]
Schobi
-- Sp******@gmx.de is never read
I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people."
Scott Meyers
Bret Pehrson <br**@infowest.com> wrote: Ah, "I am ignorant so you can't touch me". Try re-reading. My original point was that you can't assume anything about performance, because it is strictly tied to the implementation (and underlying hardware).
You can assume that non-virtual functions will
most likely have less, and will never
have more, overhead than virtual functions.
I have yet to read *anything* in either the C or C++ spec that deals w/ performance. The thread is about the positives and negatives of MI, not implementation-specific or performance.
> The only reason to make a function virtual is to allow it to be > overridden. Overriding a function is changing behaviour. Not true.
And then you provide an example demonstrating just what you are denying.
Nay, my good friend. My example does NOT change the behavior of the **CLASS**. Perhaps you would care to elaborate as to why you think it does...
Suddenly your class causes a log file
to be written. Or another entry into
the log system. Or the trace might fail
and throw. Or...
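(A C++ sketch of the kind of thing meant here; the example under
discussion is C#, and all names below are invented. An override that
merely forwards to the base class but adds a trace already introduces a
new side effect and a new failure mode.)

    #include <fstream>
    #include <stdexcept>

    class Base {
    public:
        virtual ~Base() {}
        virtual void doWork() { /* original behaviour */ }
    };

    class Traced : public Base {
    public:
        virtual void doWork() {
            std::ofstream log("trace.log", std::ios::app); // new side effect: a file appears
            if (!log)
                throw std::runtime_error("no trace.log");  // new failure mode
            log << "doWork called\n";
            Base::doWork();          // otherwise "just" forwards to the base class
        }
    };

    int main() {
        Traced t;
        Base& b = t;
        b.doWork();   // the caller gets behaviour it never asked for
        return 0;
    }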
Documentation should be the second line of support. Your code is the first.
Your code is the first line if you are working *in the code*. Although not explicitly defined as such, this thread has been primarily limited to the presumption that only the interface is available (i.e. header files). My documentation comments are based on that.
I think Martin meant this. Declarations
_are_ code, after all.
[...] Excluding the 'dreaded diamond' issue of MI, can someone substantively say why MI should *NOT* be part of a language???
I couldn't. But then I hadn't read anything
against MI in this thread.
[...]
Schobi
-- Sp******@gmx.de is never read
I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people."
Scott Meyers
.. <.> wrote: [...] Of course C++ is still used but watch C# take the mainstream applications and C++ for specific interop and time / memory critical applications. [...]
There is more to this. A lot, actually.
There are billions of lines of C/C++ code
out there that nobody will rewrite in C#
(or whatever the mainstreamers consider
today's mainstream language). This not
only needs to compile, it also needs to
get fixed and extended.
There are billions of lines of C/C++ code
that are (also) compiled and used on
platforms other than Windows. C# doesn't
help with these at all. While Windows (and
thus maybe .NET someday) might be the main
platform that you and I use, that doesn't
mean it is all there is.
The shop I work for has a lot of code that
runs on Win9X, NT, 2k, XP, MacOS Classic,
MacOSX, Linux, and BSD, using half a dozen
C++ compiler/std lib combinations. And a
lot of it runs either as a plugin for various
applications or as a standalone app. There
are no critical performance constraints on
our apps (other than the usual "the faster,
the less the users have to wait"). Still --
.NET and C# are out of the question.
Schobi
-- Sp******@gmx.de is never read
I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people."
Scott Meyers
Not sure why you think the GC has no hard time running this code?
With this allocation scheme, very quickly, millions of objects (all < 1000
bytes) tend to survive into gen2.
The result is that, once the gen2 threshold has been reached, the GC kicks
in for a full collect, resulting in a relocation of several tens of
megabytes, which is extremely expensive especially when the garbage is
located near the start of the GC2 heap.
Willy.
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.com... Martin Maat [EBL] <du***@somewhere.nl> wrote: Yes, I understand, but I expect the GC to not bluntly collect garbage until there's no more garbage to be found with its own thread set to time critical. I expect it to take caution that it will not be in the apps way by doing as much as it possibly can in idle time and by, if it really needs to interfere, collecting some garbage for a minimum period of time and then return control.
No, that's not what happens.
Just for a bit of fun, I wrote a little program to give the GC some exercise. It's not actually terribly hard work for the GC, as there aren't many traversals to do - just a single big array with some elements in, which sometimes get overwritten, leaving garbage around. The program keeps track of the longest amount of time it's taken to allocate a new byte array. Here it is:
using System;
public class Test
{
    static void Main()
    {
        TimeSpan maxTime = new TimeSpan();
        byte[][] array = new byte[100000][];
        Random r = new Random();

        for (int i=0; i < 100000000; i++)
        {
            int index = r.Next(array.Length);
            int size = r.Next(1000);

            DateTime start = DateTime.Now;
            array[index] = new byte[size];
            DateTime end = DateTime.Now;

            TimeSpan time = end-start;
            if (time > maxTime)
            {
                maxTime = time;
                Console.WriteLine(time.TotalMilliseconds);
            }
        }
    }
}
It *rapidly* goes over 200ms on my laptop (which is a fairly nippy box).
-- Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet If replying to the group, please do not mail me too
"Martin Maat [EBL]" <du***@somewhere.nl> wrote in message
[snip] I don't think so. To do a collection in a uniprocessor system the GC has to suspend *all* threads that are currently running managed code.
Yes, I understand, but I expect the GC to not bluntly collect garbage until there's no more garbage to be found with its own thread set to time critical. I expect it to take caution that it will not be in the apps way by doing as much as it possibly can in idle time and by,
Even if you detect that the program is not currently consuming many
cycles it might do so just a millisecond later. Once a collection is
started it cannot be interrupted because doing so would leave you with
absolutely no garbage collected. Since you were not able to visit all
objects you cannot possibly tell whether anything truly is garbage or
not.
Therefore, the GC has no other choice but to kick in when it feels it
must and then collect *all* the garbage in the generation.
if it really needs to interfere, collecting some garbage for a minimum period of time and then return control. The aggression applied is likely to be proportional to the need. If I (the application programmer) were to generate garbage relentlessly, the GC will probably start fighting me for processing time at some point. While I am being a good boy however, not giving it a reason to pull my leash, I expect it to be very very gentle with me, only collecting garbage for very short periods.
To some extent the .NET GC does exactly this, by doing faster lower
order generation collections much more often than slower higher order
ones. However, when it comes to hard realtime systems you have to
design your system so that it can deal with the worst case, i.e. a
full gen 2 collection *any time*. If such a collection can take
seconds as remarked by Justin Rogers, then it is suicide to use .NET
for just about any hard realtime application.
Regards,
Andreas
Willy Denoyette [MVP] <wi*************@pandora.be> wrote: Not sure why you think the GC has no hard time running this code? With this allocation scheme, very quickly, millions of objects (all < 1000 bytes) tend to survive into gen2.
Yup.
The result is that, once the gen2 threshold has been reached, the GC kicks in for a full collect, resulting in a relocation of several tens of megabytes, which is extremely expensive especially when the garbage is located near the start of the GC2 heap.
Yes, it was deliberately designed to get as many objects as possible
into generation 2, but in other ways it's very simple - each object
that it looks at is independent, other than the "overarching" array. In
other words, the hierarchy itself is very simple - it's very wide, but
very shallow. That was the easiest way I could figure out of exercising
the garbage collector without too many lines of code. If that hadn't
confused it enough I'd have tried harder, but as my first attempt went
over the 200ms threshold, I thought I'd leave it there :)
--
Jon Skeet - <sk***@pobox.com> http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
> Don't you consider performance to be observable behaviour?
Performance is a *characteristic*, not a behavior.
For the life of me, I can't figure out why no one seems to be able to
understand that there is more than one reason for virtual methods, not
just change of behavior.
Hendrik Schober wrote: ozbear <oz*****@yahoo.com> wrote: On Sun, 1 Feb 2004 02:53:28 -0600, "Daniel O'Connell [C# MVP]" <onyxkirx@--NOSPAM--comcast.net> wrote: [...] Even if your method looks like public override void MyMethod() { base.MyMethod(); }
I would still argue that it changes the behaviour of the class, although not significantly.
<snip>
How?
Aside from rather mundane things like a few spare clock cycles to perform the extra method call, if a client/calling method cannot tell, from the *result* of invoking the overridden method, that the method was, in fact, overridden, exactly in what way has the /behaviour/ been changed?
Don't you consider performance to be observable behaviour? I find it rather easy to imagine a case where the above change could render a whole app useless due to the performance loss. And while I can't imagine it, my experience with that Murphy guy insists that this can indeed happen where I'm sure it never would. And I learned to trust this Murphy experience over the years more than my own judgements. :)
Oz
Schobi
-- Sp******@gmx.de is never read
I'm Schobi at suespammers dot org
"Sometimes compilers are so much more reasonable than people."
Scott Meyers
--
Bret Pehrson
mailto:br**@infowest.com
NOSPAM - Include this key in all e-mail correspondence <<38952rglkwdsl>>