Bytes IT Community

preemptive OOP?

Ok, I have a new random question for today -- feel free to ignore and
get back to your real jobs! :)

Anyway, I'm creating a GUI (yes, all part of my master plan to
eventually have some sort of database application working) and it's
going to involve a wx.Notebook control. I think I have two options for
how I can do this. Within the class for the main window frame, I can say:

notebook = wx.Notebook(panel) # panel is parent of the Notebook control

This uses the default wx.Notebook class, and works just fine. But I was
thinking, is it a smart idea to do this instead:

class MyNotebook(wx.Notebook):
    def __init__(self, parent):
        wx.Notebook.__init__(self, parent)

and then call it from the main frame as:

notebook = MyNotebook(panel)

This seems to allow for future expansion of the customized Notebook
class, but at the same time I have no idea how or why I'd want to do that.

So my question in general is, is it a good idea to default to an OOP
design like my second example when you aren't even sure you will need
it? I know it won't hurt, and is probably smart to do sometimes, but
maybe it also just adds unnecessary code to the program.
Sep 28 '06 #1
15 Replies


> class MyNotebook(wx.Notebook):
>     def __init__(self, parent):
>         wx.Notebook.__init__(self, parent)
>
> So my question in general is, is it a good idea to default to an OOP
> design like my second example when you aren't even sure you will need
> it? I know it won't hurt, and is probably smart to do sometimes, but
> maybe it also just adds unnecessary code to the program.

My feeling is that there is no good reason to add the complexity of your
own custom classes if they provide no additional data or behaviour. If you
know you have plans to add your own attributes and/or methods to MyNotebook
soon, there's no harm in doing this now, but why? You can always create your
own custom subclass at the point when you have something of value to add, and
through the miracle of polymorphism, it will function everywhere a Notebook
would.

-ej
Sep 28 '06 #2

Erik Johnson wrote:
> My feeling is that there is no good reason to add the complexity of your
> own custom classes if they provide no additional data or behaviour. If you
> know you have plans to add your own attributes and/or methods to MyNotebook
> soon, there's no harm in doing this now, but why? You can always create your
> own custom subclass at the point when you have something of value to add, and
> through the miracle of polymorphism, it will function everywhere a Notebook
> would.

I think you're right. In fact, one thing I always loved about Python was
that OOP is optional, and here I am trying to force it upon myself
unnecessarily! :)
Sep 28 '06 #3

John Salerno wrote:
> Erik Johnson wrote:
>> My feeling is that there is no good reason to add the complexity of your
>> own custom classes if they provide no additional data or behaviour. If you
>> know you have plans to add your own attributes and/or methods to MyNotebook
>> soon, there's no harm in doing this now, but why? You can always create your
>> own custom subclass at the point when you have something of value to add, and
>> through the miracle of polymorphism, it will function everywhere a Notebook
>> would.
>
> I think you're right. In fact, one thing I always loved about Python was
> that OOP is optional, and here I am trying to force it upon myself
> unnecessarily! :)

However, if you want to simplify the code, you could start by saying

myNotebook = wx.Notebook

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden

Sep 28 '06 #4

Steve Holden wrote:
> However, if you want to simplify the code, you could start by saying
>
> myNotebook = wx.Notebook

I don't understand. Is that just a name change in the code?
Sep 28 '06 #5

John Salerno wrote:
> Steve Holden wrote:
>> However, if you want to simplify the code, you could start by saying
>>
>> myNotebook = wx.Notebook
>
> I don't understand. Is that just a name change in the code?

Let me put it this way: what doesn't it do that your code did?

You're right, it *is* just a name change in the code, but if you *do*
later want to specialise the notebook behaviour you can replace the
assignment with a class definition without changing any of your other code.
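To make the alias-then-subclass trick concrete, here is a minimal sketch. It uses a stand-in Notebook class rather than wx so the shape of the idea is visible on its own, and the pages attribute is a hypothetical extension, not anything from wx:

```python
class Notebook:
    """Stand-in for a library widget such as wx.Notebook."""
    def __init__(self, parent):
        self.parent = parent

# Today: just a name binding.  Callers write MyNotebook(panel) and get a
# plain Notebook; there is nothing extra to write or maintain.
MyNotebook = Notebook

# Later, when there is real behaviour to add, replace the binding with a
# subclass -- none of the calling code has to change:
class MyNotebook(Notebook):
    def __init__(self, parent):
        Notebook.__init__(self, parent)
        self.pages = []              # hypothetical new behaviour

notebook = MyNotebook("panel")
```

Either way the caller's line stays `notebook = MyNotebook(panel)`, which is the whole point of starting with the alias.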

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden

Sep 28 '06 #6

Steve Holden wrote:
> You're right, it *is* just a name change in the code, but if you *do*
> later want to specialise the notebook behaviour you can replace the
> assignment with a class definition without changing any of your other code.
Oh! I wasn't quite thinking along those lines, but now it sounds good! :)
Sep 29 '06 #7

John Salerno wrote:
> So my question in general is, is it a good idea to default to an OOP
> design like my second example when you aren't even sure you will need
> it? I know it won't hurt, and is probably smart to do sometimes, but
> maybe it also just adds unnecessary code to the program.
In general, no. I'm a strong believer in You Aren't Going to Need It
(YAGNI):
http://c2.com/xp/YouArentGonnaNeedIt.html

because it *does* hurt
- you have to write the code in the first place
- every time you see a reference to MyNotebook you have to remind
yourself that it's just a wx.Notebook
- anyone else looking at the code has to figure out that MyNotebook is
just wx.Notebook, and then wonder if they are missing something subtle
because you must have had a reason to create a new class...

and so on... Putting in extra complexity because you think you will need
it later leads to code bloat. It's usually a bad idea.

Possible exceptions are
- If you are really, really, really sure you are going to need it
really, really soon and it would be much, much easier to add it now than
after the next three features go in, then you might consider adding it
now. But are you really that good at predicting the future?
- When you are working in a domain that you are very familiar with and
the last six times you did this job, you needed this code, and you have
no reason to think this time is any different.

You struck a nerve here; I have seen so clearly at work the difference
between projects that practice YAGNI and those that are designed to meet
any possible contingency. It's the difference between running in running
shoes and running in wet, muddy boots.

Kent
Sep 30 '06 #8

Kent Johnson wrote:
> You struck a nerve here; I have seen so clearly at work the difference
> between projects that practice YAGNI and those that are designed to meet
> any possible contingency. It's the difference between running in running
> shoes and running in wet, muddy boots.
LOL. Yeah, I guess so. I kind of expected the answer to be "yes, always
go with OOP in preparation for future expansion", but I should have
known that Python programmers would be a little more pragmatic than that. :)
Sep 30 '06 #9

John Salerno wrote:
> LOL. Yeah, I guess so. I kind of expected the answer to be "yes, always
> go with OOP in preparation for future expansion", but I should have
> known that Python programmers would be a little more pragmatic than that. :)

Depends on the design philosophy of a particular programmer. YAGNI and
extreme programming are one approach. But, personally, I take that more
as a suggestion than a rule. For example, I don't *need* startswith:

s = 'Cat in a tree'
f = 'Cat'
if s[0:len(f)] == f:
    ...

But I'm glad startswith is there because it makes it easier for me to
understand what's going on and to use literals rather than temporary
variables:

if s.startswith('Cat'):
    ...

Regards,
Jordan

Sep 30 '06 #10

* Kent Johnson wrote (on 9/30/2006 2:04 PM):
> John Salerno wrote:
>> So my question in general is, is it a good idea to default to an OOP
>> design like my second example when you aren't even sure you will need
>> it? I know it won't hurt, and is probably smart to do sometimes, but
>> maybe it also just adds unnecessary code to the program.
>
> In general, no. I'm a strong believer in You Aren't Going to Need It
> (YAGNI):
> http://c2.com/xp/YouArentGonnaNeedIt.html
>
> because it *does* hurt
> - you have to write the code in the first place
> - every time you see a reference to MyNotebook you have to remind
> yourself that it's just a wx.Notebook
> - anyone else looking at the code has to figure out that MyNotebook is
> just wx.Notebook, and then wonder if they are missing something subtle
> because you must have had a reason to create a new class...
>
> and so on... Putting in extra complexity because you think you will need
> it later leads to code bloat. It's usually a bad idea.
>
> Possible exceptions are
> - If you are really, really, really sure you are going to need it
> really, really soon and it would be much, much easier to add it now than
> after the next three features go in, then you might consider adding it
> now. But are you really that good at predicting the future?
> - When you are working in a domain that you are very familiar with and
> the last six times you did this job, you needed this code, and you have
> no reason to think this time is any different.
>
> You struck a nerve here; I have seen so clearly at work the difference
> between projects that practice YAGNI and those that are designed to meet
> any possible contingency. It's the difference between running in running
> shoes and running in wet, muddy boots.
>
> Kent
I have only caught the tail of this thread so far so I may have missed
some important info. However, Kent's response is, I think, a bit of
an oversimplification.

The answer to the original question, as quoted above, is ... it depends.
On several things, actually.

If this is a 'one-shot' program or simple library to accomplish a
limited goal then the added complexity of OO is probably overkill. Many
scripts fall into this category. You can go to a lot of trouble to
generate an OO solution to a simple problem and not get much payoff for
your effort. Simple problems are often solved best with simple
solutions.

However, when an application (or library) is designed to provide a more
'general purpose' solution to one or more problems and is likely to have
a lifetime beyond the 'short term' (whatever that may mean to you), then
OO can start to pay off. In these kinds of applications you see the
need for future maintenance and a likely need to expand on the existing
solution to add new features or cover new ground. This is made easier
when the mechanism for this expansion is planned for in advance.

Without this prior planning, any expansion (not to mention bug fixing)
becomes more difficult and makes the resulting code more brittle. While
not all planning for the future requires OO, this is one mechanism that
can be employed effectively *because* it is generally well understood
and can be readily grasped *if* it is planned and documented well.

There is certainly a *lot* of 'Gratuitous OOP' (GOOP?) out there. This
isn't a good thing. However, that doesn't mean that the use of OOP in
any given project is bad. It may be inappropriate.

OO is simply a way of dealing with complexity in SW development. If a
problem is complex then the solution will have to deal with that
complexity. If OO can be used to make a complex solution less complex
then it is appropriate. If the use of OO makes a simple solution *more*
complex then it is being used inappropriately.

It is not only necessary to have the correct tools for the job but also
to be skilled in their use. Part of the skill of a SW developer is in
picking the correct tool for the job - and then using it correctly.

Mark
Oct 1 '06 #11

John Salerno wrote:
> Ok, I have a new random question for today -- feel free to ignore and
> get back to your real jobs! :)
>
> Anyway, I'm creating a GUI (yes, all part of my master plan to
> eventually have some sort of database application working) and it's
> going to involve a wx.Notebook control. I think I have two options for
> how I can do this. Within the class for the main window frame, I can say:
>
> notebook = wx.Notebook(panel) # panel is parent of the Notebook control
>
> This uses the default wx.Notebook class, and works just fine. But I was
> thinking, is it a smart idea to do this instead:
>
> class MyNotebook(wx.Notebook):
>     def __init__(self, parent):
>         wx.Notebook.__init__(self, parent)
>
> and then call it from the main frame as:
>
> notebook = MyNotebook(panel)
>
> This seems to allow for future expansion of the customized Notebook
> class, but at the same time I have no idea how or why I'd want to do that.
>
> So my question in general is, is it a good idea to default to an OOP
> design like my second example when you aren't even sure you will need
> it?
It of course depends on a lot of factors. Two of these factors are:

1/ given my current knowledge of the project, what are the probabilities
that I'll end up subclassing wx.Notebook ?

2/ if so, in how many places would I have to s/wx.Notebook/MyNotebook/

The second factor is certainly the most important here. Even if the
answer to 1/ is "50%", if there's only a couple of calls in a single
file, there's really no need to do anything for now IMHO.

As a side note, the common OO pattern for this potential problem is to
replace direct instantiation with a factory, so you just have to modify
the factory's implementation.

Now one of the nice things with Python is that it doesn't have a "new"
keyword, instead using direct calls to the class (the fact is that in
Python, classes *are* factories already). Another nice thing is that you
can easily 'alias' callables. The combination of these two features makes
the factory pattern mostly straightforward and transparent. As someone
already pointed out, you don't need to subclass wx.Notebook - just
'alias' it to another name.
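A minimal sketch of that class-as-factory point (pure Python with a stand-in class rather than wx; make_notebook and the pages attribute are names invented for illustration):

```python
class Notebook:
    """Stand-in for a library widget such as wx.Notebook."""
    def __init__(self, parent):
        self.parent = parent

# A class is already a callable factory, so the degenerate factory
# pattern is a plain alias:
make_notebook = Notebook
nb1 = make_notebook("panel")

# If construction later needs extra steps, the alias becomes a real
# factory function, and no calling code changes:
def make_notebook(parent):
    nb = Notebook(parent)
    nb.pages = []                    # hypothetical extra setup
    return nb

nb2 = make_notebook("panel")
```

Callers only ever see `make_notebook(...)`, so swapping the class for a function (or a subclass) is invisible to them.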
> I know it won't hurt,

Mmm... Not so sure. One could argue that "premature generalization is
the root of all evil" !-)

> and is probably smart to do sometimes,

cf above.

> but maybe it also just adds unnecessary code to the program.

It's not that much about the unnecessary code (which can boil down to a
single assignment), but about the unnecessary level of indirection
(which, as someone stated, is the one and only problem that cannot be
solved by adding a level of indirection !-).

--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Oct 2 '06 #12

Mark Elston wrote:
> * Kent Johnson wrote (on 9/30/2006 2:04 PM):
>> John Salerno wrote:
>>> So my question in general is, is it a good idea to default to an OOP
>>> design like my second example when you aren't even sure you will need
>>> it? I know it won't hurt, and is probably smart to do sometimes, but
>>> maybe it also just adds unnecessary code to the program.
>>
>> In general, no. I'm a strong believer in You Aren't Going to Need It
>> (YAGNI):
>> http://c2.com/xp/YouArentGonnaNeedIt.html
>>
>> because it *does* hurt
>> - you have to write the code in the first place
>> - every time you see a reference to MyNotebook you have to remind
>> yourself that it's just a wx.Notebook
>> - anyone else looking at the code has to figure out that MyNotebook is
>> just wx.Notebook, and then wonder if they are missing something subtle
>> because you must have had a reason to create a new class...
>>
>> and so on... Putting in extra complexity because you think you will need
>> it later leads to code bloat. It's usually a bad idea.
>>
>> Possible exceptions are
>> - If you are really, really, really sure you are going to need it
>> really, really soon and it would be much, much easier to add it now than
>> after the next three features go in, then you might consider adding it
>> now. But are you really that good at predicting the future?
>> - When you are working in a domain that you are very familiar with and
>> the last six times you did this job, you needed this code, and you have
>> no reason to think this time is any different.
>>
>> You struck a nerve here; I have seen so clearly at work the difference
>> between projects that practice YAGNI and those that are designed to meet
>> any possible contingency. It's the difference between running in running
>> shoes and running in wet, muddy boots.
>>
>> Kent
>
> I have only caught the tail of this thread so far so I may have missed
> some important info. However, Kent's response is, I think, a bit of
> an oversimplification.
>
> The answer to the original question, as quoted above, is ... it depends.
> On several things, actually.
Of course.
> However, when an application (or library) is designed to provide a more
> 'general purpose' solution to one or more problems and is likely to have
> a lifetime beyond the 'short term' (whatever that may mean to you), then
> OO can start to pay off. In these kinds of applications you see the
> need for future maintenance and a likely need to expand on the existing
> solution to add new features or cover new ground. This is made easier
> when the mechanism for this expansion is planned for in advance.

I am a fan of OOP and use it all the time. I was just arguing against
using it when it is not called for.
> Without this prior planning, any expansion (not to mention bug fixing)
> becomes more difficult and makes the resulting code more brittle. While
> not all planning for the future requires OO, this is one mechanism that
> can be employed effectively *because* it is generally well understood
> and can be readily grasped *if* it is planned and documented well.
Unfortunately prior planning is an attempt to predict the future.
Correctly planning for future requirements is difficult. It is possible
to expand code without making it brittle.

Robert Martin has a great rule of thumb - first, do the simplest thing
that meets the current requirements. When the requirements change,
change the code so it will accommodate future changes of the same type.
Rather than try to anticipate all future changes, make the code easy to
change.
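Robert Martin's rule of thumb can be sketched in a few lines (a toy example; the function names and formats are invented for illustration, not from the thread):

```python
# First requirement: export records as CSV.  Do the simplest thing
# that meets it:
def export_csv(records):
    return "\n".join(",".join(str(v) for v in row) for row in records)

# When a *second* format is actually requested, refactor so that adding
# a format is cheap -- accommodate future changes of the same type:
def export(records, fmt="csv"):
    separators = {"csv": ",", "tsv": "\t"}   # a new format is one entry
    sep = separators[fmt]
    return "\n".join(sep.join(str(v) for v in row) for row in records)
```

The generality is paid for only after a real change request proves it is needed, rather than guessed at up front.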
> There is certainly a *lot* of 'Gratuitous OOP' (GOOP?) out there. This
> isn't a good thing. However, that doesn't mean that the use of OOP in
> any given project is bad. It may be inappropriate.
In my experience a lot of GOOP results exactly from trying to anticipate
future requirements, thus introducing unneeded interfaces, factories, etc.

Kent
Oct 4 '06 #13

Kent Johnson wrote:
>> There is certainly a *lot* of 'Gratuitous OOP' (GOOP?) out there.
>
> In my experience a lot of GOOP results
LOL. Good thing we didn't go with the acronym from my phrase "preemptive
OOP" ;)
Oct 4 '06 #14

* Kent Johnson wrote (on 10/4/2006 10:04 AM):
> Mark Elston wrote:
>> ...
>> Without this prior planning, any expansion (not to mention bug fixing)
>> becomes more difficult and makes the resulting code more brittle. While
>> not all planning for the future requires OO, this is one mechanism that
>> can be employed effectively *because* it is generally well understood
>> and can be readily grasped *if* it is planned and documented well.
>
> Unfortunately prior planning is an attempt to predict the future.
> Correctly planning for future requirements is difficult. It is possible
> to expand code without making it brittle.
Hmmm....

I work in an environment where we plan our 'feature' implementation
several revisions in the future. We know where we are going because
we have a backlog of end user requests for new features and a limited
pool of people to implement them.

Knowing what will have to change in the future makes this kind of
planning *much* simpler.

However, I still find it a lot easier to plan for change even without
explicit requests than you seem to indicate. I have worked in the
field for over 10 years now and I have a pretty good idea of the
kinds of things our users will need. I also have a pretty good idea
of the kinds of things we can introduce that users will find useful.
Planning for the introduction of these things is pretty useful when
we have short turn-around time between revisions and have to implement
any number of new features during these iterations.

And adding new features and handling new requests in a well thought
out manner helps us to keep the system from being brittle.

BTW, the kind of SW we develop is for large pieces of test equipment
used in semiconductor test. This SW covers the range from embedded and
driver level supporting custom hardware to end-user applications. There
is simply no way we could survive if we tried to develop software in any
kind of an ad hoc manner.
> Robert Martin has a great rule of thumb - first, do the simplest thing
> that meets the current requirements. When the requirements change,
> change the code so it will accommodate future changes of the same type.
> Rather than try to anticipate all future changes, make the code easy to
> change.
In our case the requirements don't really change. They do get
augmented, however. That is, the users will want to continue to do the
things they already can - in pretty much the same ways. However, they
will also want to do additional things.

Robert's rule of thumb is applied here as well. We may have a pretty
good idea of where we are going, but we can get by for now implementing
a minimal subset. However, if we don't anticipate where we are going
to be in the next revision or two it is very likely to entail some
substantial rewrite of existing code when we get there. That results in
an unacceptable cost of development - both in terms of $$ and time. Not
only is the code obsoleted, but so are all the components that depend on
the rewritten code and all of the tests (unit and integration) for all
the affected components.
>> There is certainly a *lot* of 'Gratuitous OOP' (GOOP?) out there. This
>> isn't a good thing. However, that doesn't mean that the use of OOP in
>> any given project is bad. It may be inappropriate.
>
> In my experience a lot of GOOP results exactly from trying to anticipate
> future requirements, thus introducing unneeded interfaces, factories, etc.
While we do our best to avoid this, it *does* sometimes happen.
However, it is not as much of a problem as the reverse. If an interface
is developed that turns out not to be very useful, we can always remove
it with very little cost. If we have to *replace* or substantially
modify an existing mechanism to support a new feature the ripple effect
can cripple our schedule for several release iterations.

OTOH, your point is a good one: If we really didn't know where we
wanted to be in the future then making a wild guess and heading off
in some random direction isn't likely to be very beneficial either.
In fact it is likely to be more costly in the long run.

That strikes me as "Crap Shoot Development" (a relative of POOP? :) ).

Mark

----------------------------------------------------------------------------------
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."

-- Brian Kernighan of C
Oct 4 '06 #15

* John Salerno wrote (on 10/4/2006 10:18 AM):
> Kent Johnson wrote:
>>> There is certainly a *lot* of 'Gratuitous OOP' (GOOP?) out there.
>>
>> In my experience a lot of GOOP results
>
> LOL. Good thing we didn't go with the acronym from my phrase "preemptive
> OOP" ;)
Oops. I couldn't resist. See my previous post.

Mark
----------------------------------------------------------------------------------
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."

-- Brian Kernighan of C
Oct 4 '06 #16
