
Should I use "if" or "try" (as a matter of speed)?

I know that this topic has the potential for blowing up in my face,
but I can't help asking. I've been using Python since 1.5.1, so I'm
not what you'd call a "n00b". I dutifully evangelize on the goodness
of Python whenever I talk with fellow developers, but I always hit a
snag when it comes to discussing the finer points of the execution
model (specifically, exceptions).

Without fail, when I start talking with some of the "old-timers"
(people who have written code in ADA or Fortran), I hear the same
arguments that using "if" is "better" than using "try". I think that
the argument goes something like, "When you set up a 'try' block, you
have to set up a lot of extra machinery than is necessary just
executing a simple conditional."

I was wondering how true this holds for Python, where exceptions are
such an integral part of the execution model. It seems to me, that if
I'm executing a loop over a bunch of items, and I expect some
condition to hold for a majority of the cases, then a "try" block
would be in order, since I could eliminate a bunch of potentially
costly comparisons for each item. But in cases where I'm only trying
a single getattr (for example), using "if" might be a cheaper way to
go.

What do I mean by "cheaper"? I'm basically talking about the number
of instructions that are necessary to set up and execute a try block
as opposed to an if block.

Could you please tell me if I'm even remotely close to understanding
this correctly?
--
Steve Juranich
Tucson, AZ
USA
Jul 21 '05 #1
40 Replies


ncf
Honestly, I'm rather new to python, but my best bet would be to create
some test code and time it.

Jul 21 '05 #2

My shot would be to test it on your platform like this:

#!/usr/bin/env python
import datetime, time
t1 = datetime.datetime.now()
for i in [str(x) for x in range(100)]:
    if int(i) == i:
        i + 1
t2 = datetime.datetime.now()
print t2 - t1
for i in [str(x) for x in range(100)]:
    try:
        int(i) + 1
    except:
        pass
t3 = datetime.datetime.now()
print t3 - t2

for me (on Python 2.4.1 on Linux on an AMD Sempron 2200+) it gives:
0:00:00.000637
0:00:00.000823

Jul 21 '05 #3

Steve Juranich <sj******@gmail.com> wrote:
Without fail, when I start talking with some of the "old-timers"
(people who have written code in ADA or Fortran), I hear the same
arguments that using "if" is "better" than using "try".
Well, you've now got a failure. I used to write Fortran on punch cards, so
I guess that makes me an "old-timer", and I don't agree with that argument.
I think that the argument goes something like, "When you set up a 'try'
block, you have to set up a lot of extra machinery than is necessary
just executing a simple conditional."
That sounds like a very C++ kind of attitude, where efficiency is prized
above all else, and exception handling is relatively heavy-weight compared
to a simple conditional.
What do I mean by "cheaper"? I'm basically talking about the number
of instructions that are necessary to set up and execute a try block
as opposed to an if block.


Don't worry about crap like that until the whole application is done and
it's not running fast enough, and you've exhausted all efforts to identify
algorithmic improvements that could be made, and careful performance
measurements have shown that the use of try blocks is the problem.

Exceptions are better than returning an error code for several reasons:

1) They cannot be silently ignored by accident. If you don't catch an
exception, it bubbles up until something does catch it, or nothing does and
your program dies with a stack trace. You can ignore them if you want, but
you have to explicitly write some code to do that.

2) It separates the normal flow of control from the error processing. In
many cases, this makes it easier to understand the program logic.

3) In some cases, they can lead to faster code. A classic example is
counting occurrences of items using a dictionary:

count = {}
for key in whatever:
    try:
        count[key] += 1
    except KeyError:
        count[key] = 1

compared to

count = {}
for key in whatever:
    if count.hasKey(key):
        count[key] += 1
    else:
        count[key] = 1

if most keys are going to already be in the dictionary, handling the
occasional exception will be faster than calling hasKey() for every one.
Jul 21 '05 #4

wi******@hotmail.com <ma**********@gmail.com> wrote:
My shot would be to test it on your platform like this:

#!/usr/bin/env python
import datetime, time
t1 = datetime.datetime.now()
for i in [str(x) for x in range(100)]:
    if int(i) == i:
        i + 1
t2 = datetime.datetime.now()
print t2 - t1
for i in [str(x) for x in range(100)]:
    try:
        int(i) + 1
    except:
        pass
t3 = datetime.datetime.now()
print t3 - t2

for me (on Python 2.4.1 on Linux on an AMD Sempron 2200+) it gives:
0:00:00.000637
0:00:00.000823


PowerBook:~/Desktop wezzy$ python test.py
0:00:00.001206
0:00:00.002092

Python 2.4.1 Pb15 with Tiger
--
Ciao
Fabio
Jul 21 '05 #5

wi******@hotmail.com wrote:
> My shot would be to test it on your platform like this:
>
> #!/usr/bin/env python
> import datetime, time

Why not use the timeit module instead?

> t1 = datetime.datetime.now()
> for i in [str(x) for x in range(100)]:

A bigger range (at least 10/100x more) would probably be better...

>     if int(i) == i:

This will never be true, so the next line...

>         i + 1

...won't ever be executed.

> t2 = datetime.datetime.now()
> print t2 - t1
> for i in [str(x) for x in range(100)]:
>     try:
>         int(i) + 1
>     except:
>         pass

This will never raise, so the addition will always be executed (it never
will be in the previous loop).

> t3 = datetime.datetime.now()
> print t3 - t2


BTW, you end up including the time spent printing t2 - t1 in the
timing, and IO can be (very) costly.

(snip meaningless results)

The "test-before vs try-except" strategy is almost a FAQ, and the usual
answer is that it depends on the hit/miss ratio. If the (expected)
ratio is high, try-except is better. If it's low, test-before is better.
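That ratio dependence is easy to demonstrate with the timeit module mentioned above. The following is a rough sketch in modern Python (the workload, key names, and repetition counts are made up for illustration, not taken from the thread):

```python
import random
import timeit

def eafp(keys):
    """EAFP: attempt the update and handle the occasional KeyError."""
    d = {}
    for k in keys:
        try:
            d[k] += 1
        except KeyError:
            d[k] = 1
    return d

def lbyl(keys):
    """LBYL: test for the key before every update."""
    d = {}
    for k in keys:
        if k in d:
            d[k] += 1
        else:
            d[k] = 1
    return d

n = 10_000
for miss_rate in (0.01, 0.5, 0.99):
    misses = int(n * miss_rate)
    # Each distinct integer key misses once; the repeated "hot" key
    # hits on every lookup after its first insertion.
    keys = list(range(misses)) + ["hot"] * (n - misses)
    random.shuffle(keys)
    t_try = timeit.timeit(lambda: eafp(keys), number=20)
    t_if = timeit.timeit(lambda: lbyl(keys), number=20)
    print("miss rate %.0f%%: try=%.4fs  if=%.4fs" % (miss_rate * 100, t_try, t_if))
```

On a typical CPython build the try version tends to win at low miss rates and lose badly at high miss rates, which is exactly the hit/miss argument above.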

HTH
Jul 21 '05 #6

* Steve Juranich (2005-07-09 19:21 +0100)
I know that this topic has the potential for blowing up in my face,
but I can't help asking. I've been using Python since 1.5.1, so I'm
not what you'd call a "n00b". I dutifully evangelize on the goodness
of Python whenever I talk with fellow developers, but I always hit a
snag when it comes to discussing the finer points of the execution
model (specifically, exceptions).

Without fail, when I start talking with some of the "old-timers"
(people who have written code in ADA or Fortran), I hear the same
arguments that using "if" is "better" than using "try". I think that
the argument goes something like, "When you set up a 'try' block, you
have to set up a lot of extra machinery than is necessary just
executing a simple conditional."

I was wondering how true this holds for Python, where exceptions are
such an integral part of the execution model. It seems to me, that if
I'm executing a loop over a bunch of items, and I expect some
condition to hold for a majority of the cases, then a "try" block
would be in order, since I could eliminate a bunch of potentially
costly comparisons for each item. But in cases where I'm only trying
a single getattr (for example), using "if" might be a cheaper way to
go.

What do I mean by "cheaper"? I'm basically talking about the number
of instructions that are necessary to set up and execute a try block
as opposed to an if block.


"Catch errors rather than avoiding them to avoid cluttering your code
with special cases. This idiom is called EAFP ('easier to ask
forgiveness than permission'), as opposed to LBYL ('look before you
leap')."

http://jaynes.colorado.edu/PythonIdioms.html
Jul 21 '05 #7


"wi******@hotmail.com" <ma**********@gmail.com> wrote in message
news:11**********************@g47g2000cwa.googlegroups.com...
My shot would be to test it on your platform like this:

#!/usr/bin/env python
import datetime, time
t1 = datetime.datetime.now()
for i in [str(x) for x in range(100)]:
    if int(i) == i:
        i + 1
t2 = datetime.datetime.now()
print t2 - t1
for i in [str(x) for x in range(100)]:
    try:
        int(i) + 1
    except:
        pass
t3 = datetime.datetime.now()
print t3 - t2


This is not a proper test, since the if condition always fails and the
addition is not done, while the try succeeds and the addition is done. To be
equivalent, remove the int call in the try part: try: i + 1. This would
still not be a proper test, since catching exceptions is known to be expensive
and try: except is meant for catching *exceptional* conditions, not
always-bad conditions. Here is a test that I think more useful:

for n in [1, 2, 3, 4, 5, 10, 20, 50, 100]:
    # time this
    for i in range(n):
        if i != 0: x = 1/i
        else: pass
    # versus
    for i in range(n):
        try: x = 1/i
        except ZeroDivisionError: pass

I expect this will show "if" faster for small n and "try" faster for large n.

Terry J. Reedy

Jul 21 '05 #8

"Thorsten Kampe" <th******@thorstenkampe.de> wrote in message
news:6i**************************@40tude.net...
* Steve Juranich (2005-07-09 19:21 +0100)
I know that this topic has the potential for blowing up in my face,
but I can't help asking. I've been using Python since 1.5.1, so I'm
not what you'd call a "n00b". I dutifully evangelize on the goodness
of Python whenever I talk with fellow developers, but I always hit a
snag when it comes to discussing the finer points of the execution
model (specifically, exceptions).

Without fail, when I start talking with some of the "old-timers"
(people who have written code in ADA or Fortran), I hear the same
arguments that using "if" is "better" than using "try". I think that
the argument goes something like, "When you set up a 'try' block, you
have to set up a lot of extra machinery than is necessary just
executing a simple conditional."

I was wondering how true this holds for Python, where exceptions are
such an integral part of the execution model. It seems to me, that if
I'm executing a loop over a bunch of items, and I expect some
condition to hold for a majority of the cases, then a "try" block
would be in order, since I could eliminate a bunch of potentially
costly comparisons for each item. But in cases where I'm only trying
a single getattr (for example), using "if" might be a cheaper way to
go.

What do I mean by "cheaper"? I'm basically talking about the number
of instructions that are necessary to set up and execute a try block
as opposed to an if block.


"Catch errors rather than avoiding them to avoid cluttering your code
with special cases. This idiom is called EAFP ('easier to ask
forgiveness than permission'), as opposed to LBYL ('look before you
leap')."

http://jaynes.colorado.edu/PythonIdioms.html


It depends on what you're doing, and I don't find a "one size fits all"
approach to be all that useful.

If execution speed is paramount and exceptions are relatively rare,
then the try block is the better approach.

If you simply want to throw an exception, then the clearest way
of writing it that I've ever found is to encapsulate the raise statement
together with the condition test in a subroutine with a name that
describes what's being tested for. Even a name as poor as
"HurlOnFalseCondition(<condition>, <exception>, <parms>, <message>)"
can be very enlightening. It gets rid of the in-line if and raise
statements, at the cost of an extra method call.

John Roth

In both approaches, you have some
error handling code that is going to clutter up your program flow.
Jul 21 '05 #9

Steve Juranich wrote:
I was wondering how true this holds for Python, where exceptions are such
an integral part of the execution model. It seems to me, that if I'm
executing a loop over a bunch of items, and I expect some condition to
hold for a majority of the cases, then a "try" block would be in order,
since I could eliminate a bunch of potentially costly comparisons for each
item.
Exactly.
But in cases where I'm only trying a single getattr (for example),
using "if" might be a cheaper way to go.
Relying on exceptions is faster. In the Python world, this coding style
is called EAFP (easier to ask forgiveness than permission). You can try
it out, just do something 10**n times and measure the time it takes. Do
this twice, once with prior checking and once relying on exceptions.

And JFTR: the very example you chose gives you yet another choice:
getattr can take a default parameter.
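To make that concrete (the Config class and attribute names here are made-up examples, not from the thread): getattr's third argument is returned when the attribute is missing, so neither an if-test nor a try block is needed for this particular case.

```python
class Config(object):
    retries = 3

cfg = Config()

# The third argument is the default used when the attribute is absent,
# avoiding both hasattr() checks and try/except AttributeError.
print(getattr(cfg, "retries", 0))   # -> 3
print(getattr(cfg, "timeout", 30))  # -> 30
```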
What do I mean by "cheaper"? I'm basically talking about the number of
instructions that are necessary to set up and execute a try block as
opposed to an if block.


I don't know about the implementation of exceptions but I suspect most
of what try does doesn't happen at run-time at all, and things get
checked and looked for only if an exception did occur. And I suspect that
it's machine code that does that checking and looking, not byte code.
(Please correct me if I'm wrong, anyone with more insight.)

--
Thomas

Jul 21 '05 #10

On Sat, 9 Jul 2005 11:21:07 -0700, Steve Juranich <sj******@gmail.com>
declaimed the following in comp.lang.python:

Without fail, when I start talking with some of the "old-timers"
(people who have written code in ADA or Fortran), I hear the same
For an "old-timer" the languages are "Ada" and "FORTRAN"; "ADA"
is the American Dental Association, Americans with Disabilities Act,
etc. Or to be more explicit; Ada is a "name", FORTRAN was an
abbreviation/acronym (FORmula TRANslation). "Fortran" was decreed a
"name" with the F90 standard and may be used in general, but FORTRAN is
still proper for F77/F66 (F-IV), etc. which predate the F90 standard.

{Sorry -- but as a longtime fan, with no practical experience, of Ada,
this is a sticking point. Granted, it doesn't help that the Ada language
is, internally, case insensitive <G>}

--
wl*****@ix.netcom.com  |  Wulfraed  Dennis Lee Bieber  KD6MOG
wu******@dm.net        |  Bestiaria Support Staff
Home Page: <http://www.dm.net/~wulfraed/>
Overflow Page: <http://wlfraed.home.netcom.com/>

Jul 21 '05 #11

Thomas Lotze wrote:
Steve Juranich wrote:
What do I mean by "cheaper"? I'm basically talking about the number of
instructions that are necessary to set up and execute a try block as
opposed to an if block.


I don't know about the implementation of exceptions but I suspect most
of what try does doesn't happen at run-time at all, and things get
checked and looked for only if an exception did occur. And I suspect that
it's machine code that does that checking and looking, not byte code.
(Please correct me if I'm wrong, anyone with more insight.)


Part right, part confusing. Definitely "try" is something that happens
at run-time, not compile time, at least in the sense of the execution of
the corresponding byte code. At compile time nothing much happens
except a determination of where to jump if an exception is actually
raised in the try block.

Try corresponds to a single bytecode SETUP_EXCEPT, so from the point of
view of Python code it is extremely fast, especially compared to
something like a function call (which some if-tests would do). (There
are also corresponding POP_BLOCK and JUMP_FORWARD instructions at the
end of the try block, and they're even faster, though the corresponding
if-test version would similarly have a jump of some kind involved.)

Exceptions in Python are checked for all the time, so there's little you
can do to avoid part of the cost of that. There is a small additional
cost (in the C code) when the exceptional condition is actually present,
of course, with some resulting work to create the Exception object and
raise it.

Some analysis of this can be done trivially by anyone with a working
interpreter, using the "dis" module.

def f():
    try:
        func()
    except:
        print 'ni!'

import dis
dis.dis(f)

Each line of the output represents a single bytecode instruction plus
operands, similar to an assembly code disassembly.
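As a further sketch in modern Python (opcode names vary across CPython versions; the SETUP_EXCEPT opcode named above is from the 2.x era), one can disassemble the two styles side by side and compare the setup cost directly:

```python
import dis

def with_try(d):
    # EAFP: one constant-cost block setup, then the lookup itself.
    try:
        return d["k"]
    except KeyError:
        return None

def with_if(d):
    # LBYL: a membership test executed on every single call.
    if "k" in d:
        return d["k"]
    return None

dis.dis(with_try)
dis.dis(with_if)
```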

To go further, get the Python source and skim through the ceval.c
module, or do that via CVS
http://cvs.sourceforge.net/viewcvs.p....424&view=auto
, looking for the string "main loop".

And, in any case, remember that readability is almost always more
important than optimization, and you should consider first whether one
or the other approach is clearly more expressive (for future
programmers, including yourself) in the specific case involved.

-Peter
Jul 21 '05 #12

Roy Smith wrote:
Well, you've now got a failure. I used to write Fortran on punch cards,
which were then fed into an OCR gadget? That's an efficient approach --
where I was, we had to write the FORTRAN[*] on coding sheets; KPOs
would then produce the punched cards.

[snip]

3) In some cases, they can lead to faster code. A classic example is
counting occurrences of items using a dictionary:

count = {}
for key in whatever:
    try:
        count[key] += 1
    except KeyError:
        count[key] = 1

compared to

count = {}
for key in whatever:
    if count.hasKey(key):


Perhaps you mean has_key[*].
Perhaps you might like to try

if key in count:

It's believed to be faster (no attribute lookup, no function call).

[snip]
[*]
humanandcomputerlanguagesshouldnotimhousecaseandwordseparatorsascrutchesbuttheydosogetusedtoit
:-)
Jul 21 '05 #13

On Sat, 09 Jul 2005 23:10:49 +0200, Thomas Lotze wrote:
Steve Juranich wrote:
I was wondering how true this holds for Python, where exceptions are such
an integral part of the execution model. It seems to me, that if I'm
executing a loop over a bunch of items, and I expect some condition to
hold for a majority of the cases, then a "try" block would be in order,
since I could eliminate a bunch of potentially costly comparisons for each
item.


Exactly.
But in cases where I'm only trying a single getattr (for example),
using "if" might be a cheaper way to go.


Relying on exceptions is faster. In the Python world, this coding style
is called EAFP (easier to ask forgiveness than permission). You can try
it out, just do something 10**n times and measure the time it takes. Do
this twice, once with prior checking and once relying on exceptions.


True, but only sometimes. It is easy to write a test that gives misleading
results.

In general, setting up a try...except block is cheap, but actually calling
the except clause is expensive. So in a test like this:

for i in range(10000):
    try:
        x = mydict["missing key"]
    except KeyError:
        print "Failed!"

will be very slow (especially if you time the print, which is slow).

On the other hand, this will be very fast:

for i in range(10000):
    try:
        x = mydict["existing key"]
    except KeyError:
        print "Failed!"

since the except is never called.

On the gripping hand, testing for errors before they happen will be slow
if errors are rare:

for i in range(10000):
    if i == 0:
        print "Failed!"
    else:
        x = 1.0/i

This only fails on the very first test, and never again.

When doing your test cases, try to avoid timing things unrelated to the
thing you are actually interested in, if you can help it. Especially I/O,
including print. Do lots of loops, if you can, so as to average away
random delays due to the operating system etc. But most importantly, your
test data must reflect the real data you expect. Are most tests
successful or unsuccessful? How do you know?

However, in general, there are two important points to consider.

- If your code has side effects (eg changing existing objects, writing to
files, etc), then you might want to test for error conditions first.
Otherwise, you can end up with your data in an inconsistent state.

Example:

L = [3, 5, 0, 2, 7, 9]

def invert(L):
    """Changes L in place by inverting each item."""
    try:
        for i in range(len(L)):
            L[i] = 1.0/L[i]
    except ZeroDivisionError:
        pass

invert(L)
print L

=> [0.333, 0.2, 0, 2, 7, 9]
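One way to keep the data consistent in such cases is to validate the whole input before mutating anything. This is a sketch in modern Python, not the poster's code; the function name is made up:

```python
def invert_checked(L):
    """Invert each item of L in place, but only after checking the
    whole list first, so L is never left half-modified."""
    if any(x == 0 for x in L):
        raise ZeroDivisionError("list contains zero; nothing was changed")
    for i in range(len(L)):
        L[i] = 1.0 / L[i]

L = [3, 5, 0, 2, 7, 9]
try:
    invert_checked(L)
except ZeroDivisionError:
    pass
print(L)  # -> [3, 5, 0, 2, 7, 9]  (unchanged, unlike the version above)
```

The trade-off is the LBYL one discussed throughout the thread: the extra pass over the data buys atomicity.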
- Why are you optimizing your code now anyway? Get it working the simplest
way FIRST, then _time_ how long it runs. Then, if and only if it needs to
be faster, should you worry about optimizing. The simplest way will often
be try...except blocks.
--
Steven.

Jul 21 '05 #14

Steve Juranich <sj******@gmail.com> wrote in
news:ma***************************************@python.org:
I was wondering how true this holds for Python, where exceptions are
such an integral part of the execution model. It seems to me, that if
I'm executing a loop over a bunch of items, and I expect some
condition to hold for a majority of the cases, then a "try" block
would be in order, since I could eliminate a bunch of potentially
costly comparisons for each item. But in cases where I'm only trying
a single getattr (for example), using "if" might be a cheaper way to
go.

What do I mean by "cheaper"? I'm basically talking about the number
of instructions that are necessary to set up and execute a try block
as opposed to an if block.

Could you please tell me if I'm even remotely close to understanding
this correctly?


*If* I'm not doing a lot of things once, I *try* to do one thing a lot.

Jul 21 '05 #15

Steven D'Aprano wrote:
On the gripping hand, testing for errors before they happen will be slow
if errors are rare:
Hm, might have something to do with why those things intended for
handling errors after they happened are called exceptions ;o)
- If your code has side effects (eg changing existing objects, writing to
files, etc), then you might want to test for error conditions first.
Otherwise, you can end up with your data in an inconsistent state.
BTW: Has the context management stuff from PEP 343 been considered for
implementing transactions?
- Why are you optimizing your code now anyway? Get it working the simplest
way FIRST, then _time_ how long it runs. Then, if and only if it needs to
be faster, should you worry about optimizing. The simplest way will often
be try...except blocks.


Basically, I agree with the "make it run, make it right, make it fast"
attitude. However, FWIW, I sometimes can't resist optimizing routines that
probably don't strictly need it. Not only does the resulting code run
faster, but it is usually also shorter and more readable and expressive.
Plus, I tend to gain further insight into the problem and tools in the
process. YMMV, of course.

--
Thomas

Jul 21 '05 #17

On Sun, 10 Jul 2005 12:15:25 +0530, Dark Cowherd wrote:
http://www.joelonsoftware.com/items/2003/10/13.html


Joel Spolsky might be a great C++ programmer, and his advice on user
interface design is invaluable, but Python is not C++ or Java, and his
arguments about exceptions do not hold in Python.

Joel argues:

"They are invisible in the source code. Looking at a block of code,
including functions which may or may not throw exceptions, there is no way
to see which exceptions might be thrown and from where. This means that
even careful code inspection doesn't reveal potential bugs."

I don't quite get this argument. In a random piece of source code, there
is no way to tell whether or not it will fail just by inspection. If you
look at:

x = 1
result = myfunction(x)

you can't tell whether or not myfunction will fail at runtime just by
inspection, so why should it matter whether it fails by crashing at
runtime or fails by raising an exception?

Joel's argument that raising exceptions is just a goto in disguise is
partly correct. But so are for loops, while loops, functions and methods!
Like those other constructs, exceptions are gotos tamed and put to work
for you, instead of wild and dangerous. You can't jump *anywhere*, only
highly constrained places.

Joel also writes:

"They create too many possible exit points for a function. To write
correct code, you really have to think about every possible code path
through your function. Every time you call a function that can raise an
exception and don't catch it on the spot, you create opportunities for
surprise bugs caused by functions that terminated abruptly, leaving data
in an inconsistent state, or other code paths that you didn't think about."

This is a better argument for *careful* use of exceptions, not an argument
to avoid them. Or better still, it is an argument for writing code which
doesn't has side-effects and implements data transactions. That's a good
idea regardless of whether you use exceptions or not.

Joel's concern about multiple exit points is good advice, but it can be
taken too far. Consider the following code snippet:

def myfunc(x=None):
    result = ""
    if x is None:
        result = "No argument given"
    elif x == 0:
        result = "Zero"
    elif 0 < x <= 3:
        resutl = "x is between 0 and 3"
    else:
        result = "x is more than 3"
    return result

There is no benefit in deferring the return value as myfunc does, just
for the sake of having a single exit point. "Have a single exit point"
is a good heuristic for many functions, but it is pointless make-work for
this one. (In fact, it increases, not decreases, the chances of a bug. If
you look carefully, myfunc above has such a bug.)

Used correctly, exceptions in Python have more advantages than
disadvantages. They aren't just for errors either: exceptions can be
triggered for exceptional cases (hence the name) without needing to track
(and debug) multiple special cases.

Lastly, let me argue against one of Joel's comments:

"A better alternative is to have your functions return error values when
things go wrong, and to deal with these explicitly, no matter how verbose
it might be. It is true that what should be a simple 3 line program often
blossoms to 48 lines when you put in good error checking, but that's life,
and papering it over with exceptions does not make your program more
robust."

Maybe that holds true for C++. I don't know the language, and wouldn't
like to guess. But it doesn't hold true for Python. This is how Joel might
write a function as a C programmer:

def joels_function(args):
    error_result = 0
    good_result = None
    process(args)
    if error_condition():
        error_result = -1  # flag for an error
    elif different_error_condition():
        error_result = -2
    else:
        more_processing()
        if another_error_condition():
            error_result = -3
        else:
            do_more_work()
            good_result = "Success!"
    if error_result != 0:
        return (False, error_result)
    else:
        return (True, good_result)
and then call it with:

status, msg = joels_function(args)
if status == False:
    print msg
    # and fail...
else:
    print msg
    # and now continue...

This is how I would write it in Python:

def my_function(args):
    process(args)
    if error_condition():
        raise SomeError("An error occurred")
    elif different_error_condition():
        raise SomeError("A different error occurred")
    more_processing()
    if another_error_condition():
        raise SomeError("Another error occurred")
    do_more_work()
    return "Success!"

and call it with:

try:
    result = my_function(args)
    print "Success!!!"
except SomeError, msg:
    print msg
    # and fail...
# and now continue safely here...

In the case of Python, calling a function that may raise an exception is
no more difficult or unsafe than calling a function that returns a status
flag and a result, but writing the function itself is much easier, with
fewer places for the programmer to make a mistake.

In effect, exceptions allow the Python programmer to concentrate on his
actual program, rather than be responsible for building error-handling
infrastructure into every function. Python supplies that infrastructure
for you, in the form of exceptions.

--
Steven.

Jul 21 '05 #18

Thomas Lotze <th****@thomas-lotze.de> wrote:
Basically, I agree with the "make it run, make it right, make it fast"
attitude. However, FWIW, I sometimes can't resist optimizing routines that
probably don't strictly need it. Not only does the resulting code run
faster, but it is usually also shorter and more readable and expressive.


Optimize for readability and maintainability first. Worry about speed
later.
Jul 21 '05 #19


"Thomas Lotze" <th****@thomas-lotze.de> wrote in message
news:pa****************************@ID-174572.user.uni-berlin.de...
Steven D'Aprano wrote:
Basically, I agree with the "make it run, make it right, make it fast"
attitude. However, FWIW, I sometimes can't resist optimizing routines that
probably don't strictly need it. Not only does the resulting code run
faster, but it is usually also shorter and more readable and expressive.
Plus, I tend to gain further insight into the problem and tools in the
process. YMMV, of course.
Shorter, more readable and expressive are laudable goals in and
of themselves. Most of the "advice" on optimization assumes that
after optimization, routines will be less readable and expressive,
not more.

In other words, I wouldn't call the activity of making a routine
more readable and expressive of intent "optimization." If it runs
faster, that's a bonus. It frequently will, at least if you don't add
method calls to the process.

John Roth


Jul 21 '05 #20

* John Roth (2005-07-09 21:48 +0100)
"Thorsten Kampe" <th******@thorstenkampe.de> wrote in message
news:6i**************************@40tude.net...
* Steve Juranich (2005-07-09 19:21 +0100)
I know that this topic has the potential for blowing up in my face,
but I can't help asking. I've been using Python since 1.5.1, so I'm
not what you'd call a "n00b". I dutifully evangelize on the goodness
of Python whenever I talk with fellow developers, but I always hit a
snag when it comes to discussing the finer points of the execution
model (specifically, exceptions).

Without fail, when I start talking with some of the "old-timers"
(people who have written code in ADA or Fortran), I hear the same
arguments that using "if" is "better" than using "try". I think that
the argument goes something like, "When you set up a 'try' block, you
have to set up a lot of extra machinery than is necessary just
executing a simple conditional."

I was wondering how true this holds for Python, where exceptions are
such an integral part of the execution model. It seems to me, that if
I'm executing a loop over a bunch of items, and I expect some
condition to hold for a majority of the cases, then a "try" block
would be in order, since I could eliminate a bunch of potentially
costly comparisons for each item. But in cases where I'm only trying
a single getattr (for example), using "if" might be a cheaper way to
go.

What do I mean by "cheaper"? I'm basically talking about the number
of instructions that are necessary to set up and execute a try block
as opposed to an if block.


"Catch errors rather than avoiding them to avoid cluttering your code
with special cases. This idiom is called EAFP ('easier to ask
forgiveness than permission'), as opposed to LBYL ('look before you
leap')."

http://jaynes.colorado.edu/PythonIdioms.html


It depends on what you're doing, and I don't find a "one size fits all"
approach to be all that useful.


I think it's a common opinion in the Python community (see for
instance "Python in a Nutshell") that EAFP is the Pythonic way to go
and - except in very rare cases - much preferred to LBYL.

Speed considerations and benchmarking should come in after you wrote
the program. "Premature optimisation is the root of all evil" and
"first make it work, then make it right, then make it fast" (but only
if it's not already fast enough) - common quotes not only with Python
developers.
Jul 21 '05 #21

On Sun, 10 Jul 2005 22:10:50 +1000, Steven D'Aprano <st***@REMOVETHIScyber.com.au> wrote:
On Sun, 10 Jul 2005 12:15:25 +0530, Dark Cowherd wrote:
http://www.joelonsoftware.com/items/2003/10/13.html


Joel Spolsky might be a great C++ programmer, and his advice on user
interface design is invaluable, but Python is not C++ or Java, and his
arguments about exceptions do not hold in Python.


Of course, his arguments do not even "hold" in C++ or Java, in the sense
that everyone should be expected to accept them. Most C++ programmers would
find his view on exceptions slightly ... exotic.

He has a point though: exceptions suck. But so do error codes. Error
handling is difficult and deadly boring.

(Then there's the debate about using exceptions for handling things that
aren't really errors, and what the term 'error' really means ...)

/Jorgen

--
// Jorgen Grahn <jgrahn@ Ph'nglui mglw'nafh Cthulhu
\X/ algonet.se> R'lyeh wgah'nagl fhtagn!
Jul 21 '05 #22

P: n/a
Roy, I know you actually know this stuff, but for the benefit of
beginners....

In article <ro***********************@reader2.panix.com>,
Roy Smith <ro*@panix.com> wrote:

3) In some cases, they can lead to faster code. A classic example is
counting occurrences of items using a dictionary:

count = {}
for key in whatever:
    try:
        count[key] += 1
    except KeyError:
        count[key] = 1

compared to

count = {}
for key in whatever:
    if count.hasKey(key):
        count[key] += 1
    else:
        count[key] = 1


Except that few would write the second loop that way these days::

for key in whatever:
    if key in count:
        ...

Using ``in`` saves a bytecode of method lookup on ``has_key()`` (which is
the correct spelling). Or you could choose the slightly more convoluted
approach to save a line::

for key in whatever:
    count[key] = count.get(key, 0) + 1

If whatever had ``(key, value)`` pairs, you'd do::

key_dict = {}
for key, value in whatever:
    key_dict.setdefault(key, []).append(value)
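[Editor's aside, not part of the original post: Python 2.5 and later add `collections.defaultdict`, which sidesteps the try/has_key/get question entirely for the counting case. The sample data is invented for illustration.]

```python
from collections import defaultdict

whatever = ['a', 'b', 'a', 'c', 'a']  # sample data, invented for illustration

count = defaultdict(int)  # missing keys spring into existence as 0
for key in whatever:
    count[key] += 1

print(dict(count))  # → {'a': 3, 'b': 1, 'c': 1}
```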
--
Aahz (aa**@pythoncraft.com) <*> http://www.pythoncraft.com/

f u cn rd ths, u cn gt a gd jb n nx prgrmmng.
Jul 21 '05 #23

P: n/a
aa**@pythoncraft.com (Aahz) wrote:
Using ``in`` saves a bytecode of method lookup on ``has_key()`` (which is
the correct spelling).


You are right. My example is somewhat out of date w/r/t newer language
features, and writing hasKey() instead of has_key() was just plain a
mistake. Thanks for the corrections.
Jul 21 '05 #24

P: n/a
Roy Smith wrote:
Thomas Lotze <th****@thomas-lotze.de> wrote:
Basically, I agree with the "make it run, make it right, make it fast"
attitude. However, FWIW, I sometimes can't resist optimizing routines that
probably don't strictly need it. Not only does the resulting code run
faster, but it is usually also shorter and more readable and expressive.

Optimize for readability and maintainability first. Worry about speed
later.


Yes, and then...

If it's an application that is to be used on a lot of computers, some of
them may be fairly old. It might be worth slowing your computer down
and then optimizing the parts that need it.

When it's run on faster computers, those optimizations would be a bonus.

Cheers,
Ron

Jul 21 '05 #25

P: n/a
Steve Juranich enlightened us with:
Without fail, when I start talking with some of the "old-timers"
(people who have written code in ADA or Fortran), I hear the same
arguments that using "if" is "better" than using "try".


Then here is the counter argument:

- Starting a 'try' is, as said somewhere else in this thread, a single
bytecode, hence simple.

- Those old-timers will probably check the data before they pass it to
a function. Because their function has to be stable and idiot-proof
as well, the function itself performs another check. Maybe the data
is passed even further down a function chain, making checks before
and after the function calls. After all, we want to be sure to only
call a function when we're sure it won't fail, and every function
has to gracefully bail when it's being passed bad data.

This means that there will be a _lot_ of checks in the code. Now
compare this by setting up a single try-block, and catching the
exception at the proper place in case it's being thrown.

The important part is this: those old-timers' "if" statements are
always executed - it doesn't matter whether the data is correct or
incorrect. The "heavy" part of exceptions only comes into play when
the data is incorrect, which by proper design of the program shouldn't
happen often anyway.

As far as I see it, try/except blocks are "cheaper" than "if"
statements.

Sybren
--
The problem with the world is stupidity. Not saying there should be a
capital punishment for stupidity, but why don't we just take the
safety labels off of everything and let the problem solve itself?
Frank Zappa
Jul 21 '05 #26

P: n/a
def joels_function(args):
    error_result = 0
    good_result = None
    process(args)
    if error_condition():
        error_result = -1  # flag for an error
    elif different_error_condition():
        error_result = -2
    else:
        more_processing()
        if another_error_condition():
            error_result = -3
        do_more_work()
        good_result = "Success!"
    if error_result != 0:
        return (False, error_result)
    else:
        return (True, good_result)


and then call it with:

status, msg = joels_function(args)
if status == False:
    print msg
    # and fail...
else:
    print msg
    # and now continue...


This is how I would write it in Python:

def my_function(args):
    process(args)
    if error_condition():
        raise SomeError("An error occurred")
    elif different_error_condition():
        raise SomeError("A different error occurred")
    more_processing()
    if another_error_condition():
        raise SomeError("Another error occurred")
    do_more_work()
    return "Success!"

and call it with:

try:
    result = my_function(args)
    print "Success!!!"
except SomeError, msg:
    print msg
    # and fail...
# and now continue safely here...


In the case of Python, calling a function that may raise an exception is


I tend to use exceptions, but I think Joel has a point.

Taking the example code that you have given above.

Let us assume that somebody else is using my_function and DOES NOT
write a try except block.

This code will run fine, except that when the exception is thrown it
will suddenly pop up in some other error handler which may not be
handling the situation correctly. You have to plan and create a series
of error-handling classes to handle such situations.

However, joels_function forces the caller to write some kind of error
handler. If he doesn't write one, the program will not run.

After reading that I have been giving this option some thought. The
nice thing about Python is I can easily return tuples. In C++ you have
to jump through hoops because you can't return two values easily.

DarkCowherd
Jul 21 '05 #27

P: n/a
It really does depend. For instance, some other programmers where I
work came up with a way to represent a hierarchical, somewhat random
data set by creating each object and then adding attributes to those
for each subobject, and so on down the tree. However, you could never
really be sure that the object you wanted was actually there, so for
every access they just wrapped the call in a try...except block. Now
that may seem like a good way to go, but when I rewrote some code to
use hasattr() instead, it ran a lot faster. So yeah, exceptions can be
handy, but if your code requires exception handling for everything, you
may want to rethink things.

Steve Juranich wrote:
I know that this topic has the potential for blowing up in my face,
but I can't help asking. I've been using Python since 1.5.1, so I'm
not what you'd call a "n00b". I dutifully evangelize on the goodness
of Python whenever I talk with fellow developers, but I always hit a
snag when it comes to discussing the finer points of the execution
model (specifically, exceptions).

Without fail, when I start talking with some of the "old-timers"
(people who have written code in ADA or Fortran), I hear the same
arguments that using "if" is "better" than using "try". I think that
the argument goes something like, "When you set up a 'try' block, you
have to set up a lot of extra machinery than is necessary just
executing a simple conditional."

I was wondering how true this holds for Python, where exceptions are
such an integral part of the execution model. It seems to me, that if
I'm executing a loop over a bunch of items, and I expect some
condition to hold for a majority of the cases, then a "try" block
would be in order, since I could eliminate a bunch of potentially
costly comparisons for each item. But in cases where I'm only trying
a single getattr (for example), using "if" might be a cheaper way to
go.

What do I mean by "cheaper"? I'm basically talking about the number
of instructions that are necessary to set up and execute a try block
as opposed to an if block.

Could you please tell me if I'm even remotely close to understanding
this correctly?
--
Steve Juranich
Tucson, AZ
USA


Jul 21 '05 #28

P: n/a
On Mon, 11 Jul 2005 21:18:40 +0530, Dark Cowherd wrote:
I tend to use exceptions, but I think Joel has a point.
Joel being "Joel On Software" Joel.
Taking the example code that you have given above.

Let us assume that somebody else is using my_function and DOES NOT
write a try except block.
Then, like any other piece of Python code, if it raises an
exception the exception will be propagated and the interpreter will halt.
That is the correct behaviour. On crash, halt.
This code will run fine except, when the exception is thrown and it will
suddenly pop up in some other error handler which may not be handling
the situation correctly. You have to plan and create a series of
errorhandling classes to handle such situations.
Why? You should be aiming to write correct code that doesn't produce
random exceptions at random times, rather than wrapping everything in
exception handlers, which are wrapped in exception handlers, which are
wrapped in exception handlers...

Exceptions are useful, but you shouldn't just cover them up when they
occur. The idea of exceptions is to recover from them safely, if you can,
and if not, bring the program to a halt safely.

Dealing with exceptions is no different from dealing with data. If there
is data you don't deal with correctly ("what if the user passes -1 as an
argument, when we expect an integer larger than zero?") you have a bug. If
there is a exception you don't deal with correctly ("what if this function
raises ZeroDivisionError when we expect a ValueError?"), you have an
uncaught exception which will halt your program, exactly as exceptions are
designed to do.

However Joels_function forces the caller to write some kind of error
handler. If he doesnt write the program will not run.
And Python gives you that error handler already built in. It is called
try...except, and if you don't write it, and your program raises an
exception, your program will stop.
After reading that I have been giving this option some thought. The nice
thing about Python is I can easily return tuples.


Yes. But returning a tuple (success, data) where the meaning of data
varies depending on whether success is True or False is an abuse of the
language. I can think of a few places where I might do that, but not very
many.

The problem with this tuple (success, data) idiom is that it leads to
duplicated code. All your functions end up looking like this:

def func(data):
if data[0]:
result = do_real_work(data[1])
if error_condition:
return (False, msg)
return (True, result)
else:
return data

But then you want to make do_real_work bulletproof, don't you? So
do_real_work ends up filled with error checking, just in case the calling
function passes the wrong data. So you end up losing all the advantages of
wrapping your data in a tuple with a flag, but keeping the disadvantages.

In any case, writing masses of boilerplate code is a drain on programmer
productivity, and a frequent cause of bugs. Boilerplate code should be
avoided whenever you can.
--
Steven.

Jul 21 '05 #29

P: n/a
Thorsten Kampe <th******@thorstenkampe.de> writes:
Speed considerations and benchmarking should come in after you wrote
the program. "Premature optimisation is the root of all evil" and
"first make it work, then make it right, then make it fast" (but only
if it's not already fast enough) - common quotes not only with Python
developers.


Just a minor note: regarding quote

"first make it work, then make it right, then make it fast"

Shouldn't one avoid doing it the wrong way from the very beginning? If you
make it "just work" the first time, you'll probably use the old code later on
because "functionality is already there" and the temptation to build on a
probably relatively bad architecture can be too strong.

How about

First make it work (but avoid ad-hoc designs), then make it right, then make
it fast

Of course, such emphasis doesn't go well with classic idioms..

(yeah, programmer's block at the moment: I should clean up a 120+ -line
if-elif-elif-elif... else -block which tests a single variable and calls
different methods with variable number of parameters depending on the value of
the variable - guess I should apply command pattern or similar...)

--
# Edvard Majakari Software Engineer
# PGP PUBLIC KEY available Soli Deo Gloria!

$_ = '456476617264204d616a616b6172692c20612043687269737 469616e20'; print
join('',map{chr hex}(split/(\w{2})/)),uc substr(crypt(60281449,'es'),2,4),"\n";
Jul 21 '05 #30

P: n/a
On Tue, 12 Jul 2005 11:38:51 +0300, Edvard Majakari wrote:
Just a minor note: regarding quote

"first make it work, then make it right, then make it fast"

Shouldn't one avoid doing it the wrong way from the very beginning? If you
make it "just work" the first time, you'll probably use the old code later on
because "functionality is already there" and the temptation to build on a
probably relatively bad architecture can be too strong.

How about

First make it work (but avoid ad-hoc designs), then make it right, then make
it fast


Optimizing sometimes means refactoring your code. Sometimes it even means
throwing it away and starting again.

However, your point to bring us back to a discussion held on this
newsgroup not long ago, about whether or not it was good for computer
science students to learn how to program in low-level languages like C so
as to avoid silly mistakes like this:

result = ""
for s in really_big_list_of_strings:
    result = result + s
return result

instead of just "".join(really_big_list_of_strings).

The first method is slow, but you might not know that unless you have some
understanding of the implementation details of string concatenation.
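[Editor's note: the gap can be measured directly. The sketch below is illustrative (list size and repeat count are arbitrary), and note that recent CPython releases partly optimize in-place concatenation, so results vary by interpreter.]

```python
import timeit

strings = ['x'] * 1000

def concat(items):
    # Builds the result by repeated concatenation; each step may copy
    # the whole accumulated string (quadratic in the worst case).
    result = ""
    for s in items:
        result = result + s
    return result

def join(items):
    # Single pass, single allocation.
    return "".join(items)

t_concat = timeit.timeit(lambda: concat(strings), number=200)
t_join = timeit.timeit(lambda: join(strings), number=200)
print(t_concat, t_join)
```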

My opinion is, no, you don't need to be a C programmer, or an assembly
programmer, or a hardware level physicist who understands NAND gates, but
it is very useful to have some understanding of what is going on at the
low-level implementation.

The way I see it, programmers need to be somewhat aware of the eventual
optimization stage in their program, so as to avoid poor design choices
from the start. But you can't always recognise poor design up front, so
even more important is careful encapsulation and design, so you
can make significant implementation changes without needing to throw away
your work. (Well, that's the theory.)
--
Steven.

Jul 21 '05 #31

P: n/a
Edvard Majakari wrote:
"first make it work, then make it right, then make it fast"

Shouldn't one avoid doing it the wrong way from the very beginning? If you
make it "just work" the first time, you'll probably use the old code later on
because "functionality is already there" and the temptation to build on a
probably relatively bad architecture can be too strong.


The expression describes (most recently, if not originally) the practice
in Test-Driven Development (TDD) of making your code pass the test as
quickly as possible, without worrying about how nice it is.

The "right" part doesn't refer to correctness, but to structure, style,
readability, and all those other nice things that an automated test
can't check. You aren't doing it "wrong" at first, just expediently.

And it really does make sense, because at that early stage, you aren't
even absolutely certain that your test is entirely correct, so making
your code a paragon of elegance is a potential waste of time, and
distracting. Once you've been able to pass that test (and all the
others, since you have to make sure all previous tests still pass as
well), then and only then is it sensible -- and required! -- to refactor
the code to make it elegant, concise, clean, etc.

Of course, your point about temptation is sound. Extreme Programming
tries to avoid that problem partly by pairing programmers together, and
it is the responsibility of both partners to encourage^H^H^H^H^H insist
that the refactor "make it right" stage must occur _now_, before we
check the code in. If you skip this step, you're failing to be an agile
programmer, and your code base will become a tar pit even more quickly
than it would in a traditional (non-agile) project...

-Peter
Jul 21 '05 #32

P: n/a
I use Delphi in my day job and evaluating and learning Python over the
weekends and spare time. This thread has been very enlightening to me.

The comments that Joel of Joel on Software makes here
http://www.joelonsoftware.com/items/2003/10/13.html were pretty
convincing. But I can see from the comments made by various people
here that since Python uses duck typing and encourages typeless styles
of functions, exceptions may actually be the better way to go.

But one piece of advice that he gives, which I think is of great value
and good practice, is:
"Always catch any possible exception that might be thrown by a library
I'm using on the same line as it is thrown and deal with it
immediately."

DarkCowherd
Jul 21 '05 #33

P: n/a
Dark Cowherd wrote:
But one piece of advice that he gives, which I think is of great value
and good practice, is:
"Always catch any possible exception that might be thrown by a library
I'm using on the same line as it is thrown and deal with it
immediately."


That's fine advice, except for when it's not. Consider the following code:

try:
    f = file('file_here')
    do_setup_code
    do_stuff_with(f)
except IOError:  # File doesn't exist
    error_handle

To me, this code seems very logical and straightforward, yet it doesn't
catch the exception on the very next line following its generation. It
relies on the behavior of the rest of the try-block being skipped -- the
"implicit goto" that Joel seems to loathe. If we had to catch it on the
same line, the only alternative that comes to mind is:
try: f = file('file_here')
except IOError:  # File doesn't exist
    error_handle
    error_flag = 1
if not error_flag:
    do_setup_code
    do_stuff_with(f)

which rests on weird, arbitrary error flags, and doesn't seem like good
programming to me.
Jul 21 '05 #34

P: n/a
Christopher Subich wrote:
try:
    f = file('file_here')
except IOError:  # File doesn't exist
    error_handle
    error_flag = 1
if not error_flag:
    do_setup_code
    do_stuff_with(f)

which rests on weird, arbitrary error flags, and doesn't seem like good
programming to me.


Neither does it to me. What about

try:
    f = file('file_here')
except IOError:  # File doesn't exist
    error_handle
else:
    do_setup_code
    do_stuff_with(f)

(Not that I'd want to defend Joel's article, mind you...)

--
Thomas

Jul 21 '05 #35

P: n/a
Christopher Subich <sp****************@block.subich.spam.com> wrote:
try:
    f = file('file_here')
    do_setup_code
    do_stuff_with(f)
except IOError:  # File doesn't exist
    error_handle


It's also a good idea to keep try blocks as small as possible, so you
know exactly where the error happened. Imagine if do_setup_code or
do_stuff_with(f) unexpectedly threw an IOError for some reason totally
unrelated to the file not existing.
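[Editor's note: a minimal sketch of that advice follows; the file name, format, and fallback behaviour are invented. The idea is to guard only the call that can legitimately fail, and let everything else propagate.]

```python
def load_config(path):
    # Only open() is guarded: an IOError here really does mean
    # "file not available", so falling back to defaults is safe.
    try:
        f = open(path)
    except IOError:
        return {}

    # Errors in the parsing below are *not* masked as "file doesn't
    # exist" - they propagate to the caller, where they belong.
    try:
        return dict(line.strip().split('=', 1) for line in f if '=' in line)
    finally:
        f.close()

print(load_config('no_such_file_12345.cfg'))  # → {}
```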
Jul 21 '05 #36

P: n/a
Dark Cowherd <da*********@gmail.com> writes:
But one piece of advice that he gives, which I think is of great value
and good practice, is:
"Always catch any possible exception that might be thrown by a library
I'm using on the same line as it is thrown and deal with it
immediately."


Yuch. That sort of defeats the *purpose* of exceptions in Python:
letting you get on with the coding, and dealing with the errors when
it's convenient. Consider:

try:
    out = file(datafile, "wb")
    out.write(genData1())
    out.write(genData2())
    out.write(genData3())
except IOError, msg:
    print >>sys.stderr, "Save failed:", msg
    if os.path.exists(datafile):
        os.unlink(datafile)

I don't even want to *think* about writing the try/except clause for each
line. It reminds me too much of:

if (!(out = open(datafile, "w"))) {
    /* handle errors */
    return;
}
if (write(out, genData1()) < 0) {
    /* handle errors */
    return;
}
etc.

Generally, I treat exceptions as exceptional. I catch the ones that
require something to be changed to restore the program or environment
to a sane state - and I catch them when it's convenient. The rest I
catch globally and log, so I can figure out what happened and do
something to prevent it from happening again.
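[Editor's note: the "catch globally and log" part of that strategy might look like the following sketch; the failing function and the exit convention are invented for illustration.]

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger(__name__)

def risky():
    # Stand-in for application code that fails unexpectedly.
    return 1 / 0

def main():
    try:
        risky()
        return 0
    except Exception:
        # Log the full traceback so the failure can be diagnosed later,
        # then report failure instead of dying with a bare traceback.
        log.exception("unhandled error")
        return 1

status = main()
print(status)  # → 1
```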

<mike
--
Mike Meyer <mw*@mired.org> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
Jul 21 '05 #37

P: n/a
OK, I can see that the Python way of doing things is very different.
However, I think Roy made a very pertinent point:
"Imagine if do_setup_code or
do_stuff_with(f) unexpectedly threw an IOError for some reason totally
unrelated to the file not existing."
This is the kind of situation that the rule 'catch it on the next
line' is trying to avoid.

What I didn't realise till I read Thomas's comment is that try...except
has an else clause. This is nice.

But seriously, if you expected to write reasonably large business
applications, with multiple people on the team and teams changing over
time, what would you give as a guideline for error handling?
DarkCowherd
Jul 21 '05 #38

P: n/a
Thomas Lotze wrote:
Neither does it to me. What about

try:
    f = file('file_here')
except IOError:  # File doesn't exist
    error_handle
else:
    do_setup_code
    do_stuff_with(f)

(Not that I'd want to defend Joel's article, mind you...)


That works. I'm still not used to having 'else' available like that. I
wonder how Joel advocates managing in C++-like languages that don't have
try/catch/else semantics.
Jul 21 '05 #39

P: n/a
Steven D'Aprano <st***@REMOVETHIScyber.com.au> writes:
My opinion is, no, you don't need to be a C programmer, or an assembly
programmer, or a hardware level physicist who understands NAND gates, but
it is very useful to have some understanding of what is going on at the
low-level implementation.
Yes, I fully agree: in the example presented, it is sufficient to understand
that string concatenation is (relatively) expensive. Yet I'd emphasize that
most often speed is improved by better algorithms, not by low-level
optimisations and language-specific features (if speed is even an issue, that
is).
The way I see it, programmers need to be somewhat aware of the eventual
optimization stage in their program, so as to avoid poor design choices
from the start. But you can't always recognise poor design up front, so
even more important is careful encapsulation and design, so you
can make significant implementation changes without needing to throw away
your work. (Well, that's the theory.)


So true, extra emphasis on encapsulation and independence. Even seasoned
professionals fail to create dazzling products at version 1.0. Good component
design is crucial because you eventually want to do major rewrites later.

--
# Edvard Majakari Software Engineer
# PGP PUBLIC KEY available Soli Deo Gloria!

$_ = '456476617264204d616a616b6172692c20612043687269737 469616e20'; print
join('',map{chr hex}(split/(\w{2})/)),uc substr(crypt(60281449,'es'),2,4),"\n";
Jul 21 '05 #40

P: n/a
Peter Hansen <pe***@engcorp.com> writes:
"first make it work, then make it right, then make it fast"
....
The expression describes (most recently, if not originally) the practice in
Test-Driven Development (TDD) of making your code pass the test as quickly as
possible, without worrying about how nice it is.
Ack(nowledged).
The "right" part doesn't refer to correctness, but to structure, style,
readability, and all those other nice things that an automated test can't
check. You aren't doing it "wrong" at first, just expediently.
Yes, that I understood; if the first version worked, it had to be correct
already. But as I said, if you want to make ideas easy to remember, you have
to make them short enough, and you can probably assume the reader understands
more than what is explicitly stated. I didn't know the expression originates
from TDD, that puts it in a bit different light - and makes it more
understandable IMO.
And it really does make sense, because at that early stage, you aren't even
absolutely certain that your test is entirely correct, so making your code a
paragon of elegance is a potential waste of time, ^^^^^^^^^^^^^^^^^^^

:-D

Which is a seductive trap, that.. really, I mean, how many times you've
polished a module so much that you would want to publish it in every single
article you write about computing as an ideal example, one you care about and
nurture like it was your own child (or your fancy-schmancy, model '74
V12-engine, chrome-plated, mean monster-of-a-vehicle car, if you are one of
those types)? Then you report your progress to your superior and feel ashamed
because the only thing you've worked with in last 3 weeks is that (my)
precious(!) module.. hum. But I digress.
and distracting. Once you've been able to pass that test (and all the
others, since you have to make sure all previous tests still pass as well),
then and only then is it sensible
-- and required! -- to refactor the code to make it elegant, concise, clean,
etc.
Yep. And that's one of the reasons I really like TDD and unit testing - you
know when to stop working with a piece of code. When all the tests pass, stop.
Of course, your point about temptation is sound. Extreme Programming tries
to avoid that problem partly by pairing programmers together, and it is the
responsibility of both partners to encourage^H^H^H^H^H insist that the
refactor "make it right" stage must occur _now_, before we check the code
in. If you skip this step, you're failing to be an agile programmer, and
your code base will become a tar pit even more quickly than it would in a
traditional (non-agile) project...


Yup. Too bad I've had the opportunity to work that way (pair programming) only
a few times, and even then it wasn't XP-style in any other way. It is too often
considered a waste of labour, I guess.

--
# Edvard Majakari Software Engineer
# PGP PUBLIC KEY available Soli Deo Gloria!

"Debugging is twice as hard as writing the code in the first place. Therefore,
if you write the code as cleverly as possible, you are, by definition,
not smart enough to debug it." -- Brian W. Kernighan
Jul 21 '05 #41

This discussion thread is closed

Replies have been disabled for this discussion.