performance vs. readability/maintainability

In another online group in which I participate, we were discussing a
particular piece of code that had a pretty high risk for breaking in the
future (because it depended on something not changing that was outside the
developer's control) but was slightly more performant. One participant
posted:

"I tend to take the performance track also, adding readability if the impact
isn't too great. There is also an odd reality that takes place even in the
software field however. Not sure I can explain it too well but simply said,
if you write code to handle the effects of changing the ordinal position of
a field, some other error will surface anyway."

which I found to be ridiculous to the point of being dangerous.
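
To make the example concrete: the code in question read database fields by their
ordinal position rather than by name. A minimal sketch of the two approaches,
with an invented Customers table and column names (not the actual code we were
discussing):

using System;
using System.Data.SqlClient;

class OrdinalVersusName
{
    // Fragile but marginally faster: breaks silently if someone reorders
    // the columns in the Customers table.
    static void ReadByPosition(SqlConnection conn)
    {
        using (SqlCommand cmd = new SqlCommand("SELECT * FROM Customers", conn))
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                int id = reader.GetInt32(0);       // assumes CustomerId is column 0
                string name = reader.GetString(1); // assumes Name is column 1
                Console.WriteLine("{0}: {1}", id, name);
            }
        }
    }

    // A touch slower, but it survives schema reordering and says what it means.
    static void ReadByName(SqlConnection conn)
    {
        using (SqlCommand cmd = new SqlCommand("SELECT * FROM Customers", conn))
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            int idCol = reader.GetOrdinal("CustomerId");
            int nameCol = reader.GetOrdinal("Name");
            while (reader.Read())
            {
                Console.WriteLine("{0}: {1}",
                    reader.GetInt32(idCol), reader.GetString(nameCol));
            }
        }
    }
}

The first version saves a couple of GetOrdinal calls; the second keeps working
when somebody adds or reorders a column.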

I know this is somewhat one of those opinion issues, but I just wondered
where everyone fell on the continuum.
Nov 15 '05 #1
Personally, I prefer to write code that's readable and maintainable first
(modulo obvious perf wins like StringBuilder and DataReader in some
scenarios). After the code works, use a profiler to look for performance
bottlenecks. Having said that, I've often built prototypes for the purpose
of profiling perf. It's my experience that most people who obsess over
perf without profiling optimize in the wrong places and end up with butt-ugly,
non-maintainable code.
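
By "obvious" I mean wins you don't need a profiler to justify, like building a
string inside a loop. A throwaway sketch (nothing from a real project):

using System;
using System.Text;

class ConcatDemo
{
    // Each += allocates a new string and copies everything built so far,
    // so this is roughly O(n^2) in the total length.
    static string BuildWithConcat(string[] parts)
    {
        string result = "";
        foreach (string part in parts)
            result += part;
        return result;
    }

    // StringBuilder appends into a growable buffer and copies once at the end.
    static string BuildWithStringBuilder(string[] parts)
    {
        StringBuilder sb = new StringBuilder();
        foreach (string part in parts)
            sb.Append(part);
        return sb.ToString();
    }

    static void Main()
    {
        string[] words = { "read", "able", " and ", "maintain", "able" };
        Console.WriteLine(BuildWithConcat(words));
        Console.WriteLine(BuildWithStringBuilder(words));
    }
}

For a handful of strings it makes no measurable difference; in a loop that
builds large strings, the StringBuilder version is the readable *and* the fast
one.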

--
Mickey Williams
Author, "Microsoft Visual C# .NET Core Reference", MS Press
www.servergeek.com
"Daniel Billingsley" <db**********@NO.durcon.SPAAMM.com> wrote in message
news:e5**************@TK2MSFTNGP11.phx.gbl...
<snip>

Nov 15 '05 #2
For what it's worth, I would almost always side with reliability. I hate
fixing code, especially when it's due to a bad decision on my part. I
suppose that if I were in a situation where some code absolutely had to run
faster (a show-stopper), and the modification was the only way to get over the
line, and the reliability problem was not random but a maintenance issue
(only a bug if the table structure changes), and the reliability problem
would not cause more damage than running too slow...

Then maybe I would bring it up in a meeting, just so that my coworkers could
talk me out of it.

Regards

"Daniel Billingsley" <db**********@NO.durcon.SPAAMM.com> wrote in message
news:e5**************@TK2MSFTNGP11.phx.gbl...
<snip>

Nov 15 '05 #3
Daniel Billingsley <db**********@NO.durcon.SPAAMM.com> wrote:

<snip>
I know this is somewhat one of those opinion issues, but I just wondered
where everyone fell on the continuum.


As I suspect many readers know already, I would code for readability
first, performance later, almost always.

Having said that, I will of course use a StringBuilder to create a
string over the course of a loop etc.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 15 '05 #4
I agree with you. Sacrificing readability and reliability for a negligible
gain in performance _is_ ludicrous.

I am only slightly more flexible with the readability part. I will confess
that there have been a few times in my career where I had to sacrifice some
readability in order to eke out some extra performance.

In all cases, it was a situation where a perf deficit was magnified because
the code was either in a loop or was called extremely often, and where the
perf difference was vital to the acceptability of the application (in other
words, where the perf difference was meaningful).

Beyond those exceptional instances, it almost always pays off to go with
maintainability and reliability. You might get some quick notoriety if the
application runs a little faster, but in the long run, customers always seem
to appreciate the app that never crashes, never corrupts or loses their
data, etc., over the one that runs really fast when it works at all.

"Daniel Billingsley" <db**********@NO.durcon.SPAAMM.com> wrote in message
news:ew**************@TK2MSFTNGP11.phx.gbl...
<snip>


Nov 15 '05 #5
I'll follow the trend in the other replies: Readability and maintainability
have highest priority.

Like Mickey, I will only sacrifice readability for performance if I have
identified the performance bottleneck with a profiler and if there is no
other way to improve it.

I don't like the idea of introducing low-level optimization hacks "a
priori". I cannot find one example where this approach really worked and
brought benefits, but lots of examples where it did not work at all. On the
other hand, spending time on choosing the right data structures (so that
everything that you will access over and over will be properly indexed in
your object graphs) and the right algorithms really pays off.
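
A small example of what I mean by "properly indexed", with an invented Order
type:

using System;
using System.Collections.Generic;

class Order
{
    public int Id;
    public string Customer;
}

class IndexingDemo
{
    // Linear scan: fine for a handful of orders, painful if this runs
    // inside a loop over thousands of line items.
    static Order FindByScan(List<Order> orders, int id)
    {
        foreach (Order order in orders)
            if (order.Id == id)
                return order;
        return null;
    }

    // Build the index once, then every lookup is a cheap hash probe.
    static Dictionary<int, Order> IndexById(List<Order> orders)
    {
        Dictionary<int, Order> byId = new Dictionary<int, Order>();
        foreach (Order order in orders)
            byId[order.Id] = order;
        return byId;
    }
}

The dictionary version is no harder to read, and it removes a whole class of
quadratic behavior before a profiler ever has to point at it.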

Also, I am very careful in the way I use the language constructs, trying to
make the code as easy to read as possible, and as robust as possible, and
usually, I can find elegant solutions that don't conflict with the
performance goal. They may not give the absolute best performance
(otherwise, I would be writing in C, or even worse, assembly), but they
usually give a very good balance between clarity and performance.

Bruno.

"Daniel Billingsley" <db**********@NO.durcon.SPAAMM.com> a écrit dans le
message de news:e5**************@TK2MSFTNGP11.phx.gbl...
<snip>

Nov 15 '05 #6
Yes. And I would certainly not hire the guy who wrote the original post and
believes in writing code that depends on the ordinal position of fields. No
way!

Bruno.

"J.Marsch" <je****@ctcdeveloper.com> a écrit dans le message de
news:et**************@TK2MSFTNGP09.phx.gbl...
<snip>

Nov 15 '05 #7
Daniel Billingsley <db**********@NO.durcon.SPAAMM.com> wrote:
Yeah, the quote I posted actually makes two points:

1) readability vs. performance
2) reliability vs. performance

with the author saying he chooses performance in both cases.
Good grief.
My basic position tends to be that in most cases the performance differences
we're talking about are trivial (the tests used to demonstrate them often
use 10,000,000 iterations and our code is doing exactly one per hour, for
example). I can't see any wisdom whatsoever in obsessing over such trivial
performance gains and sacrificing what may well be hours of future work on
the code.
Agreed.
As far as reliability goes, I think the position of writing code that is
pretty likely to break some day because it performs slightly better is just
ludicrous. In this particular case, it was in the context of database
access, which I think will far overshadow any milliseconds saved here or
there by writing theoretically perfect code.


Yup. This is the problem I have with the oft-quoted performance article
(http://tinyurl.com/hxo2) which includes the following:

<quote>
Don't do it. Instead, stand up and pledge along with me:

"I promise I will not ship slow code. Speed is a feature I care
about. Every day I will pay attention to the performance of my code. I
will regularly and methodically measure its speed and size. I will
learn, build, or buy the tools I need to do this. It's my
responsibility."

(Really.) So did you promise? Good for you.

So how do you write the fastest, tightest code day in and day out? It
is a matter of consciously choosing the frugal way in preference to the
extravagant, bloated way, again and again, and a matter of thinking
through the consequences. Any given page of code captures dozens of
such small decisions.
</quote>

I don't *want* to write the fastest, tightest code. I want to write
reliable, maintainable code, which performs *well enough*.
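
"Well enough" is also cheap to check. A crude timing sketch along these lines
(DoTheWork is just a stand-in for whatever operation is being argued about)
shows whether the saving even registers at the call rate the code actually
sees:

using System;
using System.Diagnostics;
using System.Threading;

class MeasureInContext
{
    static void Main()
    {
        // Time the operation at a realistic call count, not the
        // 10,000,000 iterations a micro-benchmark uses.
        const int realisticCalls = 24; // e.g. once an hour over a day
        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < realisticCalls; i++)
            DoTheWork();
        watch.Stop();
        Console.WriteLine("{0} calls took {1} ms total",
            realisticCalls, watch.ElapsedMilliseconds);
    }

    // Stand-in for whatever operation is being argued about.
    static void DoTheWork()
    {
        Thread.Sleep(1);
    }
}

If the total comes to a few milliseconds a day, the argument for the fragile
version is already over.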

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 15 '05 #8
No kidding! That's just bad form.

"Bruno Jouhier [MVP]" <bj******@club-internet.fr> wrote in message
news:uk**************@tk2msftngp13.phx.gbl...
<snip>
Nov 15 '05 #9
I wholeheartedly agree with your comments about using the right algorithm
and the right data structure. I find that, a lot of the time, if some bit of
code isn't performing to expectations, it's because the algorithm or
data structures are not well matched to the task. In such a situation, you
can often come out with a solution that is _more_ readable, more
maintainable, and more performant!
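
The classic illustration (a made-up example, not anyone's real code) is
duplicate detection, where the better-matched data structure wins on
readability and speed at the same time:

using System;
using System.Collections.Generic;

class DuplicateCheck
{
    // Pairwise comparison: O(n^2), and the intent is buried in index juggling.
    static bool HasDuplicateSlow(string[] values)
    {
        for (int i = 0; i < values.Length; i++)
            for (int j = i + 1; j < values.Length; j++)
                if (values[i] == values[j])
                    return true;
        return false;
    }

    // Set-based check: O(n), and it reads the way you would describe it.
    static bool HasDuplicate(string[] values)
    {
        Dictionary<string, bool> seen = new Dictionary<string, bool>();
        foreach (string value in values)
        {
            if (seen.ContainsKey(value))
                return true;
            seen[value] = true;
        }
        return false;
    }

    static void Main()
    {
        string[] sample = { "readable", "reliable", "fast", "readable" };
        Console.WriteLine(HasDuplicateSlow(sample)); // True
        Console.WriteLine(HasDuplicate(sample));     // True
    }
}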

I side with Einstein. He said that when you find a true answer to a mystery
of the universe, that solution will be simple and elegant.

"Bruno Jouhier [MVP]" <bj******@club-internet.fr> wrote in message
news:eu**************@TK2MSFTNGP11.phx.gbl...
<snip>
Nov 15 '05 #10
Yeah, the quote I posted actually makes two points:

1) readability vs. performance
2) reliability vs. performance

with the author saying he chooses performance in both cases.

My basic position tends to be that in most cases the performance differences
we're talking about are trivial (the tests used to demonstrate them often
use 10,000,000 iterations and our code is doing exactly one per hour, for
example). I can't see any wisdom whatsoever in obsessing over such trivial
performance gains and sacrificing what may well be hours of future work on
the code.

As far as reliability goes, I think the position of writing code that is
pretty likely to break some day because it performs slightly better is just
ludicrous. In this particular case, it was in the context of database
access, which I think will far overshadow any milliseconds saved here or
there by writing theoretically perfect code.

"J.Marsch" <je****@ctcdeveloper.com> wrote in message
news:ek*************@tk2msftngp13.phx.gbl...
<snip>

Nov 15 '05 #11
