C++ Exceptions Cause Performance Hit?

Hi. I wanted to use exceptions to handle error conditions in my code.
I think doing that is useful, as it helps to separate "go" paths from
error paths. However, a coding guideline has been presented that says
"Use conventional error-handling techniques rather than exception
handling for straightforward local error processing in which a program
is easily able to deal with its own errors."

By "conventional error-handling," I believe they mean returning an
error code, or just handling the error without going to a catch block.
When I said that I'd prefer throwing and catching exceptions--and that,
in fact, exceptions are the "conventional error-handling technique" in
C++--I was told that since we have a real-time system, we can't afford
the performance hit caused by using exceptions.

Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?

Thanks for any info,

Ken
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Jul 23 '05
George Neuner wrote:
[snip]
I ask a bit more obviously: Is there such a decision point after
*every* function call?
The strict answer is "no" - decision points are app specific -
function call sites are just obvious and convenient places to evaluate
the need for a decision. RT isn't a set of rules to follow - it's a
discipline of always being aware of the time consequences of code.


I suspected as much.
If yes, then your original statement that "the C++
exception mechanism doesn't work very well for real time coding" is
correct. If no, then I don't see why real-time code would not gain
something from using exceptions.


As I said in a previous post, I have no issue with appropriately tamed
exceptions.


Ok.
I believe there is an inherent problem which makes them
unsuitable for control transfers that span more than one or two frames
.... something I see not infrequently in conventional applications.
Why only two frames, not more? It seems that certain coding styles
(many, many functions doing very little each, or recursive functions)
combined with inlining could easily lead to C++ code where an exception
is propagated over say 20 frames but in the optimized machine code the
resulting stack-unwind doesn't do much more than call the exception's
ctor, reset the stack-pointer and call the exception handler. Using
error return codes in such a scenario could thwart inlining up to the
point of noticeably slowing your code, even for the case when an error
is propagated.
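
To show the shape of what I mean, here is a sketch (all the names below are invented purely for illustration): with error codes every intermediate layer repeats a check-and-propagate branch, while the exception-based equivalent stays straight-line and remains trivially inlinable.

int leaf_ec(int x, int& out);      // hypothetical: returns non-zero error code on failure

int middle_ec(int x, int& out)
{
    int tmp = 0;
    if (int err = leaf_ec(x, tmp)) // explicit branch kept even on the happy path
        return err;                // hand-written propagation at every layer
    out = tmp * 2;
    return 0;
}

int leaf_ex(int x);                // hypothetical: throws on failure

int middle_ex(int x)
{
    return leaf_ex(x) * 2;         // straight-line code, easy for the compiler to inline
}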

[snip] Naivete regarding the language becomes a major problem when people
[such as the OP of this thread] who have no experience in RT are
pressed into doing it
I don't think the OP's question was naive. If I were told to follow such a
coding standard I would probably ask a very similar question. Call me
naive too, but I really have a problem if people talk about
performance/timing problems before having profiled/measured actual or at
least typical code. In absence of a proper rationale for the
"no-exceptions" rule, such a coding standard pushes premature
optimization, which - as we all know - is the root of all evil.
Don't get me wrong, I don't have any problem following such a standard
if it contains conclusive evidence that using exceptions in a particular
environment will indeed cause an unacceptable performance hit. Not being
an RT programmer, I have yet to see such evidence.
- particularly in situations where no one is
around to teach techniques and explain why things can't or shouldn't
be done in the conventional way the programmer is accustomed to.

A decent C programmer who is new to RT can usually figure out what the
problem is and devise a way around it. My experience has been that
[even experienced] C++ coders attempting to do RT have much more
trouble discovering the cause of their problems and when faced with a
significant problem, the C++ programmer frequently has much more
difficulty resolving it because there are more hidden interactions to
consider.


Right. But that could easily be corrected with better education.

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.

Jul 24 '05 #51
On 24 Jul 2005 19:36:56 -0400, "Andreas Huber"
<ah********************@yahoo.com> wrote:
George Neuner wrote:
[snip]
I believe there is an inherent problem which makes them
unsuitable for control transfers that span more than one or two frames
.... something I see not infrequently in conventional applications.
Why only two frames, not more? It seems that certain coding styles
(many, many functions doing very little each, or recursive functions)
combined with inlining could easily lead to C++ code where an exception
is propagated over say 20 frames but in the optimized machine code the
resulting stack-unwind doesn't do much more than call the exception's
ctor, reset the stack-pointer and call the exception handler.


As I said previously ... now a couple of times ... the problem is with
*non-trivial* destructors. A trivial dtor adds nothing to the
execution time of a throw.

2 frames? Exceptions which throw to the next higher frame have
execution time which is, at most, equivalent to the normal function
return. Once you go beyond 1 frame it becomes easy to make seemingly
innocuous code changes in intermediate layers that look local and
would have little impact on a normal return sequence where the
programmer could intervene at each step, but which cause a timing
failure when added to the cumulative execution time of an atomic
multiple frame throw.
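
A concrete (and hypothetical) illustration of the kind of change I mean - the Logger type below is made up, not from any real system: adding such a local to an intermediate frame looks like a purely local edit, yet its destructor now also runs inside every multi-frame throw that unwinds through the frame, adding to the throw's total, uninterruptible duration.

#include <cstdio>

// Hypothetical type with a non-trivial destructor (say it flushes a buffer).
struct Logger {
    explicit Logger(const char* name) : name_(name) {}
    ~Logger() { std::printf("flush %s\n", name_); }   // work done on every exit
    const char* name_;
};

void lower_level() { throw 1; }     // stand-in for a callee that can throw

// An intermediate frame between the throw point and the catch point.
void intermediate()
{
    Logger log("intermediate");     // "innocent" local added during maintenance
    lower_level();                  // when this throws, ~Logger() now runs during
                                    // the unwind and lengthens the throw's duration
}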

Using
error return codes in such a scenario could thwart inlining up to the
point of noticeably slowing your code, even for the case when an error
is propagated.


Sigh!

RT is *not* about "fast" code - it is about *predictable* code whose
time related behavior is known under all circumstances. Sometimes
code will be deliberately written to run slower than it could because
the faster version will not play well with other code in the system.

Naivete regarding the language becomes a major problem when people
[such as the OP of this thread] who have no experience in RT are
pressed into doing it


I don't think the OP's question was naive. If I were told to follow such a
coding standard I would probably ask a very similar question.


I have no issue with the question ... it is perfectly understandable
to me why it should be asked. I also wonder why the question was
asked *here* in Usenet rather than in the office, where, presumably,
the OP's RT code writing colleagues would know the technical reasons
for the company's practices and be able to explain them to newcomers.
George
--
for email reply remove "/" from address


Jul 26 '05 #52
George Neuner wrote:
I believe there is an inherent problem which makes them
unsuitable for control transfers that span more than one or two frames
.... something I see not infrequently in conventional applications.
Why only two frames, not more? It seems that certain coding styles
(many, many functions doing very little each, or recursive functions)
combined with inlining could easily lead to C++ code where an exception
is propagated over say 20 frames but in the optimized machine code the
resulting stack-unwind doesn't do much more than call the exception's
ctor, reset the stack-pointer and call the exception handler.


As I said previously ... now a couple of times ... the problem is with
*non-trivial* destructors. A trivial dtor adds nothing to the
execution time of a throw.


Right, you did say that, but this is not apparent in your statement
quoted above.
2 frames? Exceptions which throw to the next higher frame have
execution time which is, at most, equivalent to the normal function
return. Once you go beyond 1 frame it becomes easy to make seemingly
innocuous code changes in intermediate layers that look local and
would have little impact on a normal return sequence where the
programmer could intervene at each step, but which cause a timing
failure when added to the cumulative execution time of an atomic
multiple frame throw.
What you say here only applies if the code that uses exception handling
does not contain as many decision points as the equivalent code using
error codes. Isn't that comparing apples to oranges?
Using
error return codes in such a scenario could thwart inlining up to the
point of noticeably slowing your code, even for the case when an error
is propagated.


Sigh!

RT is *not* about "fast" code - it is about *predictable* code whose
time related behavior is known under all circumstances. Sometimes


Your sighing is unwarranted. I know that RT is all about
predictability, i.e. being able to calculate absolute upper limits for
runtimes. I don't see how exception handling per se prevents that in
any way. Sure, some EH implementations may push such an upper limit
well beyond what you aim to guarantee. However, I'd expect that a
compiler for an RT system employs an implementation that does not favor
the non-exceptional paths so much that the exceptional ones become
painfully slow (as some desktop compilers do).
code will be deliberately written to run slower than it could because
the faster version will not play well with other code in the system.


You mean that the faster variant is non-predictable, right?

[snip]
I don't think the OP's question was naive. If I were told to follow such a
coding standard I would probably ask a very similar question.


I have no issue with the question ... it is perfectly understandable
to me why it should be asked. I also wonder why the question was
asked *here* in Usenet rather than in the office, where, presumably,
the OP's RT code writing colleagues would know the technical reasons
for the company's practices and be able to explain them to newcomers.


Presumably, the answer the OP got from the internal staff was
unsatisfactory. If the only reason is the alleged "performance hit" -
as the OP implies - then I wouldn't be satisfied either. This smells of
FUD...

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.

Jul 26 '05 #53
On 26 Jul 2005 06:37:55 -0400, "Andreas Huber"
<ah********************@yahoo.com> wrote:
George Neuner wrote:
>> I believe there is an inherent problem which makes them
>> unsuitable for control transfers that span more than one or two frames
>> .... something I see not infrequently in conventional applications.
>
>Why only two frames, not more? It seems that certain coding styles
>(many, many functions doing very little each, or recursive functions)
>combined with inlining could easily lead to C++ code where an exception
>is propagated over say 20 frames but in the optimized machine code the
>resulting stack-unwind doesn't do much more than call the exception's
>ctor, reset the stack-pointer and call the exception handler.


As I said previously ... now a couple of times ... the problem is with
*non-trivial* destructors. A trivial dtor adds nothing to the
execution time of a throw.


Right, you did say that, but this is not apparent in your statement
quoted above.
2 frames? Exceptions which throw to the next higher frame have
execution time which is, at most, equivalent to the normal function
return. Once you go beyond 1 frame it becomes easy to make seemingly
innocuous code changes in intermediate layers that look local and
would have little impact on a normal return sequence where the
programmer could intervene at each step, but which cause a timing
failure when added to the cumulative execution time of an atomic
multiple frame throw.


What you say here only applies if the code that uses exception handling
does not contain as many decision points as the equivalent code using
error codes. Isn't that comparing apples to oranges?


An exception has exactly 2 decision points - the throw point and the
catch point - control transfer between them is atomic.

Obviously you can add as many intermediate catch/rethrow points as
necessary, but IMO that defeats the purpose. Having to catch and
rethrow an exception multiple times adds back the code complexity that
non-local exceptions were intended to remove. Apart from structuring
my code a bit differently, what have I really gained by using them?

[Before you answer that, keep in mind that the question's context is
limited to RT coding. I don't question the utility of exceptions in
other areas of programming.]
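
To spell out what I mean by an intermediate catch/rethrow point (the names below are invented), every layer that wants its own decision point back has to write something like this - which is precisely the per-layer boilerplate that non-local exceptions were supposed to eliminate:

void lower_layer();                 // hypothetical callee that may throw
void note_unwind_reached_here();    // hypothetical action at the regained decision point

void intermediate_layer()
{
    try {
        lower_layer();
    } catch (...) {
        note_unwind_reached_here(); // regained control in the middle of the unwind
        throw;                      // rethrow so the original handler still runs
    }
}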

>Using
>error return codes in such a scenario could thwart inlining up to the
>point of noticeably slowing your code, even for the case when an error
>is propagated.


Sigh!

RT is *not* about "fast" code - it is about *predictable* code whose
time related behavior is known under all circumstances. Sometimes


Your sighing is unwarranted. I know that RT is all about
predictability, i.e. being able to calculate absolute upper limits for
runtimes.


Predictability in RT means an accurate presentation of a series of
time related events as viewed by an observer outside of the system.
Determining upper limits for execution time is only a part of it.

Timing constraints are defined by windows within which the code must
deliver some result - i.e. produce some value or take some action.
Windows can have both upper and lower boundaries and may be soft or
hard. For a hard window the result must be delivered within the
window - the result for a hard window is useless if delivered too
early or too late. A soft window makes allowances for the result to
be delivered late but specifies the preferred delivery situation.
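As a rough sketch of the idea in code (std::chrono is used here only for illustration; a real RT system would use its own clock source):

#include <chrono>

using rt_clock = std::chrono::steady_clock;

// A hard window: the result is useful only if delivered inside [earliest, latest].
struct Window {
    rt_clock::time_point earliest;
    rt_clock::time_point latest;
};

bool delivered_in_window(const Window& w, rt_clock::time_point delivery)
{
    return delivery >= w.earliest && delivery <= w.latest;  // too early or too late both fail
}
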
It has been said that an RT program is the very hardest type of
program to write. RT programming, in general, has all the problems of
reliable concurrent programming and adds to them the requirement to
ensure predictable timed execution under all circumstances. On top of
that, many RT systems are used in safety critical applications, which
adds yet another dimension of complexity.
George
--
for email reply remove "/" from address


Jul 26 '05 #54
>As I said previously ... now a couple of times ... the problem is with
*non-trivial* destructors. A trivial dtor adds nothing to the
execution time of a throw.


But that makes no sense. The non-trivial destructor will get called
if I return as well. There are lots of ways to go out of scope
and make non-trivial destructors execute, exceptions are just one.
If that is your problem, you shouldn't object to exceptions, you
should object to destructors :-).
--
==>> The *Best* political site <URL:http://www.vote-smart.org/> >>==+

email: To*********@worldnet.att.net icbm: Delray Beach, FL |
<URL:http://home.att.net/~Tom.Horsley> Free Software and Politics <<==+


Jul 27 '05 #55
George Neuner wrote:
2 frames? Exceptions which throw to the next higher frame have
execution time which is, at most, equivalent to the normal function
return. Once you go beyond 1 frame it becomes easy to make seemingly
innocuous code changes in intermediate layers that look local and
would have little impact on a normal return sequence where the
programmer could intervene at each step, but which cause a timing
failure when added to the cumulative execution time of an atomic
multiple frame throw.


What you say here only applies if the code that uses exception handling
does not contain as many decision points as the equivalent code using
error codes. Isn't that comparing apples to oranges?


An exception has exactly 2 decision points - the throw point and the
catch point - control transfer between them is atomic.

Obviously you can add as many intermediate catch/rethrow points as
necessary, but IMO that defeats the purpose. Having to catch and
rethrow an exception multiple times adds back the code complexity that
non-local exceptions were intended to remove. Apart from structuring
my code a bit differently, what have I really gained by using them?


Since you earlier confirmed that you do not normally have a decision point
before/after every function call, EH would allow you to automate error
propagation *between* decision points. That is, in a program using EH
you would have a lot fewer try-catch blocks than if-then blocks in an
equivalent program using error codes (note that I assume that both
programs contain an equal number of decision points). IOW, there is
less code that solely propagates errors.
RT-programmer I can't judge whether the resulting code/complexity
reduction is significant for typical RT programs.

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.

Jul 27 '05 #56
On 27 Jul 2005 04:12:18 -0400, to*********@att.net (Thomas A. Horsley)
wrote:
As I said previously ... now a couple of times ... the problem is with
*non-trivial* destructors. A trivial dtor adds nothing to the
execution time of a throw.
But that makes no sense. The non-trivial destructor will get called
if I return as well. There are lots of ways to go out of scope
and make non-trivial destructors execute, exceptions are just one.


Exceptions are the only way to simultaneously cause multiple call
frames to go out of scope.

If that is your problem, you shouldn't object to exceptions, you
should object to destructors :-).


The context of this sub-discussion is real time programming ...
nothing being said here is relevant outside that context. Within that
context, I object to exceptions being an atomic operation whose
duration is only weakly and indirectly controllable.

The only way to control the duration is to manipulate the number of
intervening call frames between the throw and catch point
and the number of live objects in those frames which have non-trivial
destructors. In the presence of multiple frame exceptions, it is too
easy to make a seemingly trivial change to an intermediate frame which
causes no problem in the normal return sequence but which breaks
timing constraints in the exceptional case because the cumulative time
to destruct all the frames becomes too great. Such a situation can
lead to significant refactoring to restore the timing.
George
--
for email reply remove "/" from address


Jul 27 '05 #57


George Neuner wrote:
On 27 Jul 2005 04:12:18 -0400, to*********@att.net (Thomas A. Horsley)
wrote:
As I said previously ... now a couple of times ... the problem is with
*non-trivial* destructors. A trivial dtor adds nothing to the
execution time of a throw.
But that makes no sense. The non-trivial destructor will get called
if I return as well. There are lots of ways to go out of scope
and make non-trivial destructors execute, exceptions are just one.


Exceptions are the only way to simultaneously cause multiple call
frames to go out of scope.

If that is your problem, you shouldn't object to exceptions, you
should object to destructors :-).


The context of this sub-discussion is real time programming ...
nothing being said here is relevant outside that context. Within that
context, I object to exceptions being an atomic operation whose
duration is only weakly and indirectly controllable.


An exception is not an atomic operation. Lots of stuff takes place from
the time an exception is thrown to the time it gets caught.
The only way to control the duration is to manipulate the number of
intervening call frames between the throw and catch point
and the number of live objects in those frames which have non-trivial
destructors. In the presence of multiple frame exceptions, it is too
easy to make a seemingly trivial change to an intermediate frame which
causes no problem in the normal return sequence but which breaks
timing constraints in the exceptional case because the cumulative time
to destruct all the frames becomes too great. Such a situation can
lead to significant refactoring to restore the timing.
One way to get control back while an exception is unwinding is to
insert a class whose destructor can take care of any decisions as to
what to do.

class decisionmaker
{
public:
    decisionmaker() : begun(now()) {}     // record when the frame was entered
    ~decisionmaker()                      // runs on normal return *and* during unwind
    {
        if (now() - begun > delta) {
            appropriate_action();         // react if the time budget was exceeded
        }
    }
private:
    time begun;
};

Notice that this will work in both the exceptional and normal case.
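
For example, you would just drop an instance into any intermediate frame (do_work below is a made-up placeholder; now(), delta and appropriate_action are the same hypothetical helpers as above):

void do_work();            // hypothetical work that may return normally or throw

void intermediate()
{
    decisionmaker guard;   // starts the clock on entry to the frame
    do_work();
}                          // ~decisionmaker() runs on return *and* during unwind,
                           // and can react if the time budget was exceeded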




/Peter

Jul 28 '05 #58
In article <nj********************************@4ax.com>, George Neuner
<gneuner2/@comcast.net> writes
The only way to control the duration is to manipulate the number of
intervening call frames between the throw and catch point
and the number of live objects in those frames which have non-trivial
destructors. In the presence of multiple frame exceptions, it is too
easy to make a seemingly trivial change to an intermediate frame which
causes no problem in the normal return sequence but which breaks
timing constraints in the exceptional case because the cumulative time
to destruct all the frames becomes too great. Such a situation can
lead to significant refactoring to restore the timing.


Agreed, but at a minimum RT programs should be tested with exceptions
being thrown. It isn't that hard to make small modifications to a
program or its data so as to force an exception condition. That should
be part of the test harness of any critical RT program. Note that
because of the overhead of calling a dtor, it is easy to make an
apparently insignificant change that breaks the timing constraints of any
program, however it does its error handling.
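A minimal sketch of what such a harness hook might look like (all names below are hypothetical, purely to illustrate the idea): a fault-injection switch forces the error path so the exceptional timing can actually be exercised and measured.

#include <chrono>
#include <stdexcept>

// Hypothetical fault-injection hook for the test harness.
static bool g_inject_fault = false;

void maybe_inject_fault(const char* where)
{
    if (g_inject_fault)
        throw std::runtime_error(where);
}

// Check that an operation still meets its deadline when the fault fires.
template <typename F>
bool meets_deadline(F operation, std::chrono::microseconds budget)
{
    const auto start = std::chrono::steady_clock::now();
    try { operation(); } catch (...) { /* exceptional path under test */ }
    return std::chrono::steady_clock::now() - start <= budget;
}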
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Jul 28 '05 #59
"Andreas Huber" <ah********************@yahoo.com> wrote in message
news:11**********************@g44g2000cwa.googlegroups.com...
George Neuner wrote:
>> 2 frames? Exceptions which throw to the next higher frame have
>> execution time which is, at most, equivalent to the normal function
>> return. Once you go beyond 1 frame it becomes easy to make seemingly
>> innocuous code changes in intermediate layers that look local and
>> would have little impact on a normal return sequence where the
>> programmer could intervene at each step, but which cause a timing
>> failure when added to the cumulative execution time of an atomic
>> multiple frame throw.
>
>What you say here only applies if the code that uses exception handling
>does not contain as many decision points as the equivalent code using
>error codes. Isn't that comparing apples to oranges?


An exception has exactly 2 decision points - the throw point and the
catch point - control transfer between them is atomic.

Obviously you can add as many intermediate catch/rethrow points as
necessary, but IMO that defeats the purpose. Having to catch and
rethrow an exception multiple times adds back the code complexity that
non-local exceptions were intended to remove. Apart from structuring
my code a bit differently, what have I really gained by using them?


Since you earlier confirmed that you do not normally have a decision point
before/after every function call, EH would allow you to automate error
propagation *between* decision points. That is, in a program using EH
you would have a lot fewer try-catch blocks than if-then blocks in an
equivalent program using error codes (note that I assume that both
programs contain an equal number of decision points). IOW, there is
less code that solely propagates errors. Of course, not being an
RT-programmer I can't judge whether the resulting code/complexity
reduction is significant for typical RT programs.


That's what I was also wondering. Whatever the reason for having a "decision
point" may be, it's easier done with exceptions than with error codes. It
should be even easier for RT code to guarantee measured execution with
exceptions than by calculating all the different conditional branches that
error codes create. For example,
// error codes are messy and error-prone
RetCode foo()
{
    // ...
    RetCode ret_code = bar1();
    sleep(10); // yield to another thread
    if (ret_code != success)
    {
        // ...
        return another_ret_code;
    }
    else
    {
        // ...
        ret_code = bar2();
        sleep(10); // yield to another thread
        if (ret_code != success)
        {
            // ...
            return yet_another_ret_code;
        }
        else
        {
            // ...
        }
    }
    return success;
}

// exceptions are much easier to handle
void foo()
{
    try
    {
        // ...
        bar1();
        sleep(10); // yield to another thread
        // ...
        bar2();
        sleep(10); // yield to another thread
        // ...
    }
    catch (...)
    {
        sleep(10); // yield to another thread
        throw;     // rethrow to the caller
    }
}

Additionally, one cannot return error codes from operators, so I guess RT
programmers do not use those either. And if a function already has something
to return, error codes would either create a mess by being bundled with the
return value, or the function would have to return its result in one of its
parameters. All this creates messy, error-prone code, which can also be
slower. I wish the no-exception proponents provided a code example to
illustrate their point; otherwise it makes no sense to me.
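
To make the operator point concrete (the fixed_point type and its overflow check below are hypothetical, purely for illustration): an overloaded operator has nowhere to put an error code, so without exceptions the failure would have to be smuggled out through a flag, a sentinel value, or a differently-named function.

#include <climits>
#include <stdexcept>

// Hypothetical fixed-point type, only to illustrate the operator problem.
struct fixed_point {
    long raw;
};

// With exceptions the operator can report overflow and keep natural syntax...
fixed_point operator+(fixed_point a, fixed_point b)
{
    if ((b.raw > 0 && a.raw > LONG_MAX - b.raw) ||
        (b.raw < 0 && a.raw < LONG_MIN - b.raw))
        throw std::overflow_error("fixed_point addition overflowed");
    fixed_point result;
    result.raw = a.raw + b.raw;
    return result;
}

// ...whereas with error codes the natural a + b syntax is lost, e.g.:
// RetCode add(fixed_point a, fixed_point b, fixed_point& result);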

- gene

Jul 28 '05 #60
