Bytes IT Community

C++ Exceptions Cause Performance Hit?

Hi. I wanted to use exceptions to handle error conditions in my code.
I think doing that is useful, as it helps to separate "go" paths from
error paths. However, a coding guideline has been presented that says
"Use conventional error-handling techniques rather than exception
handling for straightforward local error processing in which a program
is easily able to deal with its own errors."

By "conventional error-handling," I believe they mean returning an
error code, or just handling the error without going to a catch block.
When I said that I'd prefer throwing and catching exceptions--and that,
in fact, exceptions are the "conventional error-handling technique" in
C++, I was told that since we have a real-time system, we can't afford
the performance hit caused by using exceptions.

Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?

Thanks for any info,

Ken
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Jul 23 '05 #1
59 Replies


kk****@yahoo.com wrote:
Hi. I wanted to use exceptions to handle error conditions in my code.
I think doing that is useful, as it helps to separate "go" paths from
error paths. However, a coding guideline has been presented that says
"Use conventional error-handling techniques rather than exception
handling for straightforward local error processing in which a program
is easily able to deal with its own errors."
The key word here is *local*. Throwing an exception is essentially a
non-local `go to' that does the appropriate bookkeeping -- at the cost
of setting up that bookkeeping.
By "conventional error-handling," I believe they mean returning an
error code, or just handling the error without going to a catch block.
That is correct.
When I said that I'd prefer throwing and catching exceptions--and that,
in fact, exceptions are the "conventional error-handling technique" in
C++, I was told that since we have a real-time system, we can't afford
the performance hit caused by using exceptions.

Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?

The cost is (the compiler) setting up the bookkeeping information
necessary to make the non-local jump (in order to call all necessary
destructors for any locally created objects). When an error can be dealt
with locally (i.e. there's no need to explicitly propagate the error as a
deep call stack unwinds) it is worth doing so. If, on the other hand,
the call stack *is* deep, the use of exceptions (naturally, in
*exceptional* situations), while having a cost, makes the code
conceptually much clearer. As with anything else, it's a trade off.
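[A minimal sketch of that trade-off, added for illustration; the code is hypothetical (C++17) and not from the original post:]

```cpp
#include <optional>
#include <stdexcept>

// Local handling: the caller can deal with the bad value itself, so a
// plain return value is enough and no non-local jump is needed.
std::optional<int> parse_digit(char c) {
    if (c < '0' || c > '9') return std::nullopt;  // handled locally
    return c - '0';
}

// Non-local handling: the error has to travel up a deep call stack,
// which is where a throw earns its bookkeeping cost by running all the
// destructors between the throw site and the handler.
int parse_digit_or_throw(char c) {
    if (c < '0' || c > '9')
        throw std::invalid_argument("not a digit");
    return c - '0';
}
```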

HTH,
--ag

[c.l.c.m elided]
--
Artie Gold -- Austin, Texas
http://it-matters.blogspot.com (new post 12/5)
http://www.cafepress.com/goldsays
Jul 23 '05 #2

kk****@yahoo.com wrote:
I want to use exceptions to handle error conditions in my code.
I think doing that is useful,
as it helps to separate "go" paths from error paths.
?
However, a coding guideline has been presented that says
"Use conventional error-handling techniques rather than exception handling
for straightforward local error processing
in which a program is easily able to deal with its own errors."
That's vague but possibly sound advice.
By "conventional error-handling,"
I believe they mean returning an error code
or just handling the error without going to a catch block.
You had better ask for some clarification on this.

Exceptions (what you call errors) should be handled
at the point where they are first detected if possible.
If they can't be completely handled
in the function where they are first detected,
you *must* create an exception object
which contains all of the information required
to handle the exception in the calling program
and return or throw the exception object.
In some cases, an error code may be sufficient
to contain all of the information
required to handle the exception.
If not, you may need a more complicated exception object.
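[A sketch of such an exception object, added for illustration; the type and function names are invented, and only std::runtime_error comes from the standard library:]

```cpp
#include <stdexcept>
#include <string>

// Hypothetical exception object that carries all the information the
// caller needs to handle the failure: a numeric code plus a message.
class DeviceError : public std::runtime_error {
public:
    DeviceError(int code, const std::string& what)
        : std::runtime_error(what), code_(code) {}
    int code() const { return code_; }
private:
    int code_;
};

// A function that cannot handle the error locally, so it packages the
// details into the exception object and throws it to the caller.
int read_register(int addr) {
    if (addr < 0)
        throw DeviceError(22, "register address out of range");
    return 0;  // stand-in for a real device read
}
```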
When I said that I'd prefer throwing and catching exceptions and that,
in fact, exceptions [are] the "conventional error-handling technique" in C++,
I was told that, since we have a real-time system,
we can't afford the performance hit caused by using exceptions.

Do exception blocks cause big performance hits?
If so, what causes the hit?
Or is the person just misinformed?


Again, you should ask for clarification.
Exceptions do *not* affect performance
unless an exception is encountered
and then they are probably just about as efficient
as any so-called conventional error-handling technique.
The problem with real-time programming is that
you *must* know how long it takes to execute your code.
If any of your exception (error) handling is on the critical path,
you must be able to establish an upper bound
for the time that it will take to complete.
I suspect that your supervisor thinks (s)he knows how to do that
for "conventional error-handling" but not
for the C++ exception handling mechanism.

Anyway, I used Google

http://www.google.com/

to search for

+"real-time programming" +"C++ exception handling"

and I found lots of stuff.
Jul 23 '05 #3

"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:db**********@nntp1.jpl.nasa.gov
Exceptions do *not* affect performance
unless an exception is encountered


That is compiler dependent. On VC++, there is a small performance cost even
when exceptions are not thrown.

--
John Carson

Jul 23 '05 #4

kk****@yahoo.com writes:
Hi. I wanted to use exceptions to handle error conditions in my code.
I think doing that is useful, as it helps to separate "go" paths from
error paths. However, a coding guideline has been presented that says
"Use conventional error-handling techniques rather than exception
handling for straightforward local error processing in which a program
is easily able to deal with its own errors."

By "conventional error-handling," I believe they mean returning an
error code, or just handling the error without going to a catch block.

When I said that I'd prefer throwing and catching exceptions--and that,
in fact, exceptions are the "conventional error-handling technique" in
C++, I was told that since we have a real-time system, we can't afford
the performance hit caused by using exceptions.

Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?


Whether or not they are misinformed may depend on your compiler, and
it certainly depends on your definition of "big". Some relevant
information is at:

http://tinyurl.com/8rljh
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Jul 23 '05 #5

kk****@yahoo.com wrote:
When I said that I'd prefer throwing and catching exceptions--and that,
in fact, exceptions are the "conventional error-handling technique" in
C++, I was told that since we have a real-time system, we can't afford
the performance hit caused by using exceptions.

Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?


I think the key here is "real-time system". The definition of performance in
real-time systems is slightly different. Real-time systems have to react in
a defined absolute time, in all cases.

I think that this is hard to guarantee when using exceptions (but probably
not impossible).

Exception performance depends on the compiler's implementation of exceptions.
In the best case, code that uses exceptions is faster on the "normal" (i.e.
no exception thrown) code path, but throwing an exception causes
overhead.

In most systems this is what you want. In real-time systems this additional
overhead can break your "reaction-time" guarantee.

HTH

Fabio


Jul 23 '05 #6

<kk****@yahoo.com> wrote in message
news:11**********************@g49g2000cwa.googlegroups.com...
....

Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?


Ideally you shouldn't have any overhead until the error occurs and exception
is actually thrown. But you need to do experiments with your compiler to
know for sure. I think most (all) compilers create some sort of exception
prolog for functions that may need stack unwinding when an exception is
thrown. How big an overhead does it introduce? My testing with VC7.1 on
several "tight" programs didn't show any impact of enabling/disabling
exceptions on run time as long as there are no errors and therefore no
exceptions. Throwing an exception can be expensive, but the functions that
return error codes should also do something with them, and in non-trivial
cases should clean up resources and propagate error codes to the site where
they can be processed. It may not be less expensive at all. Moreover, such
code quickly becomes unmaintainable and flaky, with convoluted control
paths. As a result, error codes are routinely ignored and error conditions
are not processed correctly. So there isn't really any alternative to
exceptions if you want to write safe and clear code.
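[A sketch of the hand-propagation described above, added for illustration; the function names are invented. Only the lowest layer detects the error, but every intermediate layer has to check and forward the code:]

```cpp
// Manual error-code propagation through a call chain. With exceptions,
// the forwarding boilerplate in the middle layers would disappear.
enum Status { OK = 0, E_IO = 1 };

Status read_block(int n, int* out) {
    if (n < 0) return E_IO;  // the only place the error is detected
    *out = n * 2;
    return OK;
}

Status load_record(int n, int* out) {
    Status s = read_block(n, out);
    if (s != OK) return s;   // boilerplate forwarding
    return OK;
}

Status load_file(int n, int* out) {
    Status s = load_record(n, out);
    if (s != OK) return s;   // more boilerplate forwarding
    return OK;
}
```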

- gene

Jul 23 '05 #7

On Wed, 13 Jul 2005 20:17:39 +0400, <kk****@yahoo.com> wrote:

[]
Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?


It's pretty much compiler specific. A good discussion about the issues can
be found in the paper at http://netlab.ru.is/exception/LinuxCXX.shtml

--
Maxim Yegorushkin
<fi****************@gmail.com>


Jul 23 '05 #8

John Carson wrote:
"E. Robert Tisdale" <E.**************@jpl.nasa.gov> wrote in message
news:db**********@nntp1.jpl.nasa.gov
Exceptions do *not* affect performance
unless an exception is encountered

That is compiler dependent. On VC++, there is a small performance cost
even when exceptions are not thrown.


The main performance drag in error handling is the test for the error
condition. Remove the test and the need to handle exceptions goes away.
Jul 23 '05 #9

"Gene Bushuyev" <sp**@smapguard.com> writes:
<kk****@yahoo.com> wrote in message
news:11**********************@g49g2000cwa.googlegroups.com...
...

Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?
Ideally you shouldn't have any overhead until the error occurs and exception
is actually thrown. But you need to do experiments with your compiler to
know for sure. I think most (all) compilers create some sort of exception
prolog for functions that may need stack unwinding when an exception is
thrown.


No, not "all," and IMO not even "most," but it probably depends on how
you count. A few do.
How big an overhead does it introduce? My testing with VC7.1
That one does on IA32, but not IA64.
on several "tight" programs didn't show any impact of
enabling/disabling exceptions on run time as long as there are no
errors and therefore no exceptions.
I agree with almost everything below...
Throwing an exception can be expensive, but the functions that
return error codes should also do something with them, and in
non-trivial cases should clean up resources and propagate error
codes to the site where they can be processed. It may not be less
expensive at all. Moreover, such code quickly becomes unmaintainable
and flaky, with convoluted control paths. As a result, error codes are
routinely ignored and error conditions are
not processed correctly. So there isn't really any alternative to
exceptions if you want to write safe and clear code.


.....except for that. Exceptions have great benefits in those areas,
but _that_ is overstating the case a bit.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Jul 23 '05 #10

On Thu, 14 Jul 2005 12:01:57 +0400, Gene Bushuyev <sp**@smapguard.com>
wrote:
<kk****@yahoo.com> wrote in message
news:11**********************@g49g2000cwa.googlegroups.com...
...

Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?
Ideally you shouldn't have any overhead until the error occurs and
exception
is actually thrown. But you need to do experiments with your compiler to
know for sure. I think most (all) compilers create some sort of exception
prolog for functions that may need stack unwinding when an exception is
thrown.


Please define that "most". This is true for MSVC, and false for g++.
... Moreover, such
code quickly becomes unmaintainable and flaky, with convoluted control
paths. As a result, error codes are routinely ignored and error conditions
are not processed correctly.


This is all FUD and red herring that we are all tired of.

--
Maxim Yegorushkin
<fi****************@gmail.com>


Jul 23 '05 #11

RH
From the compiler's perspective, when exceptions are present (i.e. haven't
been disabled via the command line), every function call site in the
control flow graph (CFG) gets an additional edge to a corresponding
landing pad (this is all implementation dependent; different nomenclature
is used): the code that performs the necessary cleanup or dispatches to
the corresponding try blocks.

This can lead to missed optimization opportunities elsewhere. For example,
if such a call is in a loop, the loop now has an additional exit, and
certain transformations might simply bail out in that scenario.

So, assuming one has a compiler with a no-overhead implementation for
cases where no exception is thrown (which should be the default for just
about every compiler out there by now), the overall performance question
is very much benchmark dependent.

In our compiler lab, we compile benchmarks with and without exceptions
and always find some unexpected behavior. Some programs get faster,
others slower, sometimes because of the above-mentioned problems,
sometimes because of compiler problems. Always interesting.

Exceptions are supposed to occur, well, under exceptional circumstances
only. I have seen people using them to get regular control flow... ;-)
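[A sketch of that anti-pattern, added for illustration; the function names are invented:]

```cpp
#include <cstddef>
#include <vector>

// Anti-pattern: a throw used as an ordinary loop exit, paying the
// (potentially large) unwinding cost for a completely expected event.
int find_bad(const std::vector<int>& v, int target) {
    try {
        for (std::size_t i = 0; i < v.size(); ++i)
            if (v[i] == target) throw i;  // exception as control flow
    } catch (std::size_t i) {
        return static_cast<int>(i);
    }
    return -1;
}

// The same search with ordinary control flow: an early return.
int find_good(const std::vector<int>& v, int target) {
    for (std::size_t i = 0; i < v.size(); ++i)
        if (v[i] == target) return static_cast<int>(i);
    return -1;
}
```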

Jul 23 '05 #12

<kk****@yahoo.com> wrote in message
news:11**********************@g49g2000cwa.googlegroups.com
Hi. I wanted to use exceptions to handle error conditions in my code.
I think doing that is useful, as it helps to separate "go" paths from
error paths. However, a coding guideline has been presented that says
"Use conventional error-handling techniques rather than exception
handling for straightforward local error processing in which a program
is easily able to deal with its own errors."
On one interpretation, this is not necessarily bad advice. If, for example,
a function processing user input can deal with invalid user input by
prompting the user to try again, then this is perfectly sensible. Exceptions
are most useful when errors cannot be dealt with locally, and so the problem
needs to be propagated upwards until a point is reached where the error can
be handled.
By "conventional error-handling," I believe they mean returning an
error code, or just handling the error without going to a catch block.

When I said that I'd prefer throwing and catching exceptions--and
that, in fact, exceptions are the "conventional error-handling
technique" in C++, I was told that since we have a real-time system,
we can't afford the performance hit caused by using exceptions.

Do exception blocks cause big performance hits? If so, what causes
the hit? Or is the person just misinformed?


That is compiler specific. I suggest you run some tests to see what sort of
performance hit you get on your system. Note that other forms of error
handling carry their own performance costs.
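[A minimal timing harness for the kind of experiment suggested above, added for illustration: build the same translation unit with exceptions enabled and disabled (e.g. with and without a flag such as -fno-exceptions, if your compiler provides one) and compare happy-path timings. The workload is a placeholder; substitute your own hot loop.]

```cpp
#include <chrono>

// Times a trivial happy-path loop and returns the elapsed nanoseconds.
long long time_ns(int iterations) {
    volatile long long sink = 0;  // keeps the loop from being optimized away
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        sink = sink + i;          // stand-in for real work
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start)
        .count();
}
```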

--
John Carson

Jul 23 '05 #13

On 13 Jul 2005 12:17:39 -0400, kk****@yahoo.com wrote:
I was told that since we have a real-time system, we can't afford
the performance hit caused by using exceptions.
Speaking from 10 years experience writing real time apps, I will say
that if your "real time" system really *is*, then I'm surprised it
would be using (m)any of the advanced features of C++ in the first
place. Virtual function calls, dynamic heap allocation and exception
handling are all, at best, problematic for verifying performance of
real time code.

I am *not* saying C++ is the wrong tool for such programming ... only
that it must be used judiciously. Many of the standard idioms and
practices are better suited for general application programming than
for real time programming. I'm not going to defend this point of view
here ... feel free to browse comp.arch.embedded for more than you ever
wanted to know about using C++ for real time programming.

Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?


The general thinking about exceptions views them as a rather low
frequency signal event rather than a standard control mechanism.
Exceptions were intended to provide a clean alternate path for error
handling ... which by itself implies no preference for the execution
path ... however, the working presumption has always been that the
error path was taken [much] less frequently.

Because of the presumption of infrequent use, many compiler vendors
did not put a lot of effort into making exception handling fast. This
makes exceptions more difficult to use in code which is routinely
expected to suffer failures - such as networking and hardware control
applications.

George
--
for email reply remove "/" from address


Jul 23 '05 #14

In article <2n********************************@4ax.com>, George Neuner
<gneuner2/@comcast.net> writes
Because of the presumption of infrequent use, many compiler vendors
did not put a lot of effort into making exception handling fast. This
makes exceptions more difficult to use in code which is routinely
expected to suffer failures - such as networking and hardware control
applications.

I think that is false. Rather the implementers have made major efforts
to minimise the cost for code that runs without raising an exception.
Normal code will run as fast as possible and the footprint will be kept
as small as possible. Those are the primary objectives for most
implementers. If possible the entire costs of an exception are paid when
an exception is actually raised. This cost will almost inevitably be
high. However, given that the program is in a problem state there seems
no reason to spend development resources reducing that cost when it
would likely remain high even with the most sophisticated
implementation.

In general programmers want normal code to run fast and in a small
space. It has always been made clear that the exception mechanism is NOT
intended as an alternative general return mechanism.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Jul 23 '05 #15

RH wrote:
From the compiler's perspective, when exceptions are present (haven't
been disabled via the command line), every function call site in the
control flow graph (CFG) gets an additional edge to a corresponding
landing pad (all implementation dependent; different nomenclature is
used): the code that performs the necessary cleanup or dispatches to
the corresponding try blocks.

This can lead to missed optimization opportunities. For example, if
such a call is in a loop, the loop now has an additional exit, and
certain transformations might just bail in such a scenario.


This is quite true in theory. In practice, however, I suspect
that there are very few, if any, compilers which optimize to a
point where this makes a difference.

Another point to keep in mind (although it probably isn't
relevant to embedded systems) is that exceptions make it very
easy for the compiler to isolate the clean-up code which is
called in the error case. This can, in turn, result in smaller
functions, increasing the probability of the function fitting
entirely in the cache.

In the end, you don't know until you've benchmarked. (Except
that as far as I know, the compilers I use don't allow turning
exceptions off.)

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Jul 23 '05 #16

George Neuner wrote:
Because of the presumption of infrequent use, many compiler vendors
did not put a lot of effort into making exception handling fast. This
makes exceptions more difficult to use in code which is routinely
expected to suffer failures - such as networking and hardware control
applications.


If I expect that some result occurs frequently as part of the normal usage,
I don't consider it exceptional, and then I do not use an exception for it.

--
Salu2


Jul 23 '05 #17

Francis Glassborow <fr*****@robinton.demon.co.uk> writes:
In article <2n********************************@4ax.com>, George Neuner
<gneuner2/@comcast.net> writes
Because of the presumption of infrequent use, many compiler vendors
did not put a lot of effort into making exception handling fast. This
makes exceptions more difficult to use in code which is routinely
expected to suffer failures - such as networking and hardware control
applications.

I think that is false.


Not from what I hear.
Rather the implementers have made major efforts
to minimise the cost for code that runs without raising an exception.
Normal code will run as fast as possible and the footprint will be kept
as small as possible. Those are the primary objectives for most
implementers. If possible the entire costs of an exception are paid when
an exception is actually raised. This cost will almost inevitably be
high. However, given that the program is in a problem state there seems
no reason to spend development resources reducing that cost when it
would likely remain high even with the most sophisticated
implementation.

In general programmers want normal code to run fast and in a small
space. It has always been made clear that the exception mechanism is NOT
intended as an alternative general return mechanism.


In a true real-time system, completing any operation in bounded and
well-understood time is crucial. If you use exception handling, you
are expecting to be able to handle exceptions and recover in a
reasonable way, or there's no point in writing the exception-handling
code in the first place. So "problem state" (whatever that means) or
no, it's important that throwing and handling an exception has
predictable performance in a real-time system.

All that said, I don't see why it should be so hard to measure the
performance of those exceptional paths and make an informed decision
on acceptability.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Jul 23 '05 #18

RH schreef:
From the compiler's perspective, when exceptions are present (i.e.
haven't been disabled via the command line), every function call site
in the control flow graph (CFG) gets an additional edge to a
corresponding landing pad (all implementation dependent; different
nomenclature is used): the code that performs the necessary cleanup or
dispatches to the corresponding try blocks.

This can lead to missed optimization opportunities. For example, if
such a call is in a loop, the loop now has an additional exit, and
certain transformations might just bail in such a scenario.


No, it doesn't have an additional exit. The "error return value" case
will have an 'if( value==bad ) break;' statement. That is the same
exit from the loop.

In addition, this exit is more clearly recognizable as exceptional if
it's an exception. Which in turn means the compiler can put that
part of the code aside, keeping the actual loop tight.

Still: the best part of exceptions is a debugger with a 'break on
exceptions' feature. Error returns cause a performance hit on my
debugging time, and that's more than a few microseconds.

Regards, Michiel Salters

Jul 23 '05 #19

On 15 Jul 2005 11:06:40 -0400, Julián Albo <JU********@terra.es>
wrote:
George Neuner wrote:
Because of the presumption of infrequent use, many compiler vendors
did not put a lot of effort into making exception handling fast. This
makes exceptions more difficult to use in code which is routinely
expected to suffer failures - such as networking and hardware control
applications.
If I expect that some result occurs frequently as part of the normal usage,
I don't consider it exceptional


I agree with that philosophy. Personally I never equated the term
"exception" with "error", to me it always meant "unexpected". As far
as I'm concerned, predictable errors are not exceptions. However,
other definitions prevailed.
Anyway, the whole point of the exception mechanism was to clearly
separate the success path from the failure path so that code could be
written along each path assuming a particular program state and not
having to write code for all cases at each state decision point.

But, as I said previously, there was a presumption that the failure
path was taken with low frequency. The problem with that approach is
that there are applications for which failure is not only expected,
but for which success is the rare case that occurs only if everything
that could go wrong doesn't. Such applications are not rare and it
should be distressing to everyone that the "conventional
error-handling technique" [ OP's words ] in C++ is not well suited to
use in them ... at least as it is currently implemented by most
compilers.

and then I do not use an exception for it.


But if you don't use the "conventional" techniques, the "standard"
idioms and the "well known" patterns, then your code is harder for
others to understand and maintain.

FWIW: in such a case I wouldn't use exceptions either.
George
--
for email reply remove "/" from address
Jul 23 '05 #20

On 15 Jul 2005 10:49:14 -0400, ka***@gabi-soft.fr wrote:
Another point to keep in mind (although it probably isn't
relevant to embedded systems) is that exceptions make it very
easy for the compiler to isolate the clean-up code which is
called in the error case. This can, in turn, result if smaller
functions, increasing the probability of the function fitting
entirely in the cache.


That's just as relevant in embedded code as anywhere else. Reducing
the size of the object code is frequently paramount. And in any case,
reducing the complexity of the source is always desirable.

IMO, that's why it's such a pity that the C++ exception mechanism
doesn't work very well for real time coding - which is a sizable
percentage of all embedded coding.

George
--
for email reply remove "/" from address


Jul 23 '05 #21

Julián Albo wrote:
George Neuner wrote:
Because of the presumption of infrequent use, many compiler
vendors did not put a lot of effort into making exception
handling fast. This makes exceptions more difficult to use in
code which is routinely expected to suffer failures - such as
networking and hardware control applications.

If I expect that some result occurs frequently as part of the normal
usage, I don't consider it exceptional, and then I do not use an
exception for it.

I'm not sure that George expressed himself clearly here.
Networking and hardware control application failures are (or
should be) "exceptional", in the sense that most of the time,
they don't occur. On the other hand, in embedded systems, it is
routinely expected to be able to handle and recover from them,
no matter how rarely they occur. Often in a specified finite
interval of time.

--
James Kanze mailto: ja*********@free.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 pl. Pierre Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34
Jul 23 '05 #22

RH
> No, it doesn't have an additional exit. The "error return value" case
will have an 'if( value==bad ) break;' statement. That is the same
exit from the loop.


Sorry if I was imprecise - the loop will have an additional 'loop
exit', not a program exit. There is no explicit control flow after a
function call to check for a possibly thrown exception out of this
function. If there were, you would be right: the break would just be a jump
to the loop end, not an additional exit.

But this is not how it is done.

Instead, the runtime will find the landing pad during stack unwinding
with help of language specific data stored somewhere aside in the
binary. This is usually how those no-overhead implementations are done.
Therefore, during CFG construction, the compiler will add an edge from
each possibly throwing function call to a landing pad.

In practice - in particular for the loop optimizer - this _is_ a big
problem, because many loop opts cannot deal with multiple loop-exits.
All high performance compilers I know of (Intel, Open64, HP-UX) have
difficulties with that.

A viable interprocedural optimization is to determine which functions
can actually throw and then to remove corresponding unreachable landing
pads...

Jul 23 '05 #23

Short answer: Don't use exceptions in real-time systems.
There may be lots of maybes involved, but overall, every experienced
realtime programmer I have spoken to says don't use them. At the very
least, even if your compiler optimizes exceptions to the max, you will
hurt the portability of your app.

Just a clarification to one of the posters' comments below. Virtual
functions are usually ok in RT systems, they should have a bounded
execution time, unless the V-Table has trillions and trillions of
entries. :)

JJJ

Jul 23 '05 #24

George Neuner <gneuner2/@comcast.net> writes:
...the C++ exception mechanism doesn't work very well for real time
coding


What are the problems?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Jul 23 '05 #25

P: n/a
George Neuner wrote:
On 15 Jul 2005 10:49:14 -0400, ka***@gabi-soft.fr wrote:
Another point to keep in mind (although it probably isn't
relevant to embedded systems) is that exceptions make it very
easy for the compiler to isolate the clean-up code which is
called in the error case. This can, in turn, result if
smaller functions, increasing the probability of the function
fitting entirely in the cache.

That's just as relevant in embedded code as anywhere else.
Yes and no.
Reducing the size of the object code is frequently paramount.
Agreed. My only point was that on an embedded processor,
reducing the size isn't as likely to affect speed. If, as is
often the case, memory is limited, then reducing the size will
reduce memory use.
And in any case, reducing the complexity of the source is
always desirable.


Again, yes and no. In very critical systems, where you have to
literally prove the code correct, under all possible input,
adding additional flow paths which don't show up in the source
is probably not a good idea. And in fact, in such systems, I
would ban exceptions.

Luckily, most systems (including most embedded systems) aren't
quite that critical, and we accept that exiting because of an
exception will not meet all of the functions post-conditions,
but only a small set of global invariants, which are (fairly)
easily checked, provided correct programming techniques are
used.

--
James Kanze mailto: ja*********@free.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 pl. Pierre Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34


Jul 23 '05 #26

P: n/a
James Kanze wrote:
If I expect that some result occurs frequently as part of normal
usage, I don't consider it exceptional, and then I do not use
exceptions for it.
I'm not sure that George expressed himself clearly here.
Networking and hardware control application failures are (or
should be) "exceptional", in the sense that most of the time,
they don't occur.


Then use the sense of "exceptional" that most adequately fits to the
application.
On the other hand, in embedded systems, it is routinely expected to
be able to handle and recover from them, no matter how rarely they
occur, often within a specified finite interval of time.


I don't consider it important whether the situation fits some
preconception of what is exceptional and what is not. Each concrete case
may have its own rules. So don't use exceptions for situations where
they are inadequate, but that's not a reason to ban exceptions completely.

--
Salu2
Jul 23 '05 #27

P: n/a
In article <11*********************@z14g2000cwz.googlegroups. com>,
ar******@myrealbox.com writes
Short answer: Don't use exceptions in real-time systems.

I can understand such a guideline and in small real-time systems it
makes excellent sense (as does using C for such systems) , but as
systems get larger I think an alternative mid-term objective would be
desirable, have compilers that specifically optimise for real-time.

At the moment, most C++ implementors are aiming at minimizing the
program footprint in memory and maximizing the execution speed for
programs that have not raised an exception. The whole cost of exceptions
is then paid when an exception is actually raised. For most
desktop/workstation/number-crunching super-computers that is the most
appropriate strategy for most applications. If I am writing a weather
forecasting program I want it to work as fast as possible so that it
turns out forecasts in time to be useful (note that there is a sense in
which this is real-time). If something untoward happens that results in
an exception being raised I lose that run of the program (as I would if
there were a power cut).

However there is a whole class of programs that must produce results in
a bounded time, loading all the cost of exceptions on the point where
one is raised almost certainly exceeds the maximum allowed time. In such
cases error handling costs need to be distributed over the whole
program. The traditional error return mechanism does that, but makes the
process of producing and maintaining high quality code significantly
harder.

As long as exceptions are banned from such code the implementers have no
motive for producing implementations that 'optimise' exception handling
by distributing its costs across the normal code and execution paths. I
am not a compiler writer (and it is 20 years since I last wrote anything
of that kind) but it seems to me that writing such a compiler should be
possible and there might be a real market for it as long as there was
more than one such implementation.
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Jul 23 '05 #28

P: n/a
On 16 Jul 2005 19:51:19 -0400, David Abrahams
<da**@boost-consulting.com> wrote:
George Neuner <gneuner2/@comcast.net> writes:
...the C++ exception mechanism doesn't work very well for real time
coding


What are the problems?


The basic issue is that, from the programmer's perspective, an
exception throw is an atomic event which has a duration that can only
be indirectly controlled.

The duration of any particular throw is serially deterministic - it
depends only on the difference in call depth and the number of frame
local objects that have non-trivial destructors. These are both fixed
for any particular throw/catch pairing (assuming no recursion).
However the only way to control the duration is by manipulating those
parameters - call depth and number of non-trivial objects. This can
lead to a lot of refactoring to duplicate functions or move objects
higher into the call chain and then pass them around. It can be
particularly troublesome if the functions involved are shared heavily.
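A small sketch of why the duration is deterministic: the unwinder's
work is exactly the destructors of the frame-local objects between the
throw site and the catch site, run in a fixed order. The `Tracer`
class below is illustrative, not from any post in this thread.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Records destruction order so the unwinding work is observable.
static std::string trace;

struct Tracer {
    char id;
    explicit Tracer(char c) : id(c) {}
    ~Tracer() { trace += id; }  // runs during stack unwinding
};

void inner() {
    Tracer a('a');
    throw std::runtime_error("boom");
}

void outer() {
    Tracer b('b');  // destroyed after 'a' as the throw unwinds two frames
    inner();
}

// The throw traverses two frames; its cost is the two destructor calls.
std::string run() {
    trace.clear();
    try {
        outer();
    } catch (const std::runtime_error&) {
        trace += '!';  // handler runs after all intervening frames are gone
    }
    return trace;
}
```

`run()` yields "ab!": the innermost object is destroyed first, then the
outer one, and only then does the handler get control, which is the
"locked out until all the intervening frames are destructed" point above.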

The atomicity of the throw is the more serious issue. Many real time
systems perform multiple tasks but do not use preemptive scheduling.
Many systems are designed as cooperative taskers, co-routines, or as
some kind of state machine. A lengthy throw which occurs at the wrong
moment could result in a failure by delaying execution of a more
important operation.

Before someone objects that the return from a function with many local
objects to destruct may be equally lengthy, and that the programmer
similarly has no control over it ... Yes. But the point is that
exceptions may involve multiple call frames which increases the
likelihood that more objects will be involved. In the normal return
scenario the programmer would regain control in each frame and could
decide what to do next - in the exception scenario the programmer is
locked out until all the intervening frames are destructed.

Additionally, function calls and returns are natural scheduling points
which the RT programmer is used to considering, whereas exceptions are
more easily overlooked as scheduling points because their timing
effects depend on how many call frames they traverse.

There is also the [diminishing] danger that programmers moving from C
to C++ may mentally equate exceptions with longjmp, which is normally
a constant time operation regardless of call depth.

Also certain types of programs depend on the ability to hop around at
will. When used purely for upward continuations, longjmp and
exceptions are functionally equivalent modulo destructor calls. But
setjmp/longjmp can also be used for general continuations. Exceptions
can't, and C++ doesn't provide any object safe construct or standard
library function that can.
[Yes ... I know that objects and longjmp can be safely mixed if you
are very careful.]
-
Exception atomicity is not an issue for a preemptive tasking system
and it may eventually cease to be an issue going forward when
functionality demands increase to the point where preemptive threading
is the only viable solution. However, I expect to continue to see
systems designed without it for a while yet because certified RTOSes
and tasking kernels remain expensive and most managers are
[rightfully] wary of liability arising from a home built solution.

And, of course, exceptions can be tamed through coding practices.
However, the effect of tightly controlling them results either in
programs which are written unnaturally (using TLS perhaps) or in
restricting them to protecting, at most, a single level of function
call so that their effect is no more than a normal function return.

Ultimately it does come down to the programmer knowing his tools. My
own concern, exemplified by this thread, is that a lot of people are
doing things they probably shouldn't be, in unfamiliar contexts and
with tools they don't fully understand.

George
--
for email reply remove "/" from address


Jul 23 '05 #29

P: n/a
David Abrahams wrote:
kk****@yahoo.com writes:
By "conventional error-handling," I believe they mean returning an
error code, or just handling the error without going to a catch block.

When I said that I'd prefer throwing and catching exceptions--and that,
in fact, exceptions is the "conventional error-handling technique" in
C++, I was told that since we have a real-time system, we can't afford
the performance hit caused by using exceptions.

Do exception blocks cause big performance hits? If so, what causes the
hit? Or is the person just misinformed?


Whether or not they are misinformed may depend on your compiler, and
it certainly depends on your definition of "big". Some relevant
information is at:

http://tinyurl.com/8rljh


Cannot connect to this URL. Maybe you should paste the original URL as
well next time. TinyURL is cool, but its chance of failing is bigger
than that of a normal URL (this time there seem to be problems with its
DNS servers).

Best regards,

Yongwei

Jul 23 '05 #30

P: n/a
Very true indeed, but we are stuck with the chicken/egg conundrum.

I especially see exceptions as being useful in underflow/overflow
errors, which are common to some realtime apps. I currently use
exceptions in a realtime sound streaming library. It is basically
'soft' realtime, because it is used for 'prelistening' and another more
predictable (and less capable) library is used for actual rendering
during performance. However I have marvelled at how robust (if a little
slow) my error handling has become, especially in overflow conditions.
I use an exception class to carry the overflowed object to a catch
handler where I can usually figure out a way to reschedule sending it,
or at the very worst, can release any resources it used.

One way to use exceptions flexibly in time constrained code is by
creating a macro or template function to 'raise an error' ala:

template<typename T> void raiseError(typename
boost::call_traits<T>::const_reference err) { throw err; }

and then if you want to handle, say, common errors without the overhead
of exceptions:

template<> void raiseError<QueueOverflow>(QueueOverflow const& err) {
blockFreeNotify(err); }
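For readers without Boost at hand, a self-contained sketch of the same
dispatch idea, with plain const references instead of
boost::call_traits. `QueueOverflow` and `blockFreeNotify` are
stand-ins for the poster's own names; their bodies here are invented
purely for illustration.

```cpp
#include <cassert>

// Stand-in for a common, cheap-to-handle error...
struct QueueOverflow { int queue_id; };

static int notified = 0;  // ...and a cheap notification path for it
inline void blockFreeNotify(const QueueOverflow&) { ++notified; }

// Primary template: report an error by throwing it.
template <typename T>
void raiseError(const T& err) { throw err; }

// Specialization: a frequent, well-understood error bypasses the
// exception machinery entirely.
template <>
void raiseError<QueueOverflow>(const QueueOverflow& err) {
    blockFreeNotify(err);
}
```

Callers always write `raiseError(e)`; whether that throws or takes the
cheap path is decided per error type, in one place.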

Although I am a big fan of exceptions (they really have cleaned up my
code), I still stick to my (I mean my mentor's) prior advice: don't use
them in (hard) realtime systems. I guess trying them in 'soft' realtime
is OK if, like me, you enjoy living on the edge.

JJJ

Jul 23 '05 #31

P: n/a
> Exception performance depends on the compiler's implementation of exceptions.
In the best case code that uses exceptions is faster in the "normal" (i.e.
no exception thrown) code path, but throwing an exceptions causes
overhead.

In most systems this is what you want. In real-time systems this additional
overhead can break your "reaction-time" guarantee


How do you measure the timing to check for the desired bounds and
limits?
Do you count instructions and processor cycles in the generated
(assembler) code?

Regards,
Markus.

Jul 23 '05 #32

P: n/a
"Wu Yongwei" <wu*******@gmail.com> writes:
David Abrahams wrote:
kk****@yahoo.com writes: http://tinyurl.com/8rljh


Cannot connect to this URL. Maybe you should paste the original URL as
well next time. TinyURL is cool but its chance of failing is bigger
than the normal URL (this time there seems to be problems with its DNS
servers).


http://groups-beta.google.com/group/...9f0ef8fe76c60f

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Jul 23 '05 #33

P: n/a
"George Neuner" <gneuner2/@comcast.net> wrote in message
news:um********************************@4ax.com...
....

Before someone objects that the return from a function with many local
objects to destruct may be equally lengthy, and that the programmer
similarly has no control over it ... Yes. But the point is that
exceptions may involve multiple call frames which increases the
likelihood that more objects will be involved. In the normal return
scenario the programmer would regain control in each frame and could
decide what to do next - in the exception scenario the programmer is
locked out until all the intervening frames are destructed.
I don't see any difference. The programmer has the same degree of control. He can
catch exception at any place appropriately structuring his try/catch blocks,
same as he can do it with the error codes. How are the errors codes any
better? They would require roughly the same amount of processor time, except
the programmer has to clean up and propagate the error codes manually.

Additionally, function calls and returns are natural scheduling points
which the RT programmer is used to considering, whereas exceptions are
more easily overlooked as scheduling points because their timing
effects depend on how many call frames they traverse.
Same with error codes: you need to consider how many stack frames you
need to traverse until the place where the error can be resolved.

Also certain types of programs depend on the ability to hop around at
will. When used purely for upward continuations, longjmp and
exceptions are functionally equivalent modulo destructor calls. But
setjmp/longjmp can also be used for general continuations. Exceptions
can't, and C++ doesn't provide any object safe construct or standard
library function that can.
[Yes ... I know that objects and longjmp can be safely mixed if you
are very careful.]


Co-routines (fibers, setjmp/longjmp) which usually run in an infinite loop
must never leak errors outside, because there is no recovery mechanism in
this stack hopping scheme other than program reset. It doesn't matter
whether exceptions, or error codes are used.

- gene

Jul 23 '05 #34

P: n/a
George Neuner wrote:
I'm surprised [a real-time system]
would be using (m)any of the advanced features of C++ in the first
place. Virtual function calls, dynamic heap allocation and exception
handling are all, at best, problematic for verifying performance of
real time code.


Why do you put virtual function calls on that list?

Do you use, or do you know someone who uses, a CPU where a "call
indirect" takes significantly longer than a "call"? (By "significantly
longer" I mean more than 2-3 clock cycles.) Because AFAIK every single
C++ implementation of virtual functions uses some variation of vtables;
the object has a pointer to the vtable, and (for instance) offset 32 in
that table has the address of the function we wish to call.
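As a concrete illustration (hypothetical classes, not from the thread):
the virtual call below compiles to roughly "load the vptr, load the
slot, call through it", and the size of the vtable never enters into
the cost.

```cpp
#include <cassert>

// The virtual call in count_sides() is an indirect call through the
// vtable; the number of entries in that table is irrelevant to its cost.
struct Shape {
    virtual ~Shape() {}
    virtual int sides() const = 0;
};

struct Triangle : Shape {
    int sides() const { return 3; }
};

struct Square : Shape {
    int sides() const { return 4; }
};

// Indirect call: the target is not known until run time.
inline int count_sides(const Shape& s) { return s.sides(); }
```

The run-time cost question is therefore about the extra memory loads
and branch prediction for the indirect call, which the follow-up posts
discuss.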

Jul 23 '05 #35

P: n/a
In article <UW***************@newssvr21.news.prodigy.com>, Gene Bushuyev
<sp**@smapguard.com> writes
Co-routines (fibers, setjmp/longjmp) which usually run in an infinite loop
must never leak errors outside, because there is no recovery mechanism in
this stack hopping scheme other than program reset. It doesn't matter
whether exceptions, or error codes are used.


But it does in the real world because of the way that compilers
implement exceptions by trying to make their presence zero cost (or even
negative cost) if no actual exception is raised. For many applications
that is exactly what is wanted, however in time constrained systems it
can push the cost of a raised exception beyond what can be accepted.

If we had compilers that had a switch to minimise exception handling
costs when an exception was raised, my guess is that that would make
exceptions useful in hard RT code.
--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Jul 23 '05 #36

P: n/a
On 19 Jul 2005 05:20:42 -0400, "Gene Bushuyev" <sp**@smapguard.com>
wrote:
"George Neuner" <gneuner2/@comcast.net> wrote in message
news:um********************************@4ax.com.. .
...

Before someone objects that the return from a function with many local
objects to destruct may be equally lengthy, and that the programmer
similarly has no control over it ... Yes. But the point is that
exceptions may involve multiple call frames which increases the
likelihood that more objects will be involved. In the normal return
scenario the programmer would regain control in each frame and could
decide what to do next - in the exception scenario the programmer is
locked out until all the intervening frames are destructed.


I don't see any difference. The programmer has the same degree of control. He can
catch exception at any place appropriately structuring his try/catch blocks,
same as he can do it with the error codes. How are the errors codes any
better? They would require roughly the same amount of processor time, except
the programmer has to clean up and propagate the error codes manually.

Additionally, function calls and returns are natural scheduling points
which the RT programmer is used to considering, whereas exceptions are
more easily overlooked as scheduling points because their timing
effects depend on how many call frames they traverse.


Same with error codes: you need to consider how many stack frames you
need to go until the place where the error can be resolved.

You've entirely missed the point.

First, the programmer does *not* have the same degree of control
unless *every* function call in the chain is separately protected by
its own try block and any exceptions are manually propagated to the
appropriate frame.

Second, an exception allows direct control transfer to *any* higher
frame. If the programmer must insert redundant try blocks and
manually propagate exceptions just to deliberately avoid the
possibility of jumping a frame, then exception returns have no
advantage over use of error codes.
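A sketch of the difference being described here (hypothetical
functions): with error codes, the middle frame regains control and can
act before propagating the failure, while a thrown exception skips it
entirely unless it adds a try block of its own.

```cpp
#include <cassert>
#include <stdexcept>

static int middle_acted = 0;  // observable: did the middle frame get control?

// Error-code style: the middle frame sees the failure and can do work
// (e.g. yield at a scheduling point) before propagating it.
bool leaf_rc() { return false; }            // always fails in this sketch

bool middle_rc() {
    if (!leaf_rc()) {
        ++middle_acted;                     // middle frame regains control
        return false;
    }
    return true;
}

// Exception style: without its own try block, the middle frame is
// skipped; control jumps straight from leaf_ex() to the caller's catch.
void leaf_ex() { throw std::runtime_error("fail"); }

void middle_ex() {
    leaf_ex();
    ++middle_acted;                         // never reached on the throw path
}

bool top_ex() {
    try { middle_ex(); return true; }
    catch (const std::runtime_error&) { return false; }
}
```

To give the middle frame control in the exception version you would
have to wrap `leaf_ex()` in its own try block and rethrow, which is the
redundancy the post above objects to.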
George

--
for email reply remove "/" from address


Jul 23 '05 #37

P: n/a
Francis Glassborow wrote:

But it does in the real world because of the way that compilers
implement exceptions by trying to make their presence zero cost (or even
negative cost) if no actual exception is raised.


Negative cost? Meaning that a program compiled with exceptions, in which
no exceptions are raised will actually run faster than the same program
compiled without exceptions?

/David


Jul 23 '05 #38

P: n/a
"Francis Glassborow" <fr*****@robinton.demon.co.uk> wrote in message
news:ws**************@robinton.demon.co.uk...
In article <UW***************@newssvr21.news.prodigy.com>, Gene Bushuyev
<sp**@smapguard.com> writes
Co-routines (fibers, setjmp/longjmp) which usually run in an infinite loop
must never leak errors outside, because there is no recovery mechanism in
this stack hopping scheme other than program reset. It doesn't matter
whether exceptions, or error codes are used.


But it does in the real world because of the way that compilers
implement exceptions by trying to make their presence zero cost (or even
negative cost) if no actual exception is raised. For many applications
that is exactly what is wanted, however in time constrained systems it
can push the cost of a raised exception beyond what can be accepted.

If we had compilers that had a switch to minimise exception handling
costs when an exception was raised, my guess is that that would make
exceptions useful in hard RT code.


That is only a factor if:
a) resource clean-up takes relatively little time compared to the
compiler-generated code for stack unwinding, and
b) exceptions are thrown relatively frequently.

While a) is possible in some situations, b) indicates that exceptional paths
are not that exceptional at all, but rather common. In which case the
application designer should probably reconsider the definition of what
constitutes an error.

- gene

Jul 23 '05 #39

P: n/a
George Neuner wrote:
[snip]
Additionally, function calls and returns are natural scheduling
points which the RT programmer is used to considering, whereas
exceptions are more easily overlooked as scheduling points because
their timing effects depend on how many call frames they traverse.


Same is with error codes, you need to consired how many stack frames
you need to go until the place where the error can be resolved.

You've entirely missed the point.

First, the programmer does *not* have the same degree of control
unless *every* function call in the chain is separately protected by
its own try block and any exceptions are manually propagated to the
appropriate frame.


Have you ever actually seen code ...
1. establish whether a called function returned an error code
2. establish whether the error code must be propagated out of the
calling function
3. establish whether error code propagation might take longer than the
remaining time until the next deadline
4. yield control to a different thread/function, which could then ensure
that the looming deadline is not missed

?

I'm asking because what you describe above only seems to make sense if
you go through steps 1-4 after *every* function call that can fail?

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.

Jul 23 '05 #40

P: n/a
David Rasmussen <da*************@gmx.net> writes:
Francis Glassborow wrote:

But it does in the real world because of the way that compilers
implement exceptions by trying to make their presence zero cost (or even
negative cost) if no actual exception is raised.


Negative cost? Meaning that a program compiled with exceptions, in which
no exceptions are raised will actually run faster than the same program
compiled without exceptions?


If you just "compile it without exceptions" it isn't the same program,
because you've just dropped all of the handling for those exceptional
conditions. If, however, you rewrite the exceptional condition
handling functionality using a different mechanism (e.g. error return
codes) it is quite possible -- even likely -- that it will run slower.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Jul 23 '05 #41

P: n/a
Allan W <al*****@my-dejanews.com> wrote:
George Neuner wrote:
I'm surprised [a real-time system]
would be using (m)any of the advanced features of C++ in the first
place. Virtual function calls, dynamic heap allocation and exception
handling are all, at best, problematic for verifying performance of
real time code.


Why do you put virtual function calls on that list?

Do you use, or do you know someone who uses, a CPU where a "call
indirect" takes significantly longer than a "call"? (By "significantly
longer" I mean more than 2-3 clock cycles.)

<snip>

Every memory read potentially takes hundreds of cycles if it misses
all caches, and an indirect branch typically requires more memory
reads than a direct branch. In addition, branch prediction for
indirect branches is generally poorer and consequently virtual
function calls are more likely to require a pipeline flush and refill.
I would be surprised if the average overhead was as little as 2-3
clock cycles on any modern processor, not to mention the worst case.

--
Ben Hutchings
Having problems with C++ templates? Your questions may be answered by
<http://womble.decadentplace.org.uk/c++/template-faq.html>.


Jul 23 '05 #42

P: n/a
David Rasmussen <da*************@gmx.net> writes:
Francis Glassborow wrote:

But it does in the real world because of the way that compilers
implement exceptions by trying to make their presence zero cost (or even
negative cost) if no actual exception is raised.


Negative cost? Meaning that a program compiled with exceptions, in which
no exceptions are raised will actually run faster than the same program
compiled without exceptions?


I don't know what you mean by a program "compiled" without exceptions;
either you use exceptions in your source code, or you don't. Some
compilers may provide special options which "disable" exceptions,
meaning they do not include runtime support for exceptions in your
binary, assuming you do not throw or catch any exceptions in your
code.

I guess what was meant is that a program which uses exceptions as its
error-handling strategy _may_ run faster than a program which uses
some other error-handling strategy, such as checking the return value
of every function call. If no errors are encountered, you will still
have to do all the return value checking, which is some overhead.

Kind regards,
- Thomas


Jul 23 '05 #43

P: n/a
In article <42*********************@dtext02.news.tele.dk>, David
Rasmussen <da*************@gmx.net> writes
Francis Glassborow wrote:

But it does in the real world because of the way that compilers
implement exceptions by trying to make their presence zero cost (or even
negative cost) if no actual exception is raised.


Negative cost? Meaning that a program compiled with exceptions, in which
no exceptions are raised will actually run faster than the same program
compiled without exceptions?


Yes, faster than the equivalent program written using other error
handling mechanisms. Of course not doing any error checking will beat
both until something goes wrong:)

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Jul 23 '05 #44

P: n/a
"George Neuner" <gneuner2/@comcast.net> wrote in message
news:p6********************************@4ax.com...
On 19 Jul 2005 05:20:42 -0400, "Gene Bushuyev" <sp**@smapguard.com>
wrote:
"George Neuner" <gneuner2/@comcast.net> wrote in message
news:um********************************@4ax.com. ..
...

Before someone objects that the return from a function with many local
objects to destruct may be equally lengthy, and that the programmer
similarly has no control over it ... Yes. But the point is that
exceptions may involve multiple call frames which increases the
likelihood that more objects will be involved. In the normal return
scenario the programmer would regain control in each frame and could
decide what to do next - in the exception scenario the programmer is
locked out until all the intervening frames are destructed.
I don't see any difference. The programmer has the same degree of control. He
can
catch exception at any place appropriately structuring his try/catch
blocks,
same as he can do it with the error codes. How are the errors codes any
better? They would require roughly the same amount of processor time,
except
the programmer has to clean up and propagate the error codes manually.

Additionally, function calls and returns are natural scheduling points
which the RT programmer is used to considering, whereas exceptions are
more easily overlooked as scheduling points because their timing
effects depend on how many call frames they traverse.


Same with error codes: you need to consider how many stack frames you
need to go until the place where the error can be resolved.

You've entirely missed the point.


I don't think so.

First, the programmer does *not* have the same degree of control
unless *every* function call in the chain is separately protected by
its own try block and any exceptions are manually propagated to the
appropriate frame.


No, it doesn't follow from what I have said. It doesn't make any sense to
try/catch if you can't do anything with the error. There are only certain
places where error recovery is possible. Either you get there forwarding
error codes from function to function and cleaning resources on the way
manually, or an exception brings you there. There is no need to have
try/catch in every function, because it would mean that every function can
recover from errors, and therefore doesn't need to throw at all.
Processing error codes leads to the same amount of manual stack unwinding as
if exceptions were thrown. There is still the same amount of resource
cleaning and recovery needs to be done. Compiler generated code for stack
unwinding may add some overhead, which depends on the individual compiler.
Whether that overhead is significant or not depends on the amount of
resource clean up that needs to be done and frequency the exceptions are
thrown. Maybe embedded applications are completely different, but in server
data-crunching applications that I'm familiar with, exceptions add nothing
measurable to the program run-time.

- gene

Jul 23 '05 #45

P: n/a
David Rasmussen wrote:
Negative cost? Meaning that a program compiled with exceptions, in which
no exceptions are raised will actually run faster than the same program
compiled without exceptions?


Yes. The reason is that without exceptions programs must check
for errors using return values. So even if there are no errors,
the program spends extra time verifying that there are no errors.

With exceptions, the normal execution path runs as if no error
checking is required. Meanwhile, the error handling code just
sits apart from the normal path, and if there is an exception
that error handling code is invoked.
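A side-by-side sketch of the two styles (hypothetical `parse`
functions, not from the thread): the return-code version pays a branch
on every call even when nothing goes wrong, while the exception
version's success path is straight-line code.

```cpp
#include <cassert>
#include <stdexcept>

// Return-code style: the caller must branch on every call.
bool parse_rc(int in, int* out) {
    if (in < 0) return false;   // error path
    *out = in * 2;
    return true;
}

int use_rc(int in) {
    int v = 0;
    if (!parse_rc(in, &v))      // checked even on success
        return -1;
    return v;
}

// Exception style: no explicit check on the success path.
int parse_ex(int in) {
    if (in < 0) throw std::invalid_argument("negative input");
    return in * 2;
}

int use_ex(int in) {
    try {
        return parse_ex(in);    // straight-line happy path
    } catch (const std::invalid_argument&) {
        return -1;
    }
}
```

With a table-driven, zero-overhead exception implementation, the
`use_ex` success path contains no test at all; the entire cost moves to
the (hopefully rare) throw, which is precisely the trade-off the
real-time posters in this thread object to.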


Jul 23 '05 #46

On 20 Jul 2005 19:29:26 -0400, "Andreas Huber"
<ah********************@yahoo.com> wrote:
George Neuner wrote:
[snip]
Additionally, function calls and returns are natural scheduling
points which the RT programmer is used to considering, whereas
exceptions are more easily overlooked as scheduling points because
their timing effects depend on how many call frames they traverse.

The same is true with error codes: you need to consider how many stack frames
you must traverse until the place where the error can be resolved.

You've entirely missed the point.

First, the programmer does *not* have the same degree of control
unless *every* function call in the chain is separately protected by
its own try block and any exceptions are manually propagated to the
appropriate frame.


Have you ever actually seen code ...
1. establish whether a called function returned an error code


Yes.
2. establish whether the error code must be propagated out of the
calling function
Yes.
3. establish whether error code propagation might take longer than the
remaining time until the next deadline
Not directly. But not every error needs to be propagated immediately
... the intervening function may have something of its own to finish
first.
4. yield control to a different thread/function, which could then ensure
that the looming deadline is not missed
Yes.

I'm asking because what you describe above only seems to make sense if
you go through steps 1-4 after *every* function call that can fail?


And it is necessary with functions that succeed as well.

Normal practice when writing real time code is to determine the
cumulative time spent in the current context at each decision point
and decide whether it has become "too much". Typical decision points
are before/after a function call, before/after a loop or, if it's a
lengthy loop, after every so many iterations. Real time programmers
consider these things all the time.

George
--
for email reply remove "/" from address


Jul 23 '05 #47

On 20 Jul 2005 19:28:16 -0400, "Gene Bushuyev" <sp**@spamguard.com>
wrote:
"George Neuner" <gneuner2/@comcast.net> wrote in message
news:p6********************************@4ax.com...
On 19 Jul 2005 05:20:42 -0400, "Gene Bushuyev" <sp**@smapguard.com>
wrote:
"George Neuner" <gneuner2/@comcast.net> wrote in message
news:um********************************@4ax.com ...
...

Additionally, function calls and returns are natural scheduling points
which the RT programmer is used to considering, whereas exceptions are
more easily overlooked as scheduling points because their timing
effects depend on how many call frames they traverse.

The same is true with error codes: you need to consider how many stack frames
you must traverse until the place where the error can be resolved.

You've entirely missed the point.


I don't think so.

First, the programmer does *not* have the same degree of control
unless *every* function call in the chain is separately protected by
its own try block and any exceptions are manually propagated to the
appropriate frame.


No, it doesn't follow from what I have said.


I think it does.
It doesn't make any sense to
try/catch if you can't do anything with the error.
Tell that to Java fans. Please!
There are only certain
places where error recovery is possible. Either you get there forwarding
error codes from function to function and cleaning resources on the way
manually, or an exception brings you there. There is no need to have
try/catch in every function, because it would mean that every function can
recover from errors, and therefore doesn't need to throw at all.
The manual option and the automatic option are not functionally
equivalent for reasons I've already articulated.

Processing error codes leads to the same amount of manual stack unwinding as
if exceptions were thrown. The same amount of resource cleanup and
recovery still needs to be done. Compiler-generated code for stack
unwinding may add some overhead, which depends on the individual compiler.
Whether that overhead is significant or not depends on the amount of
resource cleanup that needs to be done and on the frequency with which
exceptions are thrown. Maybe embedded applications are completely different,
but in the server data-crunching applications that I'm familiar with,
exceptions add nothing measurable to the program run-time.


Again, it's not about the cumulative time - it's about having control.

Real time operations frequently have microsecond range tolerances.
Such things are usually handled directly by interrupt handlers.
However higher level code which monitors or sequences the operations
may still have millisecond or even sub-millisecond tolerances.
Despite this the program might be expected to accomplish several high
level operations simultaneously.

The total time to pop, say 3 frames, from the stack may be roughly the
same whether the functions return normally or an exception is thrown.
But in the exception case the time to return to the top frame is all
spent in a single indivisible action. In the return case the total
time is spread over 3 actions between which the programmer regains
control.
George
--
for email reply remove "/" from address


Jul 23 '05 #48

George Neuner wrote:
[snip]
I'm asking because what you describe above only seems to make sense
if you go through steps 1-4 after *every* function call that can
fail?


And it is necessary with functions that succeed as well.

Normal practice when writing real time code is to determine the
cumulative time spent in the current context at each decision point
and decide whether it has become "too much". Typical decision points
are before/after a function call, before/after a loop or, if it's a
lengthy loop, after every so many iterations. Real time programmers
consider these things all the time.


I ask a bit more obviously: Is there such a decision point after *every*
function call? If yes, then your original statement that "the C++
exception mechanism doesn't work very well for real time coding" is
correct. If no, then I don't see why real-time code would not gain
something from using exceptions. The programmer would still have full
control: Whenever he wants to introduce a decision point he simply puts
a call (or multiple calls) into a try block. In the catch block he then
does the same as he would do when he gets back an error code from a
failing function.

Regards,

--
Andreas Huber

When replying by private email, please remove the words spam and trap
from the address shown in the header.

Jul 23 '05 #49

On 21 Jul 2005 18:16:54 -0400, "Andreas Huber"
<ah********************@yahoo.com> wrote:
George Neuner wrote:
[snip]
I'm asking because what you describe above only seems to make sense
if you go through steps 1-4 after *every* function call that can
fail?
And it is necessary with functions that succeed as well.

Normal practice when writing real time code is to determine the
cumulative time spent in the current context at each decision point
and decide whether it has become "too much". Typical decision points
are before/after a function call, before/after a loop or, if it's a
lengthy loop, after every so many iterations. Real time programmers
consider these things all the time.


I ask a bit more obviously: Is there such a decision point after *every*
function call?


The strict answer is "no" - decision points are app-specific -
function call sites are just obvious and convenient places to evaluate
the need for a decision. RT isn't a set of rules to follow - it's a
discipline of always being aware of the time consequences of code.

If yes, then your original statement that "the C++
exception mechanism doesn't work very well for real time coding" is
correct. If no, then I don't see why real-time code would not gain
something from using exceptions.


As I said in a previous post, I have no issue with appropriately tamed
exceptions. I believe there is an inherent problem which makes them
unsuitable for control transfers that span more than one or two frames
... something I see not infrequently in conventional applications.
The basic RT coding skill that needs to be acquired is to *always* be
aware of potential timing issues in your code - while first writing
it. Once you've gotten yourself into a timing problem it can be very
difficult to get out of it without a lot of refactoring. RT code
requires careful upfront planning and continual awareness of the time
consequences of the code you are working on - the conventional
desktop/server technique of sketching code and then tweaking it by
profiling usually won't work.

C was designed to do system programming - it has a slight abstraction
penalty relative to assembler which is far more than made up for by
the gain in expressiveness. The more important point is that
virtually nothing is hidden from the programmer.

C++, OTOH, was designed for more conventional application programming
while still *permitting* system programming. Its additional
expressiveness [compared to C] is achieved largely through layered
abstractions which hide the implementation mechanisms from the
programmer. This leads to the vast majority of programmers not having
any clue about the implementation of language constructs or the
abstraction penalties paid for using them.

Naivete regarding the language becomes a major problem when people
[such as the OP of this thread] who have no experience in RT are
pressed into doing it - particularly in situations where no one is
around to teach techniques and explain why things can't or shouldn't
be done in the conventional way the programmer is accustomed to.

A decent C programmer who is new to RT can usually figure out what the
problem is and devise a way around it. My experience has been that
[even experienced] C++ coders attempting to do RT have much more
trouble discovering the cause of their problems and when faced with a
significant problem, the C++ programmer frequently has much more
difficulty resolving it because there are more hidden interactions to
consider. A stroll through comp.arch.embedded will show that I'm far
from alone in this observation.
George
--
for email reply remove "/" from address


Jul 23 '05 #50
