Exception Handling dissection


I have read that exception handling is expensive performance-wise (other
than the throw itself), but exactly how?
Please consider the following example ...

////////////////// Code Block 1 //////////////////////////////////
try
{
    for ( ... ) { /* code that might throw an exception */ }
}
catch ( ... ) { }
//////////////////////////////////////////////////////////////////////////

////////////////// Code Block 2 //////////////////////////////////
for ( ... )
{
    try { /* code that might throw an exception */ }
    catch ( ... ) { }
}
//////////////////////////////////////////////////////////////////////////
(NOTE: Code in CSharp for compactness)

Is the declaration of the try/catch block one of the things that is
performance-intensive, which would mean Code Block 2 is better?

Any comments, links are appreciated.

Thanx in advance
rawCoder
Nov 17 '05 #1
Greetings,

I think that what you've read about is the relative difference between
allowing the exception to be thrown versus testing for potential exception
conditions in code. For instance, if you are going to convert the value
from a TextBox to an integer, you could use int.Parse and catch the
exception as in your example or, alternatively, you could test the contents
of the TextBox to make sure that it can be converted to an integer: is it
all numeric, and is it short enough to fall between int.MinValue and
int.MaxValue?

The latter option may take, just for argument's sake, 20 to 50 lines of
code, and the try/catch may only take half a dozen lines of code. Many new
C# developers look at that ratio and immediately start throwing try/catch
blocks at everything, either in the mistaken belief that it must be faster
(it's less code, after all), or out of laziness, because it takes less
brain work and fewer keystrokes.

The fact is that having the exception thrown is so much more expensive, in
terms of CPU cycles and time, than testing the value ahead of time -
anywhere from tens of times more expensive to well over 1000 times - that
it is well worth the effort to code the validation first.

So the answer to your question is: when you read discussions about the
expense of error handling versus error prevention, the discussion is
probably not so much about the expense of adding the try/catch handler as
about allowing the exception to be thrown in the first place.

If you can test before calling your "Code that might throw exception" and
respond to the condition yourself, then don't use the try/catch block at
all. If you still need to use the try/catch, put it outside the for loop.
Creating the try/catch once has to be better than creating it potentially
hundreds or thousands of times.
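
For illustration, a minimal sketch of the validate-first test (it accepts
only plain non-negative digit strings; sign, whitespace, and culture
handling are left out):

// Rough validation: all ASCII digits, and short enough that int.Parse
// cannot overflow (9 digits is at most 999,999,999 < int.MaxValue).
static bool LooksLikeInt(string s)
{
    if (s == null || s.Length == 0 || s.Length > 9)
        return false;
    foreach (char c in s)
        if (c < '0' || c > '9')
            return false;
    return true;
}

// Usage: validate first instead of catching FormatException.
// if (LooksLikeInt(myTextBox.Text)) { int value = int.Parse(myTextBox.Text); }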

HTH

Dale Preston
MCAD, MCDBA, MCSE
"rawCoder" <ra******@hotmail.com> wrote in message
news:#y**************@tk2msftngp13.phx.gbl...

I have read that Exception Handling is expensive performance wise ( other
than Throw ) , but exactly how ?
Please consider the following example ...

////////////////// Code Block 1 //////////////////////////////////
Try
{
for ( ... ){// Code that might throw exception}
}catch(...){}
//////////////////////////////////////////////////////////////////////////

////////////////// Code Block 2 //////////////////////////////////
for ( ... )
{
Try {// Code that might throw exception}
catch(...){}
}
//////////////////////////////////////////////////////////////////////////
(NOTE: Code in CSharp for compactness)

Is the declaration of try/catch block one of the thing that is performance
intensive, which will mean Code Block 2 is better ?

Any comments, links are appreciated.

Thanx in advance
rawCoder

Nov 17 '05 #2
What exactly is an exceptional circumstance?

You should realize, by struggling to answer this question, that it doesn't
define an exception at all, because an exceptional circumstance is not
necessarily an exception. For instance, reading past the end of a file is
certainly not an exceptional circumstance, but it is considered an
exception. On the other hand, dividing an integer by zero may be
exceptional but not necessarily an exception in certain math applications.

I prefer to define an exception as a violation of an implied assumption. If
you read from a file, the implicit assumption is that there is no more data
once the end-of-file marker is reached; therefore it is an exception to
read past the end of the file. Likewise for divide-by-zero conditions,
where the implied assumption is that the result is a finite number.
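
As a concrete .NET illustration of that definition (both APIs shown are
real framework members): Stream.Read reports end-of-file as an ordinary
return value, while BinaryReader assumes the data is present, so reading
past the end violates that assumption and throws:

using System.IO;

static void ReadAll(Stream stream)
{
    // No implied assumption: end of file is reported as a return value of 0.
    byte[] buffer = new byte[4096];
    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // ... process buffer[0..read) ...
    }

    // Implied assumption: four more bytes exist. Violating it throws
    // EndOfStreamException.
    BinaryReader reader = new BinaryReader(stream);
    // int i = reader.ReadInt32(); // would throw at end of stream
}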

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
"SemiproCappa" <Se**********@discussions.microsoft.com> wrote in message
news:36**********************************@microsof t.com...
It's the actual exception that is the problem not the try...catch. As
long
as exceptions are only used in exceptional circumstances then it won't
really
matter where your try...catch statements are used.

"rawCoder" wrote:

I have read that Exception Handling is expensive performance wise ( other
than Throw ) , but exactly how ?
Please consider the following example ...

////////////////// Code Block 1 //////////////////////////////////
Try
{
for ( ... ){// Code that might throw exception}
}catch(...){}
//////////////////////////////////////////////////////////////////////////

////////////////// Code Block 2 //////////////////////////////////
for ( ... )
{
Try {// Code that might throw exception}
catch(...){}
}
//////////////////////////////////////////////////////////////////////////
(NOTE: Code in CSharp for compactness)

Is the declaration of try/catch block one of the thing that is
performance
intensive, which will mean Code Block 2 is better ?

Any comments, links are appreciated.

Thanx in advance
rawCoder

Nov 17 '05 #3
The rule that is bandied about in this newsgroup goes something like
this.

Use exceptions only for cases that you never expect to come up in the
normal operation of your code. Or, put another way, there should be no
common scenario that results in an exception. Exceptions are, as
SemiproCappa said... exceptional.

For example, if it happens all the time in your system that you look up
a customer number and don't find a customer record, then you should
code a test for that. If, on the other hand, when you look for a
customer record you are "always" supposed to find one, not finding one
is an exception.

Personally, I don't always follow this rule. When parsing user input, I
_do_ use Int32.Parse() and catch the exception, just because the user
can't type fast enough to make a single exception a performance
problem. I'd never do that when reading thousands of rows of database
data, though: I test for invalid field contents "manually" in code
because thousands of exceptions would be a performance problem.
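
A sketch of both halves of that rule (ReadUserNumber, IsValidAmount, and
SumValidAmounts are hypothetical names; IsValidAmount stands for whatever
manual field test applies):

// One keystroke-driven user input: catching the exception is harmless.
static bool ReadUserNumber(string input, out int value)
{
    try { value = Int32.Parse(input); return true; }
    catch (FormatException)   { value = 0; return false; }
    catch (OverflowException) { value = 0; return false; }
}

// Thousands of database rows: test each field manually so that a bad row
// costs a comparison rather than a thrown exception.
static int SumValidAmounts(string[] amountFields)
{
    int sum = 0;
    foreach (string field in amountFields)
        if (IsValidAmount(field))       // cheap manual test, never throws
            sum += Int32.Parse(field);
    return sum;
}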

Nov 17 '05 #4
In the code sample posted, I would not worry about which is more
efficient; note instead that they represent two fundamentally different
algorithms. In Code Block 1, the for loop is broken by an exception. In
Code Block 2, the for loop may continue after an exception.
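
To make the difference concrete (items and ProcessItem are hypothetical):

// Code Block 1: the first exception aborts the whole loop;
// items after the failing one are never processed.
try
{
    foreach (string item in items)
        ProcessItem(item);
}
catch (FormatException) { /* the loop is over */ }

// Code Block 2: a failing item is skipped and the loop continues.
foreach (string item in items)
{
    try { ProcessItem(item); }
    catch (FormatException) { /* log and move on to the next item */ }
}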

As for exceptions:

Herb Sutter concludes: "Distinguish between errors and non-errors. A
failure is an error if and only if it violates a function's ability to
meet its callees' pre-conditions, to establish its own post-conditions,
or to establish an invariant it shares responsibility for maintaining.
Everything else is not an error...

Finally, prefer to use exceptions instead of error codes to report
errors. Use error codes only when exceptions cannot be used ... and for
conditions that are not errors."
Regards,
Jeff

Nov 17 '05 #5
Dale Preston <da******@nospam.nospam> wrote:
> I think that what you've read about is the relative difference between
> allowing the exception to be thrown versus testing for potential
> exception conditions in code. For instance, if you are going to convert
> the value from a TextBox to an integer, you could use int.Parse and
> catch the exception as in your example or, alternatively, you could test
> the contents of the TextBox to make sure that it can be converted to an
> integer: is it all numeric, and is it short enough to fall between
> int.MinValue and int.MaxValue?
>
> The latter option may take, just for argument's sake, 20 to 50 lines of
> code, and the try/catch may only take half a dozen lines of code. Many
> new C# developers look at that ratio and immediately start throwing
> try/catch blocks at everything, either in the mistaken belief that it
> must be faster (it's less code, after all), or out of laziness, because
> it takes less brain work and fewer keystrokes.
Less brainwork is good. In general, the less code I have, the less of
it can be wrong. I often gladly take a performance hit where it's
unimportant in order to get cleaner, more easily readable code. I'd
always rather have a program which does its job properly in 10 seconds
than one which produces the wrong answer in half the time.
> The fact is that having the exception thrown is so much more expensive,
> in terms of CPU cycles and time, than testing the value ahead of time -
> anywhere from tens of times more expensive to well over 1000 times -
> that it is well worth the effort to code the validation first.


For converting the value in a TextBox? No it's not.

Throwing an exception is slower than not throwing an exception, but
exceptions aren't nearly as expensive as some people seem to think.

My laptop can throw an exception a hundred thousand times in a second.
Do you think the potential delay of 0.01 *milliseconds* is going to be
even the slightest bit noticeable to a user?

Exceptions are only likely to cause noticeable performance problems
when they're being thrown a *lot* - such as in a very short loop
executed a large number of times.

There are good reasons for not throwing exceptions when they're not
suitable, in terms of readability and code flow, but performance rarely
comes into it in my experience.
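
That sort of claim is easy to check on your own machine; here is a minimal
sketch using System.Diagnostics.Stopwatch (a .NET 2.0 class; DateTime.Now
is accurate enough at this scale on 1.1):

using System;
using System.Diagnostics;

class ThrowCost
{
    static void Main()
    {
        const int iterations = 100000;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            try { throw new InvalidOperationException(); }
            catch (InvalidOperationException) { /* swallowed for timing only */ }
        }
        sw.Stop();
        Console.WriteLine("{0} throws took {1} ms ({2} ms per throw)",
            iterations, sw.ElapsedMilliseconds,
            (double)sw.ElapsedMilliseconds / iterations);
    }
}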

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #6
RawCoder,

What is the difference? Somewhere, a piece of code is set up to catch an
event and to handle it when an event of that type is raised. The structure
of the language dictates that it has to be in that place.

I am almost sure that Jay will give an answer in this thread as well; in
case he misses it, here is a message from him about general exceptions.
(If he does not see this, you can also search Google newsgroups for "Jay
general exceptions".)

http://groups-beta.google.com/group/...fe1015e7ef70e0

I hope this helps,

Cor
Nov 17 '05 #7
Daniel Jin <Da*******@discussions.microsoft.com> wrote:
> > There are good reasons for not throwing exceptions when they're not
> > suitable, in terms of readability and code flow, but performance rarely
> > comes into it in my experience.
>
> this is really one area that could use some clear guidelines. I've seen
> literature make statements like "exceptions are a performance hit" and
> "they should be reserved for exceptional situations". but nothing really
> outlines how they should really be used.


I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations.
> for example. in an n-layered architecture, somewhere in the BL, certain
> validations will be performed based on various business rules. and
> operations should fail when prerequisites are not met. are these
> exceptional situations since we are clearly expecting these conditions
> to occur? and how do we indicate these to the presentation? return
> values (as used in many samples) just seem to be such an antiquated
> method. in the end I went with the exception route, contrary to what
> many of the guidelines seem to suggest.


Indeed, I probably err more towards exceptions than away from them too.
They're so much easier than checking return values everywhere when it
comes to simply aborting an operation :)

For me, it comes down to what makes it clearest to everyone what's
going on. There are times when exceptions clearly *aren't* appropriate
(such as terminating the iteration of a collection by keeping going
until an exception is thrown, when it's perfectly easy to avoid that in
the first place) but there are lots of times when they're the natural
solution but people avoid them because they've been told that
exceptions are hugely expensive.

That said, where appropriate it's good to have a validation method
which can validate parameters without attempting to actually perform an
operation - and the operation itself can throw the exception if it's
still given invalid parameters etc.
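
A sketch of that shape (OrderService and its members are hypothetical
names):

public class OrderService
{
    // Callers that expect bad input can check first, with no exception cost...
    public static bool IsValidQuantity(int quantity)
    {
        return quantity > 0 && quantity <= 1000;
    }

    // ...but the operation still defends its own contract, and throws if
    // the caller skipped (or botched) the validation.
    public void PlaceOrder(int quantity)
    {
        if (!IsValidQuantity(quantity))
            throw new ArgumentOutOfRangeException("quantity");
        // ... perform the operation ...
    }
}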

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #8
Well, luckily, the TryParse method that is in the Double class (OK...
Double struct, for the nit-picky) will be included in all of the integral
types in v2.0 of the .NET Framework. That should reduce some
coding-by-exception practices.
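
For reference, the 2.0 pattern looks like this (int.TryParse is the real
2.0 method; myTextBox is a hypothetical TextBox):

int value;
if (int.TryParse(myTextBox.Text, out value))
{
    // use value - no exception was ever in play
}
else
{
    // report invalid input
}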

But I agree with you that there is no clear cut or single answer.

No matter what, good code is always better than bad code, and testing for
likely errors makes for better code than just throwing errors - from a
coding perspective. From a business perspective, we all know that not all
projects have the budget and schedule to allow us to do everything by best
practices. Sometimes we have to make compromises, following the
highest-priority best practices and letting some go because of budget and
time constraints.

So, in that regard, the best code is the code that gets the client or user
the functionality they require, with risks, bugs, and performance all at
levels they can live with, and on time and on budget - even if that means
that we code by exception at times.

Dale
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.c om...
Daniel Jin <Da*******@discussions.microsoft.com> wrote:
There are good reasons for not throwing exceptions when they're not
suitable, in terms of readability and code flow, but performance rarely comes into it in my experience.


this is really one area that could use some clear guidelines. I've seen
literature make statements like "exceptions are a performance hit" and "they should be reserved for exceptional situations". but nothing really outlines how they should really be used.


I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations.
for example. in a n-layered architecture, somewhere in the BL, certain
validations will be performed based on various business rules. and
operations should fail when pre-requisits are not met. are these exceptional situations since we are clearly expecting these conditions to occur? and how do we indicate these to the presentation? return values (as used in many samples) just seem to be such an antiquated method. in the end I went with the exception route, contrary to what many of the guidelines seem to
suggest.


Indeed, I probably err more towards exceptions than away from them too.
They're so much easier than checking return values everywhere to abort
an operation simply :)

For me, it comes down to what makes it clearest to everyone what's
going on. There are times when exceptions clearly *aren't* appropriate
(such as terminating the iteration of a collection by keeping going
until an exception is thrown, when it's perfectly easy to avoid that in
the first place) but there are lots of times when they're the natural
solution but people avoid them because they've been told that
exceptions are hugely expensive.

That said, where appropriate it's good to have a validation method
which can validate parameters without attempting to actually perform an
operation - and the operation itself can throw the exception if it's
still given invalid parameters etc.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too

Nov 17 '05 #9
> I suspect that's because it varies so much, to be honest. It's
> difficult to get hard and fast rules which apply in all situations.

That's not true at all. The hard and fast rule is: throw an exception when
an assumption is violated. That's it. That's all you need to know and do.

C++ used to make it infinitely easier to explicitly publish implicit
assumptions through a method's signature. I'm not sure why C# did not adopt
this approach - it would make things a lot easier. The absence of
explicitly published assumptions is one reason for confusion. Notice how
that confusion is absent in well written C++.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.c om... Daniel Jin <Da*******@discussions.microsoft.com> wrote:
> There are good reasons for not throwing exceptions when they're not
> suitable, in terms of readability and code flow, but performance rarely
> comes into it in my experience.


this is really one area that could use some clear guidelines. I've seen
literature make statements like "exceptions are a performance hit" and
"they
should be reserved for exceptional situations". but nothing really
outlines
how they should really be used.


I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations.
for example. in a n-layered architecture, somewhere in the BL, certain
validations will be performed based on various business rules. and
operations should fail when pre-requisits are not met. are these
exceptional
situations since we are clearly expecting these conditions to occur? and
how
do we indicate these to the presentation? return values (as used in many
samples) just seem to be such an antiquated method. in the end I went
with
the exception route, contrary to what many of the guidelines seem to
suggest.


Indeed, I probably err more towards exceptions than away from them too.
They're so much easier than checking return values everywhere to abort
an operation simply :)

For me, it comes down to what makes it clearest to everyone what's
going on. There are times when exceptions clearly *aren't* appropriate
(such as terminating the iteration of a collection by keeping going
until an exception is thrown, when it's perfectly easy to avoid that in
the first place) but there are lots of times when they're the natural
solution but people avoid them because they've been told that
exceptions are hugely expensive.

That said, where appropriate it's good to have a validation method
which can validate parameters without attempting to actually perform an
operation - and the operation itself can throw the exception if it's
still given invalid parameters etc.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too

Nov 17 '05 #10
"Alvin Bruney [MVP - ASP.NET]" <www.lulu.com/owc> wrote in message
news:Ou**************@TK2MSFTNGP09.phx.gbl...
I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations. That's not true at all. The hard and fast rule is throw an exception when
an assumption is violated.
That's it. That's all you need to know and do.

C++ used to make it infinitely easier to explicitly publish implicit
assumptions thru a methods signature. I'm not sure why c# did not adopt
this approach - it would make things a lot easier. The absence of
explicitly published assumptions is one reason for confusion. Notice how
that confusion is absent in well written C++.


Java has this "feature" too; it is called "checked exceptions". When I
started to use Java, in '96, I thought it was a good idea because it
seemed to enforce stronger compile-time verification, but after struggling
a lot with them, I came to the conclusion that checked exceptions are a
"bad" good idea and that they do a lot more harm than good, for many
reasons, the main one being that they encourage the programmer to catch
exceptions locally instead of letting them bubble up to a generic catch
handler. The end result is code that is polluted with catch clauses all
over the place, and usually very poor exception handling in the end.

So, "checked exceptions" are a bad thing, and actually, if you analyze the
Java libraries, you will see that all the early ones (the JDK of course)
made extensive use of them, and that the more recent ones tend to reject
them. And some Java gurus advocate against them (see
http://www.mindview.net/Etc/Discussi...ckedExceptions from Bruce Eckel,
the author of "Thinking in Java").

So, the C# designers made the right choice here.

Bruno

Nov 17 '05 #11
Alvin Bruney [MVP - ASP.NET] wrote:
> C++ used to make it infinitely easier to explicitly publish implicit
> assumptions through a method's signature. I'm not sure why C# did not
> adopt this approach - it would make things a lot easier. The absence of
> explicitly published assumptions is one reason for confusion. Notice how
> that confusion is absent in well written C++.


C++ (thankfully :) doesn't have compile-time checked exceptions like
Java, but runtime-checked ones. C++ throw declarations have semantics
which severely limit their usefulness.

A function invocation which throws something not declared in the throw
clause doesn't get to pass that exception up the stack but instead
invokes std::unexpected. std::unexpected is not allowed to return; it
must abort the program, throw a "bad_exception", or throw an exception
in the throw clause. Note that std::unexpected is a global function, so
you really can't expect anything other than "bad_exception" or the
default: std::terminate().

This means that "throw InvalidArgument" doesn't publish any assumptions;
it restricts which errors the caller is *allowed* to react on.

Especially anywhere using decoupling, type-limiting on exceptions just
really isn't that useful.

It's nice to have destructors that don't throw, but writing "throw()"
really doesn't help any more than /* doesn't throw */.

--
Helge Jensen
mailto:he**********@slog.dk
sip:he**********@slog.dk
-=> Sebastian cover-music: http://ungdomshus.nu <=-
Nov 17 '05 #12
<"Alvin Bruney [MVP - ASP.NET]" <www.lulu.com/owc>> wrote:
I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations. That's not true at all. The hard and fast rule is throw an exception when an
assumption is violated.
That's it. That's all you need to know and do.


That's just a restatement of the problem in terms of assumptions rather
than exceptions. (I'd use the word "contract" rather than "assumption"
though - if I *assumed* a parameter would be valid, I wouldn't then
check its validity and throw an exception before doing any work.)

This doesn't determine, as far as I can see, when a business constraint
should trigger an exception and when it should trigger some other kind
of information passing (whether that's return value or whatever else.)
> C++ used to make it infinitely easier to explicitly publish implicit
> assumptions through a method's signature. I'm not sure why C# did not
> adopt this approach - it would make things a lot easier. The absence of
> explicitly published assumptions is one reason for confusion. Notice how
> that confusion is absent in well written C++.


Without knowing C++ well, I don't know exactly what you mean. Could you
elaborate?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #13

"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.c om...
<"Alvin Bruney [MVP - ASP.NET]" <www.lulu.com/owc>> wrote:
>I suspect that's because it varies so much, to be honest. It's
>difficult to get hard and fast rules which apply in all situations. That's not true at all. The hard and fast rule is throw an exception when
an
assumption is violated.
That's it. That's all you need to know and do.


That's just a restatement of the problem in terms of assumptions rather
than exceptions. (I'd use the word "contract" rather than "assumption"
though - if I *assumed* a parameter would be valid, I wouldn't then
check its validity and throw an exception before doing any work.)


This doesn't determine, as far as I can see, when a business constraint
should trigger an exception and when it should trigger some other kind
of information passing (whether that's return value or whatever else.)


I agree. I think the question of when to throw an exception versus
returning a sentinel value is one of the least understood and most
error-prone aspects of .NET/C#.

I find that in practical code it often gets messy because many developers
wind up dealing with both exceptions and error codes, rather than one or
the other, and the result is that it actually increases complexity rather
than reduces it.

Many of the decisions revolve around issues of performance. For example,
if the business logic is in a server that is dealing with hundreds or
thousands of requests/transactions in a batch format, it might make more
sense to identify errors in a status structure, one per transaction,
rather than throw an exception for every error across a machine boundary.
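
A sketch of that status-structure approach (TransactionResult,
Transaction, and ProcessOne are all hypothetical names):

// One result per transaction; a failed transaction sets a status instead
// of throwing an exception across the batch boundary.
public class TransactionResult
{
    public int    TransactionId;
    public bool   Succeeded;
    public string ErrorMessage;   // null when Succeeded is true
}

public TransactionResult[] ProcessBatch(Transaction[] batch)
{
    TransactionResult[] results = new TransactionResult[batch.Length];
    for (int i = 0; i < batch.Length; i++)
        results[i] = ProcessOne(batch[i]);  // records failure, never throws for it
    return results;
}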

But if performance is the only concern, then eventually faster processors
will make some of those concerns less important. I find the issue of the
number of code paths to be more troubling. If the code deals with both
exceptions and other forms of error codes, then the result is less
reliable code, not more. Essentially, if I see code using exceptions to
determine flow control, then there's a problem with the design. But I also
find it troubling to examine code that has no exception handling at all -
relying on an unhandled-exception handler is poor design.


Nov 17 '05 #14
David Levine <no******************@wi.rr.com> wrote:
> I agree. I think the question of when to throw an exception versus
> returning a sentinel value is one of the least understood and most
> error-prone aspects of .NET/C#.

Yup.

> I find that in practical code it often gets messy because many
> developers wind up dealing with both exceptions and error codes, rather
> than one or the other, and the result is that it actually increases
> complexity rather than reduces it.

Absolutely. Dealing with both is a nightmare.

> Many of the decisions revolve around issues of performance. For example,
> if the business logic is in a server that is dealing with hundreds or
> thousands of requests/transactions in a batch format, it might make more
> sense to identify errors in a status structure, one per transaction,
> rather than throw an exception for every error across a machine boundary.

Yup - there are certainly times when performance *is* important and
exceptions would prove prohibitive. Unfortunately, the "exceptions are
slow" mantra has gone *way* over the top, to the extent where people
don't really ask themselves whether the performance hit is actually a
problem in their situation.

> But if performance is the only concern, then eventually faster
> processors will make some of those concerns less important. I find the
> issue of the number of code paths to be more troubling. If the code
> deals with both exceptions and other forms of error codes, then the
> result is less reliable code, not more. Essentially, if I see code using
> exceptions to determine flow control, then there's a problem with the
> design. But I also find it troubling to examine code that has no
> exception handling at all - relying on an unhandled-exception handler is
> poor design.


What exactly do you mean by "using exceptions to determine flow
control", though? To me, using exceptions to quickly and reliably (in
the absence of anything deliberately catching an exception too early)
abort a potentially "deep" operation *is* using them to determine
flow control, but in a good way. What kind of thing are you thinking of
as being definitely bad?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #15
I catch your drift about "bad" good ideas, but you can't fault the
language for programmer misuse. That's bound to happen anyway. It
doesn't/shouldn't detract from the value, though, IMO.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
"Bruno Jouhier [MVP]" <bj******@club-internet.fr> wrote in message
news:Oz**************@TK2MSFTNGP09.phx.gbl...
"Alvin Bruney [MVP - ASP.NET]" <www.lulu.com/owc> wrote in message
news:Ou**************@TK2MSFTNGP09.phx.gbl...
>I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations.

That's not true at all. The hard and fast rule is throw an exception when
an assumption is violated.
That's it. That's all you need to know and do.

C++ used to make it infinitely easier to explicitly publish implicit
assumptions thru a methods signature. I'm not sure why c# did not adopt
this approach - it would make things a lot easier. The absence of
explicitly published assumptions is one reason for confusion. Notice how
that confusion is absent in well written C++.


Java has this "feature" too, it is called "checked exceptions". When I
started to use Java, in 96, I thought that it was a good idea because it
seems to enforce stronger compile time verifications, but after struggling
a lot with them, I came to the conclusion that checked exceptions are a
"bad" good idea and that they do a lot more harm than good, for many
reasons, the main one being that they encourage the programmer to catch
exceptions locally instead of letting them bubble up to a generic catch
handler. The end result is code that is polluted with catch clauses all
over the place, and usually very poor exception handling in the end.

So, "checked exception" is a bad thing, and actually, if you analyze the
Java libraries, you will see that all the early ones (the JDK of course)
made extensive use of them, and that the more recent ones tend to reject
them. And some Java gurus advocate against them (see
http://www.mindview.net/Etc/Discussi...ckedExceptions from Bruce
Eckel, the author of "Thinking in Java").

So, the C# designers made the right choice here.

Bruno

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
"Jon Skeet [C# MVP]" <sk***@pobox.com> wrote in message
news:MP************************@msnews.microsoft.c om...
Daniel Jin <Da*******@discussions.microsoft.com> wrote:
> There are good reasons for not throwing exceptions when they're not
> suitable, in terms of readability and code flow, but performance
> rarely
> comes into it in my experience.

this is really one area that could use some clear guidelines. I've
seen
literature make statements like "exceptions are a performance hit" and
"they
should be reserved for exceptional situations". but nothing really
outlines
how they should really be used.

I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations.

for example. in a n-layered architecture, somewhere in the BL, certain
validations will be performed based on various business rules. and
operations should fail when pre-requisits are not met. are these
exceptional
situations since we are clearly expecting these conditions to occur?
and how
do we indicate these to the presentation? return values (as used in
many
samples) just seem to be such an antiquated method. in the end I went
with
the exception route, contrary to what many of the guidelines seem to
suggest.

Indeed, I probably err more towards exceptions than away from them too.
They're so much easier than checking return values everywhere to abort
an operation simply :)

For me, it comes down to what makes it clearest to everyone what's
going on. There are times when exceptions clearly *aren't* appropriate
(such as terminating the iteration of a collection by keeping going
until an exception is thrown, when it's perfectly easy to avoid that in
the first place) but there are lots of times when they're the natural
solution but people avoid them because they've been told that
exceptions are hugely expensive.

That said, where appropriate it's good to have a validation method
which can validate parameters without attempting to actually perform an
operation - and the operation itself can throw the exception if it's
still given invalid parameters etc.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too



Nov 17 '05 #16
> This means that "throw InvalidArgument" doesn't publish any assumptions;
> it restricts which errors the caller is *allowed* to react on.

Well, that's the whole point. It's poor design to catch any and everything
in the first place (a few situations do warrant that kind of practice, by
the way). But in general, handling an exception indicates the caller's
intent to take action on the issue. All other exceptions should be left to
bubble up. So yes, unexpected should be left to bring down the house -
quite rightly.

> Especially anywhere using decoupling, type-limiting on exceptions just
> really isn't that useful.

I disagree strongly. You will need to justify your position.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
"Helge Jensen" <he**********@slog.dk> wrote in message
news:Ou**************@TK2MSFTNGP12.phx.gbl... Alvin Bruney [MVP - ASP.NET] wrote:
C++ used to make it infinitely easier to explicitly publish implicit
assumptions thru a methods signature. I'm not sure why c# did not adopt
this approach - it would make things a lot easier. The absence of
explicitly published assumptions is one reason for confusion. Notice how
that confusion is absent in well written C++.


C++ (thankfully :) doesn't have compile-time checked exceptions like JAVA,
but runtime-checked. C++ throw declarations have semantics which severely
limits their usefullness.

A function-invocation which throws something not declared in the throw
clause doesn't get to pass that exception up the stack but instead invokes
std::unexpected. std::unexpected is not allowed to return, but must abort
the program, throw a "bad_exception", or throw an exception in the throw
clause. Note that std::unexpected is a global function, so you really
can't expect anything other than "bad_exception" or the default:
std::terminate().

This means that "throw InvalidArgument" doesn't publish any assumptions,
it restricts which errors the caller is *allowed* to react on.

Especially anywhere using decoupling, type-limiting on exceptions just
really isn't that usefull.

It's nice to have destructors that doesn't throw, but writing "throw()"
really doesn't help any more than /* doesn't throw */.

--
Helge Jensen
mailto:he**********@slog.dk
sip:he**********@slog.dk
-=> Sebastian cover-music: http://ungdomshus.nu <=-

Nov 17 '05 #17
> What exactly do you mean by "using exceptions to determine flow
> control", though? To me, using exceptions to quickly and reliably (in
> the absence of anything deliberately catching an exception too early)
> abort a potentially "deep" operation *is* using them to determine
> flow control, but in a good way. What kind of thing are you thinking of
> as being definitely bad?

The prototypical example is using int.Parse rather than int.TryParse. Back
before I knew there was a TryParse, I used to write code like this...

try
{
    int.Parse(someString);
    // execute code based on the fact that the input was an integer
}
catch (Exception)
{
    // do something different in this path and continue executing
}

It really gets bad when I see code that catches an exception and returns a
bool (false) - i.e., converts an exception to a return value. Quite often
the routine that sees the false return value turns around and throws an
exception (converts it back the other way... ouch!).

These are extreme examples, but I have seen them.
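
The round trip looks something like this (Record and Save are hypothetical
names):

// Layer 1 swallows the exception and converts it to a bool...
static bool TrySave(Record record)
{
    try { Save(record); return true; }
    catch (Exception) { return false; }   // all context is lost here
}

// ...and layer 2 converts the bool straight back into an exception,
// minus the original cause. Ouch.
static void SaveOrFail(Record record)
{
    if (!TrySave(record))
        throw new ApplicationException("Save failed");
}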

Bottom line - the distinction between a failure that violates the
programmatic assumptions of a method, which should result in an exception,
and the normal failures that occur (record not found), which should use a
return value, is very thin and blurry. Coming up with practical guidelines
to cover all cases is IMO very difficult; not quite pointless, but almost.

I think it would also be useful to have the language or environment itself
aid us in laying out exception handling policies and implementations. In
other words, which modules are supposed to catch which exceptions that can
be generated by other modules? It isn't possible to look at a method and
determine which exceptions can escape it, or which module, method, or
whatever is supposed to handle them. We currently get no help whatsoever
from the environment here. At least with sentinel values we knew that it
was always the caller's responsibility to deal with the result, even if
that meant explicitly passing it upstream. Now we don't even know that
much. I'm not saying I prefer return values (I don't), but the current
situation really puts a larger burden on the designer than the previous
system did.



Nov 17 '05 #18
On 2005-04-15, Jon Skeet C# MVP <sk***@pobox.com> wrote:
> <"Alvin Bruney [MVP - ASP.NET]" <www.lulu.com/owc>> wrote:
> > > I suspect that's because it varies so much, to be honest. It's
> > > difficult to get hard and fast rules which apply in all situations.
>
> <snip>
>
> > C++ used to make it infinitely easier to explicitly publish implicit
> > assumptions through a method's signature. I'm not sure why C# did not
> > adopt this approach - it would make things a lot easier. The absence
> > of explicitly published assumptions is one reason for confusion.
> > Notice how that confusion is absent in well written C++.
>
> Without knowing C++ well, I don't know exactly what you mean. Could you
> elaborate?


I think he is referring to the C++ throw specification... It's similar
to Java's throws, but it isn't as strict :) It might look something
like:

// no throw
void aFunc () throw ();

// the function can only throw bad_alloc
void anotherFunc () throw (bad_alloc);

There is special behavior if you throw something not in the exception
list, but I don't remember the exact details... (boy, it's been a while
:)

--
Tom Shelton [MVP]
Nov 17 '05 #19
> > This doesn't determine, as far as I can see, when a business
> > constraint should trigger an exception and when it should trigger some
> > other kind of information passing (whether that's return value or
> > whatever else.)
>
> I agree. I think the question of when to throw an exception versus
> returning a sentinel value is one of the least understood and most
> error-prone aspects of .NET/C#.


This is the central issue, and I think that the right way to go is to have
"rich" APIs, to make a clear distinction between "special cases" and
"exceptions" and to deal with special cases through return codes or sentinel
values rather than through EH mechanisms.

The basic idea is to have pairs of entry points like:

int Parse(string); // throws an exception if the string does not
represent an int
bool TryParse(string, out int); // returns false if the string does not
represent an int

FileStream OpenFile(string name); // never returns null, throws an
exception if the file does not exist
FileStream TryOpenFile(string name); // returns null if the file does not
exist (but still throws if the file exists and cannot be opened)

Then, depending on the context, you call one or the other:

1) If you are in a situation where the "exception" must be dealt with
"locally", i.e. where you would put a try/catch around the call to catch a
FileNotFoundException, then you use the "Try" form and you don't catch any
exception locally.

2) Otherwise, you use the non-Try form and you let the exception bubble up.

If you are in case 1, it means that the exception that you would be
catching is not really an exception; it is a "special case" that you are
actually expecting and upon which you need to react in a special way. For
example, if you are parsing user input, you know in advance that the input
may not be valid, and you know that you have to handle this "special
case". So, you should use TryParse. Also, if you are trying to open a file
by looking up a list of search paths (if not found in path1, try path2,
...), then the fact that the file does not exist is not really an
exception; it is something that is part of your design, and you should use
TryOpenFile.

If you are in case 2, it means that the exception is really an exception,
i.e. something caused by an "abnormal" context of execution, and you do
not have any "local" action to take to deal with it. For example, if you
are parsing a stream that has been formatted by another program according
to a well defined protocol that ensures that ints are correctly formatted,
you should use Parse rather than TryParse. Also, if you are trying to open
a vital file that has been set up by your installation program, or if you
are trying to open a file that another method has created just before, you
should use OpenFile rather than TryOpenFile.
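
A minimal sketch of such a pair (TryOpenFile here is a hypothetical
helper, not a framework API; note that the File.Exists test is subject to
a race if another process deletes the file between the check and the
open):

using System.IO;

static class FileApi
{
    // Non-Try form: a missing file is exceptional "by design", so let
    // FileNotFoundException bubble up to the generic handler.
    public static FileStream OpenFile(string name)
    {
        return new FileStream(name, FileMode.Open, FileAccess.Read);
    }

    // Try form: "file not there" is an expected special case, reported as
    // null; other failures (locked file, bad disk) still throw.
    public static FileStream TryOpenFile(string name)
    {
        if (!File.Exists(name))
            return null;
        return new FileStream(name, FileMode.Open, FileAccess.Read);
    }
}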

Of course, this approach forces you to duplicate some entry points in your
APIs, but it has many advantages:

* You reduce the amount of EH code. You get rid of all the local try/catch,
and you only need a few try/catchall in "strategic" places of your
application, where you are going to log the error, alert the user, and
continue. With this scheme, exceptions are handled in a uniform way (you
don't need to discriminate amongst exception types) and the EH code becomes
very simple (only 2 basic patterns for try/catch) and very robust.

* You clearly separate the "application" logic from the "exception
handling" logic. All the "special cases" that your application must deal
with are handled via "normal" constructs (return values, if/then/else),
and all the "exceptional" cases are handled in a uniform way and go
through the try/catchall constructs that you have put in strategic places.
You can review the application logic without having to analyze complex
try/catch constructs spread throughout the code. You can also more easily
review the EH and verify that all exceptions will be properly logged and
that the user will know about them if he needs to, without having to go
into the details of the application logic.

* It enforces clear "contracts" on your methods: OpenFile and TryOpenFile
do basically the same thing, but they have different contracts, and
choosing one or the other "means" something: if you read a piece of code
and see a call to TryOpenFile, you know that there is no guarantee "by
design" that the file will be there; on the other hand, if you see a call
to OpenFile, you know that the file should be there "by design" (it was
created by another method before, or it is a vital file created by the
installation program, etc.). Of course, the fact that the file should be
there "by design" does not mean that it will always be there, but from
your standpoint, the fact that it would not be there is just as
exceptional as it being corrupt or the disk having bad sectors, and the
best thing your program can do in this case is log the error with as much
information as possible and tell the user that something abnormal
happened.

* You will get optimal performance because the exception channel will only
be used in exceptional situations (and the cost of logging the exception
will probably outweigh the cost of catching it anyway).

So, when I see "local" EH constructs that catch "specific" exceptions, my
first reaction is: API problem!
In some cases, the caller is at fault and should use another API or
perform some additional tests before the call.
In other cases, the callee is at fault because it provided an incomplete
API that does not offer a way to perform this specific test without
catching an exception. In this second case, we have to review the API and
enhance it (unless it is a third-party API that we don't control, in which
case we usually introduce a helper method that does the dirty try/catch
work and exposes the "richer" API).
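
Such a helper might look like this (ThirdParty.Lookup and
NotFoundException are hypothetical stand-ins for a vendor API you don't
control):

// Does the dirty try/catch work once, exposing the richer "Try" API.
static bool TryLookup(string key, out string value)
{
    try
    {
        value = ThirdParty.Lookup(key);  // vendor call: throws when not found
        return true;
    }
    catch (NotFoundException)            // hypothetical vendor exception type
    {
        value = null;
        return false;
    }
}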

Moral: don't "program" with exceptions (by catching specific exceptions),
but design good APIs that will let you program without them (by letting
the real exceptions bubble up to a generic catch handler).

Bruno.
Nov 17 '05 #20

"Alvin Bruney [MVP - ASP.NET]" <www.lulu.com/owc> wrote in message
news:%2****************@TK2MSFTNGP12.phx.gbl...
i catch your drift about bad *good ideas, but you can't fault the language
for programmer misuse. That's bound to happen anyway. It doesn't/shouldn't
detract from value though IMO
I don't agree. Of course, no language will prevent a bad programmer from
writing bad code, but the language designers should make their best efforts
to encourage good programming practices and make it harder for people to
write bad code (see the old debates about goto). And, IMO, "checked
exceptions" encourage bad programming practices like using exceptions as a
flow control mechanism in situations where the flow should be expressed
though if/then/else tests (see my other post).

Bruno


Nov 17 '05 #21

"Bruno Jouhier [MVP]" <bj******@club-internet.fr> wrote in message
news:uZ**************@TK2MSFTNGP10.phx.gbl...
This doesn't determine, as far as I can see, when a business constraint
should trigger an exception and when it should trigger some other kind
of information passing (whether that's return value or whatever else.)

I agree. I think the question of when to throw an exception versus
returning a sentinel value is one of the least understood and most error
prone aspects of .NET/C#.


This is the central issue, and I think that the right way to go is to have
"rich" APIs, to make a clear distinction between "special cases" and
"exceptions" and to deal with special cases through return codes or
sentinel values rather than through EH mechanisms.

The basic idea is to have pairs of entry points like:

int Parse(string); // throws exception if string does not represent an int
bool TryParse(string, ref int); // returns false if string does not
represent an int

FileStream OpenFile(string name) // never returns null, throw exception if
file does not exist
FileStream TryOpenFile(string name) // returns null if file does not exist
(but still throws exception if file exists but cannot be opened)

Then, depending on the context, you call one or the other:

1) If you are in a situation where the "exception" must be dealt with
"locally", i.e. where you would put a try/catch around the call to catch a
FileNotFoundException, then you use the "Try" form and you don't catch any
exception locally.

2) Otherwise, you use the non-Try form and you let the exception bubble
up.


I think there are times where this approach works well and is the one I
would take, so I partially agree.

But I think that this really only works well in the small; it does not
scale well, either into large projects with large development teams or
just with components in general. It essentially requires that all APIs
double in size, one for each variant. It also means that you have
effectively moved flow control from the caller to the callee. One could
argue that this is still an improvement, but I'm not sure I agree.

It means that the surface area of your API just doubled, needs
documentation and testing, etc. A supplier of a library/component could
never be sure which APIs needed this doubling, because typically you don't
always know how someone else will use it, so you would have to double
almost all APIs just to be sure you caught all the cases.

Another side effect is that the number of combinations of calls into your
API just increased exponentially, and issues related to the
coherency/consistency between the different code paths need to be addressed.
For example, if there are two methods, each with a throw/non-throw variant,
then there are 4 combinations of calls that can be made...

{ Method1_Throws();    Method2_Throws();    }
{ Method1_Throws();    Method2_NonThrows(); }
{ Method1_NonThrows(); Method2_Throws();    }
{ Method1_NonThrows(); Method2_NonThrows(); }

And each combination needs to be tested to ensure that invariants are
maintained, fields are getting set/reset correctly, caches are correct, etc.

It also does not address the problems developers face with 3rd party
libraries that do not supply the either/or API - a wrapper for each API
would need to be written that wrapped the one that threw the exception and
returned a sentinel value (or vice-versa), otherwise the try-catch flow
control code goes back into the main body of code.


> If you are in case 2, it means that the exception is really an
> exception, i.e. something caused by an "abnormal" context of execution,
> and you do not have any "local" action to take to deal with it. For
> example, if you are parsing a stream that has been formatted by another
> program according to a well defined protocol that ensures that ints are
> correctly formatted, you should use Parse rather than TryParse. Also, if
> you are trying to open a vital file that has been set up by your
> installation program, or if you are trying to open a file that another
> method has created just before, you should use OpenFile rather than
> TryOpenFile.

I agree with the intent, but the problem is that in any decent-sized
project there are thousands of decision points like the ones you describe,
and many of them are a lot less obvious. It is not at all obvious when one
should use one versus the other.


> * You reduce the amount of EH code. You get rid of all the local
> try/catch, and you only need a few try/catchall in "strategic" places of
> your application, where you are going to log the error, alert the user,
> and continue. With this scheme, exceptions are handled in a uniform way
> (you don't need to discriminate amongst exception types) and the EH code
> becomes very simple (only 2 basic patterns for try/catch) and very robust.

Agreed.

* You clearly separate the "application" logic from the "exception
handling" logic. All the "special cases" that your application must deal
with are handled via "normal" constructs (return values, if/then/else),
and all the "exceptional" cases are handled in a uniform way and go
through the try/catchall constructs that you have put in strategic places.
Again, I think this moves the flow control more than it eliminates it.
You can review the application logic without having to analyze complex
try/catch constructs spread throughout the code. You can also more easily
review the EH and verify that all exceptions will be properly logged and
that the user will know about them if he needs to, without having to go
into the details of the application logic.
Agreed, with the proviso that I disagree with the notion that one should
never catch an exception that cannot be programmatically recovered from. I
believe there are many cases where it is beneficial to catch-wrap-throw as a
means of adding context information to the exception. The reason is that one
of the primary beneficiaries is the end-user. The user gets no benefit
whatsoever from a message that says "Null reference exception" - it might as
well say "I fell down and can't get up." What should the user do to fix it
and continue? Adding context will provide a more meaningful message and
ideally will aid the user in determining what to do to fix or work around
the problem so they can get their work done. So my recommendation is to use
try-catch statements at strategic points as a means of adding context, not
as a program flow device.
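
In code, the catch-wrap-throw I am describing is just this (the operation
and the message are illustrative, not from a real API):

using System;

public class OrderImporter
{
    public void Import(string path)
    {
        try
        {
            ImportOrders(path); // may fail deep inside for many reasons
        }
        catch (Exception ex)
        {
            // Wrap to add context the user can act on; the original
            // exception is preserved as InnerException for the log.
            throw new ApplicationException(
                "Could not import orders from '" + path + "'. Verify that " +
                "the file exists and is not open in another program.", ex);
        }
    }

    private void ImportOrders(string path)
    {
        // ... parsing and database work elided ...
    }
}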

I don't believe this will unduly affect performance because the code is
already in the exception path, so the additional overhead will likely not be
noticeable.

* It enforces clear "contracts" on your methods: OpenFile and TryOpenFile
do basically the same thing but they have different contracts and choosing
one or the other "means" something: if you read a piece of code and see a
call to TryOpenFile, you know that there is no guarantee "by design" that
the file will be there; on the other hand, if you see a call to OpenFile,
you know that the file should be there "by design" (it was created by
another method before, or it is a vital file created by the installation
program, etc.). Of course, the fact that the file should be there "by
design" does not mean that it will always be there, but from your
standpoint, the fact that it would not be there is just as exceptional as
it being corrupt or the disk having bad sectors, and the best thing your
program can do in this case is log the error with as much information as
possible and tell the user that something abnormal happened.

* You will get optimal performance because the exception channel will only
be used in exceptional situations (and the cost of logging the exception
will probably outweigh the cost of catching it anyway).
It is not necessary to log at each try-catch handler. I recommend logging at
the initial catch site and again (if necessary) if the exception is about to
leave the module boundary.

So, when I see "local" EH constructs that catch "specific" exceptions, my
first reaction is: API Problem!
In some cases, the caller is at fault and he should use another API or
perform some additional tests before the call.
In other cases, the callee is at fault because he provided an incomplete
API that does not provide a way to perform this specific test without
catching an exception. In this second case, we have to review the API and
enhance it (unless it is a third party API that we don't control, in which
case we usually introduce a helper method that does the dirty try/catch
work and exposes the "richer" API).

Moral: Don't "program" with exceptions (by catching specific exceptions),
but design good APIs that will let you program without them (by letting
the real exceptions bubble up to a generic catch handler).

Bruno.

Nov 17 '05 #22
Bruno,

I agree completely with you; however, would it not be nice if we had some
nice classes to do some of this checking work?

In my opinion there is a lack of those. If you want samples:

Checking whether a string holds a properly typed value.
Checking whether a string holds a properly typed date.

Now we all have to do that ourselves, while it is a general problem.

Just my thought,

Cor
Nov 17 '05 #23

"David Levine" <no******************@wi.rr.com> wrote in message
news:O$**************@TK2MSFTNGP15.phx.gbl...

"Bruno Jouhier [MVP]" <bj******@club-internet.fr> wrote in message
news:uZ**************@TK2MSFTNGP10.phx.gbl...
This doesn't determine, as far as I can see, when a business constraint
should trigger an exception and when it should trigger some other kind
of information passing (whether that's return value or whatever else.)
I agree. I think the question of when to throw an exception versus
returning a sentinel value is one of the least understood and most error
prone aspects of .NET/C#.
This is the central issue, and I think that the right way to go is to
have "rich" APIs, to make a clear distinction between "special cases" and
"exceptions" and to deal with special cases through return codes or
sentinel values rather than through EH mechanisms.

The basic idea is to have pairs of entry points like:

int Parse(string); // throws an exception if the string does not represent an int
bool TryParse(string, out int); // returns false if the string does not represent an int

FileStream OpenFile(string name); // never returns null; throws an exception if the file does not exist
FileStream TryOpenFile(string name); // returns null if the file does not exist (but still throws if the file exists and cannot be opened)

Then, depending on the context, you call one or the other:

1) If you are in a situation where the "exception" must be dealt with
"locally", i.e. where you would put a try/catch around the call to catch
a FileNotFoundException, then you use the "Try" form and you don't catch
any exception locally.

2) Otherwise, you use the non-Try form and you let the exception bubble
up.


I think there are times when this approach works well and it is the one I
would take, so I partially agree.

But I think that this really only works well in the small, and it does not
scale well, either into large projects with large development teams, or
just with components in general. It essentially requires that all APIs
double in size, one for each variant. It also means that you have
effectively moved flow control from the caller to the callee. One could
argue that this is still an improvement, but I'm not sure I agree.

It means that the surface area of your API just doubled and needs
documentation, testing, etc. A supplier of a library/component could
never be sure which APIs needed this doubling, because typically you don't
know how someone else will use it, so you would have to double
almost all APIs just to be sure that you caught all the cases.


Well, I have been using this methodology on a fairly large project (6 years,
more than 10 developers involved, multi-tier application, multi-threaded
kernel, probably over a million lines of code at this stage) and it does
scale up!

It does not mean doubling every API; there are actually just a few calls
that need to be doubled, mostly calls that parse strings, look up things by
name, open files, load resources by name, etc. This is a very small fraction
of the APIs, and the overhead of doubling these calls is really not a
problem, especially if you have a good naming convention (we use Find for
the methods that throw and Lookup for the methods that return null). The
rest of the API only comes in one flavor.

Also, you don't always need to duplicate the entry points. Sometimes, it is
better to pass an extra parameter to indicate whether errors should be
signaled through exception or whether they should be returned through some
kind of error object. For example, most of our parsing routines take an
"errors" argument. If you pass null, the parser will throw exceptions and
will always return a valid parse tree. If you pass an errors object, the
parser will collect the errors into it, and will return null if the parsing
fails. This is a typical example of clever API design, that gives us the two
flavors in one call without adding much complexity to the API (I did not
invent it, there are plenty of examples in the LISP APIs of emacs).
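
A trimmed-down C# sketch of the convention (ErrorSink and the port-parsing
example are illustrative stand-ins, not our actual classes):

using System;
using System.Collections;

// Hypothetical error collector; not a framework class.
public class ErrorSink
{
    private readonly ArrayList messages = new ArrayList();
    public void Add(string message) { messages.Add(message); }
    public int Count { get { return messages.Count; } }
}

public static class PortParser
{
    // Dual-mode API: pass null for "errors" to get an exception on failure,
    // or pass an ErrorSink to collect the error and get a sentinel back.
    public static int Parse(string text, ErrorSink errors)
    {
        int port;
        if (int.TryParse(text, out port) && port > 0 && port < 65536)
            return port;

        string message = "'" + text + "' is not a valid port number";
        if (errors == null)
            throw new FormatException(message); // caller has no local handling
        errors.Add(message);                    // authoring-tool style caller
        return -1;                              // sentinel; caller checks errors
    }
}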

Another side effect is that the number of combinations of calls into your
API just increased exponentially, and issues related to the
coherency/consistency between the different code paths need to be
addressed. For example, if there are two methods, each with a
throw/non-throw variant, then there are 4 combinations of calls that can
be made...

{ Method1_Throws(); Method2_Throws(); }
{ Method1_Throws(); Method2_NonThrows(); }
{ Method1_NonThrows(); Method2_Throws(); }
{ Method1_NonThrows(); Method2_NonThrows(); }

And each combination needs to be tested to ensure that invariants are
maintained, fields are getting set/reset correctly, caches are correct,
etc.

It also does not address the problems developers face with 3rd party
libraries that do not supply the either/or API - a wrapper for each API
would need to be written that wrapped the one that threw the exception and
returned a sentinel value (or vice-versa), otherwise the try-catch flow
control code goes back into the main body of code.
Yes, this is a problem, and we set up such wrappers for calls that are used
in many places in our code (fortunately, this does not happen very often).


If you are in case 2, it means that the exception is really an exception,
i.e. something caused by an "abnormal" context of execution, and you do
not have any "local" action to take to deal with it. For example, if you
are parsing a stream that has been formatted by another program according
to a well defined protocol that ensures that ints are correctly
formatted, you should use Parse rather than TryParse. Also, if you are
trying to open a vital file that has been set up by your installation
program, or if you are trying to open a file that another method has
created just before, you should use OpenFile rather than TryOpenFile.

I agree with the intent, but the problem is that in any decent sized
project there are thousands of decision points like the ones you describe,
and many of them are a lot less obvious. It is not at all obvious when one
should use one form versus the other.


In 95% of the cases, there is not much you can do "locally" about the
special case/exception (your functional analysis should tell you that). So,
the right thing to do is to call the "non-Try" version and let the exception
bubble up. In the remaining 5%, you know that you have to deal with a
special case (your functional analysis should tell you that) and you call
the "Try" version. So, this is actually rather straightforward, and the
programmers who joined our team and did not use this methodology before
adjusted rather quickly.

If you don't know which call to use, it means that you don't have a good
functional analysis and that you don't know which cases your program should
handle and where it should handle them, and which ones it should not handle
and only try to recover from.


* You reduce the amount of EH code. You get rid of all the local
try/catch, and you only need a few try/catchall in "strategic" places of
your application, where you are going to log the error, alert the user,
and continue. With this scheme, exceptions are handled in a uniform way
(you don't need to discriminate amongst exception types) and the EH code
becomes very simple (only 2 basic patterns for try/catch) and very
robust.
Agreed.

* You clearly separate the "application" logic from the "exception
handling" logic. All the "special cases" that your application must deal
with are handled via "normal" constructs (return values, if/then/else),
and all the "exceptional" cases are handled in a uniform way and go
through the try/catchall constructs that you have put in strategic
places.


Again, I think this moves the flow control more than it eliminates it.
You can review the application logic without having to analyze complex
try/catch constructs spread throughout the code. You can also more easily
review the EH and verify that all exceptions will be properly logged and
that the user will know about them if he needs to, without having to go
into the details of the application logic.


Agreed, with the proviso that I disagree with the notion that one should
never catch an exception that cannot be programmatically recovered from. I
believe there are many cases where it is beneficial to catch-wrap-throw as
a means of adding context information to the exception. The reason is that
one of the primary beneficiaries is the end-user. The user gets no benefit
whatsoever from a message that says "Null reference exception" - it might
as well say "I fell down and can't get up."


Yes, this is actually one of the 2 try/catch patterns that we use:

try { ... }
catch (Exception ex) { throw new MyException("higher level message", ex); }
What should the user do to fix it and continue? Adding context will
provide a more meaningful message and ideally will aid the user in
determining what to do to fix or work around the problem so they can
get their work done. So my recommendation is to use try-catch statements
at strategic points as a means of adding context, not as a program flow
device.
Yes.

I don't believe this will unduly affect performance because the code is
already in the exception path, so the additional overhead will likely not
be noticeable.
Yes.

* It enforces clear "contracts" on your methods: OpenFile and TryOpenFile
do basically the same thing but they have different contracts and
choosing one or the other "means" something: if you read a piece of code
and see a call to TryOpenFile, you know that there is no guarantee "by
design" that the file will be there; on the other hand, if you see a call
to OpenFile, you know that the file should be there "by design" (it was
created by another method before, or it is a vital file created by the
installation program, etc.). Of course, the fact that the file should be
there "by design" does not mean that it will always be there, but from
your standpoint, the fact that it would not be there is just as
exceptional as it being corrupt or the disk having bad sectors, and the
best thing your program can do in this case is log the error with as much
information as possible and tell the user that something abnormal
happened.

* You will get optimal performance because the exception channel will
only be used in exceptional situations (and the cost of logging the
exception will probably outweigh the cost of catching it anyway).
It is not necessary to log at each try-catch handler. I recommend logging
at the initial catch site and again (if necessary) if the exception is
about to leave the module boundary.


No, we only log when we don't rethrow; this way you know that every
exception will be logged and logged only once. Our second try/catch pattern
is:

try { ... }
catch (Exception ex) { LogAndAlertUser(ex); GetReadyToContinue(); }

Notes: in both patterns, we catch "all exceptions". So, we are always
violating the rule that says that you should only catch "specific"
exceptions (this is one of FxCop's rules). This rule is stupid because it is
an encouragement to use exceptions as flow control in application logic. If
you don't use exceptions as flow control for special cases that should be
tested by your application logic, they should all bubble up the same way
(get wrapped with a higher level message, and then logged).

There is nevertheless one case where we log and rethrow: this is when we
design an API that someone else will be using, and when we know that this
someone else is not very rigorous about logging exceptions. In this case, we
do our own logging in the entry points of our component and we rethrow the
exception (so that the someone else still gets an exception). But this is
just so that we don't lose the information if the client of our component
does not follow the rules (does not log every exception that he gets from
our component).

Bruno.


Nov 17 '05 #24

"Cor Ligthert" <no************@planet.nl> wrote in message
news:On**************@tk2msftngp13.phx.gbl...
Bruno,

I agree completely with you; however, would it not be nice if we had some
nice classes to do some of this checking work?

In my opinion there is a lack of those. If you want samples:

Checking whether a string holds a properly typed value.
Checking whether a string holds a properly typed date.

Now we all have to do that ourselves, while it is a general problem.
Yes. In our framework, we have one helper class for every basic type. These
helper classes contain handy methods that we don't find in the .NET
framework, and they include methods to verify the validity of strings, to
find the end of the formatted value in a larger string, etc.
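
For example, the date case boils down to something like this minimal sketch,
assuming a known format string (our real helper classes have many more
methods than this):

using System;
using System.Globalization;

// Hypothetical helper for one basic type; we have one such class per type.
public static class DateHelper
{
    // Verifies that a string holds a properly formatted date without
    // using exceptions as flow control.
    public static bool IsValid(string text, string format)
    {
        DateTime result;
        return DateTime.TryParseExact(text, format,
            CultureInfo.InvariantCulture, DateTimeStyles.None, out result);
    }
}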

Of course, it would be nicer if these methods were included in the .NET
framework.

Just my thought,

Cor

Nov 17 '05 #25
Yes, that's what I am talking about. Why can't C# have that? I miss it.
It's painful, especially when using code tucked away in a library that I
didn't write. So now I have to turn around and catch all exceptions - and
that's not elegant at all.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
"Tom Shelton" <ts******@YOUKNOWTHEDRILLcomcast.net> wrote in message
news:Ou*************@TK2MSFTNGP15.phx.gbl...
On 2005-04-15, Jon Skeet C# MVP <sk***@pobox.com> wrote:
<"Alvin Bruney [MVP - ASP.NET]" <www.lulu.com/owc>> wrote:
>I suspect that's because it varies so much, to be honest. It's
>difficult to get hard and fast rules which apply in all situations.
<snip>
C++ used to make it infinitely easier to explicitly publish implicit
assumptions through a method's signature. I'm not sure why C# did not adopt
this approach - it would make things a lot easier. The absence of explicitly
published assumptions is one reason for confusion. Notice how that
confusion is absent in well written C++.


Without knowing C++ well, I don't know exactly what you mean. Could you
elaborate?


I think he is referring to the C++ throw specification... It's similar
to Java's throws - but it isn't as strict :) It might look something
like:

// no throw
void aFunc () throw ();

// the function can only throw bad_alloc
void anotherFunc () throw (bad_alloc);

If you throw something not in the exception list, std::unexpected() is
called, which by default terminates the program... (boy, it's been a
while :)

--
Tom Shelton [MVP]

Nov 17 '05 #26
Alvin Bruney [MVP - ASP.NET] wrote:
This means that "throw InvalidArgument" doesn't publish any assumptions,
it restricts which errors the caller is *allowed* to react on.
Well, that's the whole point. It's poor design to catch any and everything in
the first place - only a few situations warrant that kind of practice, by the way.
Yes, I agree here.

But I don't see why functions should limit the errors that the caller is
allowed to know about.
But in general, handling the exception indicates the caller's intent to take
action for the issue. All other exceptions should be left to bubble up. So
yes, unexpected exceptions should be left to bring down the house, quite rightly.


You are "robbing" the caller of the ability to regain his control-flow,
which he has been kind enough to pass to your code; he may need it back
-- even if he behaved badly. For one thing, the caller has no chance to
run cleanup-code (except for the global code registered in unexpected).

While you may feel you have the right to "punish" the caller for passing
invalid input or whatnot, you should use the standard way to inform him
of that: a precise error-description in an exception.

If the user doesn't do proper exception-handling, and just ignores the
exception or something like that, there really isn't much you can do for
him, and certainly he won't learn anything by having his application
close on him through unexpected().

Any component exhibiting the behaviour you describe would be close to
useless to me; I would never dare invoke its functions.
Especially anywhere using decoupling, type-limiting on exceptions just
really isn't that useful.


I disagree strongly. You will need to justify your position.


Decoupling removes knowledge of implementation. If you don't know the
implementations you can't even come up with a proper *guess* of the
possible exceptions thrown by implementations.

In Java, this results in interfaces without throw-declarations, or with
"throws Exception", which are both bad:

- no throw declaration means implementations have to catch checked
exceptions and, in friendly code, translate and rethrow them as unchecked
ones, even though the implementation has *no* valid error handling to
actually do, and would just like to pass on the original exception if at
all possible.

- throws Exception doesn't add any knowledge, so what have you gained?

The last option is to hazard a guess as to which checked exceptions
implementers might throw. What happens then is that you find yourself
extending the list of thrown exceptions until you end up with throws
Exception. If you refuse to extend the "throws" clause when required, you
end up either preventing some implementations or, if the coder rethrows
in an unchecked exception, having a mix of exceptions: some wrapped, some not.

In C++, the problem is the same, you just don't see it until runtime.
Further, wrapping often isn't even a possibility in C++ due to
memory-management.

I can see how c++ "throw()" helps the compiler, and possibly the user,
on destructors, but in other places it really doesn't help any more than
/* @throws .... */, and is often worse.

--
Helge Jensen
mailto:he**********@slog.dk
sip:he**********@slog.dk
-=> Sebastian cover-music: http://ungdomshus.nu <=-
Nov 17 '05 #27
> You are "robbing" the caller of the ability to regain his control-flow,
which he has been kind enough to pass to your code, he may need it back --
even if he behaved badly. For one thing the caller has no chance to run
cleanup-code (except for the global code registered in unexpected).

This is not accurate. The caller can use try-finally constructs to ensure
that cleanup code is run.
While you may feel the right to "punish" the caller for passing you
invalid input or whatnot, you should use the standard way to inform him of
that: a precise error-description in an exception.

If the user doesn't do proper exception-handling, and just ignores the
exception or something like that there really isn't much you can do for
him, and certainly he won't learn anything by having his application close
on him through unexpected().


Again, proper use of try-finally constructs should ensure proper cleanup for
the typical case.
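
For example, the caller can guarantee its own cleanup without catching
anything (the file handling here is purely illustrative):

using System.IO;

public static class Processor
{
    public static void ProcessFile(string path)
    {
        FileStream stream = new FileStream(path, FileMode.Open);
        try
        {
            // ... work that may throw anything, expected or not ...
        }
        finally
        {
            stream.Close(); // runs whether the work succeeds or throws
        }
    }
}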
Nov 17 '05 #28
>
Well, I have been using this methodology on a fairly large project (6
years, more than 10 developers involved, multi-tier application,
multi-threaded kernel, probably over a million lines of code at this
stage) and it does scale up!
Well, I must admit that is a good argument. Does it require a lot of
developer discipline, or does it tend to be self-regulating?

It does not mean doubling every API, there are actually just a few calls
that need to be doubled, mostly calls that parse strings, lookup things by
name, open files, load resources by name, etc. This is a very small
fraction of the APIs, and the overhead of doubling these calls is really
not a problem, especially if you have a good naming convention (we use
Find for the methods that throw and Lookup for the methods that return
null). The rest of the API only comes in one flavor.
What other conventions have you adopted to support this?

Also, you don't always need to duplicate the entry points. Sometimes, it
is better to pass an extra parameter to indicate whether errors should be
signaled through exception or whether they should be returned through some
kind of error object. For example, most of our parsing routines take an
"errors" argument. If you pass null, the parser will throw exceptions and
will always return a valid parse tree. If you pass an errors object, the
parser will collect the errors into it, and will return null if the
parsing fails. This is a typical example of clever API design, that gives
us the two flavors in one call without adding much complexity to the API
(I did not invent it, there are plenty of examples in the LISP APIs of
emacs).
I've seen other APIs that take a "throwOnError" argument but I am not fond
of it (yet). I prefer a single path through the code, not two. Have you ever
encountered problems related to this?
It also does not address the problems developers face with 3rd party
libraries that do not supply the either/or API - a wrapper for each API
would need to be written that wrapped the one that threw the exception
and returned a sentinel value (or vice-versa), otherwise the try-catch
flow control code goes back into the main body of code.


Yes, this is a problem, and we setup such wrappers for calls that are used
in many places in our code (fortunately, this does not happen very often).


In 95% of the cases, there is not much you can do "locally" about the
special case/exception (your functional analysis should tell you that).
So, the right thing to do is to call the "non-Try" version and let the
exception bubble up. In the remaining 5%, you know that you have to deal
with a special case (your functional analysis should tell you that) and
you call the "Try" version.
What kind of functional analysis are you referring to? Perhaps you analyze
things a bit differently than what I am accustomed to.

It is not necessary to log at each try-catch handler. I recommend logging
at the initial catch site and again (if necessary) if the exception is
about to leave the module boundary.


No, we only log when we don't rethrow, this way you know that every
exception will be logged and logged only once.

I tend to disagree but perhaps for practical reasons that probably don't
apply to most situations today. When we first transitioned from C/C++ Win32
to managed code no one really knew what best practices to apply...it
evolved. As a result the original code base was littered with empty
try-catches and exceptions were getting swallowed, converted, etc. all over.
My reaction to that was to establish requirements to never allow an
exception to get silently dropped again. The result was double-logging - the
first time when it was initially thrown and the last time when it was
handled or left the module boundary - this way if it got dropped somewhere
in the middle we would be able to detect it.

The 2nd practical result was that I made it a requirement that all swallowed
exceptions must call a central method (called SwallowException) that by
default printed out the exception message to the Trace - one of the
arguments to the method is the reason why it was ok to swallow the
exception. As a result we found a lot of places in the code that needed work
to either remove the source of the exception or do some other rewrite. There
are circumstances when swallowing an exception makes sense, and most of
those fall into the category that we are discussing - when to throw versus
return some other value. IOW, wrapping the API into a call that does not
throw would accomplish the same thing, and I'll probably switch over to
using that mechanism - it makes sense.
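
The central method itself is tiny; a simplified sketch of the idea (our real
version records more detail than this):

using System;
using System.Diagnostics;

public static class ExceptionPolicy
{
    // Every intentionally swallowed exception must come through here with a
    // stated reason, so silent drops are always visible in the trace.
    public static void SwallowException(Exception ex, string reason)
    {
        Trace.WriteLine("Swallowed exception: " + ex.Message +
                        " -- reason: " + reason);
    }
}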
Notes: in both patterns, we catch "all exceptions". So, we are always
violating the rule that says that you should only catch "specific"
exceptions (this is one of FxCop's rules). This rule is stupid because it
is an encouragement to use exceptions as flow control in application
logic. If you don't use exceptions as flow control for special cases that
should be tested by your application logic, they should all bubble up the
same way (get wrapped with a higher level message, and then logged).

Agreed. It is a silly rule.

There is nevertheless one case where we log and rethrow: this is when we
design an API that someone else will be using, and when we know that this
someone else is not very rigorous about logging exceptions. In this case,
we do our own logging in the entry points of our component and we rethrow
the exception (so that the someone else still gets an exception). But this
is just so that we don't lose the information if the client of our
component does not follow the rules (does not log every exception that he
gets from our component).


That sounds like the same sort of rule I use, which is to log when an
exception leaves the boundaries of the module.

Nov 17 '05 #29
David Levine wrote:
You are "robbing" the caller of the ability to regain his control-flow,
which he has been kind enough to pass to your code, he may need it back --
even if he behaved badly. For one thing the caller has no chance to run
cleanup-code (except for the global code registered in unexpected).
This is not accurate. The caller can use try-finally constructs to ensure
that cleanup code is run.
This specific discussion concerns C++. The topic is the effects of
having throw-declarations on functions, which C# doesn't have.

My point was that the "feature" of throw declarations in C++ is to prevent
cleanup from running through the C++ equivalent of finally.
While you may feel the right to "punish" the caller for passing you
invalid input or whatnot, you should use the standard way to inform him of
that: a precise error-description in an exception.

If the user doesn't do proper exception-handling, and just ignores the
exception or something like that there really isn't much you can do for
him, and certainly he won't learn anything by having his application close
on him through unexpected().

Again, proper use of try-finally constructs should ensure proper cleanup for
the typical case.


I don't understand how that is related to the specific argument I made.

I state that nothing is gained by using throw-clauses, and that if you
just removed them the caller would be better off.

--
Helge Jensen
mailto:he**********@slog.dk
sip:he**********@slog.dk
-=> Sebastian cover-music: http://ungdomshus.nu <=-
Nov 17 '05 #30

"David Levine" <no******************@wi.rr.com> wrote in message
news:es**************@TK2MSFTNGP14.phx.gbl...

Well, I have been using this methodology on a fairly large project (6
years, more than 10 developers involved, multi-tier application,
multi-threaded kernel, probably over a million lines of code at this
stage) and it does scale up!
Well, I must admit that is a good argument. Does it require a lot of
developer discipline, or does it tend to be self-regulating?


Overall, I would say that it works quite well because the rules and the
patterns are simple. We sometimes find a bit of resistance with developers
who join the team and who are used to or have learnt other methods (or
quite often no method at all), but they have to accept the rules, and after
they do, they see the benefits and they don't question it any more (so far).

Some developers take a bit of time to adjust because they are caught in the
"error code mindset" and they feel guilty about letting exceptions bubble up
without catching them (not testing an error code is very bad, but not
catching an exception locally is usually the best thing you can do). But
with a bit of guidance, they quickly get the point. One of the reasons why
it works quite smoothly is that they end up writing very few try/catch
constructs (almost none in the business logic, and the ones that they need
in the framework are already in place).

So, it is easier to "not write any try/catch" than to learn complex rules
about how you should write them. On the other hand, they are encouraged to
use "throw" rather freely (as soon as they detect a case that should never
happen). For example, most of our switch defaults contain something like
throw new MyException("bad case value: " + val).
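
Spelled out, such a switch looks like this (MyException stands in for our
own wrapper exception type; the enum and method are illustrative):

using System;

public class MyException : ApplicationException
{
    public MyException(string message) : base(message) { }
}

public enum OrderState { Open, Shipped, Closed }

public static class Orders
{
    public static string Describe(OrderState state)
    {
        switch (state)
        {
            case OrderState.Open:    return "open";
            case OrderState.Shipped: return "shipped";
            case OrderState.Closed:  return "closed";
            default:
                // "cannot happen" by design; if it does, it is a real
                // exception and it should bubble up to a catchall.
                throw new MyException("bad case value: " + state);
        }
    }
}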


It does not mean doubling every API, there are actually just a few calls
that need to be doubled, mostly calls that parse strings, lookup things
by name, open files, load resources by name, etc. This is a very small
fraction of the APIs, and the overhead of doubling these calls is really
not a problem, especially if you have a good naming convention (we use
Find for the methods that throw and Lookup for the methods that return
null). The rest of the API only comes in one flavor.
What other conventions have you adopted to support this?


Not much actually. The main other one is the Try prefix for the version that
does not throw.

Also, most of our code was in J#, where we don't have indexers by name. So, it
was natural to use Find/Lookup for these methods. In C#, the natural way to
write these lookup APIs is with indexers, and then you need another
convention (for example, the indexer implements the "Find" version and the
"Lookup" version is provided via a separate method, or an additional
argument to the indexer).
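
For example (a hypothetical collection type; Widget and the Hashtable
storage are just for illustration):

using System;
using System.Collections;

public class Widget { /* ... */ }

public class WidgetCollection
{
    private readonly Hashtable byName = new Hashtable();

    public void Add(string name, Widget widget) { byName[name] = widget; }

    // The indexer plays the "Find" role: it throws when the name is unknown.
    public Widget this[string name]
    {
        get
        {
            Widget widget = Lookup(name);
            if (widget == null)
                throw new ArgumentException("unknown widget: " + name);
            return widget;
        }
    }

    // The "Lookup" role: returns null when the name is unknown.
    public Widget Lookup(string name)
    {
        return (Widget)byName[name];
    }
}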


Also, you don't always need to duplicate the entry points. Sometimes, it
is better to pass an extra parameter to indicate whether errors should be
signaled through exception or whether they should be returned through
some kind of error object. For example, most of our parsing routines take
an "errors" argument. If you pass null, the parser will throw exceptions
and will always return a valid parse tree. If you pass an errors object,
the parser will collect the errors into it, and will return null if the
parsing fails. This is a typical example of clever API design, that gives
us the two flavors in one call without adding much complexity to the API
(I did not invent it, there are plenty of examples in the LISP APIs of
emacs).


I've seen other APIs that take a "throwOnError" argument but I am not fond
of it (yet). I prefer a single path through the code, not two. Have you ever
encountered problems related to this?


Here is how we typically use this dual parse method:

* when we are parsing inside an "authoring tool", we pass a non-null
"errors" object and we have some logic after the parsing call to display the
errors contained in this errors object. The user is supposed to analyze
these errors and fix them.

* but in most of the other cases where we need to parse, we don't usually
have any means to "fix" the errors, because we are not "authoring" any more;
we are usually parsing expressions that someone else has written, and the
actual user of our program does not have a clue about them (the program may
even be non-interactive). So, we pass null, and if the expression is
invalid, an exception will be thrown, logged, and the program will recover
where we designed it to recover (not locally, higher in the call chain). If
you think about it, there is not much more you can do.

This example shows another thing: the non-Try version is the one that is
used 95% of the time; the Try version is only used in special cases where we
can handle the problem locally. This is another reason why this methodology
is easy to learn: you just use the non-Try version by default (and
exceptions will bubble up in this case), and when you need to handle the
problem "locally" (which is not often), you use the Try version (and in both
cases, you don't write any try/catch locally).
It also does not address the problems developers face with 3rd party
libraries that do not supply the either/or API - a wrapper for each API
would need to be written that wrapped the one that threw the exception
and returned a sentinel value (or vice-versa), otherwise the try-catch
flow control code goes back into the main body of code.
Yes, this is a problem, and we setup such wrappers for calls that are
used in many places in our code (fortunately, this does not happen very
often).


In 95% of the cases, there is not much you can do "locally" about the
special case/exception (your functional analysis should tell you that).
So, the right thing to do is to call the "non-Try" version and let the
exception bubble up. In the remaining 5%, you know that you have to deal
with a special case (your functional analysis should tell you that) and
you call the "Try" version.


What kind of functional analysis are you referring to? Perhaps you analyze
things a bit differently than what I am accustomed to.


This level of functional analysis is not always "written" because it is
sometimes too low level. But if it is not written, the coder should know
(for example from general design guidelines) whether he has to do something
locally or not. If he does not know, he should have an architect that can
advise him on this (and usually he will become autonomous after consulting
the architect a few times).

It is not necessary to log at each try-catch handler. I recommend
logging at the initial catch site and again (if necessary) if the
exception is about to leave the module boundary.
No, we only log when we don't rethrow, this way you know that every
exception will be logged and logged only once.

I tend to disagree but perhaps for practical reasons that probably don't
apply to most situations today. When we first transitioned from C/C++
Win32 to managed code no one really knew what best practices to apply...it
evolved. As a result the original code base was littered with empty
try-catches and exceptions were getting swallowed, converted, etc. all
over. My reaction to that was to establish requirements to never allow an
exception to get silently dropped again. The result was double-logging -
the first time when it was initially thrown and the last time when it was
handled or left the module boundary - this way if it got dropped somewhere
in the middle we would be able to detect it.


If you have placed catchall handlers at the bottom of all the stacks that
get into your code (event dispatcher + main loops of the threads that you
create + your main + API entry points that you expose to the outside +
remoting sinks), and if you stick to the patterns that I described, you have
the guarantee that all exceptions will be logged and logged only once. The
advantage of logging at the bottom of the call chain (where you recover)
rather than in the first catch handler is that you get an exception that has
been wrapped, so you can log all the info: low level message at the origin
of the exception + all the higher level messages that have been added when
the exception bubbled up.
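
For instance, the catchall at the bottom of a worker thread's stack looks
roughly like this (the loop and helper bodies are illustrative;
LogAndAlertUser and GetReadyToContinue are the helpers from our second
try/catch pattern):

using System;

public class Worker
{
    private volatile bool running = true;

    // One of the "strategic" places: the main loop of a thread we create.
    public void Run()
    {
        while (running)
        {
            try
            {
                ProcessNextRequest(); // application logic, no local try/catch
            }
            catch (Exception ex)
            {
                LogAndAlertUser(ex);  // the one place this exception is logged
                GetReadyToContinue(); // restore invariants, then keep serving
            }
        }
    }

    private void ProcessNextRequest() { /* ... */ }
    private void LogAndAlertUser(Exception ex) { /* ... */ }
    private void GetReadyToContinue() { /* ... */ }
}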

The 2nd practical result was that I made it a requirement that all
swallowed exceptions must call a central method (called SwallowException)
that by default printed out the exception message to the Trace - one of
the arguments to the method is the reason why it was ok to swallow the
exception. As a result we found a lot of places in the code that needed
work to either remove the source of the exception or do some other
rewrite. There are circumstances when swallowing an exception makes sense,
and most of those fall into the category that we are discussing - when to
throw versus return some other value. IOW, wrapping the API into a call
that does not throw would accomplish the same thing, and I'll probably
switch over to using that mechanism - it makes sense.
Yes, you should give it a try.

Also, I am very assertive when I describe this scheme, and it may look like
I won't accept any deviation from it. But in practice, there are of course
some deviations. I just try to minimise them, and the best way is to try to
enforce the methodology as much as possible. So, if you try it, do the same:
give yourself some room for "local" try/catch (but you will see that you can
very often do without and that the result is better).

Bruno



Nov 17 '05 #31
In article <#L**************@TK2MSFTNGP15.phx.gbl>, Alvin Bruney [MVP - ASP.NET] wrote:
Yes, that's what I am talking about. Why can't C# have that? I miss it.
It's painful, especially when using code tucked away in a library that I
didn't write. So now I have to turn around and catch all exceptions - and
that's not elegant at all.


I wouldn't mind that, really. What I don't want is Java's checked
exceptions...

--
Tom Shelton [MVP]
Nov 17 '05 #32
That Java part went over my head; I don't know Java, so I'm not getting much
of that checked/unchecked exception issue.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc

Nov 17 '05 #33
>> Well, I have been using this methodology on a fairly large project (6
years, more than 10 developers involved, multi-tier application,
multi-threaded kernel, probably over a million lines of code at this
stage) and it does scale up!
You making this up Bruno? You guys hiring :-)

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
Nov 17 '05 #34

"Alvin Bruney [MVP - ASP.NET]" <www.lulu.com/owc> wrote in message
news:OH*************@TK2MSFTNGP15.phx.gbl...
Well, I have been using this methodology on a fairly large project (6
years, more than 10 developers involved, multi-tier application,
multi-threaded kernel, probably over a million lines of code at this
stage) and it does scale up!
You making this up Bruno? You guys hiring :-)
I was actually underestimating. I just ran wc to count the lines and I get
1.7 MLines of J# + 280 KLines of C# (I don't know if I should feel good or
bad about these figures). Not all of it was written by hand, fortunately;
the boring stuff was generated by a modeling tool.

You'd like to move to France?

Bruno.


Nov 17 '05 #35
