Bytes | Software Development & Data Engineering Community

Is This Overkill?

I've been maintaining some source code, and I've noticed some radical
inconsistency in the code, particularly in the way that parameters are
validated. I've noticed that this seems to have been pretty much the
case throughout my career.

For instance, consider the following block:

If siteId <= 0 Then
    Throw New ArgumentOutOfRangeException("siteId")
End If
If name Is Nothing Then
    Throw New ArgumentNullException("name")
End If
If name = String.Empty Then
    Throw New ArgumentException("Empty strings are not permitted.", "name")
End If
If buildingIdToIgnore < -1 Then
    Throw New ArgumentOutOfRangeException("buildingIdToIgnore")
End If

Sometimes, this code is collapsed as follows:

If siteId <= 0 Then Throw New ArgumentOutOfRangeException("siteId")
If name Is Nothing Then Throw New ArgumentNullException("name")
If name = String.Empty Then Throw New ArgumentException("Empty strings are not permitted.", "name")
If buildingIdToIgnore < -1 Then Throw New ArgumentOutOfRangeException("buildingIdToIgnore")

Sometimes, it's omitted altogether. Sometimes, not all of the
parameters to the procedure are validated. And in many cases, the
messages aren't consistent from one parameter validation to the next.
(Resource files would go a long way towards solving that, but that's a
whole different thread.)

What I've been pondering for some time is a way to make it easier to
validate method parameters. My *theory* is that it's a lot of gangly
code to write If...Then...Throw...End If, which leads to short-cuts
and, eventually, outright omissions. So, a way to abbreviate that and
make the code clearer might help to encourage consistent use of
parameter validation, which should (in theory) improve code quality
throughout the system.

So I have this model I'm working on, and before I go completely off
the deep end with it, I thought I'd run it by you to see what you
thought of it.

Essentially, it's based on NUnit asserts to validate parameters
consistently and clearly. So, the code above is changed to this:

Validate.That(siteId, "siteId").IsNotNegative()
Validate.That(name, "name").IsNotNullOrEmpty()
Validate.That(buildingIdToIgnore, "buildingIdToIgnore").IsGreaterThan(-2)
Validate.That(connection).IsOpenAndIsNotNull()

My tests for the model are going very well. I think that once I get it
going, it could provide a very cool, extensible validation framework
for parameters. The "framework" is in a separate class library, and
the messages are in a string resource file, providing consistency.

The big caveat, of course, is that there's no guarantee that it solves
the *enforcement* problem. There's also the problem that FxCop thinks
I'm not validating parameters when I am.
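For what it's worth, the shape of the idea is language-neutral. Here's a rough sketch in Python (just for brevity; all the names are illustrative, not the actual framework):

```python
# Illustrative sketch only: a tiny fluent validator in the spirit of
# Validate.That(...); every name here is hypothetical.
class Validator:
    def __init__(self, value, name="value"):
        self.value = value
        self.name = name

    def is_not_negative(self):
        if self.value < 0:
            raise ValueError(f"{self.name} must not be negative")
        return self  # returning self lets checks be chained

    def is_greater_than(self, limit):
        if self.value <= limit:
            raise ValueError(f"{self.name} must be greater than {limit}")
        return self

    def is_not_null_or_empty(self):
        if self.value is None or self.value == "":
            raise ValueError(f"{self.name} must not be null or empty")
        return self


def that(value, name="value"):
    return Validator(value, name)


# Usage mirrors the VB.NET calls above:
that(5, "siteId").is_not_negative()
that("HQ", "name").is_not_null_or_empty()
that(-1, "buildingIdToIgnore").is_greater_than(-2)
```

The point of the chaining (`return self`) is that one line can carry several checks without any If...Then...End If scaffolding.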

So the question I have is, is this even worth doing? When an existing
code base is refactored to use it, it looks much cleaner, and appears
to be easier to read and maintain. But the question remains: is this a
real problem, or am I addressing an issue that doesn't really exist?

Your input is greatly appreciated.

Thanks!

May 3 '07 #1
PJ6
I will be very interested to see others' responses to this, but here's my
$.02.

While it is not always practicable or appropriate to do so, tasks of this
nature are best managed with a database.

Yes, I'm leaving a mountain unsaid with this statement, most notably
answering the question "how", but I would recommend this idea and
aspect-oriented programming to you for further study.

Paul

"Mike Hofer" <kc********@gmail.com> wrote in message
news:11**********************@l77g2000hsb.googlegroups.com...
<snip>

May 3 '07 #2

"Mike Hofer" <kc********@gmail.com> wrote in message
news:11**********************@l77g2000hsb.googlegroups.com...
<snip>

I like the idea and have thought about doing it myself at some point, just
never got around to it.

I might take it a bit further and somehow move the validation to an engine
scheme type of approach where rules for validation could be stored perhaps
in an XML file and then parameters would be passed in along with their
function names. This would give you a central location to store rules about
what is valid and what is not as well as the ability to perhaps alter
validation schemes dynamically without altering code if needed in the
future.

It may also, depending how it was architected, allow you to better control
rules for values that may be dependent upon other values that have been
passed into the function without having to create a bunch of nested
statements in the code body itself.
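Roughly, I picture the engine working something like this (sketched in Python just to keep it short; the rule format and every name here are made up for illustration):

```python
# Hypothetical sketch of the rules-engine idea: checks are keyed by
# (function, parameter); in practice the table could be loaded from
# an XML file instead of being hard-coded.
RULES = {
    ("CreateSite", "siteId"): [("min", 1)],
    ("CreateSite", "name"): [("not_empty",)],
}

CHECKS = {
    "min": lambda value, lo: value >= lo,
    "not_empty": lambda value: bool(value),
}

def validate(function_name, **params):
    for name, value in params.items():
        for kind, *args in RULES.get((function_name, name), []):
            if not CHECKS[kind](value, *args):
                raise ValueError(
                    f"{name} failed rule '{kind}' in {function_name}")

validate("CreateSite", siteId=3, name="HQ")  # all rules pass
```

Because the rules live in data rather than in the method bodies, changing a validation scheme wouldn't require touching (or recompiling) the methods themselves.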

No matter what you do, though, enforcement of its use is always going to be
an issue. Unless you stand over the shoulders of developers as they code, the
only real way to catch non-compliance would be code reviews.
May 3 '07 #3
Mike Hofer <kc********@gmail.com> wrote:
I've been maintaining some source code, and I've noticed some radical
inconsistency in the code, particularly in the way that parameters are
validated. I've noticed that this seems to have been pretty much the
case throughout my career.
<snip>

You might want to look at Spec# for inspiration.
http://research.microsoft.com/specsharp/

If you don't mind the stack trace having one more line than is terribly
helpful, you can have a lot of useful exception-throwing validation
methods, e.g.

ValidateNonNull (name, "name");
ValidateRange (siteId > 0, "siteId");

etc

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
May 3 '07 #4
On May 3, 2:59 pm, Jon Skeet [C# MVP] <s...@pobox.com> wrote:
<snip>
SpecSharp looks very interesting; my problem is that I'm dealing with
lots of VB.NET code, so I'm not sure how useful it will be. I'll keep
looking at it.

The extra items on the stack trace don't really bug me. I think that
the added value of clearer code offsets that tremendously (but that's
just my opinion and is subject, as ever, to the usual rules governing
opinions).

The "framework" I'm developing does, in fact, throw standard exceptions
if the tests don't pass. For example, the standard validator class for
a connection looks like this:

Option Explicit On
Option Strict On

Imports System.Data

Namespace Validation

    Public Class ConnectionValidator
        Inherits ParameterValidatorBase

        Friend Sub New(ByVal connection As IDbConnection, ByVal name As String)
            MyBase.New(connection, name)
        End Sub

        Public Sub IsNotNull()
            If Connection Is Nothing Then
                Throw New ArgumentNullException(Name)
            End If
        End Sub

        Public Sub IsOpen()
            If Not Connection Is Nothing Then
                If (Connection.State And ConnectionState.Open) = 0 Then
                    Throw New ArgumentException( _
                        "Operation requires an open connection.", Name)
                End If
            End If
        End Sub

        Public Sub IsOpenAndIsNotNull()
            IsNotNull()
            IsOpen()
        End Sub

        Private ReadOnly Property Connection() As IDbConnection
            Get
                Return DirectCast(InnerValue, IDbConnection)
            End Get
        End Property

    End Class

End Namespace

A helper class simplifies invocation as follows:

Option Explicit On
Option Strict On

Imports System.Data

Namespace Validation

    Public Class Validate

        Public Shared Function That(ByVal value As IDbConnection, _
                ByVal parameterName As String) As ConnectionValidator

            Return New ConnectionValidator(value, parameterName)

        End Function

    End Class

End Namespace

(I've snipped many of the other methods to simplify the sample.) While
this class isn't very extensible in its current incarnation, it's
just a proof of concept to verify that it works. (I'm going for The
Simplest Thing That Works at this point, just to verify that the
interface I want works.)

The end result is code that looks like this:

Validate.That(connection, "connection").IsOpenAndIsNotNull()

which tests that the connection is not null and is open. If either
rule fails, an appropriate exception is thrown, with minimal overhead
on the stack trace.

I've got different validator classes for each of the known core data
types (boolean, byte, char, date, decimal, double, integer, long,
etc.) and the database classes at this point. I also included one for
Enum and Object. These are the classes I use most commonly. The only
problem I'm running into is how the heck to make it extensible so that
validators can be created in other libraries and invoked via the
Validate class, making it extensible at run-time. I know that the
current implementation won't work that way. I'm thinking that I'll
have to make it open-source, which would allow developers to extend it
at their leisure, but that still doesn't present the best solution.

It's been a long day, and I'm just having a tough time wrapping my
brain around the extensibility aspects of it. :-)
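The rough shape I keep coming back to for the run-time extensibility is a registry that maps parameter types to validator classes, so a separate library could add its own validators without touching this one. Sketched in Python just to keep it short (every name here is hypothetical):

```python
# Hypothetical sketch: a type-to-validator registry that outside
# libraries could extend at run time.
_registry = {}

def register_validator(param_type, validator_cls):
    _registry[param_type] = validator_cls

def that(value, name="value"):
    # Walk the value's type hierarchy so subclasses pick up
    # validators registered for their base classes.
    for t in type(value).__mro__:
        if t in _registry:
            return _registry[t](value, name)
    raise TypeError(f"no validator registered for {type(value).__name__}")

class IntValidator:
    def __init__(self, value, name):
        self.value, self.name = value, name

    def is_not_negative(self):
        if self.value < 0:
            raise ValueError(f"{self.name} must not be negative")
        return self

register_validator(int, IntValidator)
that(3, "siteId").is_not_negative()  # dispatches to IntValidator
```

The `Validate.That` facade stays fixed; only the registry grows as new validator libraries load.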

Overall, however, I'm *really* liking the effect it's having on the
refactored test code. Once the validators are debugged, they work
*exceptionally* well (pardon the pun).

May 3 '07 #5
Hi Mike: I like the basic idea. If I'm reading it right I have a couple of
ideas.

First, note that when you pass a connection you don't send a property. It
can be considered implied (or simply not needed) but it results in the
syntax being different. If there is an "open" property the syntax could be
changed to Validate.That( connection, "Open").IsTrue() to maintain an
"object, property" style. Without it do you introduce more situations where
the developer needs to stop and think "what is the test, what are the
parameters?"

Alternatively the siteId example could be: Validate.That(
siteId ).SiteIdIsValid()? This trades off having to send a second parameter
for more methods. Frankly I'd rather have more methods than have to send
hard to remember (and strings at that) property names. Your
IsOpenAndIsNotNull example demonstrates that a test doesn't need to be
applied to a particular property. I also would (probably) (as a developer)
not want to have to remember all the validation tests that apply to the
"siteId". I'd like to know if it was correct or not but perhaps not want to
have to remember to issue the IsNotNull() test in advance of some other
tests.

The buildingIdToIgnore example is suspect as it places "business logic" into
the object user's hands. Should people testing whether the BuildingId is
valid be required to know that it should be greater than -2? If
IsGreaterThan is a method that the buildingIdToIgnore class should provide
why wouldn't it just report back "it isn't valid"? So it reports "yes," it
is greater than -2, but the rule is that it has to be less than 5 as well.

Finally... you may not want to pass a particular object (and a property) in
every case unless the only use is to check the current state of an active
object. It seems to me that testing values "prior to assignment" would be
helpful as well. So the UI developer could check if -5 is a valid
buildingIdToIgnore before actually assigning it. The rules apply to an
instance of an object but also to anybody who might like to know if a
particular value is "valid".

Does this help at all?

Tom
"Mike Hofer" <kc********@gmail.com> wrote...
Essentially, it's based on NUnit asserts to validate parameters
consistently and clearly. So, the code above is changed to this:

Validate.That(siteId, "siteId").IsNotNegative()
Validate.That(name, "name").IsNotNullOrEmpty()
Validate.That(buildingIdToIgnore,
"buildingIdToIgnore").IsGreaterThan(-2)
Validate.That(connection).IsOpenAndIsNotNull()

May 3 '07 #6
KKS
In my opinion, you have to think of performance as well. Calling a lot of
methods will kill the performance of your original method. So even if these
methods are simple and might be inlined, you are not guaranteed that they all
will be. Suggestions along the lines of using a database or XML will be even
worse performance-wise. My suggestion to you is to use what you had
originally. I don't think it looks ugly at all.

regards
Kjetil Kristoffer Solberg
"Mike Hofer" <kc********@gmail.com> wrote in message
news:11**********************@l77g2000hsb.googlegroups.com...
<snip>

May 4 '07 #7
When validating method arguments, it's important that any ArgumentException
(and subclass) instances appear to have been thrown directly from the target
method. If you defer validation to a helper method, the call stack will
make it look like the ArgumentException was thrown by the helper method in
response to a call from the original target method, and the calling
developer will assume the problem is your fault rather than his.

Obviously, it can be quite helpful to defer complex validation to helper
methods, but any resulting ArgumentException should still be thrown from the
target method. e.g.:

If Not Validate.That(connection).IsOpenAndIsNotNull() Then _
    Throw New ArgumentException...
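A minimal runnable sketch of that split (Python here just for brevity; all names are hypothetical): the helper only reports a Boolean, and the target method itself raises, so the deepest application frame in the traceback is the method the caller actually invoked.

```python
# The helper reports; the target method throws.
def is_open_and_not_null(connection):
    return connection is not None and getattr(connection, "state", "") == "open"

class FakeConnection:
    state = "open"

def save_record(connection):
    # Raising here keeps the blame on save_record's caller,
    # not on the validation helper.
    if not is_open_and_not_null(connection):
        raise ValueError("connection must be open and not null")
    return "saved"

save_record(FakeConnection())  # validation passes; raises otherwise
```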

"Mike Hofer" <kc********@gmail.com> wrote in message
news:11**********************@l77g2000hsb.googlegroups.com...
<snip>

May 4 '07 #8
"KKS" <kks at synergi dot com> wrote:
In my opinion you have to think of performance as well. Calling a lot of
methods will kill the performance of your original method. So even if these
methods are simple and might be inlined you are not guaranteed that they all
will be. Suggestions along the lines as using a database or xml will be even
worse performance wise. My suggestion to you is to use what you had
originally. I don't think it looks ugly at all.
I disagree completely:

1) The most elegant and simple way should be used wherever performance
hasn't proved itself to be an issue, IMO. Calling a few methods is
unlikely to "kill" the performance of the original method, and if at
any point the method does any IO (file, socket etc) that's almost
certainly going to dwarf the validation costs.

2) Validation code should be robust but without getting in the way of
making it clear to the reader of the code what the "meat" of the method
is doing - what the real purpose is. Spending 12 lines doing validation
where 3 concise, readable lines will do is not a good idea.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
May 4 '07 #9
"Nicole Calinoiu" <calinoiu REMOVETHIS AT gmail DOT com> wrote:
When validating method arguments, it's important that any ArgumentException
(and subclass) instances appear to have been thrown directly from the target
method.
I'm not sure that's particularly important. I'd say it's a "nice to
have" but I wouldn't make the validation code harder to ensure it. The
framework certainly doesn't. For instance:

using System;
using System.Collections.Generic;

class Test
{
static void Main()
{
try
{
List<string> x = new List<string>();
x[5] = null;
}
catch (Exception e)
{
Console.WriteLine (e);
}
}
}

displays (on my box):

System.ArgumentOutOfRangeException: Index was out of range. Must be
non-negative and less than the size of the collection.
Parameter name: index
   at System.ThrowHelper.ThrowArgumentOutOfRangeException
      (ExceptionArgument argument, ExceptionResource resource)
   at System.ThrowHelper.ThrowArgumentOutOfRangeException()
   at System.Collections.Generic.List`1.set_Item(Int32 index, T value)
   at Test.Main()

There are two more levels than one might expect. I don't think that's
particularly confusing, so long as the message is very clear.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
May 4 '07 #10
I don't think that we should ever use 'XML' and 'Architected' in the
same sentence

XML is for retards that don't have the capacity to learn SQL
it's SLOWER and LARGER with no tangible benefits


On May 3, 9:24 am, "Ray Cassick" <rcass...@enterprocity.comwrote:
"Mike Hofer" <kchighl...@gmail.comwrote in message

news:11**********************@l77g2000hsb.googlegr oups.com...


I've been maintaining some source code, and I've noticed some radical
inconsistency in the code, particularly in the way that parameters are
validated. I've noticed that this seems to have been pretty much the
case throughout my career.
For instance, consider the following block:
If siteId <= 0 Then
Throw New ArgumentOutOfRangeException("siteId")
End If
If name Is Nothing Then
Throw New ArgumentNullException("name")
End If
If name = String.Empty Then
Throw New ArgumentException("Empty strings are not permitted.",
"name")
End If
If buildingIdToIgnore < -1 Then
Throw New ArgumentOutOfRangeException("buildingIdToIgnore")
End If
Sometimes, this code is collapsed as follows:
If siteId <= 0 Then Throw New
ArgumentOutOfRangeException("siteId")
If name Is Nothing Then Throw New ArgumentNullException("name")
If name = String.Empty Then Throw New ArgumentException("Empty
strings are not permitted.", "name")
If buildingIdToIgnore < -1 Then Throw New
ArgumentOutOfRangeException("buildingIdToIgnore")
Sometimes, it's omitted altogether. Somtimes, not all of the
parameters to the procedure are validated. And in many cases, the
messages aren't consistent from one parameter validation to the next.
(Resource files would go a long way towards solving that, but that's a
whole different thread.)
What I've been pondering for some time is a way to make it easier to
validate method parameters. My *theory* is that it's a lot of gangly
code to write If...Then...Throw...End If, which leads to short-cuts
and, eventually, outright omissions. So, a way to abbreviate that and
make the code clearer might help to encourage consistent use of
parameter validation, which should (in theory) improve code quality
throughout the system.
So I have this model I'm working on, and before I go completely off
the deep end with it, I thought I'd run it by you to see what you
thought of it.
Essentially, it's based on NUnit asserts to validate parameters
consistently and clearly. So, the code above is changed to this:
Validate.That(siteId, "siteId").IsNotNegative()
Validate.That(name, "name").IsNotNullOrEmpty()
Validate.That(buildingIdToIgnore,
"buildingIdToIgnore").IsGreaterThan(-2)
Validate.That(connection).IsOpenAndIsNotNull()
My tests for the model are going very well. I think that once I get it
going, it could provide a very cool, extensible validation framework
for parameters. The "framework" is in a separate class library, and
the messages are in a string resource file, providing consistency.
The big caveat, of course, is that there's no guarantee that it solves
the *enforcement* problem. There's also the problem that FxCop thinks
I'm not validating parameters when I am.
So the question I have is, is this even worth doing? When an existing
code base is refactored to use it, it looks much cleaner, and appears
to be easier to read and maintain. But the question remains: is this a
real problem, or am I addressing an issue that doesn't really exist?
Your input is greatly appreciated.
Thanks!

I like the idea and have thought about doing it myself at some point, just
never got around to it.

I might take it a bit further and move the validation into an engine-style
approach, where the rules could be stored in, say, an XML file, and
parameters would be passed in along with their function names. That would
give you a central location for defining what is and isn't valid, plus the
ability to alter validation schemes dynamically, without code changes, if
needed in the future.

It might also, depending on how it was architected, let you better handle
rules for values that depend on other values passed into the function,
without a bunch of nested statements in the method body itself.
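A table-driven version of that suggestion is easy to mock up: rules keyed by function and parameter name, looked up at call time. The sketch below is hypothetical Python (the XML loading is omitted; all names are invented), just to show the lookup shape:

```python
# Hypothetical rule table; a real version might be loaded from an XML file.
RULES = {
    ("CreateSite", "siteId"): lambda v: v > 0,
    ("CreateSite", "name"): lambda v: bool(v),
}


def validate(func_name, **params):
    """Check each named parameter against the central rule table."""
    for pname, value in params.items():
        rule = RULES.get((func_name, pname))
        if rule is not None and not rule(value):
            raise ValueError(f"{pname} failed validation in {func_name}")


validate("CreateSite", siteId=3, name="HQ")  # passes silently
```

Centralizing the rules buys the dynamic-update property described above, at the cost of losing compile-time checking of the rule names.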

No matter what you do, though, enforcement of its use is always going to be
an issue. Unless you stand over the shoulders of developers as they code,
the only real way to catch non-compliance is code reviews.

May 4 '07 #11
sorry kid you're full of shit

databases are the fastest things in the universe
maybe you need to stfu and learn Analysis Services


On May 4, 6:05 am, "KKS" <kks at synergi dot com> wrote:
In my opinion you have to think of performance as well. Calling a lot of
methods will kill the performance of your original method. So even if these
methods are simple and might be inlined you are not guaranteed that they all
will be. Suggestions along the lines of using a database or XML will be even
worse performance-wise. My suggestion to you is to use what you had
originally. I don't think it looks ugly at all.

regards
Kjetil Kristoffer Solberg

"Mike Hofer" <kchighl...@gmail.comwrote in message

news:11**********************@l77g2000hsb.googlegr oups.com...
I've been maintaining some source code, and I've noticed some radical
inconsistency in the code, particularly in the way that parameters are
validated. I've noticed that this seems to have been pretty much the
case throughout my career.
For instance, consider the following block:
If siteId <= 0 Then
Throw New ArgumentOutOfRangeException("siteId")
End If
If name Is Nothing Then
Throw New ArgumentNullException("name")
End If
If name = String.Empty Then
Throw New ArgumentException("Empty strings are not permitted.",
"name")
End If
If buildingIdToIgnore < -1 Then
Throw New ArgumentOutOfRangeException("buildingIdToIgnore")
End If
Sometimes, this code is collapsed as follows:
If siteId <= 0 Then Throw New
ArgumentOutOfRangeException("siteId")
If name Is Nothing Then Throw New ArgumentNullException("name")
If name = String.Empty Then Throw New ArgumentException("Empty
strings are not permitted.", "name")
If buildingIdToIgnore < -1 Then Throw New
ArgumentOutOfRangeException("buildingIdToIgnore")
Sometimes, it's omitted altogether. Somtimes, not all of the
parameters to the procedure are validated. And in many cases, the
messages aren't consistent from one parameter validation to the next.
(Resource files would go a long way towards solving that, but that's a
whole different thread.)
What I've been pondering for some time is a way to make it easier to
validate method parameters. My *theory* is that it's a lot of gangly
code to write If...Then...Throw...End If, which leads to short-cuts
and, eventually, outright omissions. So, a way to abbreviate that and
make the code clearer might help to encourage consistent use of
parameter validation, which should (in theory) improve code quality
throughout the system.
So I have this model I'm working on, and before I go completely off
the deep end with it, I thought I'd run it by you to see what you
thought of it.
Essentially, it's based on NUnit asserts to validate parameters
consistently and clearly. So, the code above is changed to this:
Validate.That(siteId, "siteId").IsNotNegative()
Validate.That(name, "name").IsNotNullOrEmpty()
Validate.That(buildingIdToIgnore,
"buildingIdToIgnore").IsGreaterThan(-2)
Validate.That(connection).IsOpenAndIsNotNull()
My tests for the model are going very well. I think that once I get it
going, it could provide a very cool, extensible validation framework
for parameters. The "framework" is in a separate class library, and
the messages are in a string resource file, providing consistency.
The big caveat, of course, is that there's no guarantee that it solves
the *enforcement* problem. There's also the problem that FxCop thinks
I'm not validating parameters when I am.
So the question I have is, is this even worth doing? When an existing
code base is refactored to use it, it looks much cleaner, and appears
to be easier to read and maintain. But the question remains: is this a
real problem, or am I addressing an issue that doesn't really exist?
Your input is greatly appreciated.
Thanks!- Hide quoted text -

- Show quoted text -

May 4 '07 #12
On May 4, 9:54 am, "Nicole Calinoiu" <calinoiu REMOVETHIS AT gmail DOT com> wrote:
When validating method arguments, it's important that any ArgumentException
(and subclass) instances appear to have been thrown directly from the target
method. If you defer validation to a helper method, the call stack will
make it look like the ArgumentException was thrown by the helper method in
response to a call from the original target method, and the calling
developer will assume the problem is your fault rather than his.

Obviously, it can be quite helpful to defer complex validation to helper
methods, but any resulting ArgumentException should still be thrown from the
target method. e.g.:

If Not Validate.That(connection).IsOpenAndIsNotNull() Then Throw New ArgumentException...

"Mike Hofer" <kchighl...@gmail.comwrote in message

news:11**********************@l77g2000hsb.googlegr oups.com...
I've been maintaining some source code, and I've noticed some radical
inconsistency in the code, particularly in the way that parameters are
validated. I've noticed that this seems to have been pretty much the
case throughout my career.
For instance, consider the following block:
If siteId <= 0 Then
Throw New ArgumentOutOfRangeException("siteId")
End If
If name Is Nothing Then
Throw New ArgumentNullException("name")
End If
If name = String.Empty Then
Throw New ArgumentException("Empty strings are not permitted.",
"name")
End If
If buildingIdToIgnore < -1 Then
Throw New ArgumentOutOfRangeException("buildingIdToIgnore")
End If
Sometimes, this code is collapsed as follows:
If siteId <= 0 Then Throw New
ArgumentOutOfRangeException("siteId")
If name Is Nothing Then Throw New ArgumentNullException("name")
If name = String.Empty Then Throw New ArgumentException("Empty
strings are not permitted.", "name")
If buildingIdToIgnore < -1 Then Throw New
ArgumentOutOfRangeException("buildingIdToIgnore")
Sometimes, it's omitted altogether. Somtimes, not all of the
parameters to the procedure are validated. And in many cases, the
messages aren't consistent from one parameter validation to the next.
(Resource files would go a long way towards solving that, but that's a
whole different thread.)
What I've been pondering for some time is a way to make it easier to
validate method parameters. My *theory* is that it's a lot of gangly
code to write If...Then...Throw...End If, which leads to short-cuts
and, eventually, outright omissions. So, a way to abbreviate that and
make the code clearer might help to encourage consistent use of
parameter validation, which should (in theory) improve code quality
throughout the system.
So I have this model I'm working on, and before I go completely off
the deep end with it, I thought I'd run it by you to see what you
thought of it.
Essentially, it's based on NUnit asserts to validate parameters
consistently and clearly. So, the code above is changed to this:
Validate.That(siteId, "siteId").IsNotNegative()
Validate.That(name, "name").IsNotNullOrEmpty()
Validate.That(buildingIdToIgnore,
"buildingIdToIgnore").IsGreaterThan(-2)
Validate.That(connection).IsOpenAndIsNotNull()
My tests for the model are going very well. I think that once I get it
going, it could provide a very cool, extensible validation framework
for parameters. The "framework" is in a separate class library, and
the messages are in a string resource file, providing consistency.
The big caveat, of course, is that there's no guarantee that it solves
the *enforcement* problem. There's also the problem that FxCop thinks
I'm not validating parameters when I am.
So the question I have is, is this even worth doing? When an existing
code base is refactored to use it, it looks much cleaner, and appears
to be easier to read and maintain. But the question remains: is this a
real problem, or am I addressing an issue that doesn't really exist?
Your input is greatly appreciated.
Thanks!- Hide quoted text -

- Show quoted text -
I'm not sure I agree wholeheartedly with this one, though I understand
the concern. The question is one of risk mitigation, and whether or
not the trade-off in code clarity is worth the additional entries in
the stack trace.

I tend to agree with Jon. Other classes in the framework add
additional entries to the stack trace. However, the fact that the
validation methods in this case only add two entries, and both of them
indicate that they are Validation failures, makes it clear that we're
dealing with a parameter validation. The classes and methods are
*very* clearly named. I *think* (dangerous word, I know) that any
competent programmer would be able to discern that the problem in
question was that the validation test had failed.

Further, one of the points of the framework is to refactor complex
validation code so that it's *easier* to do, encouraging its
implementation. If I do it the way you've shown, it doesn't appear
that it would be easier to do. I might even argue that it would be
harder, and the additional overhead in code would buy me nothing. In
that event, I might as well just step away from the keyboard, and drop
the project.

My goals here are to provide a streamlined means of validating
parameters that results in a consistent set of error messages, clear
code, with a minimal impact on the stack trace. I don't think that any
framework is going to be able to provide that without impacting the
stack trace, without resulting in lots of code at the point of call,
which defeats its purpose. I could be wrong, and am certainly willing
to entertain that notion.

I am definitely interested in a differing viewpoint. It may prove to
be true that the system would benefit from *both* ways of doing
things. So by all means, come back with a response, and help me to
build something great.

Thanks!

May 4 '07 #13
On May 4, 9:05 am, "KKS" <kks at synergi dot com> wrote:

<snip>
Performance is a concern; it's one of the reasons I ruled out
reflection as a way to do it, and also pitched XML templates (parsing
the DOM tends to be slow). I'm also not doing any file IO or database
access.

The method implementations are as small, compact, and simple as can
be. They perform a concrete test and, if the test fails, throw an
exception right away.

The first implementation actually used lots of individual objects for
each test. Each test was represented by a test class that implemented
an interface, and the test was passed to a method that implemented the
interface's sole method.

However, that resulted in a lot of objects being created. I didn't
want to negatively impact performance that way, so I simply settled
for a sealed class that has static methods that return the validator
classes. (It's essentially a factory; the That method is overloaded to
provide strongly typed versions that prevent boxing and unboxing of
value types).

The validator classes are designed around the base data types and the
interfaces I find I commonly use. (Wherever possible, I use interfaces
instead of concrete classes.) The methods on the validator classes are
very small: each simply performs its test and, if it fails, throws an
exception using a standard error message stored in a resource file,
which provides a high degree of consistency.

If I need to perform multiple tests on a single parameter in VB, I can
now do this:

With Validate.That(myParameter, "myParameter")
.IsNotNull()
.IsInRange(minValue, maxValue)
End With
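In languages without VB's With block, the same multiple-tests-on-one-parameter effect falls out of the chaining itself, provided each check returns the validator. Another hypothetical Python sketch (not the real API):

```python
class _Validator:
    def __init__(self, value, name):
        self._value, self._name = value, name

    def is_not_null(self):
        if self._value is None:
            raise ValueError(f"{self._name} must not be null")
        return self  # chaining: each check hands back the validator

    def is_in_range(self, lo, hi):
        if not lo <= self._value <= hi:
            raise ValueError(f"{self._name} must be between {lo} and {hi}")
        return self


def that(value, name):
    return _Validator(value, name)


# Both tests on one parameter, in a single chained expression:
that(7, "myParameter").is_not_null().is_in_range(0, 10)
```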

So yes, performance is a consideration. I haven't forgotten that. And
if I can find any other ways to improve it, I certainly will. I'm open
to suggestions, so fire away!

May 4 '07 #14
Mike Hofer <kc********@gmail.com> wrote:

<snip>
Performance is a concern; it's one of the reasons I ruled out
reflection as a way to do it, and also pitched XML templates (parsing
the DOM tends to be slow). I'm also not doing any file IO or database
access.
Reflection I can see being a potentially significant performance hit -
but I can't see that a couple of extra method calls which don't need to
do a lot in the success case will cause much performance loss.

You could always profile it, but unless you're really doing *very*
little "real" work in the methods, *and* calling them very often
(really, really often) I doubt you'd see anything.

You could always make the methods compile-time conditional (using
ConditionalAttribute), so that it's easy to repeatedly do performance
tests with and without any validation. That should make it easy to
determine whether or not the impact is significant.
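Other environments have rough analogues of that trick. In Python, for example, a check gated on the interpreter's __debug__ flag is skipped entirely under the -O switch, giving the same "run the perf test with validation compiled out" effect. A sketch with invented names, not anything from the thread's framework:

```python
def create_site(site_id):
    # __debug__ is Python's rough analogue of a compile-time
    # conditional: running with `python -O` removes this block.
    if __debug__:
        if site_id <= 0:
            raise ValueError("siteId must be positive")
    return site_id


create_site(3)  # passes under a normal (non -O) run
```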

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
May 4 '07 #15
On May 4, 5:03 pm, Jon Skeet [C# MVP] <s...@pobox.com> wrote:

<snip>
Thanks for the advice, Jon. I haven't noticed any significant
performance hit so far. Like I said, the functions are lean and mean,
and don't do anything except test and throw on failure. Unless I'm
doing a lot of recursion (not the case yet!), I don't see a problem.

I will run it through a profiler once I get it all set just so I can
see the difference. I'm fairly certain that a lot of this stuff can be
inlined, so it shouldn't be all that costly.

May 4 '07 #16
"Mike Hofer" <kc********@gmail.comwrote in message
news:11**********************@h2g2000hsg.googlegro ups.com...
I'm not sure I agree wholeheartedly with this one, though I understand
the concern. The question is one of risk mitigation, and whether or
not the trade-off in code clarity is worth the additional entries in
the stack trace.
That's certainly true, and only you can decide which way the trade-off tips
for your particular API. However, the larger and more diverse the intended
developer audience for your API, the more emphasis you should be placing on
the interests of the API users versus those of the API developers.

I tend to agree with Jon. Other classes in the framework add
additional entries to the stack trace.
See my response to Jon. The cases where the framework does this are
probably not desirable.

However, the fact that the
validation methods in this case only add two entries, and both of them
indicate that they are Validation failures, makes it clear that we're
dealing with a parameter validation. The classes and methods are
*very* clearly named. I *think* (dangerous word, I know) that any
competent programmer would be able to discern that the problem in
question was that the validation test had failed.
Are you sure that all the developers in your intended API audience are all
that competent, experienced, etc.? If so, then you have no worries.

Further, one of the points of the framework is to refactor complex
validation code so that it's *easier* to do, encouraging its
implementation.
Where do you get that idea? If encouraging argument validation was a
serious goal for the folks on the CLR or language teams at Microsoft,
declarative validation would have made its way out of the Spec# research pit
quite some time ago, and they would never have permitted the introduction of
automatic properties without support for declarative validation. Heck, they
could have done as little as salvaging the XC# post-compiler
(http://www.resolvecorp.com), which seems to have suffered a largely ignored
demise sometime before Visual Studio 2005 was released.

Personally, I would argue that there is next to no framework support for
doing the right thing with respect to argument validation, but then I might
just be a bit bitter... ;)

If I do it the way you've shown, it doesn't appear
that it would be easier to do. I might even argue that it would be
harder, and the additional overhead in code would buy me nothing.
Again, if it actually buys you nothing because your intended audience are
sufficiently sophisticated, then it's probably not worthwhile. However,
it's not actually all that much more work to put the throws in the target
methods. Granted, it's irritating (there is a reason I'm bitter <g>), but
it's not exactly expensive.

My goals here are to provide a streamlined means of validating
parameters that results in a consistent set of error messages, clear
code, with a minimal impact on the stack trace. I don't think that any
framework is going to be able to provide that without impacting the
stack trace, without resulting in lots of code at the point of call,
which defeats its purpose. I could be wrong, and am certainly willing
to entertain that notion.
There's a way to avoid the impact on the stack trace, but it's terribly
inelegant and would require adding more code to the target method. However,
just for the sake of completeness, here it is...

Since the call stack for an exception is generated when it is thrown, your
target method would wrap its validation method calls in a try catch that
explicitly rethrows any caught argument exceptions. e.g.:

Try
    Validate.That(siteId, "siteId").IsNotNegative()
    Validate.That(name, "name").IsNotNullOrEmpty()
    Validate.That(buildingIdToIgnore, "buildingIdToIgnore").IsGreaterThan(-2)
    Validate.That(connection).IsOpenAndIsNotNull()
Catch ex As ArgumentException
    Throw ex
End Try

Personally, I think it's far less palatable than the alternatives, but
ymmv...

May 5 '07 #17
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP********************@msnews.microsoft.com.. .
I'm not sure that's particularly important. I'd say it's a "nice to
have" but I wouldn't make the validation code harder to ensure it.
How important it might be really depends on the intended audience for your
API. If you're developing libraries for internal use within an
organization, and you have the opportunity to communicate directly with the
API users, then it may not be very important at all. However, if you're
producing an API for truly public use and can expect to have very
inexperienced developers as clients, then it may have a much greater impact.

The framework certainly doesn't.
Actually, in most places in the framework, the developers have gone to the
trouble of throwing directly from the method whose arguments are being
validated (see, for example, ArrayList.Item). The List<T>.Item
counter-example should probably be considered a bug. In fact, it's even
worse than you might expect since it's not actually deferring validation to
the helper method, but only the exception creation
and throwing ("If (index >= Me._size) Then
ThrowHelper.ThrowArgumentOutOfRangeException"). I suspect that the
developers might have taken the "consider using exception builder methods"
design guideline (see
http://msdn2.microsoft.com/en-us/library/ms229030.aspx) a little too far in
this particular case. At any rate, whatever the reason, the fact that
someone working on the BCL made what almost certainly amounts to a mistake
shouldn't be justification for the rest of us to get lazy. ;)

There are two more levels than one might expect. I don't think that's
particularly confusing, so long as the message is very clear.
If you were the only client of my APIs, I wouldn't worry about where the
exceptions were thrown either. :)

May 5 '07 #18
"Jon Skeet [C# MVP]" <sk***@pobox.comwrote in message
news:MP********************@msnews.microsoft.com.. .
I'm not sure that's particularly important. I'd say it's a "nice to
have" but I wouldn't make the validation code harder to ensure it.
How important it might be really depends on the intended audience for your
API. If you're developing librairies for internal use within an
organization, and you have the opportunity to communicate directly with the
API users, then it may not be very important at all. However, if you're
producing an API for truly public use and can expect to have very
inexperienced developers as clients, then it may have a much greater impact.

The framework certainly doesn't.
Actually, in most places in the framework, the developers have gone to the
trouble of throwing directly from the method whose arguments are being
validated (see, for example, ArrayList.Item). The List<T>.Item
counter-example should probably be considered a bug. In fact, it's even
worse than you might expect since it's not actually deferring validation to
the helper method, but only the exception creation
and throwing ("If (index >= Me._size) Then
ThrowHelper.ThrowArgumentOutOfRangeException"). I suspect that the
developers might have taken the "consider using exception builder methods"
design guideline (see
http://msdn2.microsoft.com/en-us/library/ms229030.aspx) a little too far in
this particular case. At any rate, whatever the reason, the fact that
someone working on the BCL made what almost certainly amounts to a mistake
shouldn't be justification for the rest of us to get lazy. ;)

There are two more levels than one might expect. I don't think that's
particularly confusing, so long as the message is very clear.
If you were the only client of my APIs, I wouldn't worry about where the
exceptions were thrown either. :)

May 5 '07 #19
<"Nicole Calinoiu" <calinoiu REMOVETHIS AT gmail DOT com>wrote:

<snip>
Personally, I would argue that there is next to no framework support for
doing the right thing with respect to argument validation, but then I might
just be a bit bitter... ;)
I agree wholeheartedly. Apart from the sad lack of Spec# progression,
it's a shame there's no way of applying an attribute to a method to say
"any exceptions thrown by this method should appear as if they're
thrown by the caller" which would make things a lot simpler.

I think it's just a balance thing - I think it's likely that if we ask
developers to do validation "the long way", they may well not bother,
and that's worse than exceptions having a couple of levels of stack too
many.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet Blog: http://www.msmvps.com/jon.skeet
If replying to the group, please do not mail me too
May 5 '07 #20
On May 5, 8:10 am, "Nicole Calinoiu" <calinoiu REMOVETHIS AT gmail DOT com> wrote:
"Mike Hofer" <kchighl...@gmail.comwrote in message

news:11**********************@h2g2000hsg.googlegro ups.com...
I'm not sure I agree wholeheartedly with this one, though I understand
the concern. The question is one of risk mitigation, and whether or
not the trade-off in code clarity is worth the additional entries in
the stack trace.

That's certainly true, and only you can decide which way the trade-off tips
for your particular API. However, the larger and more diverse the intended
developer audience for your API, the more emphasis you should be placing on
the interests of the API users versus those of the API developers.
That's a very valid point, and one well worth considering. Even if
the code is well documented, and the end-user documentation is clear,
a user who has never worked with the library before may be in for a
surprise when it throws an exception. It's something I hadn't thought
of before.

It's a shame there's no way to build or modify the stack trace to
remove the entries so that it accurately reflects the offending
routine.
I tend to agree with Jon. Other classes in the framework add
additional entries to the stack trace.

See my response to Jon. The cases where the framework does this are
probably not desirable.
However, the fact that the
validation methods in this case only add two entries, and both of them
indicate that they are Validation failures, makes it clear that we're
dealing with a parameter validation. The classes and methods are
*very* clearly named. I *think* (dangerous word, I know) that any
competent programmer would be able to discern that the problem in
question was that the validation test had failed.

Are you sure that all the developers in your intended API audience are all
that competent, experienced, etc.? If so, then you have no worries.
I am expecting a *certain* degree of competence. I would expect that
anyone using this library is looking for a way to improve code
quality, much like a user who seeks out NUnit to improve code quality
through the use of TDD. However, as you've said, it will certainly be
interesting to see how users who have never seen the library before
react when the stack trace contains additional entries.
Further, one of the points of the framework is to refactor complex
validation code so that it's *easier* to do, encouraging its
implementation.

Where do you get that idea? If encouraging argument validation was a
serious goal for the folks on the CLR or language teams at Microsoft,
declarative validation would have made its way out of the Spec# research pit
quite some time ago, and they would never have permitted the introduction of
automatic properties without support for declarative validation. Heck, they
could have done as little as salvaging the XC# post-compiler
(http://www.resolvecorp.com), which seems to have suffered a largely ignored
demise sometime before Visual Studio 2005 was released.

Personally, I would argue that there is next to no framework support for
doing the right thing with respect to argument validation, but then I might
just be a bit bitter... ;)
Whoops! That was my bad! By "framework" I was referring to the
framework that I am developing. I usually refer to the .NET Framework
with "Framework" (capital F). (I'm a spelling nazi, but always forget
that this is the Internet, and it doesn't universally apply, and
others may not pick up on that.)

What I should have said was that the purpose of the framework that I
am developing is to refactor complex code. The .NET Framework itself
almost certainly *never* does this, as you well know, but gives you
the raw building blocks, and then you have to refactor the hell out of
it.

My apologies for not being clear on that.
If I do it the way you've shown, it doesn't appear
that it would be easier to do. I might even argue that it would be
harder, and the additional overhead in code would buy me nothing.

Again, if it actually buys you nothing because your intended audience are
sufficiently sophisticated, then it's probably not worthwhile. However,
it's not actually all that much more work to put the throws in the target
methods. Granted, it's irritating (there is a reason I'm bitter <g>), but
it's not exactly expensive.
My goals here are to provide a streamlined means of validating
parameters that results in a consistent set of error messages, clear
code, with a minimal impact on the stack trace. I don't think that any
framework is going to be able to provide that without impacting the
stack trace, without resulting in lots of code at the point of call,
which defeats its purpose. I could be wrong, and am certainly willing
to entertain that notion.

There's a way to avoid the impact on the stack trace, but it's terribly
inelegant and would require adding more code to the target method. However,
just for the sake of completeness, here it is...

Since the call stack for an exception is generated when it is thrown,
your target method would wrap its validation calls in a Try...Catch
that explicitly rethrows any caught argument exceptions, e.g.:

Try
    Validate.That(siteId, "siteId").IsNotNegative()
    Validate.That(name, "name").IsNotNullOrEmpty()
    Validate.That(buildingIdToIgnore, "buildingIdToIgnore").IsGreaterThan(-2)
    Validate.That(connection).IsOpenAndIsNotNull()
Catch ex As ArgumentException
    ' "Throw ex" (rather than a bare "Throw") deliberately resets the
    ' stack trace so it starts in this method, not in the validators.
    Throw ex
End Try

Personally, I think it's far less palatable than the alternatives, but
ymmv...
Just brainstorming here, but let me run this by you. Since I can't
modify the stack trace itself, and since I don't want additional
information in it, what if I used a custom exception class, such as a
ParameterValidationException, that shadowed the StackTrace property?
I could then fetch the stack trace, remove the extra entries, and
expose the cleaned list to the end user. Further, I could then extend
that class with specific exception types, if need be, to mimic the
standard argument exceptions.

It might have several advantages: most notably, it keeps the stack
trace clean, but it also tells you explicitly that the exception was
thrown by the validation framework. With some optimization, the code
to clean the stack trace can be kept clean, small, and fast.
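To make that concrete, I'm picturing something roughly like this. It's
only a sketch: the class name, and the use of a "Validate." marker
string to recognize the framework's own frames, are placeholders I
haven't tested, not a finished implementation.

```vb
Public Class ParameterValidationException
    Inherits ArgumentException

    Public Sub New(ByVal message As String, ByVal paramName As String)
        MyBase.New(message, paramName)
    End Sub

    ' Shadows Exception.StackTrace so callers see a trace that starts
    ' in their own code rather than inside the validation framework.
    Public Shadows ReadOnly Property StackTrace() As String
        Get
            Dim raw As String = MyBase.StackTrace
            If raw Is Nothing Then Return Nothing

            Dim kept As New System.Collections.Generic.List(Of String)
            For Each line As String In raw.Split( _
                New String() {Environment.NewLine}, StringSplitOptions.None)
                ' "Validate." is a hypothetical marker for the
                ' framework's own frames.
                If line.IndexOf("Validate.") < 0 Then
                    kept.Add(line)
                End If
            Next
            Return String.Join(Environment.NewLine, kept.ToArray())
        End Get
    End Property
End Class
```

One caveat I can already see: Shadows only takes effect when the
exception is referenced through the derived type, so code that catches
it as a plain ArgumentException (or Exception) would still get the raw
trace.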

Thoughts?

May 7 '07 #21
KKS

"Mike Hofer" <kc********@gmail.com> wrote in message
news:11*********************@h2g2000hsg.googlegroups.com...
Just brainstorming here, but let me run this by you. Since I can't
modify the stack trace itself, and since I don't want additional
information in it, what if I used a custom exception class, such as a
ParameterValidationException, and that class shadowed the StackTrace
property. I could then rewrite the stack trace so that I could fetch
it, remove the extra entries, and then expose the cleaned list to the
end user? Further, I could then extend that class with specific
exception types, if need be, to mimic the standard argument
exceptions.

It might have several advantages: most notably, it keeps the stack
trace clean, but it also tells you explicitly that the exception was
thrown by the validation framework. With some optimization, the code
to clean the stack trace can be kept clean, small, and fast.

Thoughts?
Is this really that important? It seems to me that you're creating a
lot of work for yourself. Just my opinion.

Regards
Kjetil
May 7 '07 #22
finally someone else sees the fallacy of using XML

On May 7, 5:16 am, "KKS" <kks at synergi dot com> wrote:
"Mike Hofer" <kchighl...@gmail.com> wrote in message

news:11*********************@h2g2000hsg.googlegroups.com...
Just brainstorming here, but let me run this by you. Since I can't
modify the stack trace itself, and since I don't want additional
information in it, what if I used a custom exception class, such as a
ParameterValidationException, and that class shadowed the StackTrace
property. I could then rewrite the stack trace so that I could fetch
it, remove the extra entries, and then expose the cleaned list to the
end user? Further, I could then extend that class with specific
exception types, if need be, to mimic the standard argument
exceptions.
It might have several advantages: most notably, it keeps the stack
trace clean, but it also tells you explicitly that the exception was
thrown by the validation framework. With some optimization, the code
to clean the stack trace can be kept clean, small, and fast.
Thoughts?

Is this really that important? It seems to me that you're creating a lot of
work for yourself. Just my opinion.

Regards
Kjetil

May 7 '07 #23
On May 7, 3:36 pm, "dbahoo...@hotmail.com" <dbahoo...@hotmail.com>
wrote:
finally someone else sees the fallacy of using XML

On May 7, 5:16 am, "KKS" <kks at synergi dot com> wrote:
"Mike Hofer" <kchighl...@gmail.com> wrote in message
news:11*********************@h2g2000hsg.googlegroups.com...
Just brainstorming here, but let me run this by you. Since I can't
modify the stack trace itself, and since I don't want additional
information in it, what if I used a custom exception class, such as a
ParameterValidationException, and that class shadowed the StackTrace
property. I could then rewrite the stack trace so that I could fetch
it, remove the extra entries, and then expose the cleaned list to the
end user? Further, I could then extend that class with specific
exception types, if need be, to mimic the standard argument
exceptions.
It might have several advantages: most notably, it keeps the stack
trace clean, but it also tells you explicitly that the exception was
thrown by the validation framework. With some optimization, the code
to clean the stack trace can be kept clean, small, and fast.
Thoughts?
Is this really that important? It seems to me that you're creating a lot of
work for yourself. Just my opinion.
Regards
Kjetil

No one even mentioned using XML. Certainly not me. Troll somewhere
else, please.

May 7 '07 #24
On May 7, 8:16 am, "KKS" <kks at synergi dot com> wrote:
"Mike Hofer" <kchighl...@gmail.com> wrote in message

news:11*********************@h2g2000hsg.googlegroups.com...
Just brainstorming here, but let me run this by you. Since I can't
modify the stack trace itself, and since I don't want additional
information in it, what if I used a custom exception class, such as a
ParameterValidationException, and that class shadowed the StackTrace
property. I could then rewrite the stack trace so that I could fetch
it, remove the extra entries, and then expose the cleaned list to the
end user? Further, I could then extend that class with specific
exception types, if need be, to mimic the standard argument
exceptions.
It might have several advantages: most notably, it keeps the stack
trace clean, but it also tells you explicitly that the exception was
thrown by the validation framework. With some optimization, the code
to clean the stack trace can be kept clean, small, and fast.
Thoughts?

Is this really that important? It seems to me that you're creating a lot of
work for yourself. Just my opinion.

Regards
Kjetil
I'm not sure. I just want to make sure that I've covered all my bases.
If enough users think that it would be a big enough concern, then I'd
want to make sure it wasn't a substantial enough deterrent to prevent
the library's use, and handle that. But I'd only do so if the
implementation didn't destroy the usefulness of the library.

Also, I don't think it would be that much work to do it, especially if
it provided clarity to the end-user. Clarity, in my eyes, is a Very
Good Thing, and it's worth the up-front effort on my part if it
provides a larger return on my investment down the road. Not just for
myself, mind you, but for everyone who might use it.

But again, therein lies the rub: Does it provide clarity and
usefulness? If it does, then I'd do it. If it doesn't, then I wouldn't.

May 7 '07 #25
"Mike Hofer" <kc********@gmail.com> wrote in message
news:11*********************@h2g2000hsg.googlegroups.com...
I am expecting a *certain* degree of competence. I would expect that
anyone using this library is looking for a way to improve code
quality, much like a user who seeks out NUnit to improve code quality
through the use of TDD.
What about their API users, who will be the ultimate consumers of exceptions
generated from your framework? Besides potentially not fitting the expected
profile for users of your library, they'll also be blissfully unaware that
your validation framework is even being used by the API they are invoking.

Just brainstorming here, but let me run this by you. Since I can't
modify the stack trace itself, and since I don't want additional
information in it, what if I used a custom exception class, such as a
ParameterValidationException, and that class shadowed the StackTrace
property. I could then rewrite the stack trace so that I could fetch
it, remove the extra entries, and then expose the cleaned list to the
end user? Further, I could then extend that class with specific
exception types, if need be, to mimic the standard argument
exceptions.
Unfortunately, swapping out the exception type is going to be a breaking
change for existing code. For example, an existing catch block that handles
System.ArgumentNullException won't catch your ParameterValidationException
or a derived ParameterNullException. You could work around this by
subclassing the built-in specific exception types and adding your stack
trace manipulation via composition.
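For example, something along these lines. The names and the
"Validate." marker string are illustrative only; how framework frames
would actually be identified is up to you.

```vb
' Shared helper that strips the validation framework's frames from a
' raw stack trace (the "composition" half of the workaround).
Public NotInheritable Class StackTraceCleaner

    Private Sub New()
    End Sub

    Public Shared Function Clean(ByVal trace As String) As String
        If trace Is Nothing Then Return Nothing
        Dim kept As New System.Collections.Generic.List(Of String)
        For Each line As String In trace.Split( _
            New String() {Environment.NewLine}, StringSplitOptions.None)
            ' "Validate." stands in for however the framework's frames
            ' are actually identified.
            If line.IndexOf("Validate.") < 0 Then kept.Add(line)
        Next
        Return String.Join(Environment.NewLine, kept.ToArray())
    End Function
End Class

' Derive from the built-in type so existing Catch blocks for
' ArgumentNullException keep working, then shadow StackTrace and
' delegate the cleanup to the helper.
Public Class ValidationArgumentNullException
    Inherits ArgumentNullException

    Public Sub New(ByVal paramName As String)
        MyBase.New(paramName)
    End Sub

    Public Shadows ReadOnly Property StackTrace() As String
        Get
            Return StackTraceCleaner.Clean(MyBase.StackTrace)
        End Get
    End Property
End Class
```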

It might have several advantages: most notably, it keeps the stack
trace clean, but it also tells you explicitly that the exception was
thrown by the validation framework.
Users of the APIs produced by users of your framework probably won't want to
see any difference between the exceptions it throws and any other argument
exceptions they might need to handle.

With some optimization, the code
to clean the stack trace can be kept clean, small, and fast.
And able to handle stack traces generated in languages other than English?
May 8 '07 #26
On May 8, 9:37 am, "Nicole Calinoiu" <calinoiu REMOVETHIS AT gmail DOT
com> wrote:
"Mike Hofer" <kchighl...@gmail.com> wrote in message

news:11*********************@h2g2000hsg.googlegroups.com...
I am expecting a *certain* degree of competence. I would expect that
anyone using this library is looking for a way to improve code
quality, much like a user who seeks out NUnit to improve code quality
through the use of TDD.

What about their API users, who will be the ultimate consumers of exceptions
generated from your framework? Besides potentially not fitting the expected
profile for users of your library, they'll also be blissfully unaware that
your validation framework is even being used by the API they are invoking.
Third, fourth, and fifth generation users, then? I hadn't thought of
that. Good catch.
Just brainstorming here, but let me run this by you. Since I can't
modify the stack trace itself, and since I don't want additional
information in it, what if I used a custom exception class, such as a
ParameterValidationException, and that class shadowed the StackTrace
property. I could then rewrite the stack trace so that I could fetch
it, remove the extra entries, and then expose the cleaned list to the
end user? Further, I could then extend that class with specific
exception types, if need be, to mimic the standard argument
exceptions.

Unfortunately, swapping out the exception type is going to be a breaking
change for existing code. For example, an existing catch block that handles
System.ArgumentNullException won't catch your ParameterValidationException
or a derived
ParameterNullException. You could work around this by subclassing the
built-in specific exception types and adding your stack trace manipulation
via composition.
Okay, that works.
It might have several advantages: most notably, it keeps the stack
trace clean, but it also tells you explicitly that the exception was
thrown by the validation framework.

Users of the APIs produced by users of your framework probably won't want to
see any difference between the exceptions it throws and any other argument
exceptions they might need to handle.
I'll buy that, with one caveat: in my limited experience, there's
always the edge case that proves my expectations wrong. I'd at least
want to provide *some* way to let users know that the exception was
thrown from my framework, and not directly from within their code.
With some optimization, the code
to clean the stack trace can be kept clean, small, and fast.

And able to handle stack traces generated in languages other than English?
Another very good point. I was actually thinking of making this open
source once I got the base stuff done, and since I know little to
nothing about proper globalization (yeah, it's a gaping hole in my
skill set), I figured that others could help me cover that ground. I
don't pretend for an instant that I know everything, or even the vast
majority of what there is to know about the proper or best way to do
this. It's why I started this thread in the first place. And your
insights have been invaluable. Thank you very much.

May 8 '07 #27
If you're willing to put a bit of work into this, you might want to consider
going the post-compiler route instead in order to allow declarative
validation. A few years ago, I was thinking of doing some similar stack
trace manipulation when I ran across XC#. Unfortunately, given that you're
targeting VB, you wouldn't be able to use XC# even if it were still
available. However, there does appear to be a more general .NET
post-compiler available at http://www.postsharp.org/. I'll be taking a
closer look at it myself as soon as I get a chance, but it does look like a
reasonable candidate for enabling declarative validation if it lives up to
its promise...
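Just to give a feel for what "declarative validation" could look like
to a caller, something like the following. The attribute names here
are invented purely for illustration; I'm not suggesting that either
PostSharp or XC# actually exposes them.

```vb
' Purely hypothetical syntax -- not an actual PostSharp (or XC#) API.
<ValidateRange("siteId", Minimum:=1), _
 ValidateNotNullOrEmpty("name"), _
 ValidateRange("buildingIdToIgnore", Minimum:=-1)> _
Public Sub MoveBuilding(ByVal siteId As Integer, _
                        ByVal name As String, _
                        ByVal buildingIdToIgnore As Integer)
    ' The body contains only real work; a post-compiler would weave
    ' the validation checks (and their Throw statements) in at build
    ' time, so the thrown exceptions originate here.
End Sub
```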

"Mike Hofer" <kc********@gmail.com> wrote in message
news:11*********************@l77g2000hsb.googlegroups.com...
[full quote of the preceding exchange snipped]

May 9 '07 #28
"db*******@hotmail.com" <db*******@hotmail.com> wrote in
news:11**********************@n59g2000hsh.googlegroups.com:
I don't think that we should ever use 'XML' and 'Architected' in the
same sentence

XML is for retards that don't have the capacity to learn SQL
it's SLOWER and LARGER with no tangible benefits

Joke of the day!
May 11 '07 #29
