Bytes | Software Development & Data Engineering Community

C# - Attributes - Unit Tests Question

Hi

I want to place the tests needed in the code using attributes. There seem to be enough code snippets around for me to cover this, e.g.

// Test cases: run these here on the function and check the result

[Test, Function=CheckType, Args="-1,3,'flower'", Result=true]
[Test, Function=CheckType, Args="5,3,'pot'", Result=true]
[Test, Function=CheckType, Args="99,3,'men'", Result=false]

// Function

static Boolean CheckType(int FirstArg, int SecondArg, string sType)
{
    // Code in here
    ...
    return bResult;
}

The harder part is that I want to use reflection to establish the function's arguments, create these arguments on the stack with the correct values, and then call the function, trapping the returned value.

This would be fantastic, as two sets of code would no longer need to be maintained, i.e. the original functions and the test cases which exercise them. The test for a function would live with the function.

Can you comment on whether this is possible? If there is an example, great.

C# beginner

Thanks in advance
Nov 17 '05 #1
16 replies, 2423 views
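[Editor's note: the reflection half of the question is answerable directly. Custom attributes can be read back at runtime with `MethodInfo.GetCustomAttributes`, and `MethodInfo.Invoke` takes an `object[]` of arguments, so no manual stack construction is needed. Below is a minimal sketch of the idea; all names (`TestCaseAttribute`, `Runner`, `Demo`) are hypothetical illustrations, not an existing framework, and a real framework would add error handling and non-static method support.]

```csharp
using System;
using System.Reflection;

// Hypothetical attribute carrying one test case for the method it decorates.
// Attribute arguments must be compile-time constants (ints, strings, bools...).
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class TestCaseAttribute : Attribute
{
    public readonly object[] Args;
    public readonly object Expected;

    public TestCaseAttribute(object expected, params object[] args)
    {
        Expected = expected;
        Args = args;
    }
}

public static class Runner
{
    // Scan a type for [TestCase]-decorated static methods and invoke each case.
    public static void RunAll(Type type)
    {
        foreach (MethodInfo m in type.GetMethods(
            BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic))
        {
            object[] cases = m.GetCustomAttributes(typeof(TestCaseAttribute), false);
            foreach (TestCaseAttribute tc in cases)
            {
                // Invoke builds the argument frame for us; null target = static call.
                object result = m.Invoke(null, tc.Args);
                Console.WriteLine("{0}: {1}", m.Name,
                    Equals(result, tc.Expected) ? "PASS" : "FAIL");
            }
        }
    }
}

// Example target in the spirit of the original CheckType:
public static class Demo
{
    [TestCase(true, -1, 3, "flower")]
    [TestCase(false, 99, 3, "men")]
    internal static bool CheckType(int first, int second, string type)
    {
        return first < second;  // placeholder body for illustration only
    }
}
```

Calling `Runner.RunAll(typeof(Demo));` would then exercise each attribute row. The trade-off the replies below raise still applies: the attributes ship with the production assembly.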
Hi Greg,
What testing framework are you using for your unit tests, or are you trying to write your own?

You do not want the tests for your functions to live with your production code: it adds to the code size, makes the code more difficult to read, and means you are shipping test code to your end customers. You should separate your test code from the code it is testing, i.e. have all your testing code live in a different project from your production code.

If you have not heard of NUnit (www.nunit.org), it is well worth checking out. It is simple but really effective, and takes the hassle out of writing your own testing framework.

Try to write one test function per action you want to test. That way you do not need parameters passed to your test function; inside the function you initialize the object you want to test and pass the values in to it.

Take, for example, some fictitious object called MyAdditionObject which is used to add two numbers. To test it (using NUnit) you would write something like:

using NUnit.Framework;

[TestFixture]
public class MyAdditionObjectTestFixture
{
    [Test]
    public void TestAddNumbers()
    {
        MyAdditionObject myAddObj = new MyAdditionObject();

        int intResult = myAddObj.AddTwoNumbers(2, 3);

        // Check the result is correct; Assert is native to NUnit
        Assert.AreEqual(5, intResult);
    }

    [Test]
    public void TestAddNegativePositiveNumber()
    {
        MyAdditionObject myAddObj = new MyAdditionObject();

        int intResult = myAddObj.AddTwoNumbers(-2, 3);

        // Check the result is correct; Assert is native to NUnit
        Assert.AreEqual(1, intResult);
    }
}

You can see that your parameters live inside the testing functions, and all test functions are parameterless.

Hope this helps,
Mark R Dawson
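[Editor's note: NUnit releases later than the ones discussed in this thread (2.5 and onward) added a built-in parameterized-test attribute that comes close to what the original post asks for, while still keeping tests in a separate test project as Mark advises. A sketch in current NUnit syntax; the `SomeLibrary.CheckType` call is a hypothetical stand-in for the production function under test:]

```csharp
using NUnit.Framework;

[TestFixture]
public class CheckTypeTests
{
    // One [TestCase] row per data-driven case; NUnit compares the
    // method's return value against ExpectedResult automatically.
    [TestCase(-1, 3, "flower", ExpectedResult = true)]
    [TestCase(5, 3, "pot", ExpectedResult = true)]
    [TestCase(99, 3, "men", ExpectedResult = false)]
    public bool CheckType(int first, int second, string type)
    {
        return SomeLibrary.CheckType(first, second, type);  // hypothetical production call
    }
}
```

This gives the "test cases as attributes" style without writing a custom reflection runner, and without shipping the cases in the production assembly.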
Nov 17 '05 #2
Thanks for the feedback. Some more info:

Yes, we are using MSTest, NUnit and a few others in various old/new C++ (unmanaged) and C# products.

We are starting a new, mainly C#, feature set and I am researching what is possible to make the best test setup. I am relatively new to C#; I have done a few odd utilities.

Currently, if you add or change a function, you then need to go to the test framework and add or adjust tests, and in some cases a switch/case in the test also needs new numbers defined.

To me, with the ability to hide (collapse) text in C#, having most of the "test cases" living with the actual code would be ideal.

Hence this post.

Thanks

Nov 17 '05 #3


Greg Roberts wrote:
> // Test cases, run these here on the function and check the result
>
> [Test, Function=CheckType, Args="-1,3,'flower'", Result=true]
> [Test, Function=CheckType, Args="5,3,'pot'", Result=true]
> [Test, Function=CheckType, Args="99,3,'men'", Result=false]
> ...
> static Boolean CheckType(int FirstArg, int SecondArg, string sType)
> {
>     return bResult;
> }

This seems like a very complicated way of doing:

// Here comes the test framework
public class CheckException : Exception { }
void CHECK(bool b) { if (!b) throw new CheckException(); }

// Here comes the test
void testFoo() {
    CHECK(CheckType(-1, 3, "flower") == true);
    CHECK(CheckType(5, 3, "pot") == true);
    CHECK(CheckType(99, 3, "men") == false);
}

> The harder part is I want to use reflection to establish the function
> arguments, create these arguments on the stack with the correct values,
> and then call the function, trapping the returned value.
>
> This would be fantastic, as two sets of code would no longer need to be
> maintained, i.e. the original functions and the test cases which
> exercised these functions.
>
> The test for the function lives with the function.

You just formulated a new language ([Test, ...]), which is certainly not as rich as C#. I try to formulate problems in the source language instead of implementing my own.

> Can you comment if this is possible and if there is an example, great?

NUnit does stuff like this. But I have a hard time seeing the point in these frameworks.

--
Helge Jensen
mailto:he**********@slog.dk
sip:he**********@slog.dk
-=> Sebastian cover-music: http://ungdomshus.nu <=-
Nov 17 '05 #6


Greg Roberts <gr*********@NOSPAMLOWLIFEcitect.com> wrote:
> Thanks for the feedback. Some more info:
>
> Yes, we are using MSTest, NUnit and a few others in various old/new C++
> (unmanaged) and C# products.
>
> We are starting a new, mainly C#, feature set and I am researching what
> is possible to make the best test setup. I am relatively new to C#.
>
> Currently, if you add or change a function, you then need to go to the
> test framework and add or adjust tests, and in some cases a switch/case
> in the test also needs new numbers defined.

But that's exactly the right way of working, in my view: separate test code from the implementation code, and write the tests without looking at the implementation. That way you're encouraged to think of cases which the code might not cover; otherwise you often end up just looking at the "if" conditions and writing a different test for each of them, which may miss some conditions that you *should* be distinguishing in the code, but aren't.

I think it's good to have the test code in the same solution, but in a different project.

> To me, having most of the "test cases" living with the actual code would
> be ideal.

So you're happy with your tests ending up being part of your final app?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #8
Helge Jensen <he**********@slog.dk> wrote:
> > Can you comment if this is possible and if there is an example, great?
>
> nUnit does stuff like this.

I haven't seen anything in NUnit for that - could you point me at a reference?

> But I have a hard time seeing the point in these frameworks.

Do you mean you can't see the point in unit testing frameworks in general, or just this kind of feature?

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #10


Jon Skeet [C# MVP] wrote:
> Helge Jensen <he**********@slog.dk> wrote:
> > nUnit does stuff like this.
>
> I haven't seen anything in NUnit for that - could you point me at a
> reference?

Nope, I don't know much about NUnit; it was just my impression from the docs I've read. Should have added ", doesn't it?".

I have (too) extensive experience with a C++ framework called xTest, and JUnit... Fixtures and Suites, setUp, tearDown... it's all coming back in my dreams.

> > But I have a hard time seeing the point in these frameworks.
>
> Do you mean you can't see the point in unit testing frameworks in
> general, or just this kind of feature?

Especially this feature in particular, but also the frameworks in general.

Here it comes, some of it at least :)

Unit testing is *good*: it works and finds problems. How unit-test *frameworks* help with that is unclear to me.

Tests are no different from any other code and should be written as such. Test code should not be inserted into "hot spots" in a test framework -- people bend their test code to the weird syntax and conventions required by the frameworks, at least that's what I've seen happen.

Instead, a test *library* should provide some handy functionality, like run_check(f, 1, 2, expected_result) and run_expect(f, 1, 3, typeof(IndexOutOfRangeException)), which becomes the "test protocol".

This is difficult in C# versions before 2.0 (at least), since:

- you need to explicitly declare and instantiate a callback that fits the type of f
- you cannot catch by Type (the type Type, understandable?), only by "exception declaration"

This may have changed in C# 2.0, but I haven't moved on from 1.1 yet. I have a faint hope (from rumours and hearsay) that I will be able to use the generics in C# 2 and the implicit delegate support to implement such a test library. I have done it before, in C++ (thanks to the *amazing* boost::function) and it works like a charm.

In the meantime I have made the following observations.

There are good ways to sequentially compose operations in C#:

op1(); op2();

It is more complicated to compose execution independently, but relatively easy in most cases:

new Thread(new ThreadStart(op1)).Start();
new Thread(new ThreadStart(op2)).Start();

Exceptions are a *blazingly* good way of signalling that something is wrong:

void CHECK(bool b) { if (!b) throw new CheckException(); }

and the stack trace tells you *exactly* what function failed; you can then rerun the test in question, setting a breakpoint at the failed line.

If you are running in debug mode you can sneakily set a breakpoint in the CheckException constructor.

I don't use failure counts anyway; it's either 0 or !0.

If I want to run a specific subset of tests:

class SpecificSubsetOfTests {
    public static int Main() {
#if !DEBUG
        try {
#endif
            // just run the tests
            return 0;
#if !DEBUG
        } catch (Exception e) {
            Console.WriteLine("EXCEPTION:\n{0}", e);
            throw;
        }
#endif
    }
}

--
Helge Jensen
mailto:he**********@slog.dk
sip:he**********@slog.dk
-=> Sebastian cover-music: http://ungdomshus.nu <=-
Nov 17 '05 #12
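[Editor's note: the run_check/run_expect "test library" Helge describes can indeed be sketched with the C# 2.0 generics and anonymous delegates he was hoping for. All names below (`TestLib`, `Func2`, `RunCheck`, `RunExpect`, `CheckException`) are illustrative, not his actual code or any existing library; `Func2` is defined locally because C# 2.0 has no built-in two-argument `Func` delegate.]

```csharp
using System;

public class CheckException : Exception
{
    public CheckException(string msg) : base(msg) { }
}

public static class TestLib
{
    // C# 2.0 has no System.Func<T1,T2,TResult>, so declare our own delegate.
    public delegate TResult Func2<T1, T2, TResult>(T1 a, T2 b);

    // Invoke f and compare its result against the expected value.
    public static void RunCheck<T1, T2, TResult>(
        Func2<T1, T2, TResult> f, T1 a, T2 b, TResult expected)
    {
        TResult actual = f(a, b);
        if (!Equals(actual, expected))
            throw new CheckException(
                string.Format("expected {0}, got {1}", expected, actual));
    }

    // Invoke f and require that it throws the given exception type,
    // addressing the "cannot catch by Type" limitation with a runtime check.
    public static void RunExpect<T1, T2, TResult>(
        Func2<T1, T2, TResult> f, T1 a, T2 b, Type exceptionType)
    {
        try
        {
            f(a, b);
        }
        catch (Exception e)
        {
            if (exceptionType.IsInstanceOfType(e))
                return;  // the expected failure occurred
            throw;       // wrong exception type: propagate it
        }
        throw new CheckException("expected " + exceptionType.Name + ", got no exception");
    }
}
```

Usage with a C# 2.0 anonymous delegate might look like `TestLib.RunCheck<int, int, int>(delegate(int x, int y) { return x + y; }, 2, 3, 5);`, and a failing check surfaces as a `CheckException` with the exact stack trace Helge favours.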


It took me a while to figure out how to use the VS 2005 Beta 2 built-in Team Test framework, but I'm fond of it now. Still exploring it; the walkthroughs demo _some_ of the capability, especially the code it generates to test private methods.

Worth downloading the beta to experiment with, IMO.
--
Grace + Peace,
Peter N Roth
Engineering Objects International
http://engineeringobjects.com
Home of Matrix.NET
Nov 17 '05 #14
It took me a while to figure out how to use
the VS 2005 beta 2 built-in Team TestFramework,
but I'm fond of it now. Still exploring it; the walkthroughs
demo _some_ of the capability, especially the
code it generates to test private methods.

Worth d/ling the beta to experiment with, imo.
--
Grace + Peace,
Peter N Roth
Engineering Objects International
http://engineeringobjects.com
Home of Matrix.NET
"Helge Jensen" <he**********@slog.dk> wrote in message
news:43**************@slog.dk...


Jon Skeet [C# MVP] wrote:
Helge Jensen <he**********@slog.dk> wrote:
>>Can you comment if this is possible and if there is an example great ?

nUnit does stuff like this.

I haven't seen anything in NUnit for that - could you point me at a
reference?


Nope, don't know much about NUnit; it was just my impression from the docs
I've read. Should have added ", doesn't it?".

I have (too) extensive experience with a C++ framework called xTest, and
jUnit, ...Fixtures and Suites, setUp, tearDown,... it's all coming back in
my dreams.
But I have a hard time seeing the point in these frameworks.

Do you mean you can't see the point in unit testing frameworks in
general, or just this kind of feature?


Especially this feature in particular, but also the frameworks in general.

Here it comes, some of it at least :)

unit-testing is *good*, it works and finds problems. how unit-test
*frameworks* are helping with that is unclear to me.

Tests are no different from any other code and should be written as such.
Test-code should not be inserted in "hot-spots" in a test-framework --
People bend their test-code to the weird syntax and conventions required by
the frameworks, at least that's what I've seen happen.

Instead a test-*library* should provide some handy functionality, like
run_check(f, 1, 2, expected_result); and run_expect(f, 1, 3,
typeof(IndexOutOfRangeException)), which will become the "test-protocol".

This is difficult in (at least) < C#2.0, since

- you need to explicitly declare and instantiate a callback that fits
the type of f
- you can not catch by Type (the type Type, understandable?), only by
"exception-declaration"
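One workaround for the catch-by-Type limitation, even in C# 1.x, is to catch Exception and test the runtime type; a sketch (Action0 and ExpectThrows are hypothetical names):

```csharp
using System;

delegate void Action0();   // C# 1.x has no built-in parameterless delegate

class CatchByType
{
    // Run f and require that it throws an exception assignable to 'expected'.
    static void ExpectThrows(Action0 f, Type expected)
    {
        try { f(); }
        catch (Exception e)
        {
            // A catch clause needs a compile-time exception declaration,
            // but the equivalent runtime test is straightforward:
            if (expected.IsInstanceOfType(e)) return;
            throw;   // wrong exception type: rethrow as a real failure
        }
        throw new Exception("expected " + expected.Name + ", got no exception");
    }
}
```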

This may have changed in C#2.0 but I haven't moved on from 1.1 yet. I have
a faint hope (from rumours and hearsay) that I will be able to use the
generics in C#2 and the implicit delegate support to implement such a
test-library. I have done it before, in C++ (thanks to the *amazing*
boost::function) and it works like a charm.

In the meantime I have made the following observations:

There are good ways to sequentially compose operations in C#:

op1(); op2();

It is more complicated to independently compose execution, but relatively
easy in most cases:

new Thread(new ThreadStart(op1)).Start();
new Thread(new ThreadStart(op2)).Start();

Exceptions are a *blazingly* good way of signalling that something is
wrong:

static void CHECK(bool b) { if (!b) throw new CheckException(); }

and the stack-trace tells you *exactly* what function failed, you can then
rerun the test in question setting a break-point at the failed line.

If you are running in debug-mode you can sneakily set a breakpoint in
CheckException.CheckException().

I don't use failure-counts anyway, it's either 0 or !0

If I want to run a specific subset of tests:

class SpecificSubsetOfTests {
    public static int Main() {
#if !DEBUG
        try {
#endif
            // just run the tests
#if !DEBUG
        } catch ( Exception e ) {
            Console.WriteLine("EXCEPTION:\n{0}", e);
            throw;
        }
#endif
        return 0;
    }
}

--
Helge Jensen
mailto:he**********@slog.dk
sip:he**********@slog.dk
-=> Sebastian cover-music: http://ungdomshus.nu <=-

Nov 17 '05 #15
Helge Jensen <he**********@slog.dk> wrote:
nUnit does stuff like this.
I haven't seen anything in NUnit for that - could you point me at a
reference?


nope, don't know much about nUnit, It was just my impression from the
docs i've read. Should have added ", doesn't it?".


Right - no, I don't believe it does.
I have (too) extensive experience with a C++ framework called xTest, and
jUnit, ...Fixtures and Suites, setUp, tearDown,... it's all coming back
in my dreams.
But I have a hard time seeing the point in these frameworks.
Do you mean you can't see the point in unit testing frameworks in
general, or just this kind of feature?


Especially this feature in particular, but also the frameworks in general.

Here it comes, some of it at least :)

unit-testing is *good*, it works and finds problems. how unit-test
*frameworks* are helping with that is unclear to me.


They make it very easy to write tests and then run them in an automated
manner, collecting the output in a nice form which can be attached to
build reports, etc.

How do you normally *run* your tests? With Eclipse and JUnit, I can do
it all within the IDE, debug into it easily, run as many or as few
tests as I want to etc. We have thousands of test cases over hundreds
of classes. Without using a test framework, how can I get the same
level of flexibility?
Tests are no different from any other code and should be written as
such. Test-code should not be inserted in "hot-spots" in a
test-framework -- People bend their test-code to the weird syntax and
conventions required by the frameworks, at least that's what I've seen happen.
Not sure what you mean here. There's no particularly weird syntax to
JUnit or NUnit - they're just straight Java or C#, potentially using
base classes which are provided by the framework.
Instead a test-*library* should provide some handy functionality, like
run_check(f, 1, 2, expected_result); and run_expect(f, 1, 3,
typeof(IndexOutOfRangeException)), which will become the "test-protocol".
Neither of those are particularly hard to do in NUnit or JUnit.
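For reference, a rough NUnit 2.x-style equivalent of those two helpers might look like the following; CheckType stands in for the method under test and Production is a hypothetical class holding it:

```csharp
using NUnit.Framework;

// Sketch in NUnit 2.x style; Production.CheckType is a stand-in for
// the method under test, not a real API.
[TestFixture]
public class CheckTypeTests
{
    // run_check-style: call the method and assert on its result.
    [Test]
    public void FlowerIsValid()
    {
        Assert.IsTrue(Production.CheckType(-1, 3, "flower"));
    }

    [Test]
    public void MenIsInvalid()
    {
        Assert.IsFalse(Production.CheckType(99, 3, "men"));
    }

    // run_expect-style: declare the exception type the call must throw.
    [Test, ExpectedException(typeof(IndexOutOfRangeException))]
    public void NullTypeThrows()
    {
        Production.CheckType(1, 3, null);
    }
}
```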
This is difficult in (at least) < C#2.0, since

- you need to explicitly declare and instantiate a callback that fits
the type of f
- you can not catch by Type (the type Type, understandable?), only by
"exception-declaration"

This may have changed in C#2.0 but I haven't moved on from 1.1 yet. I
have a faint hope (from rumours and hearsay) that I will be able to use
the generics in C#2 and the implicit delegate support to implement such
a test-library. I have done it before, in C++ (thanks to the *amazing*
boost::function) and it works like a charm.

In the meantime I have made the following observations:

There are good ways to sequentially compose operations in C#:

op1(); op2();

It is more complicated to independently compose execution, but
relatively easy in most cases:

new Thread(new ThreadStart(op1)).Start();
new Thread(new ThreadStart(op2)).Start();

Exceptions are a *blazingly* good way of signalling that something is wrong:

static void CHECK(bool b) { if (!b) throw new CheckException(); }

and the stack-trace tells you *exactly* what function failed, you can
then rerun the test in question setting a break-point at the failed line.
And guess what - that's exactly what JUnit and NUnit do.
If you are running in debug-mode you can sneakily set a breakpoint in
CheckException.CheckException().

I don't use failure-counts anyway, it's either 0 or !0
Whereas I don't want to have to run the whole test suite again after
fixing one test, only to find there's another one that's failing - and
repeat that for as many tests as are failing. Usually there aren't many
(or even any) failing tests, but I still want to have a confidence
level about the rest of the code even if one fails.

Currently we've got four tests which are failing, and have been failing
for a few days while we try to find a solution. Would you suggest that
it's pointless knowing whether any *other* tests (including new ones)
have started failing over those last few days?
If I want to run a specific subset of tests:

class SpecificSubsetOfTests {
    public static int Main() {
#if !DEBUG
        try {
#endif
            // just run the tests
#if !DEBUG
        } catch ( Exception e ) {
            Console.WriteLine("EXCEPTION:\n{0}", e);
            throw;
        }
#endif
        return 0;
    }
}


Whereas I can just select a package in Eclipse, and run all the tests
within that package and sub-packages. I've never needed to specify in
code a list of tests to run.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
Nov 17 '05 #16
