
A simple unit test framework

nw
Hi,

I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:

Thanks for your time!

#include <iostream>
#include <string>

class UnitTest {
private:
    int tests_failed;
    int tests_passed;
    int total_tests_failed;
    int total_tests_passed;
    std::string test_set_name;
    std::string current_file;
    std::string current_description;

public:
    // initializers listed in declaration order, which is the order
    // members are actually constructed in
    UnitTest(const std::string &test_set_name_in)
        : tests_failed(0),
          tests_passed(0),
          total_tests_failed(0),
          total_tests_passed(0),
          test_set_name(test_set_name_in),
          current_file(),
          current_description() {
        std::cout << "*** Test set : " << test_set_name << std::endl;
    }

    void begin_test_set(const std::string &description, const char *filename) {
        current_description = description;
        current_file = filename;
        tests_failed = 0;
        tests_passed = 0;
        std::cout << "****** Testing: " << current_description << std::endl;
    }

    void end_test_set() {
        std::cout << "****** Test : " << current_description << " complete, ";
        std::cout << "passed " << tests_passed << ", failed "
                  << tests_failed << "." << std::endl;
    }

    // TestType rather than _TestType: names beginning with an underscore
    // and a capital letter are reserved for the implementation
    template<class TestType>
    bool test(TestType t1, TestType t2, int linenumber) {
        bool test_result = (t1 == t2);

        if (!test_result) {
            std::cout << "****** FAILED : " << current_file << ","
                      << linenumber;
            std::cout << ": " << t1 << " is not equal to " << t2
                      << std::endl;
            total_tests_failed++;
            tests_failed++;
        } else {
            tests_passed++;
            total_tests_passed++;
        }
        return test_result; // the original declared bool but never returned
    }

    void test_report() {
        std::cout << "*** Test set : " << test_set_name << " complete, ";
        std::cout << "passed " << total_tests_passed;
        std::cout << " failed " << total_tests_failed << "." << std::endl;
        if (total_tests_failed != 0)
            std::cout << "*** TEST FAILED!" << std::endl;
    }
};

int main() {
    // Rectangle and Circle are assumed to be defined elsewhere; they are
    // not part of this example.
    UnitTest ut("Test Shapes");

    // Test class Rectangle
    ut.begin_test_set("Rectangle", __FILE__);
    // create a rectangle at position 0,0 with sides of length 10
    Rectangle r(0, 0, 10, 10);
    ut.test(r.is_square(), true, __LINE__);
    ut.test(r.area(), 100.0, __LINE__);

    // a 1x5 rectangle is not square, so this check should fail and
    // exercise the failure-reporting path
    Rectangle r2(0, 0, 1, 5);
    ut.test(r2.is_square(), true, __LINE__);
    ut.test(r2.area(), 5.0, __LINE__);
    ut.end_test_set();

    // Test class Circle
    // note: exact == comparison of computed doubles is fragile; see the
    // note following this post
    ut.begin_test_set("Circle", __FILE__);
    Circle c(0, 0, 10);
    ut.test(c.area(), 314.1592654, __LINE__);
    ut.test(c.circumference(), 62.831853080, __LINE__);

    ut.end_test_set();

    ut.test_report();

    return 0;
}
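One caveat worth noting about the example above: the Circle tests compare
computed doubles with operator==, which will only pass if area() happens to
round to exactly the literal given. Below is a hedged sketch of an
approximate-comparison member that could be added to UnitTest; the
test_approx name and the tolerance parameter are inventions here, not part
of the original class, and it needs <cmath> for std::fabs:

bool test_approx(double t1, double t2, double tolerance, int linenumber) {
    // pass if the two values agree to within the given tolerance
    bool test_result = std::fabs(t1 - t2) <= tolerance;
    if (!test_result) {
        std::cout << "****** FAILED : " << current_file << "," << linenumber;
        std::cout << ": " << t1 << " is not within " << tolerance
                  << " of " << t2 << std::endl;
        total_tests_failed++;
        tests_failed++;
    } else {
        tests_passed++;
        total_tests_passed++;
    }
    return test_result;
}

A call such as ut.test_approx(c.area(), 314.1592654, 1e-6, __LINE__) then
tolerates rounding in the trailing digits.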

May 3 '07 #1
nw wrote:
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:
http://cxxtest.sourceforge.net/
Here is a link to the C++ unit test framework I have been using. Take a
look - you might get an idea of how to improve your unit test framework...
May 3 '07 #2
nw wrote:
>
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:
A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run. A good tester
focuses on getting the code to fail.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 3 '07 #3
nw
A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run. A good tester
focuses on getting the code to fail.
Agreed. That was my motivation in providing a relatively simple, small
class which is really just a comparison function that, on failure,
prints out the file and line the test failed in. So I was going to
spend about half an hour talking about the features of C++ they'll need
(__LINE__, __FILE__, etc.) and introducing a simple framework, then
another half hour talking about designing tests to try and make their
code fail.

May 3 '07 #4
nw wrote:
>A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run. A good tester
focuses on getting the code to fail.

Agreed. That was my motivation in providing a relatively simple, small
class which is really just a comparison function that, on failure,
prints out the file and line the test failed in. So I was going to
spend about half an hour talking about the features of C++ they'll need
(__LINE__, __FILE__, etc.) and introducing a simple framework, then
another half hour talking about designing tests to try and make their
code fail.
Saying it four times doesn't make your point any stronger :-)

I would only suggest that you also try to add a test registry and some
macros so that __FILE__ and __LINE__ are not used in test cases.
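To sketch the macro half of that suggestion against the OP's framework: a
macro can capture __LINE__ at the call site, so individual test cases never
mention it. This is illustrative only; the UT_CHECK name is invented here
and is not part of Austria C++ or the original class.

#define UT_CHECK( ut, expr, expected ) \
    (ut).test( (expr), (expected), __LINE__ )

// usage, with the UnitTest object from the original post:
// UT_CHECK( ut, r.is_square(), true );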

In the Austria C++ unit test system, I use exceptions to indicate
failure. It's usually silly to continue with a test if part of it has
failed.

In Austria C++ there is also an assert macro "AT_TCAssert" for "Test
Case Assert" which is somewhat similar to:

ut.test(r.is_square(),true,__LINE__);

AT_TCAssert throws an "at::TestCase_Exception" when the assert fails and
provides a string describing the error.

Here is an example:

AT_TCAssert( m_value == A_enum, "Failed to get correct type" )

#define AT_TCAssert( x_expr, x_description ) \
if ( !( x_expr ) ) { \
throw TestCase_Exception( \
AT_String( x_description ), \
__FILE__, \
__LINE__ \
); \
} \
// end of macro

.... now that I think about it, that bare if() should be wrapped in a
do { } while (0) so the macro expands to a single statement.
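For reference, here is a sketch of the macro rewritten that way, using the
usual do { } while (0) idiom so it expands to a single statement that is
safe inside an unbraced if/else (a reconstruction, not the actual Austria
C++ source):

#define AT_TCAssert( x_expr, x_description ) \
    do { \
        if ( !( x_expr ) ) { \
            throw TestCase_Exception( \
                AT_String( x_description ), \
                __FILE__, \
                __LINE__ \
            ); \
        } \
    } while ( 0 )
// end of macro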

TestCase_Exception also grabs a stack trace and can print out a trace of
the place it is thrown.

May 3 '07 #8
Pete Becker wrote:
nw wrote:
>>
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:

A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run.
Unless the tests are written first!

--
Ian Collins.
May 4 '07 #9
Ian Collins wrote:
Pete Becker wrote:
>nw wrote:
>>I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:
A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run.
Unless the tests are written first!
You can't do coverage analysis or any other form of white-box testing on
code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 4 '07 #10
Pete Becker wrote:
Ian Collins wrote:
>Pete Becker wrote:
>>nw wrote:
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:

A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run.
Unless the tests are written first!

You can't do coverage analysis or any other form of white-box testing on
code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.
The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods). In this case you will not
have to do coverage analysis, but it is a plus. This way, the code you write
will be minimal and easier to understand and maintain.
I agree this way looks harder (and I am not using it), but I am sure
once you get used to it, your programming skills will improve drastically.
May 4 '07 #11
Pete Becker wrote:
Ian Collins wrote:
>Pete Becker wrote:
>>nw wrote:
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:

A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run.
Unless the tests are written first!

You can't do coverage analysis or any other form of white-box testing on
code that hasn't been written.
If you code test first (practice Test Driven Design/Development), you
don't have to do coverage analysis because your code has been written to
pass the tests. If code isn't required to pass a test, it simply
doesn't get written. Done correctly, TDD will give you full test
coverage for free.
There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.
I don't deny that. Always let testers write the black box product
acceptance tests. That way you get the interpretation of two differing
groups on the product requirements.

--
Ian Collins.
May 4 '07 #12
anon wrote:
Pete Becker wrote:
>Ian Collins wrote:
>>Pete Becker wrote:
nw wrote:
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:
>
A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run.
Unless the tests are written first!

You can't do coverage analysis or any other form of white-box testing
on code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.

The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods). In this case you will not
have to do coverage analysis, but it is a plus. This way, the code you write
will be minimal and easier to understand and maintain.
I agree this way looks harder (and I am not using it), but I am sure
once you get used to it, your programming skills will improve drastically.
I find it makes the design and code process easier and way more fun. No
more debugging sessions!

--
Ian Collins.
May 4 '07 #13
anon wrote:
....
The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods). In this case you will not
have to do coverage analysis, but it is a plus. This way, the code you write
will be minimal and easier to understand and maintain.
I agree this way looks harder (and I am not using it), but I am sure
once you get used to it, your programming skills will improve drastically.
*I* term this Test Driven Design, or TDD. TDD is used to mean different
things.

The minimal deliverable of a design is a doxygen-documented, compilable
header file and compilable (not linkable) unit test cases that
demonstrate the use of the API (not coverage - that comes as part of the
development).

I personally employ the TDD part of development almost exclusively. I
think I have used some form of this technique since around 1983. It results
in code that works and is easy (relatively speaking) to use.
May 4 '07 #14
anon wrote:
Pete Becker wrote:
>Ian Collins wrote:
>>Pete Becker wrote:
nw wrote:
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:
>
A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run.
Unless the tests are written first!

You can't do coverage analysis or any other form of white-box testing
on code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.

The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods). In this case you will not
have to do coverage analysis, but it is a plus.
That's not what coverage analysis refers to. Coverage analysis takes the
test cases that you've written and measures (speaking a bit informally)
how much of the code is actually tested by the test set. You can't make
that measurement until you've written the code.
This way, the code you write
will be minimal and easier to understand and maintain.
I agree this way looks harder (and I am not using it), but I am sure
once you get used to it, your programming skills will improve drastically.
When you write tests before writing code you're only doing black box
testing. Black box testing has some strengths, and it has some
weaknesses. White box testing (which includes coverage analysis)
complements black box testing. Excluding it because of some dogma about
only writing tests before writing code limits the kinds of things you
can discover through testing.

The problem is that, in general, you cannot test every possible set of
input conditions to a function. So you have to select two sets of test
cases: those that check that mainline operations are correct, and those
that are most likely to find errors. That second set requires knowledge
of how the code was written, so that you can probe its likely weak spots.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 4 '07 #15
Ian Collins wrote:
Pete Becker wrote:
>Ian Collins wrote:
>>Pete Becker wrote:
nw wrote:
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:
>
A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run.
Unless the tests are written first!
You can't do coverage analysis or any other form of white-box testing on
code that hasn't been written.

If you code test first (practice Test Driven Design/Development), you
don't have to do coverage analysis because your code has been written to
pass the tests.
No, that's not what coverage analysis means. See my other message.
If code isn't required to pass a test, it simply
doesn't get written. Done correctly, TDD will give you full test
coverage for free.
Nope. Test driven design cannot account for the possibility that a
function will use an internal buffer that holds N bytes, and has to
handle the edges of that buffer correctly. The specification says
nothing about N, just what output has to result from what input. What
are the chances that input data chosen from the specification will just
happen to be the right length to hit that off by one error? If you know
what N is, you can test N-1, N, and N+1 input bytes, with a much higher
chance of hitting bad code.
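As a concrete sketch of that boundary-value idea, using the UnitTest class
from the original post (process_buffer and the value of N are hypothetical,
invented here only to illustrate probing the edges):

// hypothetical function under test, returning true on success:
// bool process_buffer(const std::string &input);
const std::size_t N = 512; // N learned by reading the implementation
ut.test(process_buffer(std::string(N - 1, 'x')), true, __LINE__);
ut.test(process_buffer(std::string(N,     'x')), true, __LINE__);
ut.test(process_buffer(std::string(N + 1, 'x')), true, __LINE__);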
>There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.
I don't deny that. Always let testers write the black box product
acceptance tests. That way you get the interpretation of two differing
groups on the product requirements.
Good testers do far more than write black box tests (boring) and run
test suites (even more boring, and mechanical). Good testers know how to
write tests that find bugs, and once the code has been fixed so that it
passes all the tests, they start over again.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 4 '07 #16
Pete Becker wrote:
anon wrote:
>Pete Becker wrote:
>>>
You can't do coverage analysis or any other form of white-box testing
on code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.

The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods). In this case you will not
have to do coverage analysis, but it is a plus.

That's not what coverage analysis refers to. Coverage analysis takes the
test cases that you've written and measures (speaking a bit informally)
how much of the code is actually tested by the test set. You can't make
that measurement until you've written the code.
If you apply TDD correctly, you only write code to pass tests, so all of
your code is covered.
>This way, the code you write will be minimal and easier to understand
and maintain.
I agree this way looks harder (and I am not using it), but I am sure
once you get used to it, your programming skills will improve drastically.

When you write tests before writing code you're only doing black box
testing. Black box testing has some strengths, and it has some
weaknesses. White box testing (which includes coverage analysis)
complements black box testing. Excluding it because of some dogma about
only writing tests before writing code limits the kinds of things you
can discover through testing.
TDD advocates will (at least they should) always be acceptance test
advocates. These provide your white box testing.
The problem is that, in general, you cannot test every possible set of
input conditions to a function. So you have to select two sets of test
cases: those that check that mainline operations are correct, and those
that are most likely to find errors. That second set requires knowledge
of how the code was written, so that you can probe its likely weak spots.
Possibly, but doing so may not increase the code coverage. It may well
flush out failure modes. Once found, these should be fixed by adding a
unit test to reproduce the error.

--
Ian Collins.
May 5 '07 #17
Pete Becker wrote:
Ian Collins wrote:
>>>
I don't deny that. Always let testers write the black box product
acceptance tests. That way you get the interpretation of two differing
groups on the product requirements.

Good testers do far more than write black box tests (boring) and run
test suites (even more boring, and mechanical). Good testers know how to
write tests that find bugs, and once the code has been fixed so that it
passes all the tests, they start over again.
I never underestimate the ingenuity of good testers. My testers had
complete freedom to test the product how they wanted. They ended up
producing an extremely sophisticated test environment, which, being fully
automated, they didn't have to run!

--
Ian Collins.
May 5 '07 #18
Ian Collins wrote:
Pete Becker wrote:
>anon wrote:
>>Pete Becker wrote:
You can't do coverage analysis or any other form of white-box testing
on code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.

The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods). In this case you will not
have to do coverage analysis, but it is a plus.
That's not what coverage analysis refers to. Coverage analysis takes the
test cases that you've written and measures (speaking a bit informally)
how much of the code is actually tested by the test set. You can't make
that measurement until you've written the code.
If you apply TDD correctly, you only write code to pass tests, so all of
your code is covered.
Suppose you're writing test cases for the function log, which calculates
the logarithm of its argument. Internally, it will use different
techniques for various ranges of argument values. But the specification
for log, of course, doesn't tell you this, so your test cases aren't
likely to hit each of those ranges, and certainly won't make careful
probes near their boundaries. It's only by looking at the code that you
can write these tests.
>>This way, the code you write will be minimal and easier to understand
and maintain.
I agree this way looks harder (and I am not using it), but I am sure
once you get used to it, your programming skills will improve drastically.
When you write tests before writing code you're only doing black box
testing. Black box testing has some strengths, and it has some
weaknesses. White box testing (which includes coverage analysis)
complements black box testing. Excluding it because of some dogma about
only writing tests before writing code limits the kinds of things you
can discover through testing.
TDD advocates will (at least they should) always be acceptance test
advocates. These provide your white box testing.
Acceptance tests can be white box tests, but they can also be black box
tests.
>The problem is that, in general, you cannot test every possible set of
input conditions to a function. So you have to select two sets of test
cases: those that check that mainline operations are correct, and those
that are most likely to find errors. That second set requires knowledge
of how the code was written, so that you can probe its likely weak spots.
Possibly, but doing so may not increase the code coverage. It may well
flush out failure modes. Once found, these should be fixed by adding a
unit test to reproduce the error.
Well, yes, but the point is that focused testing based on knowledge of
the internals of the code (i.e. white box testing) is more likely to
find some kinds of bugs than tests written without looking into the
code, which is the only kind of test you can write before you've written
the code.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 5 '07 #19
Pete Becker wrote:
Ian Collins wrote:
>Pete Becker wrote:

If you apply TDD correctly, you only write code to pass tests, so all of
your code is covered.

Suppose you're writing test cases for the function log, which calculates
the logarithm of its argument. Internally, it will use different
techniques for various ranges of argument values. But the specification
for log, of course, doesn't tell you this, so your test cases aren't
likely to hit each of those ranges, and certainly won't make careful
probes near their boundaries. It's only by looking at the code that you
can write these tests.
Pete, I think you are missing the point of TDD.

It's easy for those unfamiliar with the process to focus on the "T" and
ignore the "DD". TDD is a tool for delivering better code, the tests
drive the design, they are not driven by it. So if I were tasked with
writing the function log, I'd start with a simple test, say log(10) and
then add more tests to cover the full range of inputs. These tests
would specify the behavior and drive the internals of the function.

Remember, if code isn't required to pass a test, it doesn't get written.
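A minimal sketch of what that first step might look like, framework aside
(my_log is the function under development; the expected values, tolerance,
and the std::log stand-in are illustrative assumptions, with a natural
logarithm intended):

#include <cassert>
#include <cmath>

// the function under development; in TDD this starts as a stub that
// fails the test (e.g. "return 0.0;") and grows until the tests pass.
// std::log stands in here only so the sketch runs as written.
double my_log(double x) { return std::log(x); }

int main() {
    // first test: one known value; later tests extend the input range
    assert(std::fabs(my_log(10.0) - 2.302585092994046) < 1e-12);
    assert(std::fabs(my_log(1.0) - 0.0) < 1e-12);
    return 0;
}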
>TDD advocates will (at least they should) always be acceptance test
advocates. These provide your white box testing.
Acceptance tests can be white box tests, but they can also be black box
tests.
>>The problem is that, in general, you cannot test every possible set of
input conditions to a function. So you have to select two sets of test
cases: those that check that mainline operations are correct, and those
that are most likely to find errors. That second set requires knowledge
of how the code was written, so that you can probe its likely weak
spots.
Possibly, but doing so may not increase the code coverage. It may well
flush out failure modes. Once found, these should be fixed by adding a
unit test to reproduce the error.

Well, yes, but the point is that focused testing based on knowledge of
the internals of the code (i.e. white box testing) is more likely to
find some kinds of bugs than tests written without looking into the
code, which is the only kind of test you can write before you've written
the code.
That's where we disagree: I view the testers as the customer's
advocates. They should be designing acceptance tests with and for the
customer that validate the product to the customer's satisfaction. In
situations where there is an identifiable customer, there are some
excellent automated acceptance test frameworks available to enable the
customer to design their own tests (FIT for example).

--
Ian Collins.
May 5 '07 #20

"nw" <ne*@soton.ac.ukwrote in message
news:11**********************@h2g2000hsg.googlegro ups.com...
Hi,

I previously asked for suggestions on teaching testing in C++.
i missed that.
... I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework,
sounds very painful
and then in a lab session see if I can get them to
write their own. ...
pm4ji, but please consider CDT with CxxTest
(http://cxxtest.sourceforge.net/) or CUTE in Eclipse; these will be much
less painful. please read this paper
(http://people.cs.vt.edu/~edwards/Res...s/Edwards.html) if you
have not already done so.

or, if you have Visual Studio 2005, you can use http://nunit.org/ and write
your tests in managed C++.

writing tests needs to be trivial or people will not write them.

thanks
May 5 '07 #21
Ian Collins wrote:
:: Pete Becker wrote:
::: Ian Collins wrote:
:::: Pete Becker wrote:
::::
:::: If you apply TDD correctly, you only write code to pass tests, so
:::: all of your code is covered.
::::
:::
::: Suppose you're writing test cases for the function log, which
::: calculates the logarithm of its argument. Internally, it will use
::: different techniques for various ranges of argument values. But the
::: specification for log, of course, doesn't tell you this, so your
::: test cases aren't likely to hit each of those ranges, and certainly
::: won't make careful probes near their boundaries. It's only by
::: looking at the code that you can write these tests.
:::
:: Pete, I think you are missing the point of TDD.
::
:: It's easy for those unfamiliar with the process to focus on the "T"
:: and ignore the "DD". TDD is a tool for delivering better code, the
:: tests drive the design, they are not driven by it. So if I were
:: tasked with writing the function log, I'd start with a simple test,
:: say log(10) and then add more tests to cover the full range of
:: inputs. These tests would specify the behavior and drive the
:: internals of the function.
::
:: Remember, if code isn't required to pass a test, it doesn't get
:: written.
::

So Pete will pass your first test with "return 1;".

How many more tests do you expect to write, before you are sure that Pete's
code is always no more than one unit off in the last decimal?

I know that he claims that it is. How can he do that?
Bo Persson
May 5 '07 #22
Bo Persson wrote:
Ian Collins wrote:
:: Pete Becker wrote:
::: Ian Collins wrote:
:::: Pete Becker wrote:
::::
:::: If you apply TDD correctly, you only write code to pass tests, so
:::: all of your code is covered.
::::
:::
::: Suppose you're writing test cases for the function log, which
::: calculates the logarithm of its argument. Internally, it will use
::: different techniques for various ranges of argument values. But the
::: specification for log, of course, doesn't tell you this, so your
::: test cases aren't likely to hit each of those ranges, and certainly
::: won't make careful probes near their boundaries. It's only by
::: looking at the code that you can write these tests.
:::
:: Pete, I think you are missing the point of TDD.
::
:: It's easy for those unfamiliar with the process to focus on the "T"
:: and ignore the "DD". TDD is a tool for delivering better code, the
:: tests drive the design, they are not driven by it. So if I were
:: tasked with writing the function log, I'd start with a simple test,
:: say log(10) and then add more tests to cover the full range of
:: inputs. These tests would specify the behavior and drive the
:: internals of the function.
::
:: Remember, if code isn't required to pass a test, it doesn't get
:: written.
::

So Pete will pass your first test with "return 1;".
Yes.
How many more tests do you expect to write, before you are sure that Pete's
code is always no more than one unit off in the last decimal?
You miss the point as well, I wouldn't be testing Pete's code, I'd be
developing my own. TDD is *NOT* retrospective testing.

--
Ian Collins.
May 5 '07 #23
Ian Collins wrote:
Pete Becker wrote:
>Ian Collins wrote:
>>I don't deny that. Always let testers write the black box product
acceptance tests. That way you get the interpretation of two differing
groups on the product requirements.
Good testers do far more than write black box tests (boring) and run
test suites (even more boring, and mechanical). Good testers know how to
write tests that find bugs, and once the code has been fixed so that it
passes all the tests, they start over again.
I never underestimate the ingenuity of good testers. My testers had
complete freedom to test the product how they wanted. They ended up
producing an extremely sophisticated test environment, which, being fully
automated, they didn't have to run!
When I was a QA manager I'd bristle whenever any developer said they'd
"let testers" do something. That's simply wrong. Testing is not an
adjunct to development. Testing is a profession, with technical
challenges that differ from, and often exceed in complexity, those posed
by development. The skills required to do it well are vastly different,
but no less sophisticated, than those needed to write the product
itself. Most developers think they can write good tests, but in reality,
they simply don't have the right skills, and test suites written by
developers are usually naive. A development manager who gives testers
"complete freedom" is missing the point: that's not something the
manager can give or take, it's an essential part of effective testing.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 5 '07 #24
Pete Becker wrote:
Ian Collins wrote:
>Pete Becker wrote:
>>Ian Collins wrote:
I don't deny that. Always let testers write the black box product
acceptance tests. That way you get the interpretation of two differing
groups on the product requirements.

Good testers do far more than write black box tests (boring) and run
test suites (even more boring, and mechanical). Good testers know how to
write tests that find bugs, and once the code has been fixed so that it
passes all the tests, they start over again.
I never underestimate the ingenuity of good testers. My testers had
complete freedom to test the product how they wanted. They ended up
producing an extremely sophisticated test environment, which, being fully
automated, they didn't have to run!

When I was a QA manager I'd bristle whenever any developer said they'd
"let testers" do something. That's simply wrong. Testing is not an
adjunct to development. Testing is a profession, with technical
challenges that differ from, and often exceed in complexity, those posed
by development. The skills required to do it well are vastly different,
but no less sophisticated, than those needed to write the product
itself. Most developers think they can write good tests, but in reality,
they simply don't have the right skills, and test suites written by
developers are usually naive. A development manager who gives testers
"complete freedom" is missing the point: that's not something the
manager can give or take, it's an essential part of effective testing.
I'm not sure if you are agreeing or disagreeing with me here!

--
Ian Collins.
May 5 '07 #25
Ian Collins wrote:
:: Bo Persson wrote:
::: Ian Collins wrote:
::::: Pete Becker wrote:
:::::: Ian Collins wrote:
::::::: Pete Becker wrote:
:::::::
::::::: If you apply TDD correctly, you only write code to pass tests,
::::::: so all of your code is covered.
:::::::
::::::
:::::: Suppose you're writing test cases for the function log, which
:::::: calculates the logarithm of its argument. Internally, it will use
:::::: different techniques for various ranges of argument values. But
:::::: the specification for log, of course, doesn't tell you this, so
:::::: your test cases aren't likely to hit each of those ranges, and
:::::: certainly won't make careful probes near their boundaries. It's
:::::: only by looking at the code that you can write these tests.
::::::
::::: Pete, I think you are missing the point of TDD.
:::::
::::: It's easy for those unfamiliar with the process to focus on the
::::: "T" and ignore the "DD". TDD is a tool for delivering better
::::: code, the tests drive the design, they are not driven by it. So
::::: if I were tasked with writing the function log, I'd start with a
::::: simple test, say log(10) and then add more tests to cover the
::::: full range of inputs. These tests would specify the behavior and
::::: drive the internals of the function.
:::::
::::: Remember, if code isn't required to pass a test, it doesn't get
::::: written.
:::::
:::
::: So Pete will pass your first test with "return 1;".
:::
:: Yes.
::
::: How many more tests do you expect to write, before you are sure
::: that Pete's code is always no more than one unit off in the last
::: decimal?
:::
:: You miss the point as well, I wouldn't be testing Pete's code, I'd be
:: developing my own. TDD is *NOT* retrospective testing.
::

Ok, then. If you were to develop a log() function, how many test cases would
you need for black box testing?
Bo Persson
May 5 '07 #26
Bo Persson wrote:
>
Ok, then. If you were to develop a log() function, how many test cases would
you need for black box testing?
One for each floating-point value. <g>

Don't get too hung up on the distinction between black box and white box
testing (I'm arguing against it, even though I introduced it into the
discussion). The point is that TDD is iterative; as you understand the
implementation of log() better, you recognize its bad spots, add test
cases that hit them, and then change the implementation in response to
those tests. But that's a more sophisticated statement than "The latest
trends are to write tests first which demonstrate the requirements,
then code (classes+methods). In this case you will not have to do
coverage analysis, but it is a plus", which started this subthread.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 5 '07 #27
Bo Persson wrote:
....
Ok, then. If you were to develop a log() function, how many test cases would
you need for black box testing?
Ah - that's a design question.

What are your design objectives ?

I assume you want to log various information.

Design objectives are like:

a) Must be configurable for different levels (error, warning, debug etc)
b) Must not consume compute resources when the level is not being used
c) Must be able to optimize away logging levels not wanted in release code.
d) Must be as simple as possible to use.
e) Formatting of log message must be not part of the mainline code
f) Must avoid problems with missing end logs etc
g) Must handle the order of initialization issues and still collect the
logging information even though no output has been determined
h) Must handle race conditions of multiple threads writing logs
simultaneously

...............
first unit test for logging
..............

AT_DefineTest( Log, Log, "Basic Logging test" )
{

void Run()
{

// set up a preference manager
// usually set up once for all the application
// preferences.
Ptr< ATLOGTEST_PreferenceManager * > l_pm =
new ATLOGTEST_PreferenceManager;

// set up a logging manager
ATLOGTEST_LogManager l_lm;

{

// set up a place to write logging messages
ATLOGTEST_LogWriter l_lw( l_lm );

// create a loglevel preference
Preference< int >
l_loglevel( "loglevel", ~0, "/FooProgram", l_pm );

// start a logging section (compound statement)
AT_STARTLOG(
l_lm, l_loglevel, AT_LOGLEVEL_SEVERE, "Some Description"
)

at_log("Some Variables")
<< 55 << 123.456 << "Some String";

AT_ENDLOG

AT_STARTLOG(
l_lm, l_loglevel, AT_LOGLEVEL_SEVERE, "Some Description"
)

at_log("Some Variables")
<< 55 << 123.456 << "Some String";
AT_ENDLOG

}
}
};

AT_RegisterTest( Log, Log );
Now you can ask why we used the AT_STARTLOG...AT_ENDLOG macros.

The start of a logging block has to be a macro to begin with since it
must be an if statement because of requirements b and c.

The data is stored in a temporary log session and at AT_ENDLOG it
destructs and pushes out the log all in one go - satisfying requirement h).

You need to set up a preference value for requirement a. Hence the
preference manager.

One of my prior implementations of a logging interface used an interface
like:

STARTLOG(...) { log( "blah" ) << blah; }

.... which also satisfied requirements b and c, however, a few too many
times, an ending "}" went astray which caused all kinds of hard-to-find
bugs (which worked when debug logging was turned on and failed when
debug logging was turned off... very annoying. So the {...} was
replaced with START() log( "blah" ) << blah; END which never caused any
problems whatsoever. It was about 5 years ago now so I don't remember
the exact details. We got to the point of writing a little lex script
to run through the code looking for possible problems.
This is a somewhat simplistic account of the issues but the point is
that the test case was written first. The rest of the code is far more
complex.
I'm not sure if there were any more tests written for the logging
interface but it was used throughout the application so it was tested
quite a lot as part of other tests.
May 5 '07 #28
Pete Becker wrote:
....
When I was a QA manager I'd bristle whenever any developer said they'd
"let testers" do something. That's simply wrong. Testing is not an
adjunct to development. Testing is a profession, with technical
challenges that differ from, and often exceed in complexity, those posed
by development. The skills required to do it well are vastly different,
but no less sophisticated, than those needed to write the product
itself. Most developers think they can write good tests, but in reality,
they simply don't have the right skills, and test suites written by
developers are usually naive. A development manager who gives testers
"complete freedom" is missing the point: that's not something the
manager can give or take, it's an essential part of effective testing.
The most successful teams I worked with were teams that wrote their own
tests.

I find the distinction of being a "programmer" or a "test programmer" to
be counterproductive.

Testability is a primary objective. (one of my tenets for good software
development).

Without something being testable, it can't be maintained. Maintaining
and developing are the same thing. Hence, can't be tested means can't
be developed.

Yes, there are edge cases where the developer is unable to fully test
the code. That's ok, tests don't have to be perfect first time, as
problems are found the developer gets to learn and develop better tests.
I almost always now resort to at least one Monte Carlo test:
(construct/randomly twiddle/destruct) large numbers of times in random
orders with random data.

The latest code I did that for was this:
AT_DefineTest( MonteRangeMap, RangeMap, "Basic RangeMap test" )
{

void Run()
{

TestBoth< unsigned > l_rm;

std::srand( 39000 );

int range = 60;

for ( int i = 0; i < 10000; ++i )
{

unsigned l_begin = std::rand() % range;
unsigned l_end = std::rand() % range;

bool operation = ( 2 & std::rand() ) == 0;

if ( operation )
{
l_rm.SubtractRange( l_begin, l_end );
}
else
{
l_rm.AddRange( l_begin, l_end );
}

}
}

};

AT_RegisterTest( MonteRangeMap, RangeMap );
.... it found more bugs in my code than any of the hard coded tests I
did. I also ran this test through valgrind just to make sure I was not
messing with memory.

I find the Monte Carlo tests to be very effective. The first time I
started employing these tests, I was met with criticism from developers
telling me that it was testing cases that would never happen in the real
world. Well, there was one failure I was getting on my Monte Carlo test
at the time that I pushed to get fixed, and the pushback was nah, never
happens. When we finally released the code, we were getting sporadic
failures that ... guess what ... were the same failures that the Monte
Carlo test had raised... the ones the other engineers said could not
happen.

So, surviving a Monte Carlo test is required for "testability" and
"testability is a primary objective" and so any monte-carlo test
failures are now failures of the primary objective so they go to the top
of the bug list. It's amazing how solid the codebase gets once you push
this. Not to mention the rise in productivity because the engineers are
not wasting their time finding someone else's bugs.
May 5 '07 #29
Gianni Mariani wrote:
Bo Persson wrote:
...
>Ok, then. If you were to develop a log() function, how many test cases
would you need for black box testing?

Ah - that's a design question.

What are your design objectives ?

I assume you want to log various information.
Well, no. log() in this case is the logarithm function that I used as an
example earlier in this thread.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 5 '07 #30
Gianni Mariani wrote:
>
I find the distinction of being a "programmer" or a "test programmer" to
be counterproductive.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 5 '07 #31
Gianni Mariani wrote:
Pete Becker wrote:
...
>When I was a QA manager I'd bristle whenever any developer said they'd
"let testers" do something. That's simply wrong. Testing is not an
adjunct to development. Testing is a profession, with technical
challenges that differ from, and often exceed in complexity, those
posed by development. The skills required to do it well are vastly
different, but no less sophisticated, than those needed to write the
product itself. Most developers think they can write good tests, but
in reality, they simply don't have the right skills, and test suites
written by developers are usually naive. A development manager who
gives testers "complete freedom" is missing the point: that's not
something the manager can give or take, it's an essential part of
effective testing.

The most successful teams I worked with were teams that wrote their own
tests.

I find the distinction of being a "programmer" or a "test programmer" to
be counterproductive.
I do, too, because those particular terms suggest a false hierarchy. A
better distinction might be between an application programmer and a test
programmer. The fact remains that developers rarely have the skills to
write good tests or the mindset to write good tests.
>
Testability is a primary objective. (one of my tenets for good software
development).

Without something being testable, it can't be maintained. Maintaining
and developing are the same thing. Hence, can't be tested means can't
be developed.

Yes, there are edge cases where the developer is unable to fully test
the code. That's ok, tests don't have to be perfect first time, as
problems are found the developer gets to learn and develop better tests.
I almost always now resort to at least one Monte Carlo test:
(construct/randomly twiddle/destruct) large numbers of times in random
orders with random data.
Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem. <g>

As I've said several times, developing and testing involve two distinct
sets of skills. Developers think they're good at testing, but any
professional tester will tell you that they aren't.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
May 5 '07 #32
On May 4, 3:01 pm, anon <a...@no.no> wrote:
Pete Becker wrote:
Ian Collins wrote:
Pete Becker wrote:
nw wrote:
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:
>A fool with a tool is still a fool. The challenge in testing is not test
management, but designing test cases to cover the possible failures in
the code under test. That's something that most developers don't do
well, because their focus is on getting the code to run.
Unless the test are written first!
You can't do coverage analysis or any other form of white-box testing on
code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.
The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods).
The latest trend where? Certainly not in any company concerned
with good management, or quality software.
In this case you will not
have to do coverage analysis, but it is a plus. This way, the code you write
will be minimal and easier to understand and maintain.
And will not necessarily meet requirements, or even be useful.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 5 '07 #33
On May 4, 10:51 pm, Ian Collins <ian-n...@hotmail.com> wrote:
anon wrote:
Pete Becker wrote:
Ian Collins wrote:
Pete Becker wrote:
nw wrote:
I previously asked for suggestions on teaching testing in C++. Based
on some of the replies I received I decided that the best way to proceed
would be to teach the students how they might write their own unit
test framework, and then in a lab session see if I can get them to
write their own. To give them an example I've created the following
UTF class (with a simple test program following). I would welcome any
suggestions on how anybody here feels this could be improved:
>>A fool with a tool is still a fool. The challenge in
testing is not test management, but designing test cases
to cover the possible failures in the code under test.
That's something that most developers don't do well,
because their focus is on getting the code to run.
Unless the tests are written first!
You can't do coverage analysis or any other form of white-box testing
on code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.
The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods). In this case you will not
have to do coverage analysis, but it is a plus. This way, the code you write
will be minimal and easier to understand and maintain.
I agree this way looks harder (and I am not using it), but I am sure
once you get used to it, your programming skills will improve drastically.
I find it makes the design and code process easier and way more fun. No
more debugging sessions!
Until the code starts actually being used.

I'll admit that it's probably more fun to just write whatever
you want, rather than to have a hard specification which has to
be met, but the companies I work for want something useful.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 5 '07 #34
On May 5, 1:56 am, Ian Collins <ian-n...@hotmail.com> wrote:
Pete Becker wrote:
anon wrote:
Pete Becker wrote:
>You can't do coverage analysis or any other form of white-box testing
on code that hasn't been written. There is a big difference between a
tester's mindset and a developer's mindset, and it's very hard for one
person to do both.
The latest trends are to write tests first which demonstrate the
requirements, then code (classes+methods). In this case you will not
have to do coverage analysis, but it is a plus.
That's not what coverage analysis refers to. Coverage analysis takes the
test cases that you've written and measures (speaking a bit informally)
how much of the code is actually tested by the test set. You can't make
that measurement until you've written the code.
If you apply TDD correctly, you only write code to pass tests, so all of
your code is covered.
Yes, but nobody but an idiot would pay you for such a thing.
Thread safety, to cite but the most obvious example, isn't
testable, so you just ignore it?

My customers want to know what the code will do, and how much
development will cost, before they allocate the resources to
develope it. Which means that I have a requirements
specification which has to be met.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 5 '07 #35
On May 5, 2:31 am, Ian Collins <ian-n...@hotmail.com> wrote:
Pete Becker wrote:
Ian Collins wrote:
Pete Becker wrote:
If you apply TDD correctly, you only write code to pass tests, so all of
your code is covered.
Suppose you're writing test cases for the function log, which calculates
the logarithm of its argument. Internally, it will use different
techniques for various ranges of argument values. But the specification
for log, of course, doesn't tell you this, so your test cases aren't
likely to hit each of those ranges, and certainly won't make careful
probes near their boundaries. It's only by looking at the code that you
can write these tests.
Pete, I think you are missing the point of TDD.

It's easy for those unfamiliar with the process to focus on the "T" and
ignore the "DD". TDD is a tool for delivering better code, the tests
drive the design, they are not driven by it.
Which, of course, is entirely backwards.
So if I were tasked with
writing he function log, I'd start with a simple test, say log(10) and
then add more tests to cover the full range of inputs.
That is, of course, one solution. It's theoretically possible
for log, but it will take several hundred centuries of CPU time,
which means that it's not very practical. In practice, the way
you verify that a log function is correct is with code review,
with testing of the border cases (which implies what Pete is
calling white box testing), to back up the review.
These tests
would specify the behavior and drive the internals of the function.
In this case, I think that the behavior is specified
beforehand. It is a mathematical function, after all, and we can know
the precise result for every possible input. In practice, of
course, it isn't at all possible to test it, at least not
exhaustively.
Remember, if code isn't required to pass a test, it doesn't get written.
So your log function only has to produce correct results for the
limited set of values you use to test it? I hope I never have
to use a library you wrote.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 5 '07 #36
On May 5, 3:08 pm, "Bo Persson" <b...@gmb.dkwrote:
Ok, then. If you were to develop a log() function, how many
test cases would you need for a black box testing?
In this case, it's not really just a question of black box
versus white box. To effectively test log(), you need something
around 1.8E19 tests (one for every possible double argument). At
one test per microsecond (and I doubt that you can do them that
fast), that's nearly 6000 centuries.

If you want to be relatively sure of the correctness of your
function, and still deliver it on schedule, you will take two
steps:

-- you will have it reviewed by people who know numeric
processing, for possible errors, and

-- you will write tests based on knowledge of the algorithm
used, to verify the limit cases for that algorithm.

Pete's point is, I think, that you cannot do the latter until
you've decided what algorithm(s) to use. In the case of a
function like log(), you can't simply test 100% of the possible
input, and you don't know what is important to test until you've
actually written the code.

Note that the two steps above work together. The people doing
the code review also review the tests, and verify that your
choice of test cases was appropriate for the code you wrote.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 5 '07 #37
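To give the flavor of that second step, here is a hedged sketch of what such white-box boundary tests might look like, assuming an IEEE 754 double and the standard <cmath>; the inputs are chosen from knowledge of how a typical implementation works (argument reduction around powers of two, special handling at the domain edges), not from the black-box specification alone.

#include <cassert>
#include <cfloat>
#include <cmath>

// White-box boundary probes for log(): a handful of inputs chosen from
// knowledge of the implementation, not an exhaustive sweep of all
// 1.8E19 possible arguments.
int main()
{
    // Exact case any reasonable specification pins down.
    assert(std::log(1.0) == 0.0);

    // Consistency across an argument-reduction boundary:
    // log(2^k * m) should equal k*log(2) + log(m).
    assert(std::fabs(std::log(4.0) - 2.0 * std::log(2.0)) <= 1e-15);
    assert(std::fabs(std::log(0.5) + std::log(2.0)) <= 1e-15);

    // Domain edges, assuming IEEE 754 semantics.
    double tiny = std::log(DBL_MIN);
    assert(tiny < 0.0 && tiny > -1000.0);   // finite, roughly -708
    assert(std::log(0.0) < -DBL_MAX);       // -infinity
    double not_a_number = std::log(-1.0);   // domain error yields NaN
    assert(not_a_number != not_a_number);   // NaN compares unequal

    return 0;
}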
James Kanze wrote:
On May 4, 10:51 pm, Ian Collins <ian-n...@hotmail.comwrote:
>anon wrote:
>>The latest trend is to write tests first, which demonstrate the
requirements, then the code (classes+methods). In this case you will
not have to do coverage analysis, though it is a plus. This way, the
code you write will be minimal and easier to understand and maintain.
I agree this way looks harder (and I am not using it), but I am sure
once you get used to it your programming skills will improve drastically.
>I find it makes the design and code process easier and way more fun. No
more debugging sessions!

Until the code starts actually being used.
No, The first embedded product my team developed using TDD has had about
half a dozen defects reported in 5 years and several thousand units of
field use. They were all in bits of the code with poor tests.
I'll admit that it's probably more fun to just write whatever
you want, rather than to have a hard specification which has to
be met, but the companies I work for want something useful.
What a load of bollocks. Who said anything about "write whatever you
want"? If you are going to slag something off, please understand it and
try it first.
--
James Kanze (Gabi Software) email: ja*********@gmail.com
Please fix this!

--
Ian Collins.
May 5 '07 #38
James Kanze wrote:
On May 5, 3:08 pm, "Bo Persson" <b...@gmb.dkwrote:
>Ok, then. If you were to develop a log() function, how many
test cases would you need for a black box testing?

In this case, it's not really just a question of black box
versus white box. To effectively test log(), you need something
around 1.8E19 tests (one for every possible double argument). At
one test per microsecond (and I doubt that you can do them that
fast), that's nearly 6000 centuries.

If you want to be relatively sure of the correctness of your
function, and still deliver it on schedule, you will take two
steps:

-- you will have it reviewed by people who know numeric
processing, for possible errors, and

-- you will write tests based on knowledge of the algorithm
used, to verify the limit cases for that algorithm.
Better still, sit down with someone who knows numeric processing and
write the tests and the function, together.
Pete's point is, I think, that you cannot do the latter until
you've decided what algorithm(s) to use.
It can also work the other way: you are unsure of the algorithm, so you
start by writing a few simple test cases, and as these become more
complex, the algorithm finds itself. I did this with polynomial fit a
while back, starting with a test for a two-point fit, then three, then a
quadratic and so on. I reached the general case in about a dozen tests.

--
Ian Collins.
May 5 '07 #39
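For what it's worth, a sketch of what the first increment in such a progression might look like; the names (Line, fit_line) are hypothetical, and the implementation is deliberately only as general as the single test so far requires.

#include <cassert>
#include <cmath>
#include <vector>

struct Line { double a0, a1; }; // y = a0 + a1 * x

// Hypothetical first increment: only general enough to pass the
// two-point test below. Later tests (three points, a quadratic, ...)
// would force it toward a real least-squares fit.
Line fit_line(const std::vector<double>& x, const std::vector<double>& y)
{
    double a1 = (y[1] - y[0]) / (x[1] - x[0]); // slope through the two points
    double a0 = y[0] - a1 * x[0];              // intercept
    Line l = { a0, a1 };
    return l;
}

int main()
{
    // The first test in the progression: an exact fit through (0,1) and (2,5).
    std::vector<double> x, y;
    x.push_back(0.0); y.push_back(1.0);
    x.push_back(2.0); y.push_back(5.0);
    Line l = fit_line(x, y);
    assert(std::fabs(l.a0 - 1.0) < 1e-12);
    assert(std::fabs(l.a1 - 2.0) < 1e-12);
    return 0;
}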
* Ian Collins:
>
>--
James Kanze (Gabi Software) email: ja*********@gmail.com

Please fix this!
James is (for good reasons) posting via Google, which ingeniously strips
off the space at the end of the signature delimiter.

The problem with Google: just like Microsoft there seems to be a
high-level management decision to gather as much feedback as possible,
countered by low-level management decisions to sabotage that so that it
looks like it's in place, but actually impossible for anyone to report
anything (you're redirected to irrelevant places, submit buttons don't
work, nothing is actually received, no relevant category is listed, no
mail address available, and so on ad nauseam).

Hence, Google Earth places Norway in Sweden, Google Groups strips off
significant spaces, and so on and so forth, and even though thousands
and tens of thousands /try/ to report this, Google's as unwise as ever
about its failings. The price of becoming a behemoth company, and what
I'm speculating is probably the reason: lying cheating weasels are
attracted like moths to a flame, and form the lower management echelons.
Oh, sorry, this is off-topic in clc++m, but it sure felt good to get
that off my chest!

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
May 5 '07 #40
* Alf P. Steinbach:
* Ian Collins:
>>
>>--
James Kanze (Gabi Software) email: ja*********@gmail.com

Please fix this!

James is (for good reasons) posting via Google, which ingeniously strips
off the space at the end of the signature delimiter.

The problem with Google: just like Microsoft there seems to be a
high-level management decision to gather as much feedback as possible,
countered by low-level management decisions to sabotage that so that it
looks like it's in place, but actually impossible for anyone to report
anything (you're redirected to irrelevant places, submit buttons don't
work, nothing is actually received, no relevant category is listed, no
mail address available, and so on ad nauseam).

Hence, Google Earth places Norway in Sweden, Google Groups strips off
significant spaces, and so on and so forth, and even though thousands
and tens of thousands /try/ to report this, Google's as unwise as ever
about its failings. The price of becoming a behemoth company, and what
I'm speculating is probably the reason: lying cheating weasels are
attracted like moths to a flame, and form the lower management echelons.
Oh, sorry, this is off-topic in clc++m, but it sure felt good to get that
off my chest!
(James: no, I didn't follow up on you-know-what.)

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
May 5 '07 #41
James Kanze wrote:
On May 4, 3:01 pm, anon <a...@no.nowrote:
>The latest trends are to write tests first which demonstrates the
requirements, then code (classes+methods).

The latest trend where? Certainly not in any company concerned
with good management, or quality software.
Have you ever been in charge of a company's software development? I
have and the best thing I ever did to improve both the productivity of
the teams and quality of the code was to introduce eXtreme Programming,
which includes TDD as a core practice.

Our delivery times and field defect rates more than vindicated the change.

--
Ian Collins.
May 5 '07 #42
Pete Becker wrote:
Gianni Mariani wrote:
>Pete Becker wrote:
...
>>When I was a QA manager I'd bristle whenever any developer said
they'd "let testers" do something. That's simply wrong. Testing is
not an adjunct to development. Testing is a profession, with
technical challenges that differ from, and often exceed in
complexity, those posed by development. The skills required to do it
well are vastly different, but no less sophisticated, than those
needed to write the product itself. Most developers think they can
write good tests, but in reality, they simply don't have the right
skills, and test suites written by developers are usually naive. A
development manager who gives testers "complete freedom" is missing
the point: that's not something the manager can give or take, it's an
essential part of effective testing.

The most successful teams I worked with were teams that wrote their
own tests.

I find the distinction of being a "programmer" or a "test programmer"
to be counterproductive.

I do, too, because those particular terms suggest a false hierarchy. A
better distinction might be between an application programmer and a test
programmer. The fact remains that developers rarely have the skills or
the mindset to write good tests.
So we do agree! That's pretty much the job title I gave my test
developers. They were just as much developers as those who developed
the application code. Their key skills were designing good tests and
knowing what tools to use to write, run and report on those tests.

--
Ian Collins.
May 5 '07 #43
Alf P. Steinbach wrote:
* Ian Collins:
>>
>>--
James Kanze (Gabi Software) email: ja*********@gmail.com

Please fix this!

James is (for good reasons) posting via Google, which ingeniously strips
off the space at the end of the signature delimiter.

The problem with Google: just like Microsoft there seems to be a
high-level management decision to gather as much feedback as possible,
countered by low-level management decisions to sabotage that so that it
looks like it's in place, but actually impossible for anyone to report
anything (you're redirected to irrelevant places, submit buttons don't
work, nothing is actually received, no relevant category is listed, no
mail address available, and so on ad nauseam).

Hence, Google Earth places Norway in Sweden, Google Groups strips off
significant spaces, and so on and so forth, and even though thousands
and tens of thousands /try/ to report this, Google's as unwise as ever
about its failings. The price of becoming a behemoth company, and what
I'm speculating is probably the reason: lying cheating weasels are
attracted like moths to a flame, and form the lower management echelons.
Oh, sorry, this' off-topic in clc++m, but it sure felt good to get that
off my chest!
A rant a day keeps the ulcers away!

--
Ian Collins.
May 5 '07 #44
Pete Becker wrote:
Gianni Mariani wrote:
>Bo Persson wrote:
...
>>Ok, then. If you were to develop a log() function, how many test
cases would you need for a black box testing?

Ah - that's a design question.

What are your design objectives ?

I assume you want to log various information.

Well, no. log() in this case is the logarithm function that I used as an
example earlier in this thread.
Oh - well that's why you need a spec !!!!
May 5 '07 #45
Pete Becker wrote:
....
>
Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem. <g>
I have yet to meet a "test" developer that can beat the Monte Carlo test
for coverage.

OK - I agree, there are cases where a Monte Carlo test will never be
able to test adequately, but as a rule, it is better to have a MC test
than not. I have uncovered more legitimate problems from the MC test
than from carefully crafted tests.
>
As I've said several times, developing and testing involve two distinct
sets of skills. Developers think they're good at testing, but any
professional tester will tell you that they aren't.
I challenge you. I don't think of myself as a tester. I believe you
can't do a better job than I in testing my code. Let's use this "range
map" as an example.

I have attached the header file and the test cases.

Your claim that I "don't understand testing well enough" and so I use an
MC test is, I think, short-sighted.

For example, MC tests are the only tests that I have been able to truly
test multithreaded code. It is nigh impossible for me (or any human) to
truly understand all the interactions in a MT scenario. MC tests almost
always will push every edge of the problem space.

Again, that's not to say there are no systematic errors that random
tests will miss, but those types of errors are exactly the kind that a
good developer knows exist and tests for, or even designs around.
//
// The Austria library is copyright (c) Gianni Mariani 2004.
//
// Grant Of License. Grants to LICENSEE the non-exclusive right to use the Austria
// library subject to the terms of the LGPL.
//
// A copy of the license is available in this directory or one may be found at this URL:
// http://www.gnu.org/copyleft/lesser.txt
//
/**
* at_rangemap.h
*
*/

#ifndef x_at_rangemap_h_x
#define x_at_rangemap_h_x 1

#include "at_exports.h"
#include "at_os.h"
#include "at_assert.h"

#include <map>

// Austria namespace
namespace at
{
// ======== TypeRange =================================================
/**
* TypeRange describes the range of a particular type
*
*/

template <typename w_RangeType>
class TypeRange
{
public:

// range type
typedef w_RangeType t_RangeType;
// ======== Adjacent ==============================================
/**
* Adjacent returns true if the two parameters are "one apart"
*
* @param i_lesser is the lesser of the two values
* @param i_greater is the greater of the two
* @return true if no other elements exist between i_lesser and i_greater
*/

static bool Adjacent(
const t_RangeType & i_lesser,
const t_RangeType & i_greater
) {

t_RangeType l_greater_less( i_greater );

-- l_greater_less; // go to the earlier element

// deal with wrapping
if ( i_greater < l_greater_less )
{
return false;
}

return !( i_lesser < l_greater_less );
}
};
// ======== RangeMap ==================================================
/**
* RangeMap is a template that defines ranges.
*
*/

template <typename w_RangeType, typename w_RangeTraits=TypeRange<w_RangeType> >
class RangeMap
{
public:

// range type
typedef w_RangeType t_RangeType;
typedef w_RangeTraits t_RangeTraits;

// index on the end of the range
typedef std::map< t_RangeType, t_RangeType > t_Map;
typedef typename t_Map::iterator t_Iterator;
// ======== AddRange ==============================================
/**
* Add a segment to the range.
*
* @param i_begin The beginning of the range (inclusive)
* @param i_end The end of the range (inclusive)
* @return nothing
*/

void AddRange( const t_RangeType & i_begin, const t_RangeType & i_end )
{
const bool l_less_than( i_end < i_begin );

const t_RangeType & l_begin = ! l_less_than ? i_begin : i_end;
const t_RangeType & l_end = l_less_than ? i_begin : i_end;

// deal with an empty map here
if ( m_map.empty() )
{
// shorthand adding the first element into the map
m_map[ l_end ] = l_begin;
return;
}

// see if there is a segment to merge - find the element that precedes
// l_begin

t_Iterator l_begin_bound = m_map.lower_bound( l_begin );

if ( l_begin_bound == m_map.end() )
{
// l_begin is after the last element

-- l_begin_bound;

if ( t_RangeTraits::Adjacent( l_begin_bound->first, l_begin ) )
{

// yes, they are mergable
t_RangeType l_temp = l_begin_bound->second;
m_map.erase( l_begin_bound );
m_map[ l_end ] = l_temp;

return;
}

// not mergable - add the segment at the end

m_map[ l_end ] = l_begin;
return;
}

// if the end of the segment being inserted is not beyond this one
if ( ( l_end < l_begin_bound->second ) && ! t_RangeTraits::Adjacent( l_end, l_begin_bound->second ) )
{
// NOT mergable with subsequent segments

if ( l_begin_bound == m_map.begin() )
{
// There is no previous segment

m_map[ l_end ] = l_begin;
return;
}

// The segment being inserted can't be merged at the end

// see if it can be merged with the previous one

t_Iterator l_previous = l_begin_bound;
-- l_previous;

AT_Assert( l_previous->first < l_begin );

if ( ! t_RangeTraits::Adjacent( l_previous->first, l_begin ) )
{
// not overlapping with previous and not mergable

m_map[ l_end ] = l_begin;
return;
}
else
{
// we are mergable with the previous element

// yes, they are mergable
t_RangeType l_temp = l_previous->second;
m_map.erase( l_previous );
m_map[ l_end ] = l_temp;
return;
}

}

if ( l_begin_bound == m_map.begin() )
{
if ( l_end < l_begin_bound->first )
{
if ( l_end < l_begin_bound->second )
{
if ( t_RangeTraits::Adjacent( l_end, l_begin_bound->second ) )
{
l_begin_bound->second = l_begin;
return;
}
else
{
m_map[ l_end ] = l_begin;
return;
}
}
else
{
if ( l_begin < l_begin_bound->second )
{
l_begin_bound->second = l_begin;
}
return;
}
}
else
{

t_RangeType l_new_begin = l_begin;

if ( l_begin_bound->second < l_begin )
{
l_new_begin = l_begin_bound->second;
}

// Check to see what segment is close to the end
t_Iterator l_end_bound = m_map.lower_bound( l_end );

if ( l_end_bound == m_map.end() )
{
// erase all the segments from l_previous to the end and
// replace with one

m_map.erase( l_begin_bound, l_end_bound );

m_map[ l_end ] = l_new_begin;
return;
}

if ( l_end < l_end_bound->second && ! t_RangeTraits::Adjacent( l_end, l_end_bound->second ) )
{
m_map.erase( l_begin_bound, l_end_bound );
m_map[ l_end ] = l_new_begin;
return;
}

// merge with the current end

m_map.erase( l_begin_bound, l_end_bound ); // erase segments in between
l_end_bound->second = l_new_begin;
return;
}
}

if ( l_begin_bound == m_map.begin() )
{
// no previous ranges

// see if we can merge with the current range
}

// find the previous iterator
t_Iterator l_previous = l_begin_bound;
-- l_previous;

t_RangeType l_new_begin = l_begin;

if ( t_RangeTraits::Adjacent( l_previous->first, l_begin ) )
{
l_new_begin = l_previous->second;
}
else
{
++ l_previous;

if ( l_previous->second < l_new_begin )
{
l_new_begin = l_previous->second;
}
}

t_RangeType l_new_end = l_end;
// Check to see what segment is close to the end
t_Iterator l_end_bound = m_map.lower_bound( l_end );

if ( l_end_bound == m_map.end() )
{
// erase all the segments from l_previous to the end and
// replace with one

m_map.erase( l_previous, l_end_bound );

m_map[ l_end ] = l_new_begin;
return;
}

if ( l_end < l_end_bound->second && ! t_RangeTraits::Adjacent( l_end, l_end_bound->second ) )
{
m_map.erase( l_previous, l_end_bound );
m_map[ l_end ] = l_new_begin;
return;
}

// merge with the current end

m_map.erase( l_previous, l_end_bound ); // erase segments in between
l_end_bound->second = l_new_begin;

return;
}
// ======== SubtractRange =========================================
/**
* SubtractRange removes the range. (opposite of Add)
*
*
* @param i_begin Beginning of range to subtract
* @param i_end End of range to subtract
* @return nothing
*/

void SubtractRange( const t_RangeType & i_begin, const t_RangeType & i_end )
{
const bool l_less_than( i_end < i_begin );

const t_RangeType & l_begin = ! l_less_than ? i_begin : i_end;
const t_RangeType & l_end = l_less_than ? i_begin : i_end;

// deal with an empty map here
if ( m_map.empty() )
{
// Nothing to remove
return;
}

// See if we find a segment
// l_begin

t_Iterator l_begin_bound = m_map.lower_bound( l_begin );

if ( l_begin_bound == m_map.end() )
{
// this does not cover any segments
return;
}

if ( l_begin_bound->second < l_begin )
{
// this segment is broken up

t_RangeType l_newend = l_begin;

-- l_newend;

m_map[ l_newend ] = l_begin_bound->second;

l_begin_bound->second = l_begin;
}

t_Iterator l_end_bound = m_map.lower_bound( l_end );

if ( l_end_bound == m_map.end() )
{
// erase all the segments from the beginning to end
m_map.erase( l_begin_bound, l_end_bound );
return;
}

if ( !( l_end < l_end_bound->first ) )
{
// the segment end must be equal the segment given

++ l_end_bound;

m_map.erase( l_begin_bound, l_end_bound );
return;
}

// need to break up the final segment

m_map.erase( l_begin_bound, l_end_bound );

if ( !( l_end < l_end_bound->second ) )
{
t_RangeType l_newbegin = l_end;

++ l_newbegin;

l_end_bound->second = l_newbegin;
}

return;

}

// ======== IsSet =================================================
/**
* Checks to see if the position is set
*
* @param i_pos
* @return True if the position is set
*/

bool IsSet( const t_RangeType & i_pos )
{
t_Iterator l_bound = m_map.lower_bound( i_pos );

if ( l_bound == m_map.end() )
{
// this does not cover any segments
return false;
}

return !( i_pos < l_bound->second );
}

t_Map m_map;

};

}; // namespace

#endif // x_at_rangemap_h_x

#include "at_rangemap.h"

#include "at_unit_test.h"

#include <iostream>
#include <vector>
#include <cstdlib>

using namespace at;

namespace RangemapTest {

//
// ======== TestBitMap ================================================
/**
* TestBitMap is like a range map but the logic is far easier.
*
*/

template <typename w_RangeType, typename w_RangeTraits=TypeRange<w_RangeType> >
class TestBitMap
{
public:

// range type
typedef w_RangeType t_RangeType;
typedef w_RangeTraits t_RangeTraits;
// ======== AddRange ==============================================
/**
* Add a segment to the range.
*
* @param i_begin The beginning of the range (inclusive)
* @param i_end The end of the range (inclusive)
* @return nothing
*/

void AddRange( const t_RangeType & i_begin, const t_RangeType & i_end )
{
const bool l_less_than( i_end < i_begin );

const t_RangeType & l_begin = ! l_less_than ? i_begin : i_end;
const t_RangeType & l_end = l_less_than ? i_begin : i_end;

CheckSize( l_end );

for ( unsigned i = l_begin; i <= l_end; ++i )
{
m_bitmap[ i ] = true;
}
}

// ======== SubtractRange =========================================
/**
* SubtractRange removes the range. (opposite of Add)
*
*
* @param i_begin Beginning of range to subtract
* @param i_end End of range to subtract
* @return nothing
*/

void SubtractRange( const t_RangeType & i_begin, const t_RangeType & i_end )
{
const bool l_less_than( i_end < i_begin );

const t_RangeType & l_begin = ! l_less_than ? i_begin : i_end;
const t_RangeType & l_end = l_less_than ? i_begin : i_end;

CheckSize( l_end );

for ( unsigned i = l_begin; i <= l_end; ++i )
{
m_bitmap[ i ] = false;
}

}
// ======== IsSet =================================================
/**
* Checks to see if the position is set
*
* @param i_pos
* @return True if the position is set
*/

bool IsSet( const t_RangeType & i_pos )
{
if ( m_bitmap.size() <= std::size_t( i_pos ) ) // <=, not <: i_pos == size() is also out of range
{
return false;
}
return m_bitmap[ i_pos ];
}
void CheckSize( const t_RangeType & i_end )
{
if ( m_bitmap.size() < std::size_t(i_end + 1) )
{
m_bitmap.resize( i_end + 1 );
}
}

std::vector<bool> m_bitmap;
};

// ======== TestBoth ==================================================
/**
*
*
*/

template <typename w_RangeType, typename w_RangeTraits=TypeRange<w_RangeType> >
class TestBoth
{
public:
// range type
typedef w_RangeType t_RangeType;
typedef w_RangeTraits t_RangeTraits;
// ======== AddRange ==============================================
/**
* Add a segment to the range.
*
* @param i_begin The beginning of the range (inclusive)
* @param i_end The end of the range (inclusive)
* @return nothing
*/

void AddRange( const t_RangeType & i_begin, const t_RangeType & i_end )
{
m_rangemap_pre = m_rangemap;
m_rangemap.AddRange( i_begin, i_end );
m_bitmap.AddRange( i_begin, i_end );

Verify( "Add", i_begin, i_end );
}

// ======== SubtractRange =========================================
/**
* SubtractRange removes the range. (opposite of Add)
*
*
* @param i_begin Beginning of range to subtract
* @param i_end End of range to subtract
* @return nothing
*/

void SubtractRange( const t_RangeType & i_begin, const t_RangeType & i_end )
{
m_rangemap_pre = m_rangemap;
m_rangemap.SubtractRange( i_begin, i_end );
m_bitmap.SubtractRange( i_begin, i_end );

Verify( "Sub", i_begin, i_end );
}

void Verify( const char * i_op, const t_RangeType & i_begin, const t_RangeType & i_end )
{
unsigned l_elems = m_bitmap.m_bitmap.size();

typename at::RangeMap< w_RangeType, w_RangeTraits >::t_Iterator l_bound;
typename at::RangeMap< w_RangeType, w_RangeTraits >::t_Iterator l_previous;
bool l_previous_set = false;
for ( unsigned i = 0; i < (l_elems); ++ i )
{
l_bound = m_rangemap.m_map.lower_bound( i );

bool l_rangemap_val = l_bound == m_rangemap.m_map.end() ? false : !( i < l_bound->second );
bool l_bitmap_val = m_bitmap.IsSet(i);
bool l_segment_fail = false;

if ( l_previous_set )
{
if ( l_rangemap_val )
{
l_segment_fail = l_previous != l_bound;
}
}
else
{
}

l_previous_set = l_rangemap_val;

if ( l_rangemap_val && ! l_segment_fail )
{
l_previous = l_bound;
}

if ( ( l_rangemap_val != l_bitmap_val ) || l_segment_fail )
{
if ( l_segment_fail )
{
std::cout << "Segments ( " << l_bound->second << ", " << l_bound->first << " ) and \n";
std::cout << " ( " << l_previous->second << ", " << l_previous->first << " ) and \n";
}
std::cout << "Operation = " << i_op << " i_begin = " << i_begin << " i_end = " << i_end << "\n";
std::cout << "Pre operation ";
DumpRanges( m_rangemap_pre.m_map );
std::cout << "Post operation ";
DumpRanges( m_rangemap.m_map );
std::cout << "rangemap " << w_RangeType(i) << " - l_rangemap_val = " << l_rangemap_val << ", l_bitmap_val = " << l_bitmap_val << "\n";
}

AT_TCAssert( m_rangemap.IsSet(i) == m_bitmap.IsSet(i), "Bitmap differs" );
}
}

void DumpRanges()
{
DumpRanges( m_rangemap.m_map );
}
void DumpRanges( typename at::RangeMap< w_RangeType, w_RangeTraits >::t_Map & l_map )
{
typename at::RangeMap< w_RangeType, w_RangeTraits >::t_Iterator l_iterator;

for ( l_iterator = l_map.begin(); l_iterator != l_map.end(); ++ l_iterator )
{
std::cout << "( " << l_iterator->second << ", " << l_iterator->first << " )";
}
std::cout << "\n";
}

at::RangeMap< w_RangeType, w_RangeTraits > m_rangemap;
at::RangeMap< w_RangeType, w_RangeTraits > m_rangemap_pre;
TestBitMap< w_RangeType, w_RangeTraits > m_bitmap;
};


AT_TestArea( RangeMap, "Rangemap object tests" );

AT_DefineTest( RangeMap, RangeMap, "Basic RangeMap test" )
{

void Run()
{

{
TestBoth<unsigned char> l_rm;

l_rm.AddRange( 'a','x' );

l_rm.AddRange( 'A','X' );

l_rm.AddRange( 'z','z' );

l_rm.AddRange( 'Y','Z' );

l_rm.DumpRanges();
}
{
TestBoth<unsigned char> l_rm;

l_rm.AddRange( '0','0' );

l_rm.AddRange( 'a','a' );

l_rm.AddRange( 'c','c' );

l_rm.AddRange( 'h','h' );

l_rm.AddRange( 'b','g' );

l_rm.DumpRanges();
}
{
TestBoth<unsigned char> l_rm;

l_rm.AddRange( '0','0' );

l_rm.AddRange( 'a','a' );

l_rm.AddRange( 'c','c' );

l_rm.AddRange( 'h','h' );

l_rm.AddRange( 'b','i' );

l_rm.DumpRanges();

l_rm.AddRange( 'c','c' );

l_rm.DumpRanges();

l_rm.SubtractRange( 'b','b' );
l_rm.SubtractRange( 'c','c' );
l_rm.SubtractRange( 'd','d' );
l_rm.SubtractRange( 'c','c' );

l_rm.DumpRanges();
}
{
TestBoth<unsigned char> l_rm;
l_rm.AddRange( 'a','a' );

l_rm.AddRange( 'c','c' );

l_rm.AddRange( 'h','h' );

l_rm.AddRange( 'b','i' );

l_rm.AddRange( '0','0' );

l_rm.SubtractRange( '0','0' );

l_rm.AddRange( '0', 'i' );

l_rm.SubtractRange( '0','i' );

l_rm.AddRange( 'A','Z' );

l_rm.SubtractRange( 'M','M' );

l_rm.DumpRanges();
}
}

};

AT_RegisterTest( RangeMap, RangeMap );
AT_DefineTest( MonteRangeMap, RangeMap, "Monte Carlo RangeMap test" )
{

void Run()
{

// Monte Carlo differential test: apply random Add/Subtract operations to
// both the RangeMap under test and the simple bitmap oracle; TestBoth
// cross-checks the two after every operation.
TestBoth<unsigned> l_rm;

std::srand( 39000 ); // fixed seed, so a failing run can be replayed

int range = 60;

for ( int i = 0; i < 10000; ++i )
{

unsigned l_begin = std::rand() % range;
unsigned l_end = std::rand() % range;

bool operation = ( 2 & std::rand() ) == 0; // bit 1 of rand() picks add vs. subtract

if ( operation )
{
l_rm.SubtractRange( l_begin, l_end );
}
else
{
l_rm.AddRange( l_begin, l_end );
}

}
}

};

AT_RegisterTest( MonteRangeMap, RangeMap );
} // RangeMap Test namespace

May 5 '07 #46
Gianni Mariani wrote:
Pete Becker wrote:
....
>>
Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem. <g>

I have yet to meet a "test" developer that can beat the monte carlo test
for coverage.

OK - I agree, there are cases where a monte-carlo test will never be
able to test adequately, but as a rule, it is better to have a MC test
than not. I have uncovered more legitimate problems from the MC test
than from carefully crafted tests.
There are plenty of situations where a Monte Carlo test isn't
appropriate or even possible. A good test developer has the knack of
thinking like a user, where a developer thinks like a developer. They
also see the bigger picture, you know your part of a system in detail,
but they know the overall system which enables them to think up more
imaginative usage scenarios.
>>
As I've said several times, developing and testing involve two
distinct sets of skills. Developers think they're good at testing, but
any professional tester will tell you that they aren't.

I challenge you. I don't think of myself as a tester. I believe you
can't do a better job than I in testing my code. Let's use this "range
map" as an example.

I have attached the header file and the test cases.
Not a good idea on Usenet!
Your clain that I "don't understand testing well enough" and so I use an
MC test I think is short sighted.

For example, MC tests are the only tests that I have been able to truly
test multithreaded code. It is nigh impossible for me (or any human) to
truly understand all the interactions in a MT scenario. MC tests almost
always will push every edge of the problem space.
While I generally agree with your comments on MT code, reproducing a
failure induced by random testing can be very difficult.

The tests you posted don't appear to do any MT testing, or am I missing
something?

--
Ian Collins.
May 6 '07 #47
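One common mitigation for that reproducibility problem: seed the generator from the clock by default, but always print the seed and accept it as an argument, so a failing run can be replayed exactly. A minimal sketch, with the test body itself elided:

#include <cstdio>
#include <cstdlib>
#include <ctime>

int main(int argc, char** argv)
{
    // Seed from the clock by default, but let a failing run be replayed
    // by passing the printed seed back on the command line.
    unsigned seed = (argc > 1)
        ? unsigned(std::strtoul(argv[1], 0, 10))
        : unsigned(std::time(0));
    std::printf("random test seed: %u\n", seed);
    std::srand(seed);

    for (int i = 0; i < 10000; ++i)
    {
        int value = std::rand();
        (void)value; // ... drive the code under test and check invariants here ...
    }
    return 0;
}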
Ian Collins wrote:
Gianni Mariani wrote:
>Pete Becker wrote:
....
>>Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem. <g>
I have yet to meet a "test" developer that can beat the monte carlo test
for coverage.

OK - I agree, there are cases where a monte-carlo test will never be
able to test adequately, but as a rule, it is better to have a MC test
than not. I have uncovered more legitimate problems from the MC test
than from carefully crafted tests.
There are plenty of situations where a Monte Carlo test isn't
appropriate or even possible.
ya - we agree. I kind of said that in the first sentence.

.... A good test developer has the knack of
thinking like a user, where a developer thinks like a developer.
My BS meter just pegged. A developer had better think like a user or
they're a crappy developer IMHO.

.... They
also see the bigger picture, you know your part of a system in detail,
but they know the overall system which enables them to think up more
imaginative usage scenarios.
I guess I don't see any value in a developer taking a myopic view of the
product they work on.
>
>>As I've said several times, developing and testing involve two
distinct sets of skills. Developers think they're good at testing, but
any professional tester will tell you that they aren't.
I challenge you. I don't think of myself as a tester. I believe you
can't do a better job than I in testing my code. Let's use this "range
map" as an example.

I have attached the header file and the test cases.
Not a good idea on Usenet!
My newsreader messes with the code otherwise ... :-(
>
>Your clain that I "don't understand testing well enough" and so I use an
MC test I think is short sighted.

For example, MC tests are the only tests that I have been able to truly
test multithreaded code. It is nigh impossible for me (or any human) to
truly understand all the interactions in a MT scenario. MC tests almost
always will push every edge of the problem space.
While I generally agree with your comments on MT code, reproducing a
failure induced by random testing can be very difficult.

The tests you posted don't appear to do any MT testing, or am I missing
something?
No, they don't. I was just making the point that sometimes the best test
is the MC test. There is no simple and easy rule as to the test
approach; you need to adapt the test to the problem at hand.
May 6 '07 #48
Gianni Mariani wrote:
Ian Collins wrote:

.... A good test developer has the knack of
>thinking like a user, where a developer thinks like a developer.

My BS meter jut pegged. A developer had better think like a user or
they're a crappy developer IMHO.
While ideal, that can be difficult when the developer is part of a large
team working on a component of complex system. Sure everyone should
have some degree of domain knowledge, but it isn't always possible.
Many project teams I have worked on had a large number of contract staff
employed for their coding skill rather than product knowledge (I know, I
frequently was one!).

This was why in my shop, the test developers worked with the customer(s)
to design and implement the acceptance tests.

--
Ian Collins.
May 6 '07 #49
On May 5, 10:44 pm, Ian Collins <ian-n...@hotmail.comwrote:
James Kanze wrote:
On May 4, 3:01 pm, anon <a...@no.nowrote:
The latest trends are to write tests first which demonstrates the
requirements, then code (classes+methods).
The latest trend where? Certainly not in any company concerned
with good management, or quality software.
Have you ever been in charge of a company's software development? I
have and the best thing I ever did to improve both the productivity of
the teams and quality of the code was to introduce eXtreme Programming,
which includes TDD as a core practice.
Our delivery times and field defect rates more than vindicated the change.
I've worked with the people in charge. We evaluated the
procedure, and found that it simply didn't work. Looking at
other companies as well, none practicing eXtreme Programming
seem to be shipping products of very high quality. In fact, the
companies I've seen using it generally don't have the mechanisms
in place to actually measure quality or productivity, so they
don't know what the impact was.

When I actually talk to the engineers involved, it turns out
that e.g. they weren't using any accepted means of achieving
quality before. It's certain that adopting TDD will improve
things if there was no testing what so ever previously.
Similarly, pair programming is more cost efficient than never
letting a second programmer look at, or at least understand,
another programmer's code, even if it is a magnitude or more
less efficient than a well run code review. Compared to
established good practices, however, most of the suggestions in
eXtreme Programming represent a step backwards.

--
James Kanze (Gabi Software) email: ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

May 6 '07 #50
