
Testing in C++

nw
Hi,

I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?

Sorry if this is off-topic in this group.

Many Thanks.

Mar 8 '07 #1


nw wrote:
Hi,

I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?
Test Driven Development with CppUnit!

--
Ian Collins.
Mar 8 '07 #2

On Mar 8, 9:38 am, "nw" <n...@soton.ac.uk> wrote:
Hi,

I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?
This article is a bit dated (2004), but it compares several C++ unit
testing frameworks (including Boost):

http://www.gamesfromwithin.com/artic...12/000061.html
Mar 9 '07 #3

dave_mikesell wrote:
nw wrote:
>Hi,

I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?
Try to get as far as Abstract Test Cases, which are super-important for
refactoring. Read /Pragmatic Unit Testing/ and/or /Unit Test Frameworks/.

And emphasize programmers write tests _while_ they write the tested code.
That saves a lot of time debugging.
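Roughly, an Abstract Test Case looks like this (a framework-free sketch
using plain assert(); Stack, VectorStack, and friends are invented
stand-ins, not anyone's real API). The checks are written once against
an abstract interface, and each concrete suite supplies a factory, so
the same tests run against every implementation you refactor in:

#include <cassert>
#include <vector>

// The interface under test.
struct Stack {
    virtual ~Stack() {}
    virtual void push(int x) = 0;
    virtual int pop() = 0;
    virtual bool empty() const = 0;
};

// One implementation; imagine several.
struct VectorStack : Stack {
    std::vector<int> v;
    void push(int x) { v.push_back(x); }
    int pop() { int x = v.back(); v.pop_back(); return x; }
    bool empty() const { return v.empty(); }
};

// The Abstract Test Case: checks written once, against the interface.
struct AbstractStackTest {
    virtual ~AbstractStackTest() {}
    virtual Stack* create() = 0;  // each concrete suite overrides this

    void testPushPop() {
        Stack* s = create();
        s->push(42);
        assert(!s->empty());
        assert(s->pop() == 42);
        assert(s->empty());
        delete s;
    }
};

// A two-line concrete suite per implementation.
struct VectorStackTest : AbstractStackTest {
    Stack* create() { return new VectorStack; }
};

int main() {
    VectorStackTest t;
    t.testPushPop();
    return 0;
}

Swap in another factory and every check runs against the new
implementation too; that's what makes refactoring cheap.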
This article is a bit dated (2004), but it compares several C++ unit
testing frameworks (including Boost):

http://www.gamesfromwithin.com/artic...12/000061.html
Yeah, it pans my "NanoCppUnit", which I _told_ him was just a sketch and not
productized yet. Then he complains it's just a sketch and not productized.

And that article came out before Noel's own UnitTest++, which bears a
striking resemblance to NanoCppUnit, which was in use at the lab where I
worked with Noel.

http://unittest-cpp.sourceforge.net/

With friends like that...

But, seriously folks, UnitTest++ is my favorite C++ unit test rig. Its tests
for itself are exemplary.

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Mar 9 '07 #4

On 8 Mar 2007 06:38:43 -0800, "nw" wrote:
>I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course?
You should at least explain the different kinds of tests: integration
test, functional test, unit test, assert.
>Is boost the way to go?
No (unless you want to chase away your students). The following is a
nice unit test framework, esp. for teaching purposes:
http://www.ddj.com/dept/cpp/184401279

Best regards,
Roland Pibinger
Mar 9 '07 #5

On Mar 8, 11:38 pm, "nw" <n...@soton.ac.uk> wrote:
Hi,

I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?

Sorry if this is off-topic in this group.

Many Thanks.
CPP Unit is a free tool available for C++ Unit Testing.

http://cppunit.sourceforge.net

Mar 9 '07 #6

Roland Pibinger wrote:
On 8 Mar 2007 06:38:43 -0800, "nw" wrote:
>I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course?

You should at least explain the different kinds of tests: integration
test, functional test, unit test, assert.
And regression test. Well, maybe not, since it has become meaningless.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #7

On Mar 8, 10:35 pm, "Phlip" <phlip...@yahoo.com> wrote:
>
But, seriously folks, UnitTest++ is my favorite C++ unit test rig. Its tests
for itself are exemplary.
Thanks, Phlip. I'll check it out. I've only used CPPUnit to date
because it's what I used first and it worked OK.

Mar 9 '07 #8

nw
On Mar 8, 9:42 pm, Ian Collins <ian-n...@hotmail.com> wrote:
nw wrote:
Hi,
I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?

Test Driven Development with CppUnit!
I've taken a look at the CppUnit documentation. It seems there is
quite a significant overhead in CppUnit in terms of creating classes/
methods to get a simple test running (as compared to other testing
frameworks). What advantages does this give you?

Mar 9 '07 #9

nw
On Mar 9, 3:35 am, "Phlip" <phlip...@yahoo.com> wrote:
dave_mikesell wrote:
nw wrote:
Hi,
I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?

Try to get as far as Abstract Test Cases, which are super-important for
refactoring. Read /Pragmatic Unit Testing/ and/or /Unit Test Frameworks/.

And emphasize programmers write tests _while_ they write the tested code.
That saves a lot of time debugging.
This article is a bit dated (2004), but it compares several C++ unit
testing frameworks (including Boost):
http://www.gamesfromwithin.com/artic...12/000061.html

Yeah, it pans my "NanoCppUnit", which I _told_ him was just a sketch and not
productized yet. Then he complains it's just a sketch and not productized.

And that article came out before Noel's own UnitTest++, which bears a
striking resemblance to NanoCppUnit, which was in use at the lab where I
worked with Noel.

http://unittest-cpp.sourceforge.net/

With friends like that...

But, seriously folks, UnitTest++ is my favorite C++ unit test rig. Its tests
for itself are exemplary.
Thanks for your suggestion; after reading a bit about it I'm tending
towards this one. It seems quite straightforward to use.

As an aside, many testing frameworks (UnitTest++ included) appear to make
extensive use of macros; is there a particular reason for this? I
thought "Macros are evil"? http://www.parashift.com/c++-faq-lit....html#faq-39.4
>
--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
nope, it's currently a 404 error. ;)
Mar 9 '07 #10

nw wrote:
Hi,

I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?
Talk about testing, not about tools. For example, have each member of
the class write a complete set of test cases for a program that reads
three numbers from stdin and tells you whether a triangle with sides of
those lengths is equilateral, isosceles, scalene, or impossible. Then
compare the solutions, and talk about testing principles that help
refine the set of test cases.
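For instance, a starter set might look like the sketch below (classify()
and its string results are my invention for illustration; the exercise's
real program would wrap something like it with stdin parsing):

#include <cassert>
#include <string>

// A quick reference implementation so the cases below actually run.
std::string classify(double a, double b, double c) {
    if (a <= 0 || b <= 0 || c <= 0) return "impossible";
    if (a + b <= c || b + c <= a || a + c <= b) return "impossible";
    if (a == b && b == c) return "equilateral";
    if (a == b || b == c || a == c) return "isosceles";
    return "scalene";
}

int main() {
    assert(classify(3, 3, 3) == "equilateral");
    assert(classify(3, 3, 2) == "isosceles");
    assert(classify(3, 2, 3) == "isosceles");   // isosceles in every position,
    assert(classify(2, 3, 3) == "isosceles");   // a classic omission
    assert(classify(3, 4, 5) == "scalene");
    assert(classify(1, 2, 3) == "impossible");  // degenerate: a + b == c
    assert(classify(1, 2, 4) == "impossible");  // violates triangle inequality
    assert(classify(0, 0, 0) == "impossible");  // zero-length sides
    assert(classify(-3, 4, 5) == "impossible"); // negative side
    return 0;
}

Comparing sets like this makes the gaps obvious: most first attempts miss
the permuted isosceles cases, the degenerate triangle, or bad input.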

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #11

nw wrote:
As an aside, many testing frameworks (UnitTest++ included) appear to make
extensive use of macros; is there a particular reason for this? I
thought "Macros are evil"?
That is the entry-level aphorism. The correct rule is "don't abuse
macros; they are easier to abuse than other things".

Per Bjarne, Herb, and Alex (IIRC), a macro is the only way to do an
assertion, and a unit test is nothing but a humongous nested
assertion. All test runners in the C languages use macros, and they
provide features, such as expression reflection, that other test
runners sorely miss.
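A toy sketch of what that buys you (MY_CHECK is an illustration, not any
framework's real macro): only the preprocessor can capture the failing
expression's text, file, and line:

#include <iostream>

#define MY_CHECK(expr) \
    do { \
        if (!(expr)) \
            std::cerr << __FILE__ << ':' << __LINE__ \
                      << ": check failed: " #expr << '\n'; \
    } while (0)

int main() {
    int x = 1;
    MY_CHECK(x == 2);  // reports the file, the line, and the text "x == 2"
    return 0;
}

A plain function taking a bool sees only true or false; by the time it
runs, the expression, file, and line are gone.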
--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!

nope, it's currently a 404 error. ;)
I need to downgrade that to http://c2.com/cgi/wiki?ZeekLand . But the
project is on hold while I do other things. Start here:

http://flea.sourceforge.net/PiglegToo_1.html

--
Phlip

Mar 9 '07 #12

nw wrote:
On Mar 8, 9:42 pm, Ian Collins <ian-n...@hotmail.com> wrote:
>nw wrote:
>>Hi,
I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?
Test Driven Development with CppUnit!

I've taken a look at the CppUnit documentation. It seems there is
quite a significant overhead in CppUnit in terms of creating classes/
methods to get a simple test running (as compared to other testing
frameworks). What advantages does this give you?
Really? Huh...I guess I've been using some other CppUnit library...
Mar 9 '07 #13

Phlip wrote:
All test runners in the C languages use macros, ...
If by "C languages" you mean C++ inclusively then no, you're wrong.

http://tut-framework.sourceforge.net/
Mar 9 '07 #14

Pete Becker wrote:
Roland Pibinger wrote:
>On 8 Mar 2007 06:38:43 -0800, "nw" wrote:
>>I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course?

You should at least explain the different kinds of tests: integration
test, functional test, unit test, assert.

And regression test. Well, maybe not, since it has become meaningless.
Has it?
Mar 9 '07 #15

Noah Roberts wrote:
Pete Becker wrote:
>Roland Pibinger wrote:
>>On 8 Mar 2007 06:38:43 -0800, "nw" wrote:
I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course?

You should at least explain the different kinds of tests: integration
test, functional test, unit test, assert.

And regression test. Well, maybe not, since it has become meaningless.
Has it?
Yes.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #16

Pete Becker wrote:
Noah Roberts wrote:
>Pete Becker wrote:
>>Roland Pibinger wrote:
On 8 Mar 2007 06:38:43 -0800, "nw" wrote:
I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course?

You should at least explain the different kinds of tests: integration
test, functional test, unit test, assert.
And regression test. Well, maybe not, since it has become meaningless.
Has it?

Yes.
Well, I suppose a more complete answer would be appropriate.

Today, "regression test" seems to mean "run the tests you've run before
and see if anything got worse." I.e., run the test suite. Formally,
though, a regression test is a test you add to your test suite in
response to a user-reported defect, reproducing the user's conditions.
You also add specific test cases to the general test suite, since
obviously the particular problem that caused the failure wasn't
detected. The idea behind regression tests is that you may be dealing
with a fragile area of the code, and rerunning cases that failed in the
field gives you some reassurance that you haven't missed something.
Since user-supplied code can be big and clunky, you only run regression
tests occasionally, and rely on the more focused test cases for day to
day testing.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #17

Noah Roberts wrote:
Phlip wrote:
All test runners in the C languages use macros, ...

If by "C languages" you mean C++ inclusively then no, you're wrong.

http://tut-framework.sourceforge.net/
All true Scotsmen eat their oatmeal without sugar.

Thanks for confirming my point; let's see what tut is missing:

ensure("lost ownership",ap.get()==0);

At fault time, the output cannot reflect the file, line number, or the
'ap.get()==0'.

So, all true assertions in the C languages use macros. Those who avoid
them, just to brag "we don't use macros" on their homepages, are not
as useful.

BTW the "significant overhead" in CppUnit is all the test cases,
suites, and registration methods you must set up just to get anything
done. Never ignore the effect that redundant code, and poor assertion
diagnostics, have on project velocity.

--
Phlip

Mar 9 '07 #18

On Mar 8, 10:35 pm, "Phlip" <phlip...@yahoo.com> wrote:
But, seriously folks, UnitTest++ is my favorite C++ unit test rig. Its tests
for itself are exemplary.
Hey Phlip - I checked out UnitTest++ and it's really cool. One thing
I tried to do, though, that wasn't successful...I defined member
functions in my fixture to do certain checks that are performed across
several tests, but the CHECK* macros wouldn't compile (errors below).
When I moved the CHECKs to TEST_FIXTURE definitions they work fine,
but now I have redundant macro calls across several tests. Is this a
feature?

error C2065: 'testResults_' : undeclared identifier
error C2228: left of '.OnTestFailure' must have class/struct/union type
error C2065: 'm_details' : undeclared identifier
Mar 9 '07 #19

Phlip wrote:
Noah Roberts wrote:
>Phlip wrote:
>>All test runners in the C languages use macros, ...
If by "C languages" you mean C++ inclusively then no, you're wrong.

http://tut-framework.sourceforge.net/

All true Scotsmen eat their oatmeal without sugar.
So you realize the fallacy but just choose to use it anyway. Whatever;
I have no need to continue this conversation anyway.
Mar 9 '07 #20

Pete Becker wrote:
Today, "regression test" seems to mean "run the tests you've run before
and see if anything got worse." I.e., run the test suite. Formally,
though, a regression test is a test you add to your test suite in
response to a user-reported defect, reproducing the user's conditions.
I believe you're wrong on this. All definitions of regression testing I
have seen are running the full suite to make sure you didn't break
anything. This would follow from the definition of "regression":

1. the act of going back to a previous place or state; return or reversion.
http://dictionary.reference.com/browse/regression

I don't think this is a change either. Wikipedia quotes Fred Brooks:

"Also as a consequence of the introduction of new bugs, program
maintenance requires far more system testing per statement written than
any other programming. Theoretically, after each fix one must run the
entire batch of test cases previously run against the system, to ensure
that it has not been damaged in an obscure way. In practice, such
regression testing must indeed approximate this theoretical idea, and it
is very costly." -- Fred Brooks, The Mythical Man Month (p 122)

That book is a couple decades old at least...

This is an important step to make even if expensive. Many times a fix
to a new bug can cause old bugs to reappear...for instance, sometimes a
fix introduces a new bug, which is found and reported...and then "fixed"
bringing back the old one whose fix introduced this new bug.

What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.
Step one, new acceptance test for the bug...step two, find the cause,
step 3 write unit test to expose cause...step 4 fix...step 5 run new
tests...step 6 run regression.

So I don't see that anything has become meaningless here. Both new
tests for bugs and regression tests to be sure the program still passes
acceptance from previous versions are important steps to robust project
management.
Mar 9 '07 #21

dave_mikesell wrote:
Hey Phlip - I checked out UnitTest++ and it's really cool.
You need to stop using it immediately; my arguments here are
fallacious.

Report back as soon as you stop... ;-)
One thing
I tried to do, though, that wasn't successful...I defined member
functions in my fixture to do certain checks that are performed across
several tests, but the CHECK* macros wouldn't compile (errors below).
When I moved the CHECKs to TEST_FIXTURE definitions they work fine,
but now I have redundant macro calls across several tests. Is this a
feature?
Yes; the macros hit private variables in the fixtures, so they are de-
facto members.

The point of the fixtures is for test cases to share common code, so
put the CHECKs into shared methods and share these between test
fixtures. Roughly...

class fixture
{
public:
    void methodOne() { CHECK(x); }
    void methodTwo() { CHECK(y); }
};

TEST_FIXTURE(fixture, case_a)
{
    methodOne();
    methodTwo();
}

TEST_FIXTURE(fixture, case_b)
{
    methodOne();
}

(I will now await Noah triumphantly declaring "that's not well-formed C++!")

--
Phlip

Mar 9 '07 #22

Noah Roberts wrote:
I believe you're wrong on this. All definitions of regression testing I
have seen are running the full suite to make sure you didn't break
anything. This would follow from the definition of "regression":
I once worked a game project where the QA department kept saying "we
regressed levels 2 thru 7 today". They meant "we checked levels 2 thru
7 against regressions". Showing them the dictionary definition didn't
really help the situation. It was a high-stress shop, and if you
pushed too hard they would insist on using the wrong definition to
prove they can, as a territoriality thing.
What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.
Capture bugs with tests.
So I don't see that anything has become meaningless here. Both new
tests for bugs and regression tests to be sure the program still passes
acceptance from previous versions are important steps to robust project
management.
Talking for Pete, he discusses customers sending in failing code. Such
code wasn't (in TDD terms) "grown with" your test rig, so calling into
it will be slow. Pete uses "regression" to indirectly mean tests that
run slowly in huge batches. So you TDD some new features, run the
entire regression suite, and deploy. Your customers, in theory, will
remain happy with their projects.

--
Phlip

Mar 9 '07 #23

nw wrote:
Hi,

I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?

Sorry if this is off-topic in this group.

Many Thanks.
Testing should be part of the design process, not an afterthought,
hence I think you should work through a design, how to implement it, and
the various tests required for each class, the whole design, and parts
thereof. It would also be useful to provide design errors in C++ code
that should be revealed by testing but are not immediately obvious.

JB
Mar 9 '07 #24

nw wrote:
On Mar 8, 9:42 pm, Ian Collins <ian-n...@hotmail.com> wrote:
>>nw wrote:
>>>Hi,
>>>I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?

Test Driven Development with CppUnit!

I've taken a look at the CppUnit documentation. It seems there is
quite a significant overhead in CppUnit in terms of creating classes/
methods to get a simple test running (as compared to other testing
frameworks). What advantages does this give you?
The overhead is very low: all you require is one overall TestRunner
object and a TestFixture. I use one TestFixture per class under test.
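From memory (CppUnit 1.x; header paths and macro spellings may differ
between versions), a complete test program is about this big:

#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>
#include <cppunit/ui/text/TestRunner.h>

class CounterTest : public CppUnit::TestFixture {
    CPPUNIT_TEST_SUITE(CounterTest);  // the registration boilerplate:
    CPPUNIT_TEST(testStartsAtZero);   // one line per test method
    CPPUNIT_TEST_SUITE_END();

    int counter;
public:
    void setUp() { counter = 0; }     // runs before each test
    void testStartsAtZero() { CPPUNIT_ASSERT_EQUAL(0, counter); }
};

int main() {
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CounterTest::suite());
    return runner.run() ? 0 : 1;
}

The suite macros are the per-class cost; after that, each new test is one
method plus one CPPUNIT_TEST line.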

--
Ian Collins.
Mar 9 '07 #25

Phlip wrote:
Noah Roberts wrote:
>>Phlip wrote:
>>>All test runners in the C languages use macros, ...

If by "C languages" you mean C++ inclusively then no, you're wrong.

http://tut-framework.sourceforge.net/

All true Scotsmen eat their oatmeal without sugar.
Oatmeal should be eaten with a pinch of salt, not sugar.

--
Ian Collins.
Mar 9 '07 #26

On Mar 9, 2:00 pm, "Phlip" <phlip2...@gmail.com> wrote:
>
Yes; the macros hit private variables in the fixtures, so they are de-
facto members.

The point of the fixtures is for test cases to share common code, so
put the CHECKs into shared methods and share these between test
fixtures. Roughly...
<snip>

That's what I'm trying, to no avail. The following trivial example
fails compilation with the aforementioned errors in VC++ 6.0. If I
move the CHECK to the TEST_FIXTURE block, it works fine.

#include <UnitTest++.h>

using namespace UnitTest;

class Fixture {
public:
void method_one() { CHECK(1 == 1); }
};

TEST_FIXTURE(Fixture, test_one)
{
method_one();
}

int main()
{
return RunAllTests();
}

Unfortunately that's the only compiler I have access to at the client
site. I'll try with MinGW when I get home tonight.

Thanks for your help.

Mar 9 '07 #27

Noah Roberts wrote:
Pete Becker wrote:
>Today, "regression test" seems to mean "run the tests you've run
before and see if anything got worse." I.e., run the test suite.
Formally, though, a regression test is a test you add to your test
suite in response to a user-reported defect, reproducing the user's
conditions.


I believe you're wrong on this. All definitions of regression testing I
have seen are running the full suite to make sure you didn't break
anything. This would follow from the definition of "regression":

1. the act of going back to a previous place or state; return or
reversion.
http://dictionary.reference.com/browse/regression

I don't think this is a change either. Wikipedia quotes Fred Brooks:

"Also as a consequence of the introduction of new bugs, program
maintenance requires far more system testing per statement written than
any other programming. Theoretically, after each fix one must run the
entire batch of test cases previously run against the system, to ensure
that it has not been damaged in an obscure way. In practice, such
regression testing must indeed approximate this theoretical idea, and it
is very costly." -- Fred Brooks, The Mythical Man Month (p 122)

That book is a couple decades old at least...
It shows; before we started using automated unit and acceptance test
frameworks, running all the tests on a product or application was an
expensive process. The arrival of well designed testing frameworks has
mitigated a great deal of that cost by removing the labour cost and
elapsed time costs from the process.
This is an important step to make even if expensive. Many times a fix
to a new bug can cause old bugs to reappear...for instance, sometimes a
fix introduces a new bug, which is found and reported...and then "fixed"
bringing back the old one whose fix introduced this new bug.
That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.
What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.
Step one, new acceptance test for the bug...step two, find the cause,
step 3 write unit test to expose cause...step 4 fix...step 5 run new
tests...step 6 run regression.
Steps 5 and 6 are one and the same; the new tests become part of the test
suite.
So I don't see that anything has become meaningless here. Both new
tests for bugs and regression tests to be sure the program still passes
acceptance from previous versions are important steps to robust project
management.
The point may be that there isn't any division between new tests for
bugs and existing acceptance tests. The bug simply becomes another test
case.

--
Ian Collins.
Mar 9 '07 #28

dave_mikesell wrote:
class Fixture {
Make class Fixture inherit whatever TEST_FIXTURE is wrapping. Sorry I
forgot to mention that!

(And the correct verbiage has always been Suite and _SUITE here, but
that's beside the point.)

If that doesn't work, post to the UnitTest++ mailing list. I know they
take these architectural concerns quite seriously.
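Failing that, one workaround sketch: since CHECK expands to code that
touches members of the generated test class (the testResults_ and
m_details in your errors), wrap the shared checks in a macro of your own
so they expand inside each TEST_FIXTURE body, where those names are in
scope. CHECK_COMMON is a name I just made up, not part of UnitTest++:

#include <UnitTest++.h>

struct Fixture {
    int value;
    Fixture() : value(1) {}
};

// Expands at the call site, inside the generated test class.
#define CHECK_COMMON() \
    do { CHECK(value == 1); CHECK(value != 0); } while (0)

TEST_FIXTURE(Fixture, test_one)
{
    CHECK_COMMON();
}

TEST_FIXTURE(Fixture, test_two)
{
    CHECK_COMMON();
}

int main()
{
    return UnitTest::RunAllTests();
}

Redundant macro calls across tests, yes, but each failure still reports
the file and line of the test that tripped it.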

--
Phlip

Mar 9 '07 #29

Noah Roberts wrote:
Pete Becker wrote:
>Today, "regression test" seems to mean "run the tests you've run
before and see if anything got worse." I.e., run the test suite.
Formally, though, a regression test is a test you add to your test
suite in response to a user-reported defect, reproducing the user's
conditions.

I believe you're wrong on this.
I've only spent ten years as a test writer and manager, so it may be
that I don't know what I'm talking about, but I doubt it.

All definitions of regression testing I
have seen are running the full suite to make sure you didn't break
anything. This would follow from the definition of "regression":

1. the act of going back to a previous place or state; return or
reversion.
http://dictionary.reference.com/browse/regression
That's not a definition of "regression test."
I don't think this is a change either. Wikipedia quotes Fred Brooks:

"Also as a consequence of the introduction of new bugs, program
maintenance requires far more system testing per statement written than
any other programming. Theoretically, after each fix one must run the
entire batch of test cases previously run against the system, to ensure
that it has not been damaged in an obscure way. In practice, such
regression testing must indeed approximate this theoretical idea, and it
is very costly." -- Fred Brooks, The Mythical Man Month (p 122)

That book is a couple decades old at least...
And that's not a book about testing. For the true definition, see "The
Art of Software Testing," by Glenford Myers.
This is an important step to make even if expensive. Many times a fix
to a new bug can cause old bugs to reappear...for instance, sometimes a
fix introduces a new bug, which is found and reported...and then "fixed"
bringing back the old one whose fix introduced this new bug.

What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.
Its formal name is "regression testing." Or was, until the "regression"
became vacuous.
Step one, new acceptance test for the bug...step two, find the cause,
step 3 write unit test to expose cause...step 4 fix...step 5 run new
tests...step 6 run regression.

So I don't see that anything has become meaningless here. Both new
tests for bugs and regression tests to be sure the program still passes
acceptance from previous versions are important steps to robust project
management.
As I said, "regression test" has come to mean "test." Too bad.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #30

Ian Collins wrote:
>>
That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.
No, the risk is not eliminated. All that's eliminated is the risk that the
particular test you wrote in response to the defect report will fail again. That
doesn't mean that the customer's code will work right, because the
previous fix might not have gotten at the entire problem, and some other
untested thing might now cause the customer's same code to fail. There's
no substitute for real-world code.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #31

bnonaj wrote:
nw wrote:
>Hi,

I have been asked to teach a short course on testing in C++. Until now
I have used my own testing classes (which from what I've seen seem
similar to the boost unit testing classes). Considering I have a
limited amount of time what do readers of this group think would be
useful to cover in this course? Is boost the way to go?

Sorry if this is off-topic in this group.

Many Thanks.
Testing should be part of the design process, not an afterthought,
hence I think you should work through a design, how to implement it, and
the various tests required for each class, the whole design, and parts
thereof. It would also be useful to provide design errors in C++ code
that should be revealed by testing but are not immediately obvious.
Yes, exactly. Tools come later.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #32

Ian Collins wrote:
That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.
Tip: Suppose you have wall-to-wall tests, and a bug report. Test cases
make excellent platforms for debugging.

Now suppose the bug report comes with the high-level inputs that cause
the bug. You could create a new high-level test, pass the inputs in,
and indeed reproduce the bug.

However, if your new test case is very far from the target bug, you
have not yet "captured" the bug. You should then write a low-level
(faster) test case, directly on the bug's home class, and only then
kill the bug.
The point may be that there isn't any division between new tests for
bugs and existing acceptance tests. The bug simply becomes another test
case.
If the "regression" suite is very slow, and if you run it less often
than the TDD tests, then you should treat any failure in the
regression suite as an escape from the TDD suite. You should capture
_this_ bug with a test, before killing it. This tip increases the
value and power of the faster test suite. You can then TDD with more
confidence.

--
Phlip

Mar 9 '07 #33

Phlip wrote:
>
However, if your new test case is very far from the target bug, you
have not yet "captured" the bug. You should then write a low-level
(faster) test case, directly on the bug's home class, and only then
kill the bug.
Exactly. The low-level test case captures what the developer thought the
problem was. The regression test captures what the customer saw. They
aren't necessarily the same.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #34

Phlip wrote:
Ian Collins wrote:

>>That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.


Tip: Suppose you have wall-to-wall tests, and a bug report. Test cases
make excellent platforms for debugging.

Now suppose the bug report comes with the high-level inputs that cause
the bug. You could create a new high-level test, pass the inputs in,
and indeed reproduce the bug.

However, if your new test case is very far from the target bug, you
have not yet "captured" the bug. You should then write a low-level
(faster) test case, directly on the bug's home class, and only then
kill the bug.
True, that's what I do. I should have stressed propagating the tests
down to the unit level. The failing acceptance test can be a good
pointer as to where the problem lies and where to add the failing unit tests.

--
Ian Collins.
Mar 9 '07 #35

Pete Becker wrote:
Ian Collins wrote:
>>>
That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.

No, the risk is not eliminated. All that's eliminated is the risk that the
particular test you wrote in response to the defect report will fail again. That
doesn't mean that the customer's code will work right, because the
previous fix might not have gotten at the entire problem, and some other
untested thing might now cause the customer's same code to fail. There's
no substitute for real-world code.
True, it may fail somewhere else, but the conditions described in the
original defect report will not cause problems. A test suite is seldom,
if ever, perfect, but adding tests for field failure cases is an
excellent means of improving it. Not adding tests for field failure
cases is nothing short of negligent.

I don't know where "There's no substitute for real-world code" came
from, tests are always run against real code.

--
Ian Collins.
Mar 9 '07 #36

Pete Becker wrote:
>
As I said, "regression test" has come to mean "test." Too bad.
It's arguably more accurate to say that "regression test" has come to mean
"acceptance test". One role of all tests is to prevent regressions.

--
Ian Collins.
Mar 9 '07 #37

Ian Collins wrote:
Pete Becker wrote:
>Ian Collins wrote:
>>That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.
No, the risk is not eliminated. All that's eliminated is the risk that the
particular test you wrote in response to the defect report will fail again. That
doesn't mean that the customer's code will work right, because the
previous fix might not have gotten at the entire problem, and some other
untested thing might now cause the customer's same code to fail. There's
no substitute for real-world code.
True, it may fail somewhere else, but the conditions described in the
original defect report will not cause problems.
That's not true. The added test tests what the developer or the tester
thinks caused the problem, after distilling the original defect report.
You still need to run the original code now and then, in case the
problem was more complex than the developer realized.
A test suite is seldom,
if ever, perfect, but adding tests for field failure cases is an
excellent means of improving it. Not adding tests for field failure
cases is nothing short of negligent.

I don't know where "There's no substitute for real-world code" came
from, tests are always run against real code.
Real code, yes. Real-world code, no. Most tests are fairly short,
intended to isolate a particular part of a specification. That's
important, but it's also important to incorporate real users in the
testing cycle, for example, through a beta test. They typically don't
find anywhere near as many problems as the internal testers do (back
when I managed Borland's compiler testing, over 90% of the defects were
found by internal testers), but the ones they find are often killers.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #38

Ian Collins wrote:
Pete Becker wrote:
>As I said, "regression test" has come to mean "test." Too bad.
It's arguably more accurate to say that "regression test" has come to mean
"acceptance test".
Well, back in the day, we had unit tests, integration tests, acceptance
tests, and regression tests.
> One role of all tests is to prevent regressions.
Yes, and that is apparently taken to mean that every test is a
regression test.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #39

Pete Becker wrote:
Noah Roberts wrote:
>Pete Becker wrote:
>>Today, "regression test" seems to mean "run the tests you've run
before and see if anything got worse." I.e., run the test suite.
Formally, though, a regression test is a test you add to your test
suite in response to a user-reported defect, reproducing the user's
conditions.

I believe you're wrong on this.

I've only spent ten years as a test writer and manager, so it may be
that I don't know what I'm talking about, but I doubt it.

> All definitions of regression testing I have seen are running the
full suite to make sure you didn't break anything. This would follow
from the definition of "regression":

1. the act of going back to a previous place or state; return or
reversion.
http://dictionary.reference.com/browse/regression

That's not a definition of "regression test."
No, it isn't. But it is a definition of regression. If you add test to
the end you can easily derive what the correct meaning should be from
the definition of the component words. Since regression means to move
back, then regression test must mean to test what was previously tested.

Now, this is the common use; it fits the definition of the component
words as they are used in the English language. If you want to change
that definition to mean something else and then claim that the other use
is somehow wrong, be my guest; the fact that you found one book that
coincides with your use is of little import. Your claim of authority
likewise does not mean you are correct; I fully know who I am arguing
with, I own your book on TR1, and I still say your definition is flawed.

I think I will stick with common use as based on the English language
myself. It's how most people use the words:

"[Regression testing] is a quality control measure to ensure that the
newly modified code still complies with its specified requirements and
that unmodified code has not been affected by the maintenance activity."

http://www.webopedia.com/TERM/R/regression_testing.html

"Any time you modify an implementation within a program, you should also
do regression testing. You can do so by rerunning existing tests against
the modified code to determine whether the changes break anything that
worked prior to the change and by writing new tests where necessary."

http://msdn2.microsoft.com/en-us/lib...67(VS.71).aspx

"In traditional terms, this is called regression testing. We
periodically run tests that check for known good behavior to find out
whether our software still works the way that it did in the past."

Feathers - Working Effectively with Legacy Code pg10

Only Beck seems to disagree with this more common usage.
As I said, "regression test" has come to mean "test." Too bad.
I don't see how you can claim that. Regression testing is simply one
aspect of testing. It means you don't just test the new stuff; you also
test what worked before. "Regression" simply spells out this necessity.
Yes, regression testing is often done at the same time as the new
tests, but that is beside the point; it still needs to be a part of your
testing process. It is also an important distinction because regression
testing might not be part of your immediate feedback cycle but done less
frequently, because it may require many hours to run.
Mar 9 '07 #40

Ian Collins wrote:
It shows, before we started using automated unit and acceptance test
frameworks, running all the tests on a product or application was an
expensive process. The arrival of well designed testing frameworks has
mitigated a great deal of that cost by removing the labour cost and
elapsed time costs from the process.
True, it is faster, but running your regression suite can still take
hours. Ours is parallel processed on many different PCs and still
takes 3+ hours. So it's more expensive than running a set of tests for what
you're working on.
>
>This is an important step to make even if expensive. Many times a fix
to a new bug can cause old bugs to reappear...for instance, sometimes a
fix introduces a new bug, which is found and reported...and then "fixed"
bringing back the old one that's fix introduced this new bug.
That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.
I didn't say it isn't an important step to add a new test for the bug.
What I'm saying is that "regression testing" means something different
and is equally important.
>
>What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.
Step one, new acceptance test for the bug...step two, find the cause,
step 3 write unit test to expose cause...step 4 fix...step 5 run new
tests...step 6 run regression.
Steps 5 and 6 are one and the same; the new tests become part of the test
suite.
Eventually for sure. Immediately, not necessarily. It might be put in
the stack of immediate tests that run in every integration since it is
now something of current importance.
>
>So I don't see that anything has become meaningless here. Both new
tests for bugs and regression tests to be sure the program still passes
acceptance from previous versions are important steps to robust project
management.

The point may be that there isn't any division between new tests for
bugs and existing acceptance tests. The bug simply becomes another test
case.
But there may be a division between what you're currently working on and
the vast supply of tests in your regression suite. The real point is that a
truly complete test run includes the regression suite and not just the new stuff.
Mar 9 '07 #41

Noah Roberts wrote:
Pete Becker wrote:
>Noah Roberts wrote:
>>Pete Becker wrote:

Today, "regression test" seems to mean "run the tests you've run
before and see if anything got worse." I.e., run the test suite.
Formally, though, a regression test is a test you add to your test
suite in response to a user-reported defect, reproducing the user's
conditions.

I believe you're wrong on this.

I've only spent ten years as a test writer and manager, so it may be
that I don't know what I'm talking about, but I doubt it.

>> All definitions of regression testing I have seen are running the
full suite to make sure you didn't break anything. This would follow
from the definition of "regression":

1. the act of going back to a previous place or state; return or
reversion.
http://dictionary.reference.com/browse/regression

That's not a definition of "regression test."

No, it isn't. But it is a definition of regression. If you add test to
the end you can easily derive what the correct meaning should be from
the definition of the component words. Since regression means to move
back, then regression test must mean to test what was previously tested.
So "integral calculus" is all about basic arithmetic, since "integral"
in mathematics means "of or denoted by an integer." Parsing individual
words does not tell you what a compound word means.
Now, this is the common use; it fits the definition of the component
words as they are used in the English language. If you want to change
that definition to mean something else and then claim that the other use
is somehow wrong, be my guest; the fact that you found one book that
coincides with your use is of little import. Your claim of authority
likewise does not mean you are correct; I fully know who I am arguing
with, I own your book on TR1, and I still say your definition is flawed.

I think I will stick with common use as based on the English language
myself. It's how most people use the words:

"[Regression testing] is a quality control measure to ensure that the
newly modified code still complies with its specified requirements and
that unmodified code has not been affected by the maintenance activity."

http://www.webopedia.com/TERM/R/regression_testing.html
I don't dispute that this is common usage. I do regret the loss of a
previously useful term.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #42

Pete Becker wrote:
Ian Collins wrote:
>Pete Becker wrote:
>>As I said, "regression test" has come to mean "test." Too bad.
It's arguably more accurate to say that "regression test" has come to mean
"acceptance test".

Well, back in the day, we had unit tests, integration tests, acceptance
tests, and regression tests.
> One role of all tests is to prevent regressions.

Yes, and that is apparently taken to mean that every test is a
regression test.
Regression testing is running tests from previous versions that still apply
(ie are not being changed in the current iteration). In other words,
you have a product that parses XML. It's old and doesn't do schema.
You decide to add that feature. You build acceptance and integration
tests that test your new feature. You also want to make sure that it
still parses all the stuff it did before, so you run a
_regression suite_ at times additional to your new tests. It may be
that you do them both every time but then again you might have a lot of
old features so maybe you want to speed things up a bit by skipping the
regression and only running it once a night.
Mar 9 '07 #43

Pete Becker wrote:
Ian Collins wrote:
>Pete Becker wrote:
>>Ian Collins wrote:

That is why it is important to fix every bug by adding a failing
test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.

No, the risk is not eliminated. That eliminates the risk that whatever
test you wrote in response to the defect report hasn't failed. That
doesn't mean that the customer's code will work right, because the
previous fix might not have gotten at the entire problem, and some other
untested thing might now cause the customer's same code to fail. There's
no substitute for real-world code.
True, it may fail somewhere else, but the conditions described in the
original defect report will not cause problems.

That's not true. The added test tests what the developer or the tester
thinks caused the problem, after distilling the original defect report.
You still need to run the original code now and then, in case the
problem was more complex than the developer realized.
I think we are arguing at cross purposes here: what are tests run
against if not the original code?
>>
I don't know where "There's no substitute for real-world code" came
from, tests are always run against real code.

Real code, yes. Real-world code, no. Most tests are fairly short,
intended to isolate a particular part of a specification. That's
important, but it's also important to incorporate real users in the
testing cycle, for example, through a beta test. They typically don't
find anywhere near as many problems as the internal testers do (back
when I managed Borland's compiler testing, over 90% of the defects were
found by internal testers), but the ones they find are often killers.
I see, you are talking about real world test cases. I agree these are a
must, but I tend to disagree with your description of tests. While
individual tests may cover individual user stories, specification
clauses or whatever, the test suite as a whole covers way more. If the
tests are designed by or with the customer, the application should be
being tested to ensure it does what it says on the tin in the way the
users expect.

The ability to achieve a very high test coverage with minimal labour
overhead is the biggest improvement offered with fully automated test
frameworks.

--
Ian Collins.
Mar 9 '07 #44

Pete Becker wrote:
Ian Collins wrote:
>Pete Becker wrote:
>>As I said, "regression test" has come to mean "test." Too bad.
It's arguably more accurate to say that "regression test" has come to mean
"acceptance test".

Well, back in the day, we had unit tests, integration tests, acceptance
tests, and regression tests.
I see the trend these days, especially where one of the various agile
methodologies is being followed, is for the latter three to be collapsed
into acceptance tests.
> One role of all tests is to prevent regressions.
Yes, and that is apparently taken to mean that every test is a
regression test.
If every test catches the regression of the feature it tests, it is.

--
Ian Collins.
Mar 9 '07 #45

Pete Becker wrote:
Noah Roberts wrote:
>Pete Becker wrote:
>>Noah Roberts wrote:
Pete Becker wrote:

Today, "regression test" seems to mean "run the tests you've run
before and see if anything got worse." I.e., run the test suite.
Formally, though, a regression test is a test you add to your test
suite in response to a user-reported defect, reproducing the user's
conditions.

I believe you're wrong on this.

I've only spent ten years as a test writer and manager, so it may be
that I don't know what I'm talking about, but I doubt it.
All definitions of regression testing I have seen are running the
full suite to make sure you didn't break anything. This would
follow from the definition of "regression":

1. the act of going back to a previous place or state; return or
reversion.
http://dictionary.reference.com/browse/regression
That's not a definition of "regression test."

No, it isn't. But it is a definition of regression. If you add test
to the end you can easily derive what the correct meaning should be
from the definition of the component words. Since regression means to
move back, then regression test must mean to test what was previously
tested.

So "integral calculus" is all about basic arithmetic, since "integral"
in mathematics means "of or denoted by an integer." Parsing individual
words does not tell you what a compound word means.
There are other meanings to the word "integral"; only one is about
integers. Integral in this case is a function that is derived through a
process called "integration" or, "the act or instance of combining into
an integral whole."

The dictionary has several definitions for the word including:
"consisting or composed of parts that together constitute a whole,"
which seems an apt description of what an integral function is.

If you want to stick to only those that are labeled "mathematics" there
is, "any of several analogous quantities. Compare improper integral,
line integral, multiple integral, surface integral."

In general, yes you can find the meaning of a phrase through analysis of
its composite words. If you couldn't, communication would be difficult,
if not impossible. Integral calculus is no exception.
Mar 9 '07 #46

Noah Roberts wrote:
Ian Collins wrote:
>It shows, before we started using automated unit and acceptance test
frameworks, running all the tests on a product or application was an
expensive process. The arrival of well designed testing frameworks has
mitigated a great deal of that cost by removing the labour cost and
elapsed time costs from the process.

True, it is faster but still, running your regression suite can take
hours. Ours is parallel processed on many different PC's and still
takes 3+. So it's more expensive than running a set of tests for what
you're working on.
That's why you have the unit tests for what you are working on.
>>
>>This is an important step to make even if expensive. Many times a fix
to a new bug can cause old bugs to reappear...for instance, sometimes a
fix introduces a new bug, which is found and reported...and then "fixed"
bringing back the old one that's fix introduced this new bug.
That is why it is important to fix every bug by adding a failing test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.

I didn't say it isn't an important step to add a new test for the bug.
What I'm saying is that "regression testing" means something different
and is equally important.
That's where we disagree: I see "regression testing" as an integral part
of "acceptance testing". Do you have separate regression and acceptance
tests, or a combined suite?
>>
>>What you are talking about is heavily used in TDD and also hasn't gone
away or become less used. If it has a formal name I don't recall it.
Step one, new acceptance test for the bug...step two, find the cause,
step 3 write unit test to expose cause...step 4 fix...step 5 run new
tests...step 6 run regression.
Steps 5 and 6 are one and the same; the new tests become part of the test
suite.

Eventually for sure. Immediately, not necessarily. It might be put in
the stack of immediate tests that run in every integration since it is
now something of current importance.
I see. We didn't do that very often. Once we had identified unit tests
that illuminated a bug, these were used to fix it and the validation was
performed during regular acceptance suite runs. This proved (at least
for us) to be the most effective approach.
>>
>>So I don't see that anything has become meaningless here. Both new
tests for bugs and regression tests to be sure the program still passes
acceptance from previous versions are important steps to robust project
management.

The point may be that there isn't any division between new tests for
bugs and existing acceptance tests. The bug simply becomes another test
case.

But there may be a division between what you're currently working on and
the vast supply of tests in your regression suite. The real point is that a
truly complete test run includes the regression suite and not just the new stuff.
I agree, we just differ in approach.

--
Ian Collins.
Mar 9 '07 #47

Ian Collins wrote:
Pete Becker wrote:
>Ian Collins wrote:
>>Pete Becker wrote:

Ian Collins wrote:

That is why it is important to fix every bug by adding a failing
test or
tests to the test suite that reveal the bug and then getting these to
pass. The tests remain in the suite, so the risk of regressing an old
bug is eliminated.
>
No, the risk is not eliminated. All that's eliminated is the risk that the
particular test you wrote in response to the defect report will fail again. That
doesn't mean that the customer's code will work right, because the
previous fix might not have gotten at the entire problem, and some other
untested thing might now cause the customer's same code to fail. There's
no substitute for real-world code.

True, it may fail somewhere else, but the conditions described in the
original defect report will not cause problems.
That's not true. The added test tests what the developer or the tester
thinks caused the problem, after distilling the original defect report.
You still need to run the original code now and then, in case the
problem was more complex than the developer realized.
I think we are arguing at cross purposes here, what are tests run
against if not the original code?
Yes, sorry: by "the original code" I was referring to the failing code
submitted by the customer.
>>I don't know where "There's no substitute for real-world code" came
from, tests are always run against real code.
Real code, yes. Real-world code, no. Most tests are fairly short,
intended to isolate a particular part of a specification. That's
important, but it's also important to incorporate real users in the
testing cycle, for example, through a beta test. They typically don't
find anywhere near as many problems as the internal testers do (back
when I managed Borland's compiler testing, over 90% of the defects were
found by internal testers), but the ones they find are often killers.
I see, you are talking about real world test cases. I agree these are a
must, but I tend to disagree with your description of tests. While
individual tests may cover individual user stories, specification
clauses or whatever, the test suite as a whole covers way more. If the
tests are designed by or with the customer, the application is being
tested to ensure it does what it says on the tin in the way the
users expect.

The ability to achieve a very high test coverage with minimal labour
overhead is the biggest improvement offered with fully automated test
frameworks.
Frameworks do reduce the labor of running tests, but they do not in
themselves improve test coverage (except in the sense that you can run
more tests in a given time). Test coverage is the result of design and
implementation.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #48

Noah Roberts wrote:
Pete Becker wrote:
>Ian Collins wrote:
>>Pete Becker wrote:
As I said, "regression test" has come to mean "test." Too bad.

It's arguably more accurate to say that "regression test" has come to mean
"acceptance test".

Well, back in the day, we had unit tests, integration tests,
acceptance tests, and regression tests.
>> One role of all tests is to prevent regressions.

Yes, and that is apparently taken to mean that every test is a
regression test.

Regression test is running tests from previous versions that still apply
(ie are not being changed in the current iteration). In other words,
you have a product that parses XML. It's old and doesn't do schema. You
decide to add that feature. You build acceptance and integration tests
that test your new feature. You also want to make sure that it still
parses all the stuff it did before, so you run a _regression suite_ at
times additional to your new tests. It may be that you do them both
every time but then again you might have a lot of old features so maybe
you want to speed things up a bit by skipping the regression and only
running it once a night.
Tests from previous versions that still apply test current requirements.
Separating them from newly written tests that also test current
requirements is artificial.

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #49

Noah Roberts wrote:
>
In general, yes you can find the meaning of a phrase through analysis of
its composite words. If you couldn't, communication would be difficult
if not impossible.
The meaning of modifiers is often determined by context, which a
dictionary doesn't understand. For example, if you look up "unit" in the
dictionary, its definition could lead you to believe that a "unit test"
is any test that you apply to your application, which is, after all, a
"unit".

--

-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)
Mar 9 '07 #50
