How to use CPPUnit effectively?
When I want to use CPPUnit, I always find it useless and a waste of my time. I think that's mainly because I don't know how to use CPPUnit effectively. So if somebody here has some experience, please share it with me.
I'd also like answers to these questions:
1. Should I write test code for every member function of my class? If not, which function should be tested?
2. What else should I consider apart from checking parameters and return value?

Thanks.

Oct 31 '06 #1
Hooyoo wrote:
How to use CPPUnit effectively?
When I want to use CPPUnit, I always find it useless and a waste of my time. I think that's mainly because I don't know how to use CPPUnit effectively. So if somebody here has some experience, please share it with me.
Like any other new piece of software, start with the smallest example that compiles and runs, say a TestFixture with a single test.
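A minimal sketch of such a starting point might look like this (assuming the stock CppUnit headers; the Counter class is a hypothetical stand-in for your code under test):

#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

// Hypothetical class under test.
class Counter {
    int value_;
public:
    Counter() : value_(0) {}
    void increment() { ++value_; }
    int value() const { return value_; }
};

class CounterTest : public CppUnit::TestFixture {
    CPPUNIT_TEST_SUITE(CounterTest);
    CPPUNIT_TEST(testIncrement);
    CPPUNIT_TEST_SUITE_END();
public:
    void testIncrement() {
        Counter c;
        c.increment();
        CPPUNIT_ASSERT_EQUAL(1, c.value());
    }
};

CPPUNIT_TEST_SUITE_REGISTRATION(CounterTest);

int main() {
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
    return runner.run() ? 0 : 1;  // run() reports success as true
}

Once that compiles, runs, and goes green, grow it one test at a time.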
I'd also like answers to these questions:
1. Should I write test code for every member function of my class? If not, which function should be tested?
Write the tests first and the members will be added as you require them.
2. What else should I consider apart from checking parameters and
return value?
Side effects.

--
Ian Collins.
Oct 31 '06 #2
Hooyoo wrote:
How to use CPPUnit effectively?
Use UnitTest++. It's much leaner and easier to use:

http://unittest-cpp.sourceforge.net/

CppUnit is stuffed with features you won't need, and they often get in the
way of the few you do.
When I want to use CPPUnit, I always find it useless and a waste of my time.
Do you still spend a lot of time debugging? Don't you feel _that's_ a little
more useless and wasteful?
I think that's mainly
because I don't know how to use CPPUnit effectively.
You should write tests for each new line (or two) of the code. Never write
new code without a failing test.

If a test fails unexpectedly, use Undo to revert the code back to the last
state where it passed. This simple trick takes care of nearly all debugging.
1. Should I write test code for every member function of my class? If not, which function should be tested?
To make the function exist, you need a test. To make its behavior change,
you need another test. And so on.
2. What else should I consider apart from checking parameters and
return value?
For each test case you write, run it first and see if it fails. The test should fail for the correct reason. Any kind of code could, in theory, pass the test, so you should then write just enough code to pass. If the code errs, it should err on the side of simplicity.

Then write another test to force out the error. These tests, together, will
constrain the innards of that function.

The result works as if the test cases all colluded to constrain all the
function's internal lines, though each test case alone is too simple.
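A toy sketch of that triangulation (hypothetical magnitude() function; plain asserts stand in for whichever framework you use):

#include <cassert>

// Hypothetical function under test.
int magnitude(int x) { return x < 0 ? -x : x; }

int main() {
    assert(magnitude(-5) == 5);  // alone, "return 5;" would pass this
    assert(magnitude(-7) == 7);  // second attack angle forces real logic
    return 0;
}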

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Oct 31 '06 #3

Phlip wrote:
Hooyoo wrote:
How to use CPPUnit effectively?

Use UnitTest++. It's much leaner and easier to use:

http://unittest-cpp.sourceforge.net/
The MS VC project in the zipped source makes me disgusting.
Oct 31 '06 #4
Binary wrote:
Phlip wrote:
>Hooyoo wrote:
>>How to use CPPUnit effectively?
Use UnitTest++. It's much leaner and easier to use:

http://unittest-cpp.sourceforge.net/
The MS VC project in the zipped source makes me disgusting.
I resemble that remark.
Oct 31 '06 #5
Binary wrote:
> http://unittest-cpp.sourceforge.net/
The MS VC project in the zipped source makes me disgusting.
I'm sure you already were. However...

The authors are quite proud to support various other compilers, including
those targeting embedded platforms.

Or did you expect the project to ship with no project files of any kind?
Don't get me started about Autoconf and the abattoir of eternal torture that
is the Makefile format(s)...

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Oct 31 '06 #6
Phlip wrote:
Binary wrote:
>> http://unittest-cpp.sourceforge.net/
The MS VC project in the zipped source makes me disgusting.

I'm sure you already were. However...

The authors are quite proud to support various other compilers, including
those targeting embedded platforms.

Or did you expect the project to ship with no project files of any kind?
Don't get me started about Autoconf and the abattoir of eternal torture that
is the Makefile format(s)...
HEY - make is beautiful. I use it with MakeXS (I really have to put
that web page back up) but a copy comes with Austria C++.

autoconf on the other hand, well it surprises me just how much it is used.

As for MS project files, they're a disaster as well; if you have a large number of project files, it's very hard to maintain them.
Oct 31 '06 #7

Phlip wrote:
Binary wrote:
http://unittest-cpp.sourceforge.net/
The MS VC project in the zipped source makes me disgusting.

I'm sure you already were. However...

The authors are quite proud to support various other compilers, including
those targeting embedded platforms.

Or did you expect the project to ship with no project files of any kind?
Don't get me started about Autoconf and the abattoir of eternal torture that
is the Makefile format(s)...
Shrug... at least they work on more than one platform/compiler
combination.

Oct 31 '06 #8
Binary wrote:
The MS VC project in the zipped source makes me disgusting.
That's interesting. I looked at the MS VC project. Then I looked in a
mirror. Seems like the project doesn't have any effect on me.

Oct 31 '06 #9
Hooyoo wrote:
How to use CPPUnit effectively?
When I want to use CPPUnit, I always find it useless and a waste of my time. I think that's mainly because I don't know how to use CPPUnit effectively. So if somebody here has some experience, please share it with me.
This group is about C++. It seems that you don't understand what unit
tests are about. So I'd like to recommend a very good book about unit
testing to you:

Kent Beck: Test-Driven Development: By Example. Addison-Wesley.

This book isn't about CPPUnit. It isn't even about C++. It uses Java
and JUnit for its examples. But it is very well written and the example
code is very easy to understand.
I'd also like answers to these questions:
1. Should I write test code for every member function of my class? If not, which function should be tested?
2. What else should I consider apart from checking parameters and
return value?
You will find the answers to these non-C++-related questions in the
book I recommended.
Thanks.
You're welcome

Oct 31 '06 #10
How to use CPPUnit effectively?

There are several schools of thought on this. One is the "Test Driven
Development" school. See Ian's excellent post or the book by the same
name. The idea is you write tests first, and then write code that
passes those tests.

But, of course, it depends on your project. Another school of thought
is the "Legacy Code" school. (See Working Effectively with Legacy Code
by Michael Feathers.) This is more appropriate if you have a large
project that already exists without tests. The key idea there is that
you add tests before changing things, so that you don't break anything,
and add tests when you add new code, so that the new code is
automatically tested. Over time, more and more of the code is tested.

It depends on your use model. If you're writing code only for
yourself, you probably won't need as much. I tend to not write tests
for obvious things (does a + b work), but for things that have a high
chance of being buggy (does my code work to find the last element in
the array? what if the array is empty?). And particularly for things
with complex logic, where I know it's right, but I fear some other
developer will miss the subtlety of all that a given method does, and
will change it in a way that breaks part (but not all) of the behavior.

One of the best uses I've ever seen was in complex database code that
then did some complex business logic processing, followed by
complex display to a browser. By isolating the business logic in unit
tests, you could get it right, and if there was a bug, you didn't have
to start up the database and the web server, so tracking down problems
took seconds instead of minutes.

Another really good use is if you ever find a bug. Don't fix the bug.
Instead, write a test that replicates the bug, then fix the code so it
passes the test. That way, you've fixed the bug, and it will stay
fixed forever (i.e., some other programmer won't reintroduce it).
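As a sketch of that workflow (the lastElement() helper and its old empty-input bug are hypothetical):

#include <cassert>
#include <vector>

// Hypothetical helper that once crashed on empty input; the fix returns
// a fallback instead of reading v[v.size() - 1].
int lastElement(const std::vector<int>& v, int fallback) {
    return v.empty() ? fallback : v.back();
}

int main() {
    std::vector<int> empty;
    assert(lastElement(empty, -1) == -1);  // replicates the bug report; keeps it fixed
    std::vector<int> v;
    v.push_back(3);
    v.push_back(9);
    assert(lastElement(v, -1) == 9);       // normal case still works
    return 0;
}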

Sadly, the greatest use of unit tests comes when you start getting coverage over a good part of the code, and it may take a while to get there. At
that point you can feel really confident changing code without breaking
things, and you're confident the code works correctly.

Michael

Oct 31 '06 #11
On 30 Oct 2006 19:07:57 -0800, "Hooyoo" <zh*********@126.com> wrote:
>How to use CPPUnit effectively?
When I want to use CPPUnit, I always find it useless and a waste of my time. I think that's mainly because I don't know how to use CPPUnit effectively. So if somebody here has some experience, please share it with me.
I'd also like answers to these questions:
1. Should I write test code for every member function of my class? If not, which function should be tested?
2. What else should I consider apart from checking parameters and
return value?
For beginners to C++ Unit Tests the following article may be helpful:
Chuck Allison: The Simplest Automated Unit Test Framework That Could
Possibly Work - http://www.ddj.com/dept/cpp/184401279

Good luck,
Roland Pibinger
Oct 31 '06 #12
On Tue, 31 Oct 2006 04:12:06 GMT, "Phlip" <ph******@yahoo.com> wrote:
>Hooyoo wrote:
>How to use CPPUnit effectively?

Use UnitTest++. It's much leaner and easier to use:

http://unittest-cpp.sourceforge.net/

CppUnit is stuffed with features you won't need, and they often get in the
way of the few you do.
I guess a good C++ unit test 'framework' is yet to be written. Most of
the available frameworks are cluttered up with macros which IMO is
rather repelling.

Best regards,
Roland Pibinger
Oct 31 '06 #13
Roland Pibinger wrote:
On Tue, 31 Oct 2006 04:12:06 GMT, "Phlip" <ph******@yahoo.com> wrote:
>Hooyoo wrote:
>>How to use CPPUnit effectively?
Use UnitTest++. It's much leaner and easier to use:

http://unittest-cpp.sourceforge.net/

CppUnit is stuffed with features you won't need, and they often get in the
way of the few you do.

I guess a good C++ unit test 'framework' is yet to be written. Most of
the available frameworks are cluttered up with macros which IMO is
rather repelling.

There are 4 macros in the Austria C++ unit test framework.

They are:

AT_TestArea - define a test area
AT_DefineTest - define a test
AT_RegisterTest - register a test
AT_TCAssert - "Test case assert" which throws a test case exception

I lied, there is another version of AT_DefineTest that takes a little
more tweaking.

So, what's so "repelling" ?

Oh - btw, here is one of the unit tests. You can define any number of unit tests in a file, and you can link as many unit test files as you like into a single executable. It uses the Austria generic factories so all the unit tests register themselves automatically.

That's about it. WHAT is repelling about that and what would you do
differently ?

#include "at_bw_transform.h"

#include "at_unit_test.h"

using namespace at;

AT_TestArea( ATBWTransform, "BW Transform" );

#include <iostream>

AT_DefineTest( Basic, ATBWTransform, "Basic BW Transform code test" )
{
void Run()
{
std::string test_str = "fun faster fast";

std::vector<char test_result( test_str.size() );

unsigned i = BWTEncode( test_str.begin(), test_str.end(),
test_result.begin() );

std::cout << "BW( \"" << test_str << "\" ) = (" << i << ") \""
<< std::string( test_result.begin(), test_result.end() ) <<
"\"\n";

std::vector<char test_decode( test_str.size() );

BWTDecode( i, test_result.begin(), test_result.end(),
test_decode.begin() );

std::cout << "Decode( \"" << std::string( test_result.begin(),
test_result.end() ) << "\" ) (" << i << ") $
<< std::string( test_decode.begin(), test_decode.end() ) <<
"\"\n";

AT_TCAssert( std::string( test_decode.begin(),
test_decode.end() ) == test_str, "BWT decode failed" );
}
};

AT_RegisterTest( Basic, ATBWTransform );
Oct 31 '06 #14
On Tue, 31 Oct 2006 21:54:48 +1100, Gianni Mariani
<gi*******@mariani.ws> wrote:
>Roland Pibinger wrote:
>I guess a good C++ unit test 'framework' is yet to be written. Most of
the available frameworks are cluttered up with macros which IMO is
rather repelling.

There are 4 macros in the Austria C++ unit test framework. They are:
AT_TestArea - define a test area
AT_DefineTest - define a test
AT_RegisterTest - register a test
AT_TCAssert - "Test case assert" which throws a test case exception
I lied, there is another version of AT_DefineTest that takes a little
more tweaking.

So, what's so "repelling" ?
I've not been aware of your Austria framework (even though I live in
Austria. BTW, why haven't you named it Italia?). My favorite unit test
'framework' still is the Standard 'assert' (admittedly a macro in most
implementations).

Best wishes,
Roland Pibinger
Oct 31 '06 #15
Roland Pibinger wrote:
...
>So, what's so "repelling" ?

I've not been aware of your Austria framework (even though I live in
Austria. BTW, why haven't you named it Italia?). My favorite unit test
'framework' still is the Standard 'assert' (admittedly a macro in most
implementations).
Because it started off as a country that started with "A". The next project is Borneo. Besides, I think the Austrians are the most fun people in Europe.

Hmm. AT_TCAssert gives you the line number and file name (and stack
trace if you want) of the failure and allows you to run the rest of the
tests. You get a full report on all the test failures not just the
first one to fail.

Anyhow, if you're happy with assert, more power to you !
Oct 31 '06 #16
Roland Pibinger wrote:
I guess a good C++ unit test 'framework' is yet to be written. Most of
the available frameworks are cluttered up with macros which IMO is
rather repelling.
One Noel Llopis once wrote a book on C++ for game programmers. You should have seen how he twitched when I first showed him my TEST_() macro (based on the CppUnitLite TEST() macro). Game programmers are known to abuse macros in false pursuit of efficiency, and I expect his book ranted against them.

And now his UnitTest++ has exactly the same suite of macros.

In a C language, thou shalt write an assertion with a macro. Even the
various published style guides say so. Further, you can generally write a
better assertion than many vaunted soft languages can support.

In C++, use a macro for any of these techniques:

- conditional compilation
- extracting compiler-defined constants (like __LINE__)
- stringization
- token pasting

You need all of them for a powerful and flexible unit test rig.
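For illustration, here is a minimal assertion macro (a sketch, not taken from any framework named in this thread) built from stringization and the compiler-defined constants:

#include <cstdio>

// Stringization (#cond) captures the expression text; __FILE__ and
// __LINE__ capture the failure location. Neither is possible without
// the preprocessor.
#define MY_CHECK(cond) \
    do { \
        if (!(cond)) \
            std::printf("%s(%d): check failed: %s\n", \
                        __FILE__, __LINE__, #cond); \
    } while (0)

// Token pasting (a##b) is what lets a TEST(Name) macro mint a unique
// class or function name per test - omitted here for brevity.

int main() {
    MY_CHECK(1 + 1 == 2);  // passes silently
    MY_CHECK(2 + 2 == 5);  // prints file, line, and the expression text
    return 0;
}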

People who are repelled by any language technique, without actually
understanding its uses and costs, are not on the path to wisdom.

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Oct 31 '06 #17
"Hooyoo" <zh*********@126.comwrites:
How to use CPPUnit effectively? When I want to use CPPUnit, I always find it useless and a waste of my time. I think that's mainly because I don't know how to use CPPUnit effectively. So if somebody here has some experience, please share it with me.
Our first cut at a home-grown unit test framework was very much like JUnit / CPPUnit. The amount of extra verbiage needed for even the simplest tests was a major problem.

We moved to Boost's unit test framework about a year ago, and it's
_much_ better. None of this "unit tests are classes that inherit
from..." stuff, each unit test is a free function.

Roland P. won't like it, because it's replete with macros. :-) But
the "auto unit test" facility is very handy, and does a very, very
important thing: it makes writing unit tests extremely cheap.

I recommend that you check it out.
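A minimal auto-registered test looks roughly like this (a sketch assuming a reasonably recent Boost; in older releases the auto-registration macro and header names differed):

#define BOOST_TEST_MODULE ExampleModule
#include <boost/test/included/unit_test.hpp>

// Hypothetical function under test.
int add(int a, int b) { return a + b; }

// No fixture class, no manual registration - the macro does it all.
BOOST_AUTO_TEST_CASE(add_works)
{
    BOOST_CHECK_EQUAL(add(2, 2), 4);
}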

----------------------------------------------------------------------
Dave Steffen, Ph.D. Disobey this command!
Software Engineer IV - Douglas Hofstadter
Numerica Corporation
dg@steffen a@t numer@ica d@ot us (remove @'s to email me)
Oct 31 '06 #18
Dave Steffen wrote:
We moved to Boost's unit test framework about a year ago, and it's
_much_ better. None of this "unit tests are classes that inherit
from..." stuff, each unit test is a free function.
Fixtures are a unit test Best Practice, so test rigs should enable them.
Both UnitTest++ and my TEST_() enable them without requiring them. Boost's
doesn't. But indeed do check it out; it _is_ Boost, after all!
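In UnitTest++ that opt-in looks roughly like this (a sketch; the header name varies by version, and ListFixture is hypothetical - setup runs in its constructor, teardown in its destructor):

#include <UnitTest++.h>
#include <vector>

struct ListFixture
{
    std::vector<int> list;
    ListFixture() { list.push_back(42); }   // per-test setup
};

TEST(StandaloneTestNeedsNoFixture)
{
    CHECK(1 + 1 == 2);
}

TEST_FIXTURE(ListFixture, FixtureIsOptIn)
{
    CHECK_EQUAL(1, (int)list.size());       // fixture members are in scope
}

int main()
{
    return UnitTest::RunAllTests();
}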

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Oct 31 '06 #19
On 31 Oct 2006 13:44:44 -0700, Dr. Dave Steffen wrote:
We moved to Boost's unit test framework about a year ago, and it's
_much_ better. None of this "unit tests are classes that inherit
from..." stuff, each unit test is a free function.

Roland P. won't like it, because it's replete with macros. :-) But
the "auto unit test" facility is very handy, and does a very, very
important thing: it makes writing unit tests extremely cheap.
If you are fond of boosted stuff TUT might interest you (not that I
recommend it). It uses built-in C++ macros (a.k.a. templates) instead
of preprocessor macros: http://tut-framework.sourceforge.net/

Best regards,
Roland Pibinger
Oct 31 '06 #20
"Phlip" <ph******@yahoo.comwrites:
Dave Steffen wrote:
We moved to Boost's unit test framework about a year ago, and it's
_much_ better. None of this "unit tests are classes that inherit
from..." stuff, each unit test is a free function.

Fixtures are a unit test Best Practice, so test rigs should enable them.
Yes, good point...
Both UnitTest++ and my TEST_() enable them without requiring them. Boost's
doesn't. But indeed do check it out; it _is_ Boost, after all!
... and using Boost::UnitTest I miss them on occasion. IIRC there
are ways to do _whole test suite_ setups and fixtures, but I haven't
hacked around enough to use them yet.

The thing that makes Boost's unit test framework so near and dear to
my heart is the extreme lack of typing overhead. I credit this for
the large number of unit tests that people in my organization are
writing (on their own, without prodding from anyone) these days; as
opposed to our old JUnit-ish thing, which took _loads_ of typing to
get anywhere, so people just tended to ignore it.

----------------------------------------------------------------------
Dave Steffen, Ph.D. Disobey this command!
Software Engineer IV - Douglas Hofstadter
Numerica Corporation
dg@steffen a@t numerica d@ot us (remove @'s to email me)

Oct 31 '06 #21
You should write tests for each new line (or two) of the code. Never write
new code without a failing test.
Always?
How do I test concurrency?
How do I test that after adding a new button my GUI looks ugly?
How do I test proper exception handling after a disk failure?
How do I test the efficiency of an algorithm?
There is a lot of code for which writing a unit test is useless.
Unit tests are really nice, especially when code changes - if a test passes after the change, that's a hint that maybe nothing was broken. But they are also dangerous. Many times I've heard that if the tests pass then everything works. Actually, it often means that there are too few tests. Or maybe every single function works, but together they don't work (especially true in a multithreaded environment). Or maybe new functionality that was added introduces several hundred new states in the application that weren't considered when the existing tests were written.
Also, probably everybody who writes tests has spent a lot of time debugging working code only to find that the mistake was in the test.
Yes, tests are important, but I think they are often overrated. I've often heard that TDD's goal is to make it green. Beware of red. Make it as simple as possible. But I don't recall hearing the word "design".

Oct 31 '06 #22
jolz wrote:
>You should write tests for each new line (or two) of the code. Never write
new code without a failing test.

Always?
How do I test concurrency?
Easy - run many of them.
How do I test that after adding a new button my GUI looks ugly?
I don't know what you talk about - it looks good to me :-)
How do I test proper exception handling after a disk failure?
Have your app code read/write to an abstracted layer where you can simulate those kinds of failures. BTW - this would not be a test that would be high on my priority list. In the unlikely event the customer has a disk failure, it's unlikely that my code is going to be the only one to blow up.

The idea here is be selective of the failure modes you test for. Make
sure they are going to be a priority to the customer.
How do I test the efficiency of an algorithm?
Run the test on large data sets and limit the execution time.
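A sketch of that idea (using C++11 <chrono> for the timer; the data size and the 500 ms budget are arbitrary assumptions to tune per machine - exactly the weakness discussed further down the thread):

#include <algorithm>
#include <cassert>
#include <chrono>
#include <cstddef>
#include <vector>

int main()
{
    std::vector<int> data(1000000);
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] = static_cast<int>(data.size() - i);   // reversed input

    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    std::sort(data.begin(), data.end());
    std::chrono::steady_clock::time_point stop = std::chrono::steady_clock::now();

    // Fails if the algorithm regresses past the budget.
    assert(stop - start < std::chrono::milliseconds(500));
    return 0;
}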
There is a lot of code for which writing a unit test is useless.
Unit tests are really nice, especially when code changes - if a test passes after the change, that's a hint that maybe nothing was broken. But they are also dangerous. Many times I've heard that if the tests pass then everything works. Actually, it often means that there are too few tests. Or maybe every single function works, but together they don't work (especially true in a multithreaded environment).
This one is easy - test them in a multithreaded environment. Many of the classes I create that have to be thread safe are subject to what I term a "Monte Carlo" test where I nail them with various random requests. For example, there is a "timer" module. I have tests where timers are added, removed, and failed, on and on, over a varying number of threads.
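A sketch of that kind of Monte Carlo hammering (C++11 threads; the mutex-guarded Timers class is a hypothetical stand-in for the real module):

#include <cassert>
#include <cstddef>
#include <functional>
#include <mutex>
#include <set>
#include <thread>
#include <vector>

// Minimal stand-in for the "timer" module: a mutex-guarded set of ids.
class Timers {
    std::set<int> ids_;
    mutable std::mutex mtx_;
public:
    void add(int id)    { std::lock_guard<std::mutex> l(mtx_); ids_.insert(id); }
    void remove(int id) { std::lock_guard<std::mutex> l(mtx_); ids_.erase(id); }
    std::size_t size() const { std::lock_guard<std::mutex> l(mtx_); return ids_.size(); }
};

// Each thread fires a long stream of pseudo-random add/remove requests.
void hammer(Timers& t, unsigned seed)
{
    for (int i = 0; i < 100000; ++i) {
        seed = seed * 1103515245u + 12345u;   // tiny LCG, no shared state
        int id = static_cast<int>(seed % 64u);
        if (seed & 1u) t.add(id); else t.remove(id);
    }
}

int main()
{
    Timers timers;
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < 8; ++i)          // parameterized thread count
        pool.emplace_back(hammer, std::ref(timers), i + 1);
    for (std::size_t i = 0; i < pool.size(); ++i)
        pool[i].join();
    assert(timers.size() <= 64);              // basic sanity invariant
    return 0;
}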

In an automated build system, the builds run continuously and occasionally there is a "failure", so we grab the core dumps and voila, sometimes you nail it.

The moral of the story is: write many tests, run them as often as you can, and have a way to check the results of a failure - usually with a core dump. Many times I have resorted to having the program suspend itself on failure, then going into the automated test system and attaching a debugger to the suspended code (usually on Windows).

Also, by doing an automated continuous build you see which changes to
the software possibly caused the problem.

Unit tests don't have to be small. I remember once where the whole application was a class I could instantiate (it only ever used const globals), so I did one of my Monte Carlo tests and - ooh, it failed due to some very esoteric shut-down-before-fully-initialized bug. Everyone, including myself, thought this would never happen in real life. .... Guess again. On the initial release that's exactly one of the critical bugs that came back, which, btw, we knew what it was because we had done the test.
... Or maybe new functionality that was added introduces several hundred new states in the application that weren't considered when the existing tests were written.
Write more tests to try to make those states happen.
Also, probably everybody who writes tests has spent a lot of time debugging working code only to find that the mistake was in the test.
So? The tests can take some time to write, but it is far less expensive and much more useful to find the bugs before they get to the customer.
Yes, tests are important, but I think they are often overrated. I've often heard that TDD's goal is to make it green. Beware of red. Make it as simple as possible. But I don't recall hearing the word "design".
What is this thing you call "design"?

What is its deliverable? I.e., when you have finished a design, what is it that you have?

If you think TDD, your design is the unit tests + the interface to the
application. BTW - not all the unit tests, just the ones that help
underpin the interface. For instance I would not do a monte carlo
stress test at the design stage, but I would do it for verification.

While you write the code, you're going to come across issues that may mean changing the interfaces and hence the unit tests, so having a boatload of tests to fix when your interfaces change does not help.
Oct 31 '06 #23
VJ
Phlip wrote:
Hooyoo wrote:

>>How to use CPPUnit effectively?


Use UnitTest++. It's much leaner and easier to use:

http://unittest-cpp.sourceforge.net/

CppUnit is stuffed with features you won't need, and they often get in the
way of the few you do.

We are using http://cxxtest.sourceforge.net/guide.html and it is good
and easy to understand
Nov 1 '06 #24
How do I test concurrency?
>
Easy - run many of them.
And if 1000 passes succeed, does that mean that everything is OK? I don't think so. For example:

#include <boost/thread/thread.hpp>
#include <cassert>

int counter = 0;
int c1;
int c2;

void t1() {
for (int i = 0; i < 1000; i++);
c1 = counter;
++counter;
}

void t2() {
for (int i = 0; i < 1000; i++);
c2 = counter;
++counter;
}

int main() {
for (int i = 0; i < 100000; i++) {
boost::thread thrd1(&t1);
boost::thread thrd2(&t2);
thrd1.join();
thrd2.join();
assert(c1 != c2);
}
}

passes, but the code is obviously wrong. And even if the test would have failed, it wouldn't really mean anything. Real life examples are much more subtle. Also, the same test may work on one computer + compiler + operating system (for example the test machine) and fail on every other.
It runs about a minute on my computer. If I had to test this way all the functions in a 500000 line application it would take a lot of time. And usually 1 function has more than 1 test.
Notice that a single look at the code proves that the code is wrong, but 100000 passes of the test gave the illusion that everything is OK.
How do I test proper exception handling after a disk failure?
Have your app code read/write to an abstracted layer where you can simulate those kinds of failures.
But this way I have to change my code. Well, it may lead to a better
design, but still the test tests the abstract layer and not the actual
problem.
How do I test the efficiency of an algorithm?
Run the test on large data sets and limit the execution time.
Remember - first write the test, then write the code. So how do you guess what is the correct time? And what happens if the test is run on a faster machine? I guess it will pass every time even if something was broken.

Those were only examples. In real life there are many more of those (so far I think that we agree that GUI is not unit testable, and there are lots of developers that do only GUI).
Also, probably everybody who writes tests has spent a lot of time debugging working code only to find that the mistake was in the test.

So? The tests can take some time to write, but it is far less expensive and much more useful to find the bugs before they get to the customer.
But even less expensive is simply looking into the code and thinking about what to do to make it right, not what to do to make some test pass. The test will pass anyway.
Yes, tests are important, but I think they are often overrated. I've often heard that TDD's goal is to make it green. Beware of red. Make it as simple as possible. But I don't recall hearing the word "design".

What is this thing you call "design" ?
TDD says - write as little as possible to make something work. And don't do anything that is not necessary. Then add something to make something else work. Almost always change the first code to cooperate with new code. But often one knows that, for example, creating a const string will eventually be necessary. Why not start with this? UML is one way to design. But it usually doesn't focus on details. Unit tests focus only on details and nothing else. I think neither of them is the ultimate solution.

I think that unit tests may be dangerous exactly because of everything you wrote. I get the impression that you think that if a test passes it means that the application is bug free. That may be true for a very limited number of applications (for example a library that does matrix calculations) but not in many real life applications.

Nov 1 '06 #25
jolz wrote:
Remember - first write the test, then write the code.
And don't thread. Write an event-driven architecture that time-slices
things. That is, in turn, easy to switch to threading if you find a real
need. That real need will drive the test concerns. And you can't test-in
thread robustness; you must review and model it.
So how do you guess what
is the correct time?
You ask your business analyst, or you set the time comfortably close to what
the code currently does. Tests don't (generally) validate features. They
generally detect if your code strays from its original activities.
And what happens if the test is run on a faster machine? I guess it will pass every time even if something was broken.
But it must pass on every developer machine, too.
Those were only examples. In real life there are many more of those (so far I think that we agree that GUI is not unit testable,
GUIs are always unit testable. They are just a bad place to start learning!

http://www.zeroplayer.com/cgi-bin/wi...UserInterfaces
and there are lots of developers that do only GUI).
Oh, I only do the variables that start with [L-Z], but I don't put that on
my resume!
>So? The tests can take some time to write, but it is far less expensive and much more useful to find the bugs before they get to the customer.

But even less expensive is simply looking into the code and thinking about what to do to make it right, not what to do to make some test pass. The test will pass anyway.
Short-term, tests make development faster by avoiding debugging, including
all the "program in the debugger" that everyone does with modern editors.

Long-term, tests make development faster by recording decisions about
behavior in a place where they are easy to review, and generally automatic.
They shorten the QA pipeline between programmers and users, and they help
turn the decision to release into strictly a business decision.
Yes, tests are important, but I think they are often overrated. I've often heard that TDD's goal is to make it green. Beware of red. Make it as simple as possible. But I don't recall hearing the word "design".
Uh, TDD used to stand for "Test Driven Design". It now stands for
"Development", because TDDers try to express every aspect of development -
analysis, design, coding, integration, productization, etc. - as a form of
test.
>What is this thing you call "design" ?

TDD says - write as little as possible to make something work. And don't do anything that is not necessary. Then add something to make something else work. Almost always change the first code to cooperate with new code. But often one knows that, for example, creating a const string will eventually be necessary. Why not start with this? UML is one way to design. But it usually doesn't focus on details. Unit tests focus only on details and nothing else. I think neither of them is the ultimate solution.
UML is where you draw a bunch of circles and arrows, each one of which
represents an opportunity for a bug. If we treat UML as a methodology (which
it is not), then we might draw a design, then code the design without any
behavior. So it's all bugs. Then we go thru whomping down each bug by adding
the behaviors to each box and arrow.

Software is about behavior. Design is the support for behavior; it's less important. It should always be easy to change. (I read a nice job listing recently that said "Refactoring is rare due to good design practices". My clairvoyance skills are not as accurate as they require!)

So implement the behavior first, then refactor the design while preserving
that behavior.
I think that unit tests may be dangerous exactly because of everything you wrote. I get the impression that you think that if a test passes it means that the application is bug free. That may be true for a very limited number of applications (for example a library that does matrix calculations) but not in many real life applications.
It shows the program passed tests that it used to pass. Such tests might
have been reviewed. The behaviors they test were certainly reviewed. So
preserving the tests will preserve this investment into reviewing the
behavior. This frees up everyone's time to concentrate on implementing features and exploratory testing.

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Nov 1 '06 #26
jolz wrote:
>You should write tests for each new line (or two) of the code. Never write new code without a failing test.
Always?
If you can't think of the test, how will you think of the line of code?

The test doesn't need to perfectly prove the line will work. In the
degenerate case, the test simply checks the line exists. Another line could
fool the test. Don't do that, and add different tests with different attack
angles, to triangulate the code you need.
How do I test concurrency?
Developer tests are only important for rapid development. Not for (shocked gasp) proving a program works. If you simply must go concurrent, write a good structure for your semaphores, enforce this _structure_ with tests, and then soak-test your final application, just to check for CPU bugs and such.
How do I test that after adding a new button my GUI looks ugly?
If that's important (it usually _isn't_), you make reviewing the GUI very
easy:

http://www.zeroplayer.com/cgi-bin/wiki?TestFlea
How do I test proper exception handling after a disk failure?
Sometimes you let things like this slide. Yes, you occasionally write a line of code without a test. The difference is A) you could have written the test, and B) your code follows community agreements about robustness.

For example, an in-house tool application should simply die and tell the user to report to the help-desk. A commercial application, by contrast, shouldn't make those assumptions.

A test on disk failure should use a Mock Object to mock out the file system.
That will return an error on command, so a test can trivially ensure the
error gets handled correctly.
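A sketch of such a mock (the FileSystem seam and saveDocument() are hypothetical; the point is that the error path runs without any real disk):

#include <cassert>
#include <string>

// Hypothetical seam: production code talks to this interface,
// never to the OS directly.
struct FileSystem {
    virtual bool write(const std::string& path, const std::string& data) = 0;
    virtual ~FileSystem() {}
};

// The mock fails on command.
struct FailingFileSystem : FileSystem {
    bool write(const std::string&, const std::string&) { return false; }
};

// Code under test: must report the failure, not swallow it.
bool saveDocument(FileSystem& fs, const std::string& text)
{
    return fs.write("doc.txt", text);
}

int main()
{
    FailingFileSystem brokenDisk;
    assert(!saveDocument(brokenDisk, "hello"));  // the error must surface
    return 0;
}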
How do I test the efficiency of an algorithm?
You don't need to do that to test-first each line of the algorithm. You need
to know what the algorithm is, so you can implement it under test. If you
don't know the algorithm yet, test-first might help discover it, but you
still need other design techniques.

If the algorithm is important, then it should have tests that catch when any
detail of its internal operations change. So only implement important things
via test-first.
There is a lot of code for which writing a unit test is useless.
Uh, then why write the code?

If you assume a unit test is expensive, then you will never get over that
curve and make them cheap.

If, instead, you start with unit tests, they accelerate development. For
each new line of code, the unit test is easier to write than debugging that
line would be.

Don't tell my bosses this, but I have programs that I almost never manually drive. So, for me, there's a lot of code for which manual testing is useless!
Unit tests are really nice, especially when code changes - if a test passes after the change, that's a hint that maybe nothing was broken. But they are also dangerous.
So get them reviewed as often as you review the code. Also, tests make an
excellent way to document the code, and you should help your business
analysts review this code.
Many times I've heard that if the tests pass then everything works.
Nobody should believe that, even when you have 100% statement coverage, and
it appears true!
Actually, it often means that there are too few tests. Or maybe every single function works, but together they don't work (especially true in a multithreaded environment).
Don't multithread. Do write tests that re-use lots of objects in
combination. (Don't abuse Mock Objects, for example.) Your design should be
so loosely coupled that you can create any object under test without any
other object.
Or maybe new functionality that was added introduces several hundred new states in the application that weren't considered when the existing tests were written.
Yep. But the odds you _know_ you did that are very high.
Also, probably everybody who writes tests has spent a lot of time debugging working code only to find that the mistake was in the test.
Uh, "debugging"?

Under test-first, if the tests fail unexpectedly, you hit Undo until they
pass. That's where most debugging goes.

If you try the feature again, and the tests fail again, you Undo and then
start again, at a smaller scale, with a _smaller_ change. If this passes, do
it again. You might find the error in the tests like this, because it will
relate to the line of code you are trying to change.

This is like debugging, but with one major difference. Each and every time
you get a Green Bar, you could integrate and ship your code.

So a test-rich environment is like having a big button on your editor called
"Kill Bug". Invest in the power of that button.
Yes, tests are important, but I think they are often overrated. I've often heard that TDD's goal is to make it green. Beware of red. Make it as simple as possible. But I don't recall hearing the word "design".
If you are reading the blogs and such, they often preach to the choir. They
know you know it's about "design".

Google for "emergent design" to learn the full power of the system.

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Nov 1 '06 #27
Dave Steffen wrote:
>Fixtures are a unit test Best Practice, so test rigs should enable them.

Yes, good point...
>Both UnitTest++ and my TEST_() enable them without requiring them. Boost's doesn't. But indeed do check it out; it _is_ Boost, after all!

... and using Boost::UnitTest I miss them on occasion. IIRC there
are ways to do _whole test suite_ setups and fixtures, but I haven't
hacked around enough to use them yet.

The thing that makes Boost's unit test framework so near and dear to
my heart is the extreme lack of typing overhead.
Boost made a module that's easy to type??! ;-)

Seriously, all the TEST() macro systems limit typing in some way, better
than raw CppUnit. However...
I credit this for
the large number of unit tests that people in my organization are
writing (on their own, without prodding from anyone) these days; as
opposed to our old JUnit-ish thing, which took _loads_ of typing to
get anywhere, so people just tended to ignore it.
That is /so/ much more important than convenient fixture systems! (And you
could trivially add them, with another TEST_FIXTURE() macro.)

Carry on!

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Nov 1 '06 #28
VJ wrote:
We are using http://cxxtest.sourceforge.net/guide.html and it is good and
easy to understand
I don't understand the brief amount of documentation I read.

Given test<3>, do I guess right that we have to increment that 3 by hand each time we add a test?

I want to smoke crack while coding, and having to remember what number I'm
up to interferes with my lifestyle. I can't even think of a macro that fixes
this (with a compile-time constant).

So does cxxtest really make me increment that number? Or should I, like,
read more of the documentation before passing judgement?

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Nov 1 '06 #29
And don't thread.

So how do I write a GUI application? Do you really write, for example, database communication from the GUI thread? If so, how do you allow the user to stop the operation? Or resize the window? Basically, do anything other than staring at a gray window? Should application logic be in a network thread? Or maybe a parallel algorithm should be in a single thread?
So how do you guess what
is the correct time?

or you set the time comfortably close to what
the code currently does.
What is the purpose of a test if I write the test so it always passes? And what about changing hardware? Rewrite all tests? Change 1 global setting and pray that it is changed in the correct proportion? Or maybe make new settings so all tests pass?
But it must pass on every developer machine, too.
Are tests that run for a few hours/days really run on all machines in a company?
GUIs are always unit testable. They are just a bad place to start learning!
I've seen a tester trying to write a GUI test. I've heard about others. It was always a lot of work with minor advantages. It was only testing behaviour, never how the application looks. And it was an easier version - a Java GUI. I have no idea how it couldn't be even worse in a language without a GUI in the standard.
Short-term, tests make development faster by avoiding debugging, including
all the "program in the debugger" that everyone does with modern editors.
A debugger also won't work in any of the situations I presented. The fact that a test isn't worse than a debugger doesn't mean that it is useful.
UML is where you draw a bunch of circles and arrows, each one of which
represents an opportunity for a bug. If we treat UML as a methodology (which
it is not), then we might draw a design, then code the design without any
behavior. So it's all bugs.
I didn't quite get how UML causes bugs. But let's not start another off-topic from this one.
It shows the program passed tests that it used to pass.
Yes. And it is the only thing that makes tests useful for me. It means that I didn't screw up too much. But it certainly doesn't mean that the application works. It doesn't even mean that it isn't worse than before.
If you can't think of the test, how will you think of the line of code?
I've already presented code that passes a test and doesn't work. I know how to correct the code, but have no idea how to write a test that validates the code.
Each and every time
you get a Green Bar, you could integrate and ship your code.
Again, the thing that scares me the most. Green = good. Don't think about anything else. If it's green it must work. Well, it doesn't. I have nothing against tests. Sometimes they are useful. But they don't solve all developers' problems.

Nov 1 '06 #30
jolz wrote:
>>How do I test concurrency?
Easy - run many of them.

And if 1000 passes succeed, does that mean that everything is OK? I don't think so.
I think that would be a bad test.
... For example:
example of bad test removed. I can write bad tests too...
...really mean anything.
Many people beg to differ. Sure, you can write a bogus test; the bets are, however, that if you don't test you're likely going to find the problem as a customer report, which is far more costly.
... Real life examples are much more
subtle.
My experience is the opposite.
... Also, the same test may work on one computer + compiler + operating system (for example the test machine) and fail on every other.
Exactly, so run it on a bunch of different test machines. I recommend dual-platform development for exactly this reason. Linux-POSIX + Win32 is a great combination.
It runs about a minute on my computer. If I had to test this way all the functions in a 500000 line application it would take a lot of time.
again - bad test. You have to run a dual CPU or better to run multithreaded tests. Also, your test has too much overhead: you create new threads all the time. I usually create a parameterized number of threads, make them all arrive at a barrier, and then unleash them all at the same time. Then I run the test with those threads throughout the entire test.
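A sketch of that barrier pattern (C++11; a condition_variable plays the role of the barrier):

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

std::mutex mtx;
std::condition_variable cv;
bool go = false;

void contender()
{
    {   // arrive at the barrier and wait for the starting gun
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, []{ return go; });
    }
    // ... hammer the shared component under test here ...
}

int main()
{
    const int kThreads = 8;                   // parameterized thread count
    std::vector<std::thread> pool;
    for (int i = 0; i < kThreads; ++i)
        pool.emplace_back(contender);

    {   // unleash every thread at (nearly) the same instant
        std::lock_guard<std::mutex> lock(mtx);
        go = true;
    }
    cv.notify_all();
    for (std::size_t i = 0; i < pool.size(); ++i)
        pool[i].join();
    return 0;
}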
And usually 1 function has more than 1 test. Notice that a single look at the code proves that the code is wrong, but 100000 passes of the test gave the illusion that everything is OK.
Again, you're clever; write a better test.

This argument sounds like: "Doctor doctor, it hurts when I point a gun
at my foot and shoot". Well, duh !
>
>>How do I test proper exception handling after a disk failure?
Have your app code read/write to an abstracted layer where you can simulate those kinds of failures.

But this way I have to change my code. Well, it may lead to a better
design, but still the test tests the abstract layer and not the actual
problem.
You can't test every situation all the time, nobody argues that. The
question is, what is the cost/benefit of doing some. Clearly if you
have no tests you're running blind. If you have some tests you're
better off and so on until there is a diminishing return. Maybe you can
use this principle. "If I write a unit test and it finds no bugs, then I
stop and move onto the next project."
>
>>How do I test the efficiency of an algorithm?
Run the test on large data sets and limit the execution time.

Remember - first write the test, then write the code. So how do you guess what is the correct time? And what happens if the test is run on a faster machine? I guess it will pass every time even if something was broken.
Use common sense at first and be a little conservative. It depends on
the app.
>
Those were only examples. In real life there are many more of those (so far I think that we agree that GUI is not unit testable, and there are lots of developers that do only GUI).
There are frameworks for testing GUIs. I'm not familiar with the state of the art. I was hinting at being careful to keep function distinct from appearance. I agree that you can't test whether something is aesthetic, but you can test whether something works.
>
>>Also, probably everybody who writes tests has spent a lot of time debugging working code only to find that the mistake was in the test.
So? The tests can take some time to write, but it is far less expensive and much more useful to find the bugs before they get to the customer.

But even less expensive is simply looking into the code and thinking about what to do to make it right, not what to do to make some test pass. The test will pass anyway.
I think you're wrong here. I have yet to meet a programmer that can
write code without bugs.

Nov 2 '06 #31
jolz wrote:
>And don't thread.

So how do I write a GUI application?
Briefly, sometimes you need threads. Fight them, because they are bad. They
are like 'goto'. Don't thread just because "my function runs too long, and I
want to do something else at the same time". Fix the function so you can
time-slice its algorithm. That overwhelmingly improves your design, anyway.
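One shape that fix can take (a sketch; SummingJob is a hypothetical stand-in for the long-running work - the loop state lives in an object and advances a bounded amount per call):

#include <cstddef>
#include <vector>

class SummingJob {
    std::vector<int> data_;
    std::size_t pos_;
    long sum_;
public:
    explicit SummingJob(const std::vector<int>& d) : data_(d), pos_(0), sum_(0) {}
    bool step(std::size_t budget) {           // returns true while unfinished
        for (std::size_t n = 0; n < budget && pos_ < data_.size(); ++n, ++pos_)
            sum_ += data_[pos_];
        return pos_ < data_.size();
    }
    long result() const { return sum_; }
};

int main()
{
    SummingJob job(std::vector<int>(1000000, 1));
    while (job.step(1024)) {
        // between slices the event loop can repaint, resize, or cancel
    }
    return job.result() == 1000000 ? 0 : 1;
}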

The tutorials for threads often give lame examples that only need simpler
fixes. The tutorials often don't examine the full rationale for threading.
If you drive with one hand and eat a sandwich with another, you might need a
thread. Internet Explorer probably uses threads. If our apps are simpler, we
shouldn't need them, and should resist adding them until we explore all
possible alternatives.
Do you really write, for example, database communication from the GUI thread?
The question here is simple: why and how does communication block your GUI's thread?

Winsock, for example, fixed that (for 16-bit Windows without much thread
support) by turning network input into a window message. If you have input
communication, you can often _research_ to find the right function that
multiplexes its events together with your GUI events.

If you can't find such a function, then you must use a thread, and you
should use PostMessage() or similar to send events to your GUI thread. If
you thread, make sure your code in both threads is sufficiently event-driven
that you don't need to abuse your semaphores.
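A sketch of that hand-off (C++11; a mutex-guarded queue stands in for whatever PostMessage()-like channel your toolkit offers):

#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Thread-safe mailbox: the worker posts, the GUI thread drains.
class EventQueue {
    std::queue<std::string> events_;
    std::mutex mtx_;
public:
    void post(const std::string& e) {
        std::lock_guard<std::mutex> l(mtx_);
        events_.push(e);
    }
    bool poll(std::string& e) {               // called from the GUI loop
        std::lock_guard<std::mutex> l(mtx_);
        if (events_.empty()) return false;
        e = events_.front();
        events_.pop();
        return true;
    }
};

int main()
{
    EventQueue queue;
    std::thread worker([&queue]{ queue.post("query finished"); });

    std::string event;
    while (!queue.poll(event)) {
        // the GUI loop keeps pumping paint/resize events here
    }

    worker.join();
    return 0;
}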

Threading violates encapsulation when one thread must remain "aware" of
another thread's intimate details, just to get something done.
If so, how do you allow the user to stop the operation?
I didn't say "lock the GUI up early and often". I just said "don't thread".
There's a lot of middle ground.
So how do you guess what
is the correct time?

or you set the time comfortably close to what
the code currently does.

What is the purpose of a test if I write the test so it always passes?
A test here and there that can't fail is mostly harmless. Note that, under
TDD, you often write tests and show them failing at least once, before
passing them.

The purpose of my time example is to instantly fail if someone "upgrades"
the tested code and exceeds the time limit. They ought to see the failure,
Undo their change, and try again. Even if the test defends nothing, the
upgrades shouldn't be sloppy.
And what about changing hardware? Rewrite all tests?
Nobody said time-test all tests.
Change 1 global setting and pray that it is changed in the correct proportion? Or maybe make new settings so all tests pass?
To continue your digression here, one might ask what we expect to do without
tests when we change 1 global variable (besides not use global variables).
Should we debug every function in our program? Of course not, we just run
out of time. So we debug a few of them, and _then_ we pray.

That is the most common form of implementation, and it is seriously broken.
>But it must pass on every developer machine, too.

Are tests that run for a few hours/days really run on all machines in a company?
Yes, because developers are running all the tests before integrating their
code. If any test fails, they don't submit. So the same tests must pass on
every machine, many times a day.

What do you do before integrating? Spot check a few program features,
manually?
>GUIs are always unit testable. They are just a bad place to start
learning!

I've seen a tester trying to write a GUI test.
That is QA, and test-last. That means it's a professional tester, trying to
write a test that will control quality. And she or he is using the program
from the outside, not changing its source to write the test.

That's all downhill from the kind of testing we describe here. And without a
unit test layer, it's bloody impossible. But they keep trying!
I've heard about others. It was always a lot of work with minor advantages. It was only testing behaviour, never how the application looks. And it was an easier version - a Java GUI. I have no idea how it couldn't be even worse in a language without a GUI in the standard.
And that is probably describing Capture/Playback testing, which is the worst
of the lot.

Now if I can write a test, in the same language as the tested code, that
tests how our program drives the database, or the file system, why can't I
write a test that checks how we drive the GUI? What's so special about GUIs?
>Short-term, tests make development faster by avoiding debugging,
including
all the "program in the debugger" that everyone does with modern editors.

A debugger also won't work in any of the situations I presented. The fact that a test isn't worse than a debugger doesn't mean that it is useful.
That is a different topic. I mean that TDD prevents the need to run the
debugger. Programming teams who switch to TDD routinely report that they
never even turn their debugger on; never set a breakpoint; never inspect a
variable.
I didn't quite get how UML causes bugs. But let's not start another off-topic from this one.
UML doesn't cause bugs. Drawing a UML diagram, then converting the diagram
to code full of empty classes with no behavior, causes bugs. All the classes
are empty, but they inherit and delegate properly! Then you must debug, over
and over again, to put the behavior in.
>Each and every time
you get a Green Bar, you could integrate and ship your code.

Again, the thing that scares me the most. Green = good. Don't think about anything else. If it's green it must work. Well, it doesn't. I have nothing against tests. Sometimes they are useful. But they don't solve all developers' problems.
Nobody said that, so don't extrapolate from it.

Under TDD, sometimes you predict the next Bar will be Red. The point of the
exercise is you constantly predict the next Bar color, and you are almost
always right. Predictability = good. It shows that your understanding of the
code matches what the code actually does.

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Nov 2 '06 #32
Phlip wrote:
jolz wrote:

>>>And don't thread.

So how do I write gui application?


Briefly, sometimes you need threads. Fight them, because they are bad. They
are like 'goto'.
Even if your application benefits from concurrency?

I never used goto, but I often use threads and I don't have any issues
with doing TDD with an MT application.

--
Ian Collins.
Nov 2 '06 #33
Ian Collins wrote:
>Briefly, sometimes you need threads. Fight them, because they are bad. They are like 'goto'.

Even if your application benefits from concurrency?
If your app benefits from goto, use it. Various techniques have various
cost-benefit ratios.

Like goto, the cost-benefit ratio for threads is known to be suspicious.
They are hard to test for a reason; that's not the unit tests' fault!!

Fight them, by learning how to avoid them, if at all possible. Such research
will generally lead to a better event-driven design. This design, in turn,
will be easy to thread if you then prove the need.
I never used goto, but I often use threads and I don't have any issues
with doing TDD with an MT application.
That's because TDD is not the same thing as a formal QA effort to determine
your defect rate.

And I suspect there are those who have mocked their kernel and CPU just to
put unit tests on their threads!

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Nov 2 '06 #34
Gianni Mariani wrote:
This argument sounds like: "Doctor doctor, it hurts when I point a gun at
my foot and shoot". Well, duh !
Make sure you test the gun first!

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Nov 2 '06 #35
Phlip wrote:
Ian Collins wrote:

>>>Briefly, sometimes you need threads. Fight them, because they are bad.
They
are like 'goto'.

Even if your application benefits from concurrency?


If your app benefits from goto, use it. Various techniques have various
cost-benefit ratios.
I still think the analogy is too strong, I can't think of any situation
where I'd resort to goto.
Like goto, the cost-benefit ratio for threads is known to be suspicious.
They are hard to test for a reason; that's not the unit tests' fault!!

Fight them, by learning how to avoid them, if at all possible. Such research
will generally lead to a better event-driven design. This design, in turn,
will be easy to thread if you then prove the need.
Most of the work I do does a lot of processing and I/O. The applications invariably run on multi-core systems. Keeping all the cores busy gets the job done quicker.

While they can either be abused or used inappropriately, threads can
often simplify a design.
>
>>I never used goto, but I often use threads and I don't have any issues
with doing TDD with an MT application.


That's because TDD is not the same thing as a formal QA effort to determine
your defect rate.
I didn't say it was.
And I suspect there are those who have mocked their kernel and CPU just to
put unit tests on their threads!
I wouldn't go that far, but mocking the thread library to run the unit tests in a single-threaded application is a useful technique. From my experience, MT unit tests are unnecessary and tend to end in tears.

--
Ian Collins.
Nov 2 '06 #36
Ian Collins wrote:
I still think the analogy is too strong, I can't think of any situation
where I'd resort to goto.
Not even "goto another stack and CPU context"?

That's what a thread does; goes from the middle of one function to the
middle of another.
Most of the work I do does a lot of processing and I/O. The
applications invariably run on multi-core systems. Keeping all the
cores busy get the job done quicker.
My hostility to thread abuse dates from the 1980s, with single-processor
machines. I worked for many years at a shop stuck with a design invented by
a consultant who read the lame tutorials I mention, and used threads where
he should have used good structure. The threads enabled the bad structure,
which should have been fully event-driven. I got to see the drag this
imposed on our product - and all the thread bugs that went with it.

Granted, the bad structure wasn't directly the threads' fault. But a good
structure could have made the threads easier to live with - or replace!

On a modern multi-context CPU, don't thread and then lock everything down;
if you do, you will effectively have a single-threaded application spread
across multiple CPUs!

Yet if those threads are indeed independent of each other, then you merely
have a multi-process situation. That's very different from the juggling-act
you get if you use threads for trivial reasons, such as GUI concurrency!
>That's because TDD is not the same thing as a formal QA effort to
determine your defect rate.
I didn't say it was.
You can TDD the functions your threads call. I suspect that writing a
TDD-style test case that juggles threads is very hard, so I would slack off
on that. (It's not strictly required to generate the code.) So TDD's
incidental power of QA will suffer in this situation.
>And I suspect there are those who have mocked their kernel and CPU
just to put unit tests on their threads!
I wouldn't go that far, but mocking the thread library to run the unit
tests in a single-threaded application is a useful technique. In my
experience, MT unit tests are unnecessary and tend to end in tears.
And still not a reason not to write unit tests! ;-)

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Nov 2 '06 #37
Phlip wrote:
Ian Collins wrote:

>>I still think the analogy is too strong, I can't think of any situation
where I'd resort to goto.


Not even "goto another stack and CPU context"?
Boy some people like their hairs split pretty fine :)
>
Yet if those threads are indeed independent of each other, then you merely
have a multi-process situation.
With all the benefits and pitfalls of a shared data model.
>
>>>And I suspect there are those who have mocked their kernel and CPU
just to put unit tests on their threads!

I wouldn't go that far, but mocking the thread library to run the unit
tests in a single-threaded application is a useful technique. In my
experience, MT unit tests are unnecessary and tend to end in tears.


And still not a reason not to write unit tests! ;-)
Indeed, but I'd advise against writing MT unit tests. As you have
pointed out before, that level of testing doesn't help with TDD and is
best left to the acceptance tests.

--
Ian Collins.
Nov 2 '06 #38
Ian Collins wrote:
....
Indeed, but I'd advise against writing MT unit tests. As you have
pointed out before, that level of testing doesn't help with TDD and is
best left to the acceptance tests.
Unless you're designing something that is multi-threaded by design. For
example, if the type is performing some kind of asynchronous event
management, it's kind of useless to write a TDD unit test for it that
does not show the MT aspects...
Nov 2 '06 #39
Gianni Mariani wrote:
Ian Collins wrote:
....
>Indeed, but I'd advise against writing MT unit tests. As you have
pointed out before, that level of testing doesn't help with TDD and is
best left to the acceptance tests.


Unless you're designing something that is multi-threaded by design. For
example, if the type is performing some kind of asynchronous event
management, it's kind of useless to write a TDD unit test for it that
does not show the MT aspects...
You can test the logic without running in an MT environment.

Same with testing something like a signal handler or interrupt service
routine: you don't have to send an event to test the code. You know the
underlying OS/hardware will deliver the event; your job is to build the
code that processes it.
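For example (a hypothetical sketch, not code from any project mentioned
in this thread): keep the processing in an ordinary function and leave
the handler as a thin shim, so the unit test calls the function directly:

#include <cassert>
#include <string>

// All the interesting logic lives in an ordinary function that a unit
// test can call directly -- no signal, interrupt, or thread required.
int CountCompleteLines( const std::string & i_payload )
{
    int count = 0;
    for ( std::string::size_type i = 0; i < i_payload.size(); ++i )
    {
        if ( i_payload[i] == '\n' )
        {
            ++count;
        }
    }
    return count;
}

// The real handler is a thin shim; we trust the OS to call it.
// (Sketch only -- the event type and registration are hypothetical.)
void OnDataReady( const std::string & i_payload )
{
    CountCompleteLines( i_payload );  // plus whatever production needs
}

int main()
{
    // The unit test exercises the logic without any MT environment.
    assert( CountCompleteLines( "a\nb\nc" ) == 2 );
    assert( CountCompleteLines( "" ) == 0 );
}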

--
Ian Collins.
Nov 2 '06 #40
Ian Collins wrote:
Gianni Mariani wrote:
>Ian Collins wrote:
....
>>Indeed, but I'd advise against writing MT unit tests. As you have
pointed out before, that level of testing doesn't help with TDD and is
best left to the acceptance tests.

Unless you're designing something that is multi-threaded by design. For
example, if the type is performing some kind of asynchronous event
management, it's kind of useless to write a TDD unit test for it that
does not show the MT aspects...
You can test the logic without running in an MT environment.

Same with testing something like a signal handler or interrupt service
routine, you don't have to send an event to test the code. You know the
underlying OS/hardware will deliver the event, your job is to build the
code that processes it.
That's the point. In a limited number of cases, you can't.

Show me what your timer test code looks like?
Nov 2 '06 #41
Gianni Mariani wrote:
Ian Collins wrote:
>Gianni Mariani wrote:
>>Ian Collins wrote:
....

Indeed, but I'd advise against writing MT unit tests. As you have
pointed out before, that level of testing doesn't help with TDD and is
best left to the acceptance tests.
Unless you're designing something that is multi-threaded by design. For
example, if the type is performing some kind of asynchronous event
management, it's kind of useless to write a TDD unit test for it that
does not show the MT aspects...
You can test the logic without running in an MT environment.

Same with testing something like a signal handler or interrupt service
routine, you don't have to send an event to test the code. You know the
underlying OS/hardware will deliver the event, your job is to build the
code that processes it.


That's the point. In a limited number of cases, you can't.
Can't what? Build the code, or test it?

Code that provides the *lowest level* threading functionality has to be
tested in an MT environment, but a) I'd argue this doesn't constitute
unit testing and b) how often do you write that type of code, once per
project? Once per platform? Once?

If you are developing an event handler, decouple the processing logic
from whatever connects it to the source of the event. Care to provide
an example of where this can't be done?

>Show me what your timer test code looks like?
I don't have any, but I'm sure the OS developer does.

--
Ian Collins.
Nov 2 '06 #42
Ian Collins wrote:
Gianni Mariani wrote:
>Ian Collins wrote:
>>Gianni Mariani wrote:

Ian Collins wrote:
....

Indeed, but I'd advise against writing MT unit tests. As you have
pointed out before, that level of testing doesn't help with TDD and is
best left to the acceptance tests.

Unless you're designing something that is multi-threaded by design. For
example, if the type is performing some kind of asynchronous event
management, it's kind of useless to write a TDD unit test for it that
does not show the MT aspects...

You can test the logic without running in an MT environment.

Same with testing something like a signal handler or interrupt service
routine, you don't have to send an event to test the code. You know the
underlying OS/hardware will deliver the event, your job is to build the
code that processes it.

That's the point. In a limited number of cases, you can't.
Can't what? Build the code, or test it?
Appreciate the implications of the design.
>
Code that provides the *lowest level* threading functionality has to be
tested in an MT environment, but a) I'd argue this doesn't constitute
unit testing and b) how often do you write that type of code, once per
project? Once per platform? Once?
It depends on the asynchronous nature of the problem you're trying to solve.
>
If you are developing an event handler, decouple the processing logic
from whatever connects it to the source of the event. Care to provide
an example of where this can't be done?
Sure, the Austria C++ thread pool design unit test. Maybe the Austria
C++ timer unit test. Both of these tests must show how you deal with
the inherent multi-threaded nature of the service you're designing;
otherwise you're not really seeing all the design issues.
>
>Show me what your timer test code looks like?

I don't have any, but I'm sure the OS developer does.
Actually you make my point: if they had written a unit test before they
coded, they probably would have changed the design. The OS timer
facilities are very difficult to use correctly. For example, most OS
timer services notify you via a signal. This means you can't do anything
that takes a resource; about the only thing you can do safely is set a
flag or make some other "signal-safe" OS call (like writing a byte to a pipe).
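That flag-or-pipe escape hatch is the classic "self-pipe trick". A
minimal POSIX sketch of it (illustrative only; error handling omitted):
the handler does nothing but write one byte, and all the real work
happens back in the main loop, outside signal context:

#include <csignal>
#include <cstdio>
#include <unistd.h>

static int pipe_fds[2];   // [0] = read end, [1] = write end

// The handler does the only safe thing: write() one byte.
// write() is on the async-signal-safe list; malloc, locks, etc. are not.
extern "C" void OnAlarm( int )
{
    char byte = 't';
    write( pipe_fds[1], &byte, 1 );
}

int main()
{
    pipe( pipe_fds );
    signal( SIGALRM, OnAlarm );
    alarm( 1 );              // ask the OS for a timer signal in 1 second

    // All real work happens here, outside signal context, where any
    // resource may safely be taken. Retry if the read itself is
    // interrupted by the arriving signal.
    char byte;
    while ( read( pipe_fds[0], &byte, 1 ) != 1 )
    {
    }
    std::printf( "timer fired\n" );
    return 0;
}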

The code below shows what the Austria timer unit test "kernel" looks
like. Event(...) gets called back in the "timer service" thread. It
doesn't look like it, but it is inherently multi-threaded!
// TimerClient_Basic, TimerLog, TimeStamp and ThreadSystemTime()
// come from the Austria C++ library.
class MyTimerClient
  : public TimerClient_Basic
{
    TimerLog & m_log;
    int        m_client_number;

  public:

    MyTimerClient( TimerLog & io_log, int i_client_number )
      : m_log( io_log ),
        m_client_number( i_client_number )
    {
    }

  protected:

    // Each concrete test client decides whether (and when) to re-arm.
    virtual bool Reschedule(
        const TimeStamp & i_current_time,
        TimeStamp       & o_reschedule_time
    ) = 0;

  private:

    // Called back in the "timer service" thread, so everything this
    // touches (the log included) must be safe to use from that thread.
    virtual bool Event(
        const TimeStamp & i_current_time,
        TimeStamp       & o_reschedule_time
    )
    {
        m_log.push_back( m_client_number, ThreadSystemTime() );
        return Reschedule( i_current_time, o_reschedule_time );
    }
};

Nov 2 '06 #43
Gianni Mariani wrote:
>Same with testing something like a signal handler or interrupt service
routine, you don't have to send an event to test the code. You know the
underlying OS/hardware will deliver the event, your job is to build the
code that processes it.

That's the point. In a limited number of cases, you can't.
CppUnit and its ilk can be used for QA testing, but that's not what we've
been discussing. We are discussing TDD, where we write tests that help force
our lines of code to exist. This helps to refactor our designs as they grow.

If we are not writing a kernel, we don't need to TDD its event systems. We
won't refactor our kernel, so gaps in our test coverage are very low risk.

So you can safely rely on a policy "wait till we have a bug there". (Note
this is the policy that test-free programs already follow - for everything!)
Show me what your timer test code looks like?
TEST_(TestDialog, SetTimer)
{
    CButton slowButton = m_aDlg.GetDlgItem(IDC_SLOW_OPERATION);
    slowButton.SendMessage(BM_CLICK);
    BOOL thereWasaTimerToKill = m_aDlg.KillTimer(0);
    CPPUNIT_ASSERT(thereWasaTimerToKill);
}

In Win32 (under WTL), we test that our BM_CLICK handler sets a timer by
killing that timer.

I'm aware that doesn't answer the exact question, but I already had this
answer available!

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Nov 2 '06 #44
And if 1000 passes succeed, does that mean that everything is OK? I don't
think so.

I think that would be a bad test.
... For example:
example of bad test removed. I can write bad tests too...
Sure, it is bad. But what would a good test look like that shows that c1 and
c2 may be equal? My point is that such a test doesn't exist.
You can't test every situation all the time; nobody argues that.
Somebody does:
1. You should write tests for each new line (or two) of the code.
Never write new code without a failing test.
2. If you can't think of the test, how will you think of the line of
code?

I'm just trying to explain that "You can't test every situation all the
time", not that all tests are evil.
I also think that there are more situations that can't be tested than
those that can, but of course I may be wrong here.

Nov 2 '06 #45
Fight them, because they are bad.

And exactly why?
Fix the function so you can
time-slice its algorithm.
That is exactly what the OS does. I don't want to rewrite part of the OS
because I'm afraid of threads. I want to use code written by the OS's
developers and focus on my own application.
we
shouldn't need them, and should resist adding them until we explore all
possible alternatives.
Again, why? Avoiding threads is not a developer's only goal, especially
not if they are the best solution.
The question here is simple: Why and how does communication block your GUI's
thread?
- because it accesses the disk, which is slow
- because it uses the network (not everybody works on Windows, and even then
not everybody uses networking with messages - the Internet Explorer you
mentioned currently uses 9 threads on my computer even when it isn't doing
anything)
- because I've just executed SQL which will finish in 5 minutes
- because my chess computer is really trying to beat me
- because my fractal algorithm for some reason refuses to finish in 2
seconds
- because I'm converting a movie from DivX to XviD
....
Threading violates encapsulation
thread 1: eatBreakfast()
thread 2: watchTV()

How does it violate encapsulation?
I have eggs for breakfast, so:
private:
eatEggs()
But how do I test it?
I know:
public:
eatEggs()
Fortunately in C++ I can make friends with eggs, but now I have a test
reference in my code. The good thing is that it doesn't do anything wrong.
So a test may break encapsulation (and may not); a properly designed thread
shouldn't.
What is the purpose of a test if I write the test so it always passes?

A test here and there that can't fail is mostly harmless.
Obviously. So is scratching behind my ear. It also doesn't do anything
good.
Change 1 global
setting and pray that it is changed in the correct proportion?
To continue your digression here, one might ask what we expect to do without
tests when we change 1 global variable
I meant a setting in the test. Global variables in a program, if they exist,
should be changed, possibly in an automated test, so there will be no
surprises after changing const int days_in_current_year from 365 to
366. Or possibly to 543. It's just a number; it should work.
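For instance (a hypothetical sketch built around the days_in_current_year
example above, not code from the thread): make the value a parameter so a
test can vary it, and changing the constant later holds no surprises:

#include <cassert>

// Hypothetical example: take the value as a parameter so a test can
// vary it instead of praying after the constant changes.
double DailyBudget( double i_annual_budget, int i_days_in_current_year )
{
    return i_annual_budget / i_days_in_current_year;
}

int main()
{
    // No surprises when the constant goes from 365 to 366 (or to 543).
    assert( DailyBudget( 3650.0, 365 ) == 10.0 );
    assert( DailyBudget( 5430.0, 543 ) == 10.0 );
}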

Are tests that run for a few hours/days really run on all machines in a
company?

Yes, because developers are running all the tests before integrating their
code. If any test fails, they don't submit. So the same tests must pass on
every machine, many times a day.
Tests that last a few days should pass on a developer machine many times a
day?
What's so special about GUIs?
- its main purpose is to be seen
- it requires user input, which may not be predictable

Nov 2 '06 #46

Roland Pibinger wrote:
>
For beginners to C++ Unit Tests the following article may be helpful:
Chuck Allison: The Simplest Automated Unit Test Framework That Could
Possibly Work - http://www.ddj.com/dept/cpp/184401279
Why only beginners? I recommend this tool to anyone who doesn't want a GUI.

Nov 2 '06 #47
Phlip wrote:
Gianni Mariani wrote:
>>Same with testing something like a signal handler or interrupt service
routine, you don't have to send an event to test the code. You know the
underlying OS/hardware will deliver the event, your job is to build the
code that processes it.
That's the point. In a limited number of cases, you can't.

CppUnit and its ilk can be used for QA testing, but that's not what we've
been discussing. We are discussing TDD, where we write tests that help force
our lines of code to exist. This helps to refactor our designs as they grow.
That is specifically what I am talking about.

Hiding the MT nature of a service does not help you design it properly!

If your interface design is inherently MT, then show me how to use it in
an MT environment; otherwise I can't evaluate how well it works.
>
If we are not writing a kernel, we don't need to TDD its event systems. We
won't refactor our kernel, so gaps in our test coverage are very low risk.
I disagree. I think this is why I have seen so many abortions of MT
code, because they don't take into account the big 3:

1. resource deadlock (not just mutexes)
2. reliable destruction
3. reliable event management

There are APIs I have come across (WININET is one) that you cannot
close properly.

Also, you cannot return a handle from an MT call; it must be made
available to the caller *before* you return.
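A sketch of that rule (hypothetical names, and std::thread used purely
for brevity): publish the handle where the caller can see it before the
worker is allowed to run, so a completion can never race ahead of its
own handle:

#include <cassert>
#include <thread>

struct Request
{
    int         m_handle;   // caller-visible identity of the operation
    bool        m_done;
    std::thread m_worker;

    Request() : m_handle( -1 ), m_done( false ) {}
};

static int NextHandle() { static int handle = 0; return ++handle; }

// Publish the handle *before* the worker starts. If we handed out the
// handle only after spawning, a fast worker could complete before the
// caller even knew which request it owned.
void StartAsync( Request & io_request )
{
    io_request.m_handle = NextHandle();                    // publish first...
    io_request.m_worker = std::thread(
        [&io_request]{ io_request.m_done = true; } );     // ...then run
}

int main()
{
    Request request;
    StartAsync( request );
    assert( request.m_handle != -1 );  // valid the moment StartAsync returns
    request.m_worker.join();
    assert( request.m_done );
}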

The point is, for the few classes where asynchronism is critical, you
cannot design them properly unless you take into account the MT nature of
the interface. If you think you can, then show me.

>
So you can safely rely on a policy "wait till we have a bug there". (Note
this is the policy that test-free programs already follow - for everything!)
After I'm done writing the code, I usually write at least one stress
test which usually finds a couple of bugs and occasionally bugs in the
interface itself.
>
>Show me what your timer test code looks like?

TEST_(TestDialog, SetTimer)
{
    CButton slowButton = m_aDlg.GetDlgItem(IDC_SLOW_OPERATION);
    slowButton.SendMessage(BM_CLICK);
    BOOL thereWasaTimerToKill = m_aDlg.KillTimer(0);
    CPPUNIT_ASSERT(thereWasaTimerToKill);
}
Not a very useful timer interface: how can I reset the timer? Can I
safely delete my client and see the timer get deregistered
automatically? Can I set the timer to an absolute time or an interval?

I'd like an interface that is as independent as possible of the OS
interfaces, since it's a fundamental tool.

Compare it to the Austria C++ timer test and I think you'll see what I mean.

>
I'm aware that doesn't answer the exact question, but I already had this
answer available!
The point is, you can't design something using TDD if you don't show how
it's done. If your interface is dealing with MT issues, there really is
little point in not doing an MT unit test for the purposes of TDD.
Nov 3 '06 #48
Gianni Mariani wrote:
>
The point is, you can't design something using TDD if you don't show how
it's done. If your interface is dealing with MT issues, there really is
little point in not doing an MT unit test for the purposes of TDD.
I think we'll have to agree to disagree on this point; I still think you
can't write an MT unit test in a TDD context. Why? Because it's all
but impossible to write a single-pass MT test that has a 100%
predictable outcome. There are too many variables: how busy the machine
is, which thread starts first, whether there is more than one CPU
available when the test runs... So you end up getting unexpected and
often unrepeatable test failures.

Sure you can cure a lot of these inconsistencies by binding everything
to one CPU, but you aren't doing an MT test and may as well mock the
threading layer.

But you can TDD the logic of an MT object and use a separate Monte Carlo
type acceptance stress test for the MT functionality.

One side effect I found doing this was that I could easily adapt objects
(like a message queue) from single-threaded to MT and event models.
Same logic, different engine.
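Something like this sketch (illustrative only, not the actual code): the
logic is a plain class the unit tests drive deterministically, and the MT
engine is a thin locking wrapper added afterwards:

#include <cassert>
#include <deque>
#include <mutex>

// The logic: an ordinary single-threaded queue that TDD can drive
// deterministically -- push, pop, emptiness, ordering.
template < typename T >
class QueueLogic
{
    std::deque<T> m_items;

  public:
    void Push( const T & i_value ) { m_items.push_back( i_value ); }

    bool Pop( T & o_value )
    {
        if ( m_items.empty() ) return false;
        o_value = m_items.front();
        m_items.pop_front();
        return true;
    }
};

// The engine: an MT wrapper that adds locking but no new logic, so it
// gets a stress test rather than more unit tests.
template < typename T >
class LockedQueue
{
    QueueLogic<T> m_logic;
    std::mutex    m_mutex;

  public:
    void Push( const T & i_value )
    {
        std::lock_guard<std::mutex> guard( m_mutex );
        m_logic.Push( i_value );
    }

    bool Pop( T & o_value )
    {
        std::lock_guard<std::mutex> guard( m_mutex );
        return m_logic.Pop( o_value );
    }
};

int main()
{
    QueueLogic<int> queue;    // the unit tests target the logic alone
    queue.Push( 1 );
    queue.Push( 2 );
    int value = 0;
    assert( queue.Pop( value ) && value == 1 );
    assert( queue.Pop( value ) && value == 2 );
    assert( ! queue.Pop( value ) );
}

The Monte Carlo stress test then hammers LockedQueue from many threads;
the unit tests never have to.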

--
Ian Collins.
Nov 3 '06 #49
Ian Collins wrote:
I think we'll have to agree to disagree on this point, I still think you
can't write an MT unit test in a TDD context. Why?
http://groups.google.com/group/comp....71f34966e6c5e1

Hence

http://www.xprogramming.com/xpmag/ac...ilosophers.htm

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Nov 3 '06 #50
