Bytes IT Community

Patterns for Unit Testing.

I’d appreciate some advice regarding unit tests. My questions are general in
nature but (as usual) best conveyed via a (simplified) example:

I have a table that contains two columns: the name of a PC and the
date/time that an event took place.
I have a stored procedure that accepts two date arguments, a start
date/time and an end date/time. It returns a list of PCs and the number of
times the event took place within the specified time period.
I have a method that accepts three arguments: a dataset, a start date/time
and an end date/time. It populates the dataset by calling the stored
procedure.

I wanted to test my method with some unit tests.

My tests all have the same structure:
• First they update (delete and insert) the necessary records in my table.
• Then they create an “expected” dataset and add the necessary rows to it.
• Then they create an “actual” dataset and pass it to my method (where it is
populated).
• Then they compare the “expected” dataset with the “actual” dataset.
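The four steps above can be sketched roughly as follows. This is a hypothetical illustration, not the poster's actual code: the names and table schema are invented, an in-memory SQLite table stands in for the real table, and the stored procedure is replaced by an equivalent inline query.

```python
import sqlite3
import unittest

class EventCountTests(unittest.TestCase):
    """Each test follows the same four-step structure."""

    def setUp(self):
        # Step 1: put the table into a known state.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE events (pc TEXT, occurred_at TEXT)")

    def test_one_record_inside_range(self):
        self.db.execute("INSERT INTO events VALUES ('pc1', '2006-10-05')")
        # Step 2: build the "expected" result.
        expected = [("pc1", 1)]
        # Step 3: produce the "actual" result.
        actual = self.db.execute(
            "SELECT pc, COUNT(*) FROM events "
            "WHERE occurred_at BETWEEN '2006-10-01' AND '2006-10-13' "
            "GROUP BY pc").fetchall()
        # Step 4: compare "expected" with "actual".
        self.assertEqual(expected, actual)
```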

Then I wrote the following tests:
• No records
• One record before start date/time.
• One record at start date/time.
• One record after start date/time.
• One record before end date/time.
• One record at end date/time.
• One record after end date/time.

At this point I was very happy. However, my enthusiasm waned when I began to
consider the “many records” scenario. I realised that, to test every possible
combination of the six “one record” scenarios, I would have to write 36
tests. Worse still, I began to consider the many hundreds of tests required
to test the further combinations involved once multiple computer names were
introduced. Clearly this exponential growth is completely unmanageable, even
within my simple example, let alone more complex real-world scenarios.

So my simple question is: how can this be achieved efficiently? I have some
ideas of my own, but none is obviously the right answer, and I wondered what
other developers are doing. Does anyone out there have some good practices
they’d like to share? Has anyone read any good books or papers on the subject?

Oct 13 '06 #1
6 Replies



Dick wrote:
> Then I wrote the following tests [...] Clearly this exponential growth is completely unmanageable, even within my simple example, let alone more complex real-world scenarios.
Hi
You've found the difference between a "complete" and a "sufficient" test
suite, which is very important to understanding testing :-) What I'd
recommend is: get a good code coverage tool, implement one test, run the
coverage tool to see what code has not been executed by the test, deduce the
next test case, implement that, and so on until your code is, let's say, 90%
covered. I'd expect you'll find that you need fewer tests than you initially
specified; it's almost always the case. The problem with this approach is
that it may be hard to get to 100%, so it can be thought of as a "test
heuristic", but at least you are able to manage your tests, and you've
already discovered that this is very, very important.
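The coverage-driven loop described here can be sketched in Python (the function and tests are hypothetical illustrations, not the poster's actual code):

```python
import unittest

def count_events(rows, start, end):
    """Count events per PC whose timestamp falls within [start, end]."""
    counts = {}
    for pc, when in rows:
        if start <= when <= end:
            counts[pc] = counts.get(pc, 0) + 1
    return counts

class CountEventsTests(unittest.TestCase):
    def test_no_records(self):
        # First test: a coverage run after this shows the filtering
        # branch was never taken, pointing at the next test to write.
        self.assertEqual(count_events([], 1, 10), {})

    def test_filtering_branch(self):
        # Second test, deduced from the coverage report: one row in
        # range, one row out of range.
        rows = [("pc1", 5), ("pc1", 11)]
        self.assertEqual(count_events(rows, 1, 10), {"pc1": 1})
```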
> So my simple question is how can this be achieved efficiently? [...]
The coverage approach above is by far the most commonly recommended way to
get this right. Others do exist: you may want to read about mutation
testing, for example, but IMO it is far more difficult to get right. Some
kind of mock objects might also help you. Anyway, try to get coverage right,
and you may well be happy with that.

Oct 13 '06 #2

Hi Dick,

I agree with Marcin that you need a code coverage tool to analyze what
percentage of the code has been covered during the test. Actually, VS2005
Team Test has such a tool integrated. Please check the following links for
more information:

http://msdn2.microsoft.com/en-us/library/ms182534.aspx
http://msdn.microsoft.com/msdnmag/is...erageAnalysis/

If anything is unclear, please feel free to let me know.

Kevin Yu
Microsoft Online Community Support


Oct 16 '06 #3

Could you not test within a loop? In combinatorial situations I have found
that you usually don't need to test every combination, but if you want to be
thorough, you can still use an iteration of some sort to complete the test:
for i = 0 to something
    while something else
        foreach something else
            Create fake record/mock object
            Insert into mock dataset
            Assertion
        next
    loop
next
Overall assertion

... you get the idea. I also notice that your method may lack some
'Dependency Injection': obviously it is using internal database
connections and so on.
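The dependency-injection point can be sketched in Python. The `fetch_counts` parameter is a hypothetical stand-in for the stored-procedure call, and the names are invented for illustration:

```python
def populate(dataset, start, end, fetch_counts):
    """Fill `dataset` using an injected query function instead of a
    hard-wired database connection."""
    for pc, count in fetch_counts(start, end):
        dataset.append({"pc": pc, "count": count})
    return dataset

# In a unit test, inject a fake that returns canned rows: no database needed.
def fake_fetch(start, end):
    return [("pc1", 2), ("pc2", 1)]

result = populate([], "2006-10-01", "2006-10-13", fake_fetch)
```

In production code the real stored-procedure wrapper is passed in instead of the fake, so the method itself never decides where its data comes from.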
Maybe I have also missed the point of your posting... I tried to
address what I understood the problem to be.

Steven

Oct 16 '06 #4

Thank you very much for your informed response.

If I understand you correctly then, to create a “sufficient” test suite, I
should focus on covering my code and not covering the scenarios.

I like this because it sounds more straightforward.

However, I’m concerned that important tests might be missed. For instance,
to test a mathematical function I would normally write a few tests using
arguments that are positive, negative, zero, very large, very small, etc.,
etc.

It’s possible that using the technique you describe I’ll never get beyond
the first test because, by then, all the code will be covered.

Am I correct or have I misunderstood? Is this the difference between a
“sufficient” and a “complete” test suite? Is this the price to pay to make
the suite manageable?
"Marcin Rzeznicki" wrote:
> [...]

Oct 16 '06 #5

Dick wrote:
> It's possible that using the technique you describe I'll never get beyond the first test because, by then, all the code will be covered.
Hi
It is possible, but very unlikely. I suspect, from what you described, that
your methods are full of conditionals, loops and so on. These statements
ensure that you cannot hit 100% coverage with one test. Furthermore, there
are different metrics of coverage (take a look at:
http://www.bullseye.com/coverage.html), so you'll probably need much more
than one test to satisfy the most important metrics. For example, let's say
you have a weird mathematical function defined in pseudocode as:
f(a, b) is { if (a < 0) return b; if (0 < a < b) return a + b; return b; }
Then to get 100% branch coverage you'll have to define tests like
test(f(-1, 0)), test(f(1, 5)), test(f(5, 1)). The first one covers the first
branch, the second one covers the second branch, etc. This will make your
code coverage tool happy, and you can be more or less sure that the function
works as expected. Why not fully sure? Because seemingly tested code can
behave wrongly under extreme conditions; let's say that the + operator
throws an exception when it detects overflow. Your tests pass, yet there is
a flaw in the code. Then you should refine your tests further. (Please note
that you would have been fully sure if you'd implemented a complete test
suite; however, you'd have written ||Int||*||Int|| tests, which is not an
option. ||Int|| is the number of all possible values for the int type.)
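The pseudocode function and its three branch-covering tests translate to Python roughly as follows (Python integers don't overflow, so the overflow caveat applies only to fixed-width integer languages and is noted in a comment):

```python
def f(a, b):
    # Branch 1: negative a falls through to b.
    if a < 0:
        return b
    # Branch 2: a strictly between 0 and b. In a fixed-width integer
    # language, this `a + b` is where an overflow bug could hide.
    if 0 < a < b:
        return a + b
    # Branch 3: everything else.
    return b

# One test per branch is enough for 100% branch coverage here.
assert f(-1, 0) == 0   # first branch
assert f(1, 5) == 6    # second branch
assert f(5, 1) == 1    # third branch
```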
I see two ways of further refinement. The first one is "lazy": you work
with your program, and by chance you pass numbers that trigger the overflow
exception; you notice that a bug occurred, hunt it down, and add an
appropriate test. That's the way I prefer, because you are not distracted
by unneeded cases. If a function is always used for small numbers because,
for example, it is always applied to the number of processors available,
then you are not going to improve your code by checking for overflow. On
the other hand, if it is used for big numbers, then the bug occurs sooner
or later (hopefully while the QA guys are messing around), and you fix the
test suite. Keep in mind that this may not be the best way of testing for
all programs. If your program steers spaceships, then you'd rather not test
like that. Still, for "normal" programs it is OK.
The second way is ... everything else :-) You can try to prove the
correctness of the function, you can review the code looking for possible
exceptions, you can do some data-driven testing, etc. While I agree that
this sounds rather disappointing, keep in mind that there are no perfect
tests. Should you write perfect tests, your program would never have bugs
from this day on, which is impossible. Yet do not despair: having good, but
imperfect, tests will give you a productivity boost and will raise your
code quality to a very high level. And that is a road worth taking.
Furthermore, you can test your tests using tools for mutation testing. But
that's another tale.
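The mutation idea can be illustrated by hand in Python. Real mutation-testing tools generate and run such variants automatically; the function and test cases here are hypothetical:

```python
def original(a, b):
    # The example function discussed earlier in the thread.
    return a + b if 0 < a < b else b

def mutant(a, b):
    # Same function with `+` mutated to `-`; a mutation tool would
    # generate many small variants like this one.
    return a - b if 0 < a < b else b

# The three branch-covering test cases: ((a, b), expected result).
suite = [((-1, 0), 0), ((1, 5), 6), ((5, 1), 1)]

# A good suite "kills" the mutant: some case distinguishes it from the
# original. A mutant that survives points at a gap in the tests.
killed = any(original(a, b) != mutant(a, b) for (a, b), _ in suite)
```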

Am I correct or have I misunderstood? Is this the difference between a
"sufficient" and a "complete" test suite? Is this the price to pay to make
the suite manageable?
Well, yes, I'd say yes. Complete tests always give you full knowledge, but,
except for trivial cases, you are unable to write them. Strictly, complete
tests take into account every possible state reachable by your program. If
you interact with a database, that means testing not only every permutation
of data, but also things such as: network down, out of memory, database
invaded by aliens, and so on; it is impossible to write them. Even if we
leave "external" states out, like a non-functioning network, you are still
not able to cover every permutation of data.

Good luck with testing.

Oct 16 '06 #6

Thanks, Marcin, for two really good replies. I'd appreciate any suggestions
you have regarding reading material on this subject. Otherwise, thanks again.

"Marcin Rzeznicki" wrote:
> [...]

Oct 17 '06 #7

This discussion thread is closed

Replies have been disabled for this discussion.