
Unit-testing single function with large number of different inputs


Hi all ya unit-testing experts there :)

The code I'm working on has to parse large, complex files and detect an
equally large and complex set of errors before the contents of a file are
fed to the module interpreting it. First I created a unit-test class named
TestLoad which loaded, say, 40 files, of which about 10 were correct and
the other 30 contained over 20 different types of errors. The methods on
the TestLoad class were coded so that each expected a specific error to
occur. They looked like the following:

def test_xxx_err(self):
    for fname in xxx_err_lst:
        self.assertEqual(get_load_status(fname), xxx)

However, we've now added several more tests, and we have added another
feature to the system: the ability to try loading a file without actually
executing the instructions in it. So I added another class, TestValidate,
which has exactly the same tests as the TestLoad class, the only
difference being that it calls get_validate_status instead of
get_load_status.
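In effect, the two classes could share every test body, with only the
status routine varying; a rough sketch, using the names from the snippet
above:

import unittest

class _StatusTestsMixin:
    # shared test bodies; subclasses supply get_status
    def test_xxx_err(self):
        for fname in xxx_err_lst:
            self.assertEqual(self.get_status(fname), xxx)

class TestLoad(_StatusTestsMixin, unittest.TestCase):
    get_status = staticmethod(get_load_status)

class TestValidate(_StatusTestsMixin, unittest.TestCase):
    get_status = staticmethod(get_validate_status)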

I thought I'd simplify the process of adding new tests: if adding tests is
too cumbersome, we'll end up with few tests, which in turn will result in
a buggier system.

Thus I came up with the simple idea of adding two directories:
test/correct-files and test/incorrect-files. Every time the test suite is
run, it goes through every file in test/correct-files and
test/incorrect-files, expecting status 0 from all files under
correct-files and the appropriate error code from all files under
incorrect-files. How does it deduce the correct error code? I've arranged
things so that every file under test/incorrect-files must begin with an
error-code prefix 'xx-' in its name, and the test suite reads the expected
error code from that prefix.
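For concreteness, the pair of test methods then looks roughly like this (a
sketch; get_load_status comes from the module under test, and the integer
prefix parsing is my assumption about the 'xx-' scheme):

import os
import unittest

class TestLoad(unittest.TestCase):
    def test_correct_files(self):
        # every file under correct-files must load with status 0
        for fname in os.listdir('test/correct-files'):
            path = os.path.join('test/correct-files', fname)
            self.assertEqual(get_load_status(path), 0)

    def test_incorrect_files(self):
        # expected error code is encoded in the 'xx-' file name prefix
        for fname in os.listdir('test/incorrect-files'):
            path = os.path.join('test/incorrect-files', fname)
            expected = int(fname.split('-', 1)[0])
            self.assertEqual(get_load_status(path), expected)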

All this allows me to have only two methods (besides setUp and tearDown)
in the TestLoad and TestValidate classes. What's more convenient, I don't
have to alter the test suite at all when I add new file tests to the
system: all I have to do is put the file under one of the two directories,
and if it contains an error, prefix its name with the appropriate code.

The system has one annoyance: if I use self.assertEqual in the test_load
method, it aborts on the first failure and thus doesn't test the remaining
files. That is bad, because when errors occur in the code, I have to fix
them one by one, re-running the test suite after every fix to find the
remaining errors in the system.

So I changed the test_load routine, replacing self.assertEqual with my own
code which doesn't abort the test when errors are found but reports them
with sufficient verbosity and continues testing. This approach isn't
optimal either: because there is no self.assertEqual in the test any more,
there may be many errors in the test class, yet after running all the
methods the test suite will report OK status even though errors did occur.
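In outline, my replacement does something like this (a sketch of the idea
rather than the exact code):

def test_incorrect_files(self):
    for fname in os.listdir('test/incorrect-files'):
        path = os.path.join('test/incorrect-files', fname)
        expected = int(fname.split('-', 1)[0])
        got = get_load_status(path)
        if got != expected:
            # report verbosely and keep going -- but nothing ever marks
            # the test as failed, so the whole suite still reports OK
            print('%s: expected status %d, got %d' % (fname, expected, got))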

The problem described above is probably one of the best arguments for why
each test method should test only one (type of) thing, but in my special
case I don't find that appropriate, for the reasons given. Any
suggestions? I wondered about creating test methods dynamically according
to the files found in the test directories (which would solve all the
problems), but that seems a tad too complex a way of solving the problem.

--
# Edvard Majakari Software Engineer
# PGP PUBLIC KEY available Soli Deo Gloria!

$_ = '456476617264204d616a616b6172692c20612043687269737469616e20'; print
join('',map{chr hex}(split/(\w{2})/)),uc substr(crypt(60281449,'es'),2,4),"\n";

Jul 18 '05 #1
2 Replies


I'm not a unit-testing expert, and I don't necessarily recommend what
I'm about to suggest, but...

To solve the problem of having a single "test" actually run many
"internal" tests, report on the results of those many tests, and fail
(in the PyUnit sense) if any of those internal tests fail, you could
add a flag, AllTestsPassed, to the TestLoad and TestValidate classes.
Initialize that flag to True. Then, in your version of assertEqual,
if any test fails set the flag to False. Finally, after you've run
all your tests, add

self.failUnless(self.AllTestsPassed, 'Some tests failed.')
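Put together, roughly (a sketch; all_test_files and expected_code() are
placeholders for however you enumerate the files and derive the expected
code, and myAssertEqual stands in for your own non-aborting check):

class TestLoad(unittest.TestCase):
    def setUp(self):
        self.AllTestsPassed = True

    def myAssertEqual(self, got, expected, msg):
        # report the mismatch but don't abort; just remember it happened
        if got != expected:
            print(msg)
            self.AllTestsPassed = False

    def test_load(self):
        # all_test_files and expected_code() are placeholders
        for fname in all_test_files:
            self.myAssertEqual(get_load_status(fname), expected_code(fname),
                               'unexpected status for %s' % fname)
        # one real assertion at the end restores the failure status
        self.failUnless(self.AllTestsPassed, 'Some tests failed.')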

HTH,

-robert

Jul 18 '05 #2

fe*****@diablotech.com (Robert Ferrell) writes:
> add a flag, AllTestsPassed, to the TestLoad and TestValidate classes.
> Initialize that flag to True. Then, in your version of assertEqual,
> if any test fails set the flag to False. Finally, after you've run
> all your tests, add
>
> self.failUnless(self.AllTestsPassed, 'Some tests failed.')


But that doesn't give instant feedback (running TestValidate takes over a
minute), nor does it report the exact cause of each error. Hmm. I think
I'll look into Python's ability to create methods on the fly.
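Something along these lines, I guess (a rough sketch; the prefix parsing
is the same assumption as before, and get_load_status as in the original
post):

import os
import unittest

class TestLoad(unittest.TestCase):
    pass

def _make_test(path, expected):
    # factory so each generated method closes over its own file
    def test(self):
        self.assertEqual(get_load_status(path), expected)
    return test

for fname in os.listdir('test/incorrect-files'):
    path = os.path.join('test/incorrect-files', fname)
    expected = int(fname.split('-', 1)[0])
    name = 'test_' + fname.replace('-', '_').replace('.', '_')
    # one method per file, so every failure is reported individually
    setattr(TestLoad, name, _make_test(path, expected))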

Thanks anyway!

PS. Sorry for the provoking .signature, it dates back to the Old Times
when I didn't know of Python ;)

--
#!/usr/bin/perl -w
$h={23,69,28,'6e',2,64,3,76,7,20,13,61,8,'4d',24,73,10,'6a',12,'6b',21,68,14,
72,16,'2c',17,20,9,61,11,61,25,74,4,61,1,45,29,20,5,72,18,61,15,69,20,43,26,
69,19,20,6,64,27,61,22,72};$_=join'',map{chr hex $h->{$_}}sort{$a<=>$b}
keys%$h;m/(\w).*\s(\w+)/x;$_.=uc substr(crypt(join('',60,28,14,49),join'',
map{lc}($1,substr $2,4,1)),2,4)."\n"; print;
Jul 18 '05 #3
