Failing unittest Test cases

There has been a bit of discussion about a way of providing test cases
in a test suite that _should_ work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that _should_ work but doesn't. The
discussion got around to means of indicating such tests (because the
effort of creating a test should be captured) without disturbing the
development flow.

The following code demonstrates a decorator that might be used to
aid this process. Any comments, additions, deletions?

from unittest import TestCase

class BrokenTest(TestCase.failureException):
    def __repr__(self):
        return '%s: %s: %s works now' % (
            (self.__class__.__name__,) + self.args)

def broken_test_XXX(reason, *exceptions):
    '''Indicates unsuccessful test cases that should succeed.
    If an exception kills the test, add exception type(s) in args'''
    def wrapper(test_method):
        def replacement(*args, **kwargs):
            try:
                test_method(*args, **kwargs)
            except exceptions + (TestCase.failureException,):
                pass
            else:
                raise BrokenTest(test_method.__name__, reason)
        replacement.__doc__ = test_method.__doc__
        replacement.__name__ = 'XXX_' + test_method.__name__
        replacement.todo = reason
        return replacement
    return wrapper

You'd use it like:
class MyTestCase(unittest.TestCase):
    def test_one(self): ...
    def test_two(self): ...
    @broken_test_XXX("The thrumble doesn't yet gsnort")
    def test_three(self): ...
    @broken_test_XXX("Using list as dictionary", TypeError)
    def test_four(self): ...

It would also point out when the test started succeeding.

--Scott David Daniels
sc***********@acm.org
Jan 9 '06 #1
On 9 January 2006, Scott David Daniels wrote:
There has been a bit of discussion about a way of providing test cases
in a test suite that _should_ work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that _should_ work but doesn't. The
discussion got around to means of indicating such tests (because the
effort of creating a test should be captured) without disturbing the
development flow.

The following code demonstrates a decorator that might be used to
aid this process. Any comments, additions, deletions?


Interesting idea. I have been prepending 'f' to my test functions that
don't yet work, so they simply don't run at all. Then when I have time
to add new functionality, I grep for 'ftest' in the test suite.
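
A minimal sketch of that convention, with invented test names (unittest's
default loader only collects methods whose names start with 'test', so the
renamed method is silently skipped):

import unittest

class WidgetTest(unittest.TestCase):
    def test_existing_feature(self):
        self.assertEqual(2 + 2, 4)

    # Renamed from test_pending_feature: the default TestLoader only picks
    # up methods whose names start with 'test', so this one is skipped.
    # Grep for 'ftest' later to find the parked tests.
    def ftest_pending_feature(self):
        self.fail('not implemented yet')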

- Eric
Jan 9 '06 #2
Scott David Daniels wrote:
There has been a bit of discussion about a way of providing test cases
in a test suite that _should_ work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that _should_ work but doesn't. The
discussion got around to means of indicating such tests (because the
effort of creating a test should be captured) without disturbing the
development flow.


There is just one situation that I can think of where I would use this,
and that is the case where some underlying library has a bug. I would
add a test that succeeds when the bug is present and fails when the bug
is not present, i.e. it is repaired. That way you get a notification
automatically when a new version of the library no longer contains the
bug, so you know you can remove your workarounds for that bug. However,
I've never used a decorator or anything special for that because I never
felt the need for it, a regular testcase like this also works for me:

class SomeThirdPartyLibraryTest(unittest.TestCase):
    def testThirdPartyLibraryCannotComputeSquareOfZero(self):
        self.assertEqual(-1, tplibrary.square(0),
            'They finally fixed that bug in tplibrary.square')

Doesn't it defeat the purpose of unit tests to give them an easy switch so
that programmers can turn them off whenever they want to?

Cheers, Frank
Jan 10 '06 #3
Scott David Daniels <sc***********@acm.org> writes:
Recently there was a checkin of a test that _should_ work but
doesn't. The discussion got around to means of indicating such
tests (because the effort of creating a test should be captured)
without disturbing the development flow.


Do you mean "shouldn't work but does"? Anyway I don't understand
the question. What's wrong with using assertRaises if you want to
check that a test raises a particular exception?
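
For comparison, assertRaises covers the case where the exception is the
specified behaviour, which is different from marking a whole test as
expected to fail for now (a small self-contained sketch; square is just a
stand-in for real code under test):

import unittest

def square(x):
    return x * x   # toy stand-in for the code under test

class SquareTest(unittest.TestCase):
    def test_rejects_strings(self):
        # The TypeError here is the *specified* behaviour, so this test
        # passes today; nothing about it is "expected to fail".
        self.assertRaises(TypeError, square, 'three')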
Jan 10 '06 #4
Scott David Daniels wrote:
There has been a bit of discussion about a way of providing test cases
in a test suite that should work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that should work but doesn't. The
discussion got around to means of indicating such tests (because the
effort of creating a test should be captured) without disturbing the
development flow.

The following code demonstrates a decorator that might be used to
aid this process. Any comments, additions, deletions?


Marking a unittest as "should fail" in the test suite seems just wrong to
me, whatever the implementation details may be. If at all, I would apply an
"I know these tests to fail, don't bother me with the messages for now"
filter further down the chain, in the TestRunner maybe. Perhaps the code
for platform-specific failures could be generalized?
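
A rough sketch of such a runner-side filter, assuming a hand-maintained
list of known-failing test ids (none of this is an existing unittest hook;
the names and ids are invented):

import unittest

KNOWN_FAILURES = set([
    'test_three (mypackage.tests.MyTestCase)',   # hypothetical test id
])

class FilteringResult(unittest.TestResult):
    '''Collects failures of known-broken tests separately instead of failing.'''
    def __init__(self):
        unittest.TestResult.__init__(self)
        self.known_failures = []

    def addFailure(self, test, err):
        if str(test) in KNOWN_FAILURES:
            self.known_failures.append((test, err))
        else:
            unittest.TestResult.addFailure(self, test, err)

# Usage sketch: run any suite with the filtering result.
# suite = unittest.defaultTestLoader.loadTestsFromName('mypackage.tests')
# result = FilteringResult()
# suite.run(result)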

Peter

Jan 10 '06 #5
Paul Rubin wrote:
Recently there was a checkin of a test that _should_ work but
doesn't. The discussion got around to means of indicating such
tests (because the effort of creating a test should be captured)
without disturbing the development flow.
Do you mean "shouldn't work but does"?


no, he means exactly what he said: support for "expected failures"
makes it possible to add test cases for open bugs to the test suite,
without 1) new bugs getting lost in the noise, and 2) having to
rewrite the test once you've gotten around to fixing the bug.
Anyway I don't understand the question.


it's a process thing. tests for confirmed bugs should live in the test
suite, not in the bug tracker. as scott wrote, "the effort of creating
a test should be captured".

(it's also one of those things where people who have used this in
real life find it hard to believe that others don't even want to
understand why it's a good thing; similar to indentation-based structure,
static typing, not treating characters as bytes, etc).

</F>

Jan 10 '06 #6
Scott David Daniels wrote:
There has been a bit of discussion about a way of providing test cases
in a test suite that _should_ work but don't. One of the rules has been
the test suite should be runnable and silent at every checkin. Recently
there was a checkin of a test that _should_ work but doesn't. The
discussion got around to means of indicating such tests (because the
effort of creating a test should be captured) without disturbing the
development flow.


I like the concept. It would be useful when someone raises an issue which
can be tested for easily but for which the fix is non-trivial (or has side
effects) so the issue gets shelved. With this decorator you can add the
failing unit test, and then 6 months later, when an apparently unrelated bug
fix actually also fixes the original one, you get told 'The thrumble doesn't
yet gsnort (see issue 1234)' and know you should now go and update that
issue.

It also means you have scope in an open source project to accept an issue
and incorporate a failing unit test for it before there is an acceptable
patch. This shifts the act of accepting a bug from putting it onto some
nebulous list across to actually recognising in the code that there is a
problem. Having a record of the failing issues actually in the code would
also help to tie together bug fixes across different development branches.

Possible enhancements:

add another argument for associated issue tracker id (I know you could put
it in the string, but a separate argument would encourage the programmer to
realise that every broken test should have an associated tracker entry),
although I suppose since some unbroken tests will also have associated
issues this might just be a separate decorator.

add some easyish way to generate a report of broken tests.
Jan 10 '06 #7
"Fredrik Lundh" <fr*****@pythonware.com> writes:
no, he means exactly what he said: support for "expected failures"
makes it possible to add test cases for open bugs to the test suite,
without 1) new bugs getting lost in the noise, and 2) having to
rewrite the test once you've gotten around to fixing the bug.


Oh I see, good idea. But in that case maybe the decorator shouldn't
be attached to the test like that. Rather, the test failures should
be filtered in the test runner as someone suggested, or the filtering
could even be integrated with the bug database somehow.
Jan 10 '06 #8
Peter Otten wrote:
Marking a unittest as "should fail" in the test suite seems just wrong
to me, whatever the implementation details may be. If at all, I would
apply a "I know these tests to fail, don't bother me with the messages
for now" filter further down the chain, in the TestRunner maybe.
Perhaps the code for platform-specific failures could be generalized?


It isn't marking the test as "should fail" it is marking it as "should
pass, but currently doesn't" which is a very different thing.
Jan 10 '06 #9
Paul Rubin wrote:
no, he means exactly what he said: support for "expected failures"
makes it possible to add test cases for open bugs to the test suite,
without 1) new bugs getting lost in the noise, and 2) having to
rewrite the test once you've gotten around to fixing the bug.


Oh I see, good idea. But in that case maybe the decorator shouldn't
be attached to the test like that. Rather, the test failures should
be filtered in the test runner as someone suggested, or the filtering
could even be integrated with the bug database somehow.


separate filter lists or connections between the bug database and the
code base introduce unnecessary couplings, and complicate things
for the developers (increased risk of checkin conflicts, mismatch
between the code in a developer's sandbox and the "official" bug status,
etc).

this is Python; annotations belong in the annotated code, not in some
external resource.

</F>

Jan 10 '06 #10
Duncan Booth wrote:
Peter Otten wrote:
Marking a unittest as "should fail" in the test suite seems just wrong
to me, whatever the implementation details may be. If at all, I would
apply a "I know these tests to fail, don't bother me with the messages
for now" filter further down the chain, in the TestRunner maybe.
Perhaps the code for platform-specific failures could be generalized?


It isn't marking the test as "should fail" it is marking it as "should
pass, but currently doesn't" which is a very different thing.


You're right of course. I still think the "currently doesn't pass" marker
doesn't belong in the test source.

Peter
Jan 10 '06 #11
> Scott David Daniels about marking expected failures:

<snip>

I am +1, I have wanted this feature for a long time. FWIW,
I am also +1 to run the tests in the code order.

Michele Simionato

Jan 10 '06 #12
Peter Otten <__*******@web.de> wrote:
You're right of course. I still think the "currently doesn't pass" marker
doesn't belong in the test source.


The agile people would say that if a test doesn't pass, you make fixing it
your top priority. In an environment like that, there's no such thing as a
test that "currently doesn't pass". But, real life is not so kind.

These days, I'm working on a largish system (I honestly don't know how many
lines of code, but full builds take about 8 hours). A fairly common
scenario is some unit test fails in a high level part of the system, and we
track it down to a problem in one of the lower levels. It's a different
group that maintains that bit of code. We understand the problem, and know
we're going to fix it before the next release, but that's not going to
happen today, or tomorrow, or maybe even next week.

So, what do you do? The world can't come to a screeching halt for the next
couple of weeks while we're waiting for the other group to fix the problem.
What we typically do is just comment out the offending unit test. If the
developer who does that is on the ball, a PR (problem report) gets opened
too, to track the need to re-instate the test, but sometimes that doesn't
happen. A better solution would be a way to mark the test "known to fail
because of xyz". That way it continues to show up on every build report
(so it's not forgotten about), but doesn't break the build.
Jan 10 '06 #13

Michele> I am also +1 to run the tests in the code order.

Got any ideas how that is to be accomplished short of jiggering the names so
they sort in the order you want them to run?

Skip
Jan 10 '06 #14
sk**@pobox.com writes:
Got any ideas how that is to be accomplished short of jiggering the
names so they sort in the order you want them to run?


How about with a decorator instead of the testFuncName convention,
i.e. instead of

def testJiggle(): # "test" in the func name means it's a test case
...

use:

@test
def jiggletest(): # nothing special about the name "jiggletest"
...

The hack of searching the module for functions with special names was
always a big kludge and now that Python has decorators, that seems
like a cleaner way to do it.

In the above example, the 'test' decorator would register the
decorated function with the test framework, say by appending it to a
list. That would make it trivial to run them in code order.
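
A rough sketch of that registration approach (nothing here is part of
unittest; the names are invented):

_registered_tests = []

def test(func):
    '''Mark func as a test case; the list preserves definition order.'''
    _registered_tests.append(func)
    return func

@test
def jiggletest():
    assert [1, 2] + [3] == [1, 2, 3]

@test
def wiggletest():
    assert 'abc'.upper() == 'ABC'

def run_in_code_order():
    for func in _registered_tests:    # registration order == code order
        func()

if __name__ == '__main__':
    run_in_code_order()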
Jan 10 '06 #15

sk**@pobox.com wrote:
Michele> I am also +1 to run the tests in the code order.

Got any ideas how that is to be accomplished short of jiggering the names so
they sort in the order you want them to run?

Skip


Well, it could be done with a decorator, but unittest is already
cumbersome as it is; I would not touch it. Instead, I would vote for
py.test in the standard library.

Michele Simionato

Jan 11 '06 #16
Duncan Booth wrote:
... Possible enhancements: add another argument for associated issue tracker id ... some unbroken tests
will also have associated issues this might just be a separate decorator.
This is probably easier to do as a separate decoration which would have to
precede the "failing test" decoration:
def tracker(identifier):
    def markup(function):
        function.tracker = identifier
        return function
    return markup
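
So the stacked usage would look something like this (the issue id is
invented), with @tracker written above @broken_test_XXX so that the
tracker attribute ends up on the wrapped replacement:

class MyTestCase(unittest.TestCase):
    @tracker('1234')   # hypothetical issue tracker id
    @broken_test_XXX("The thrumble doesn't yet gsnort")
    def test_three(self): ...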
add some easyish way to generate a report of broken tests.


Here's a generator for all the "marked broken" tests in a module:

import types, unittest

def marked_broken(module):
    for class_name in dir(module):
        class_ = getattr(module, class_name)
        if (isinstance(class_, (type, types.ClassType)) and
                issubclass(class_, unittest.TestCase)):
            for test_name in dir(class_):
                if test_name.startswith('test'):
                    test = getattr(class_, test_name)
                    if (hasattr(test, '__name__') and
                            test.__name__.startswith('XXX_')):
                        yield class_name, test_name, test.todo
You could even use it like this:

import sys
import mytests

for module_name, module in sys.modules.iteritems():
    last_class = ''
    for class_name, test_name, reason in marked_broken(module):
        if module_name:
            print 'In module %s:' % module_name
            module_name = ''
        if last_class != class_name:
            print 'class', class_name
            last_class = class_name
        print ' %s\t %s' % (test_name, reason)
Thanks for the thoughtful feedback.

--Scott David Daniels
sc***********@acm.org
Jan 12 '06 #17
On Tue, 10 Jan 2006 11:13:20 +0100, Peter Otten <__*******@web.de> wrote:
Duncan Booth wrote:
Peter Otten wrote:
Marking a unittest as "should fail" in the test suite seems just wrong
to me, whatever the implementation details may be. If at all, I would
apply a "I know these tests to fail, don't bother me with the messages
for now" filter further down the chain, in the TestRunner maybe.
Perhaps the code for platform-specific failures could be generalized?


It isn't marking the test as "should fail" it is marking it as "should
pass, but currently doesn't" which is a very different thing.


You're right of course. I still think the "currently doesn't pass" marker
doesn't belong in the test source.

Perhaps in a config file that can specify special conditions for running
identified tests? E.g., don't run, vs. run and report (warn/fail/info) a
changed result (e.g. compared to a cached result), vs. run and report if it
passes, etc.

Then if a code change unexpectedly makes a test work, the config file can just
be updated, not the test.
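
A very rough sketch of that idea, assuming a simple hand-edited file that
maps test ids to a disposition (the file format and names are invented):

# knownfailures.cfg (hypothetical), one entry per line:
#   mypackage.tests.MyTestCase.test_three = expect-fail
#   mypackage.tests.MyTestCase.test_four  = skip

def load_dispositions(path='knownfailures.cfg'):
    '''Return a mapping of test id -> disposition read from the config file.'''
    dispositions = {}
    for line in open(path):
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        parts = line.split('=', 1)
        if len(parts) == 2:
            dispositions[parts[0].strip()] = parts[1].strip()
    return dispositions

# A custom TestResult or TestRunner could then look up each test id in the
# mapping and skip it, downgrade its failure, or warn when it unexpectedly
# passes -- so only the config file changes, not the test source.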

Regards,
Bengt Richter
Jan 12 '06 #18
OK I took the code I offered here (tweaked in reaction to some
comments) and put up a recipe on the Python Cookbook. I'll allow
a week or so for more comment, and then possibly pursue adding this
to unittest.

Here is where the recipe is, for those who want to comment further (in
either that forum or this one):
http://aspn.activestate.com/ASPN/Coo.../Recipe/466288

--Scott David Daniels
sc***********@acm.org
Jan 13 '06 #19
