Bytes | Developer Community

unittest vs py.test?

I've used the standard unittest (pyunit) module on a few projects in the
past and have always thought it basically worked fine but was just a little
too complicated for what it did.

I'm starting a new project now and I'm thinking of trying py.test
(http://codespeak.net/py/current/doc/test.html). It looks pretty cool from
the docs. Is there anybody out there who has used both packages and can
give a comparative review?
Jul 18 '05 #1
[Roy Smith]
I've used the standard unittest (pyunit) module on a few projects in the
past and have always thought it basically worked fine but was just a little
too complicated for what it did.

I'm starting a new project now and I'm thinking of trying py.test
(http://codespeak.net/py/current/doc/test.html). It looks pretty cool from
the docs. Is there anybody out there who has used both packages and can
give a comparative review?


I've used both and found py.test to be effortless and much less verbose.
For more complex testing strategies, py.test is also a winner. The generative
tests are easier to write than crafting a similar strategy for unittest.

py.test does not currently integrate well with doctest; however, that will
likely be the next feature added (per Holger's talk at PyCon).

For output, unittest's TextTestRunner produces good-looking, succinct output
on successful tests. For failed tests, it is not bad either. In contrast,
py.test output is more heavily formatted and voluminous -- it takes a while
to get used to.

unittest users have to adapt to the internal structure of the unittest
module and become familiar with its class structure (test fixture, test
case, test suite, and test runner objects). py.test does a good job of
hiding its implementation.

py.test is relatively new and is continuing to evolve. Support tools like a
GUI test runner are just emerging. In contrast, unittest is based on a proven
model and the code is mature.

unittest module updates come up in distinct releases, often months or years
apart. py.test is subject to constant update by subversion. Personally, I like
the continuous updates, but it could be unsettling if you're depending on it
for production code.
Raymond Hettinger

Jul 18 '05 #2
Roy Smith wrote:
I've used the standard unittest (pyunit) module on a few projects in the
past and have always thought it basically worked fine but was just a little
too complicated for what it did.

I'm starting a new project now and I'm thinking of trying py.test
(http://codespeak.net/py/current/doc/test.html). It looks pretty cool
from
the docs. Is there anybody out there who has used both packages and can
give a comparative review?


Have you seen Grig Gheorghiu's 3-part comparison of unittest and py.test?

http://agiletesting.blogspot.com/200...-unittest.html
http://agiletesting.blogspot.com/200...2-doctest.html
http://agiletesting.blogspot.com/200...test-tool.html

--
Nigel Rowe
A pox upon the spammers that make me write my address like..
rho (snail) swiftdsl (stop) com (stop) au
Jul 18 '05 #3
Nigel Rowe>Have you seen Grig Gheorghiu's 3 part comparison of
unittest, and py.test?<

Very interesting articles, thank you. Testing still seems to be developing
quickly.

For small functions the doctests are useful, but py.test has some
advantages. Probably something even better than py.test can be
designed, taking some ideas from the doctests, or vice versa :-]
In py.test I see a couple of features useful for the Python language
too:

The raises:
py.test.raises(NameError, "self.alist.sort(int_compare)")
py.test.raises(ValueError, self.alist.remove, 6)
(A try can probably do something similar)
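For illustration, the try-based equivalent hinted at above might look like this; the assert_raises helper name is our own invention, not part of any framework:

```python
def assert_raises(exc_type, func, *args, **kwargs):
    # Call func and pass only if it raises the expected exception.
    try:
        func(*args, **kwargs)
    except exc_type:
        return  # the expected exception occurred: the check passes
    raise AssertionError("%s was not raised" % exc_type.__name__)

# Mirroring the py.test examples above:
assert_raises(ValueError, [1, 2, 3].remove, 6)
```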

And the improved error messages:
"When it encounters a failed assertion, py.test prints the lines [3-4
lines?] in the method containing the assertion, up to and including the
failure. It also prints the actual and the expected values involved in
the failed assertion."

http://agiletesting.blogspot.com/200...test-tool.html

Such things can help you avoid (for simple situations/functions, at
least!) testing frameworks in the first place, so you can use just
normal Python code to test other code.

Bye,
Bearophile

Jul 18 '05 #4
Nigel Rowe <rh*@see.signature.invalid> wrote:
Have you seen Grig Gheorghiu's 3 part comparison of unittest, and py.test?

http://agiletesting.blogspot.com/200...-unittest.html
http://agiletesting.blogspot.com/200...2-doctest.html
http://agiletesting.blogspot.com/200...test-tool.html


Just finished reading them now. Thanks for the pointer, they make an
excellent review of the space.

One thing that worries me a little is that all three seem to have
advantages and disadvantages, yet none is so obviously better than the
others that it stands out as the only reasonable way to do it. This means
some groups will adopt one, some will adopt another, and the world will
become fragmented.
Jul 18 '05 #5
In my mind, practicing TDD is what matters most. Which framework you
choose is a function of your actual needs. The fact that there are 3 of
them doesn't really bother me. I think it's better to have a choice
from a small number of frameworks rather than have no choice or have a
single choice that might not be the best for your specific environment
-- provided of course that this doesn't evolve into a PyWebOff-like
nightmare :-)

Grig

Jul 18 '05 #6
[Roy Smith <ro*@panix.com>]
One thing that worries me a little is that all three seem to have
advantages and disadvantages, yet none is so obviously better than the
others that it stands out as the only reasonable way to do it. This means
some groups will adopt one, some will adopt another, and the world will
become fragmented.


Worry is a natural thing for someone with "panix" in their email address ;-)

FWIW, the evolution of py.test is to also work seamlessly with existing tests
from the unittest module.

the world diversifies, the world congeals,
Raymond Hettinger
Jul 18 '05 #7
On Fri, 01 Apr 2005 16:42:30 +0000, Raymond Hettinger wrote:
FWIW, the evolution of py.test is to also work seamlessly with existing tests
from the unittest module.


Is this true now, or is this planned?

I read(/skimmed) the docs for py.test when you linked to the project, but
I don't recall seeing that. Certainly some of the features made me drool
but I have an investment in unittest. If I can relatively easily port them
over, I'd love to use py.test. (I don't care about a small per-file
change, it'd probably be one I can automate anyhow. But I can't afford to
re-write every test.) I didn't see anything like this in the docs, but I
may have missed it.

That'd be cool.
Jul 18 '05 #8
From what I know, the PyPy guys already have a unittest-to-py.test
translator working, but they haven't checked in the code yet. You can send
an email to py-dev at codespeak.net and let them know you're interested
in this functionality.

Grig

Jul 18 '05 #9
Grig Gheorghiu wrote:
In my mind, practicing TDD is what matters most. Which framework you
choose is a function of your actual needs. The fact that there are 3 of
them doesn't really bother me. I think it's better to have a choice
from a small number of frameworks rather than have no choice or have a
single choice that might not be the best for your specific environment
-- provided of course that this doesn't evolve into a PyWebOff-like
nightmare :-)

Grig

Grig,

Many thanks for your helpful essays.

unittest seems rather heavy. I don't like mixing tests with
documentation; it gives the whole thing a cluttered look.
Py.test is the more appealing, but it doesn't appear to be
ready for packaging yet.

Thanks,

Colin W.
Jul 18 '05 #10
Colin J. Williams wrote:
unittest seems rather heavy. I don't like mixing tests with
documentation; it gives the whole thing a cluttered look.


unittest can really be rather light. Most of our
test cases are variations on the following, with
primarily application-specific code added rather than
boilerplate or other unittest-related stuff:

import unittest

class TestCase(unittest.TestCase):
    def test01(self):
        '''some test....'''
        self.assertEquals(a, b)

    def test02(self):
        '''another test'''
        self.assertRaises(Error, func, args)

if __name__ == '__main__':
    unittest.main()
That's it... add testXX() methods as required and
they will be executed in sorted order (alphabetically)
automatically when you run from the command line.
The above might look excessive in comparison to the
test code, but add some real code and the overhead
quickly dwindles to negligible.

I'm a little puzzled why folks so often consider this
particularly "heavy". No need to deal with suites,
TestResult objects, etc, as others have suggested,
unless you are trying to extend it in some special
way.

-Peter
Jul 18 '05 #11
Peter Hansen <pe***@engcorp.com> wrote:
unittest can really be rather light. Most of our
test cases are variations on the following, with
primarily application-specific code added rather than
boilerplate or other unittest-related stuff:

import unittest

class TestCase(unittest.TestCase):
    def test01(self):
        '''some test....'''
        self.assertEquals(a, b)
Well, right there the "extra" stuff you needed to do (vs. py.test) was
import unittest, inherit from it, and do "self.assertEquals" instead of
just plain assert. But (see below), that's not the big thing that attracts
me to py.test.
I'm a little puzzled why folks so often consider this
particularly "heavy". No need to deal with suites,
TestResult objects, etc, as others have suggested,
unless you are trying to extend it in some special
way.


In all but the most trivial project, you're going to have lots of tests.
Typically, each class (or small set of closely related classes) will go in
one source file, with a corresponding test file. You'll probably have
stuff scattered about a number of different directories too. That means
you need to build some infrastructure to find and run all those various
tests.

One way would be futzing with suites (which I still haven't completely
figured out). Another way would be building a hierarchical series of
dependencies in Make (or whatever build tool you use) to run your tests.
The latter is what I usually do. The idea that I can just type "python
py.test" at the top level and have it find and run everything for me just
blows me away convenience-wise.

I also like the idea that I just stick print statements into my tests and
the output automagically goes away unless the test fails. I'm a firm
believer that unit tests should NOT PRODUCE ANY OUTPUT unless they fail.
I'm working with a system now where the unit tests not only produce reams
of output, but it's also rigged to keep going in the face of failure.
Trying to find the actual error in the output is a nightmare.
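The "silent unless failing" behaviour described above can be sketched in a few lines of present-day Python; the helper and test names are invented for the example:

```python
import sys
from io import StringIO

def run_silently(test_func):
    captured = StringIO()
    old_stdout = sys.stdout
    sys.stdout = captured              # swallow whatever the test prints
    try:
        test_func()
    except AssertionError:
        sys.stdout = old_stdout
        sys.stdout.write(captured.getvalue())  # replay output only on failure
        return False
    finally:
        sys.stdout = old_stdout
    return True                        # on success the output is discarded

def noisy_passing_test():
    print("debugging detail nobody needs to see")
    assert 2 + 2 == 4

def noisy_failing_test():
    print("this context appears only because the test fails")
    assert 2 + 2 == 5
```

Running run_silently over the two tests shows no output for the passing one, while the failing one gets its print statements replayed for debugging.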
Jul 18 '05 #12
[Peter Hansen]
unittest can really be rather light. Most of our
test cases are variations on the following, with
primarily application-specific code added rather than
boilerplate or other unittest-related stuff:

import unittest

class TestCase(unittest.TestCase):
    def test01(self):
        '''some test....'''
        self.assertEquals(a, b)

    def test02(self):
        '''another test'''
        self.assertRaises(Error, func, args)
. . .
I'm a little puzzled why folks so often consider this
particularly "heavy".


unittest never felt heavy to me until I used py.test. Only then do you realize
how much boilerplate is needed with unittest. Also, the whole py.test approach
has a much simpler object model.

BTW, the above code simplifies to:

from py.test import raises
assert a == b
raises(Error, func, args)
Raymond Hettinger
Jul 18 '05 #13
Raymond Hettinger wrote:
BTW, the above code simplifies to:

from py.test import raises
assert a == b
raises(Error, func, args)


This is pretty, but I *want* my tests to be contained
in separate functions or methods. The trivial amount
of extra overhead that unittest requires fits with
the way I want to write my tests, so it basically
represents zero overhead for me.

The above doesn't look like it would scale very
well to many tests in terms of maintaining some
semblance of structure and readability.

And once you add some functions or whatever to do
that, I'm still unclear on how the one or two lines
of extra code that unittest requires represents
an amount of code that really merits the label "heavy".

As for Roy's comments: I use a small internally
developed driver script which uses os.walk to find
all the files matching tests/*_unit.py or tests/story*.py
in all subfolders of the project, and which runs them
in separate processes to ensure none can pollute
the environment in which other tests run. I can
dispense with the unittest.main() call, but I like
to be able to run the tests standalone. I guess
with py.test I couldn't do that...
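Peter's driver, as he describes it, might look roughly like this; the file patterns come from his description, and all helper names are invented:

```python
import fnmatch
import os
import subprocess
import sys

def find_test_files(root):
    # Collect files matching tests/*_unit.py or tests/story*.py
    # anywhere under the project root.
    matches = []
    for dirpath, dirnames, filenames in os.walk(root):
        if os.path.basename(dirpath) != "tests":
            continue
        for name in filenames:
            if fnmatch.fnmatch(name, "*_unit.py") or fnmatch.fnmatch(name, "story*.py"):
                matches.append(os.path.join(dirpath, name))
    return sorted(matches)

def run_in_separate_processes(files):
    # One fresh interpreter per file: no test can pollute another's
    # environment. Returns the list of files whose tests failed.
    failures = []
    for path in files:
        if subprocess.call([sys.executable, path]) != 0:
            failures.append(path)
    return failures
```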

If py.test provides a driver utility that does
effectively this, well, that's nice for users. If
it doesn't run them as separate processes, it wouldn't
suit me anyway.

Still, it sounds like it does have a strong following
of smart people: enough to make me want to take a
closer look at it to see what the fuss is about. :-)

-Peter
Jul 18 '05 #14
In article <_4********************@powergate.ca>,
Peter Hansen <pe***@engcorp.com> wrote:
As for Roy's comments: I use a small internally
developed driver script which uses os.walk to find
all the files matching tests/*_unit.py or tests/story*.py
in all subfolders of the project, and which runs them
in separate processes to ensure none can pollute
the environment in which other tests run. I can
dispense with the unittest.main() call, but I like
to be able to run the tests standalone. I guess
with py.test I couldn't do that...


Actually, I believe it does. I'm just starting to play with this, but it
looks like you can do:

py.test test_sample.py

and it'll run a single test file. I imagine you could use your os.walk
fixture in combination with this to run each test in its own process if you
wanted to.
Jul 18 '05 #15
[Peter Hansen]
If py.test provides a driver utility that does
effectively this, well, that's nice for users. If
it doesn't run them as separate processes, it wouldn't
suit me anyway.

Still, it sounds like it does have a strong following
of smart people: enough to make me want to take a
closer look at it to see what the fuss is about. :-)


FWIW, py.test scales nicely. Also, it takes less time to try
it out or read the docs than discuss it to death on a newsgroup.
The learning curve is minimal.
Raymond Hettinger


Jul 18 '05 #16
[Peter Hansen]
This is pretty, but I *want* my tests to be contained
in separate functions or methods.


In py.test, those would read:

def test1():
    assert a == b

def test2():
    raises(Error, func, args)

Enclosing classes are optional.
Raymond
Jul 18 '05 #17
On Sat, 02 Apr 2005 09:24:30 GMT, "Raymond Hettinger" <vz******@verizon.net> wrote:
[Peter Hansen]
If py.test provides a driver utility that does
effectively this, well, that's nice for users. If
it doesn't run them as separate processes, it wouldn't
suit me anyway.

Still, it sounds like it does have a strong following
of smart people: enough to make me want to take a
closer look at it to see what the fuss is about. :-)


FWIW, py.test scales nicely. Also, it takes less time to try
it out or read the docs than discuss it to death on a newsgroup.
The learning curve is minimal.

Is there a package that is accessible without svn?

Regards,
Bengt Richter
Jul 18 '05 #18
In article <42****************@news.oz.net>, bo**@oz.net (Bengt Richter) wrote:
Is there a package that is accessible without svn?


That seems to be its weak point right now.

Fortunately, you can get pre-built svn clients for many platforms
(http://subversion.tigris.org/project...nary-packages), and
from there you just have to run a single command (svn checkout URL). Still, the
py.test folks would be doing themselves a favor if they made it available
with more common tools.
Jul 18 '05 #19
Roy Smith wrote:
Actually, I believe it does. I'm just starting to play with this, but it
looks like you can do:

py.test test_sample.py

and it'll run a single test file.


Well, my driver script can do that too. I just meant
I could do "test_sample.py" and have it run the test
any time, if I wanted, and I was mainly trying to
show that even the __name__ == '__main__' part of
my example was not essential to the use of unittest.
Comparing apples to apples, so to speak, since it
looks like you don't use an import to get access
to py.test.

As near as I can tell, other than requiring an import
statement, and a class statement, there is zero
additional overhead with unittest versus py.test,
given the way I want to structure my tests (in
functions or methods). Is that true? If so, I stand
by my claim that the difference in "weight" between
the two is much much less than some have claimed.

-Peter
Jul 18 '05 #20
Raymond Hettinger wrote:
[Peter Hansen]
This is pretty, but I *want* my tests to be contained
in separate functions or methods.

In py.test, those would read:

def test1():
    assert a == b

def test2():
    raises(Error, func, args)

Enclosing classes are optional.


So basically py.test skips the import statement,
near as I can tell, at the cost of requiring a
utility to be installed in the PATH.

Where was all that weight that unittest supposedly
has?

(I'm not dissing py.test, and intend to check it
out. I'm just objecting to claims that unittest
somehow is "heavy", when those claiming that it
is seem to think you have to use TestSuites and
TestRunner objects directly... I think they've
overlooked the relatively lightweight approach
that has worked so well for me for four years...)

-Peter
Jul 18 '05 #21
[Peter Hansen]
(I'm not dissing py.test, and intend to check it
out.
Not to be disrespectful, but objections raised by someone
who hasn't worked with both tools amount to hot air.

I'm just objecting to claims that unittest
somehow is "heavy", when those claiming that it
is seem to think you have to use TestSuites and
TestRunner objects directly... I think they've
overlooked the relatively lightweight approach
that has worked so well for me for four years...)


Claiming? Overlooked? You do know that I wrote the
example in unittest docs, the tutorial example, and hundreds
of the test cases in the standard library. It is not an
uninformed opinion that the exposed object model for
unittest is more complex.

As for "heaviness", it is similar to comparing alkaline AA
batteries to lithium AA batteries. The former isn't especially heavy,
but it does weigh twice as much as the latter. It only becomes a
big deal when you have to carry a dozen battery packs on a hiking
trip. My guess is that until you've written a full test suite with
py.test, you won't get it. There is a distinct weight difference between
the packages -- that was their whole point in writing a new testing tool
when we already had two.

When writing a large suite, you quickly come to appreciate being able
to use assert statements with regular comparison operators, debugging
with normal print statements, and not writing self.assertEqual over and
over again. The generative tests are especially nice.

Until you've exercised both packages, you haven't helped the OP
whose original request was: "Is there anybody out there who has
used both packages and can give a comparative review?"
Raymond
Jul 18 '05 #22
"Raymond Hettinger" <vz******@verizon.net> writes:
When writing a large suite, you quickly come to appreciate being able
to use assert statements with regular comparison operators, debugging
with normal print statements, and not writing self.assertEqual over and
over again. The generative tests are especially nice.


But assert statements vanish when you turn on the optimizer. If
you're going to run your application with the optimizer turned on, I
certainly hope you run your regression tests with the optimizer on.
Jul 18 '05 #23
In article <7x************@ruckus.brouhaha.com>,
Paul Rubin <http://ph****@NOSPAM.invalid> wrote:
"Raymond Hettinger" <vz******@verizon.net> writes:
When writing a large suite, you quickly come to appreciate being able
to use assert statements with regular comparison operators, debugging
with normal print statements, and not writing self.assertEqual over and
over again. The generative tests are especially nice.


But assert statements vanish when you turn on the optimizer. If
you're going to run your application with the optimizer turned on, I
certainly hope you run your regression tests with the optimizer on.


That's an interesting thought. In something like C++, I would never think
of shipping anything other than the exact binary I had tested ("test what
you ship, ship what you test"). It's relatively common for turning on
optimization to break something in mysterious ways in C or C++. This is
both because many compilers have buggy optimizers, and because many
programmers are sloppy about depending on uninitialized values.

But, with something like Python (i.e. high-level interpreter), I've always
assumed that turning optimization on or off would be a much safer
operation. It never would have occurred to me that I would need to test
with optimization turned on and off. Is my faith in optimization misguided?

Of course, all of the Python I write is for internal use; I haven't yet
been able to convince an employer that we should be shipping Python to
customers.
Jul 18 '05 #24

"Paul Rubin" <"http://phr.cx"@NOSPAM.invalid> wrote in message
news:7x************@ruckus.brouhaha.com...
"Raymond Hettinger" <vz******@verizon.net> writes:
When writing a large suite, you quickly come to appreciate being able
to use assert statements with regular comparison operators, debugging
with normal print statements, and not writing self.assertEqual over and
over again. The generative tests are especially nice.


But assert statements vanish when you turn on the optimizer. If
you're going to run your application with the optimizer turned on, I
certainly hope you run your regression tests with the optimizer on.


I don't see why you think so. Assertion statements in the test code make
it harder, not easier for the test to pass. Ditto, I believe, for any in
the run code, if indeed there are any.

Terry J. Reedy

Jul 18 '05 #25
"Terry Reedy" <tj*****@udel.edu> writes:
But assert statements vanish when you turn on the optimizer. If
you're going to run your application with the optimizer turned on, I
certainly hope you run your regression tests with the optimizer on.


I don't see why you think so. Assertion statements in the test code make
it harder, not easier for the test to pass. Ditto, I believe, for any in
the run code, if indeed there are any.


If the unit tests are expressed as assert statements, and the assert
statements get optimized away, then running the unit tests on the
optimized code can obviously never find any test failures.
Jul 18 '05 #26
Paul Rubin wrote:
"Terry Reedy" <tj*****@udel.edu> writes:
But assert statements vanish when you turn on the optimizer. If
you're going to run your application with the optimizer turned on, I
certainly hope you run your regression tests with the optimizer on.


I don't see why you think so. Assertion statements in the test code make
it harder, not easier for the test to pass. Ditto, I believe, for any in
the run code, if indeed there are any.

If the unit tests are expressed as assert statements, and the assert
statements get optimized away, then running the unit tests on the
optimized code can obviously never find any test failures.


Any code depending upon __debug__ being 0 won't be tested. Sometimes
test structures update values as a side-effect of tracking the debugging
state. Not massively likely, but it makes for a scary environment when
your tests cannot be run on a non-debug version.

--Scott David Daniels
Sc***********@Acm.Org
Jul 18 '05 #27
Scott David Daniels <Sc***********@Acm.Org> wrote:
Any code depending upon __debug__ being 0 won't be tested. Sometimes
test structures update values as a side-effect of tracking the debugging
state. Not massively likely, but it makes for a scary environment when
your tests cannot be run on a non-debug version.

--Scott David Daniels
Sc***********@Acm.Org


What would happen if you defined

def verify (value):
    if not value:
        raise AssertionError

and then everyplace in your py.test suite where you would normally have
done "assert foo", you now do "verify (foo)"? A quick test shows that it
appears to do the right thing. I made a little test file:

------------------------------
#!/usr/bin/env python

def verify (value):
    if not value:
        raise AssertionError

class Test_foo:
    def test_one (self):
        assert 0

    def test_two (self):
        verify (0)
------------------------------

when I run that with "python py.test", I get two failures. When I run it
with "python -O py.test", I get one pass and one fail, which is what I
expected to get if the assert gets optimized away.

The output is a little more verbose, since it shows the exception raised in
verify(), but it gives you a stack dump, so it's not that hard to look one
frame up and see where verify() was called from.

It's interesting that given the penchant for light-weight-ness in py.test,
that the default output is so verbose (and, to my mind, confusing) compared
to unittest. I guess one could write their own output formatter and cut
down on the verbosity?
Jul 18 '05 #28
Raymond Hettinger wrote:
[Peter Hansen]
(I'm not dissing py.test, and intend to check it
out.
Not to be disrespectful, but objections raised by someone
who hasn't worked with both tools amount to hot air.


Not to be disrespectful either, but criticism by
someone who has completely missed my point (and apparently
not read my posts) doesn't seem entirely fair.

At no time in this thread have I objected to py.test.
The sole point of my posts has been to object to those
claiming unittest as "heavy" when in the same
breath they seem to think you have to know all kinds
of details about TestSuite, TestRunner, and TestResult
objects just to use it. I tried to demonstrate that
my way of using it appears to be on the same order of
"lightness" as some of the samples that were being used
to show how much lighter py.test was.
Until you've exercised both packages, you haven't helped the OP
whose original request was: "Is there anybody out there who has
used both packages and can give a comparative review?"


It seems possible to me that I might have helped him
solely by pointing out that unittest might not be so
"heavy" as some people claimed. I got the impression
that he might be swayed by some unfounded claims not
even to look further at unittest, which I felt would
be a bad thing. (Not to say your comments are unfounded,
as clearly they are valid... I happen to believe mine
have been, in this thread, as well. I guess you're
free to believe otherwise. Cheers.)

-Peter
Jul 18 '05 #29
Peter Hansen <pe***@engcorp.com> wrote:
It seems possible to me that I might have helped him
solely by pointing out that unittest might not be so
"heavy" as some people claimed. I got the impression
that he might be swayed by some unfounded claims not
even to look further at unittest, which I felt would
be a bad thing.


I'm the "him" referred to above. I've been using unittest ever since
it was first added to the standard library (actually, now that I think
about it, I believe I may have been using it even before then).

And yes, I think unittest brings along a certain amount of baggage.
There is something attractive about having the same basic framework
work in many languages (PyUnit, JUnit, C++Unit, etc), but on the other
hand, it does add ballast. I use it, I certainly don't hate it, but
on the other hand, there are enough things annoying about it that it's
worth investing the effort to explore alternatives.

From the few days I've been playing with py.test, I think I like what
I see, but it's got other issues. The "optimization elides assert"
issue we've been talking about is one.

It's also neat that I can write unittest-style test classes or go the
simpler route of just writing static test functions, but there's a
certain amount of TIMTOWTDI (did I spell that right?) smell to that.

I'm also finding the very terse default output from unittest (basically
a bunch of dots followed by "all N tests passed") highly preferable to
py.test's verbosity.

In short, I haven't made up my mind yet, but I do appreciate the input
I've gotten.
Jul 18 '05 #30
ro*@panix.com (Roy Smith) writes:
Peter Hansen <pe***@engcorp.com> wrote:
It seems possible to me that I might have helped him
solely by pointing out that unittest might not be so
"heavy" as some people claimed. I got the impression
that he might be swayed by some unfounded claims not
even to look further at unittest, which I felt would
be a bad thing.


I'm the "him" referred to above. I've been using unittest ever since
it was first added to the standard library (actually, now that I think
about it, I believe I may have been using it even before then).

And yes, I think unittest brings along a certain amount of baggage.
There is something attractive about having the same basic framework
work in many languages (PyUnit, JUnit, C++Unit, etc), but on the other
hand, it does add ballast. I use it, I certainly don't hate it, but
on the other hand, there are enough things annoying about it that it's
worth investing the effort to explore alternatives.

From the few days I've been playing with py.test, I think I like what
I see, but it's got other issues. The "optimization elides assert"
issue we've been talking about is one.

It's also neat that I can write unittest-style test classes or go the
simpler route of just writing static test functions, but there's a
certain amount of TIMTOWTDI (did I spell that right?) smell to that.

I'm also finding the very terse default output from unittest (basically
a bunch of dots followed by "all N tests passed") highly preferable to
py.test's verbosity.

In short, I haven't made up my mind yet, but I do appreciate the input
I've gotten.


I haven't used pytest, so no comparisons to offer. But for unittest,
I've found a lot of the "baggage" can be automated. My mkpythonproj
(http://www.seanet.com/~hgg9140/comp/index.html#L006) does that. When
you generate a project, you get a unittest suite with a default test
ready to run, and the mechanisms needed to add more.
--
ha************@boeing.com
6-6M21 BCA CompArch Design Engineering
Phone: (425) 294-4718
Jul 18 '05 #31
"Raymond Hettinger" <vz******@verizon.net> writes:
[Peter Hansen]
(I'm not dissing py.test, and intend to check it
out.


Not to be disrespectful, but objections raised by someone
who hasn't worked with both tools amount to hot air.

[...]

Why? Peter had a reasonable question which, AFAICT, doesn't depend on
any more detailed knowledge than what he had to work with.

What I don't understand about py.test (and trying it out seems
unlikely to answer this) is why it uses the assert statement.
unittest used to do that, too, but then it was pointed out that that
breaks when python -O is used, so unittest switched to self.assert_
&c. Does py.test have some way around that?
John
Jul 18 '05 #32
On Mon, 04 Apr 2005 22:50:35 +0000, John J. Lee wrote:
What I don't understand about py.test (and trying it out seems
unlikely to answer this) is why it uses the assert statement.
unittest used to do that, too, but then it was pointed out that that
breaks when python -O is used, so unittest switched to self.assert_
&c. Does py.test have some way around that?


"Don't use -O because it doesn't do anything significant?"

Is this an issue in practice? (Honest question.) If -O did something
interesting I might use it, but I don't think it does.
Jul 18 '05 #33
py.test intercepts the assert statements before they are optimized
away. It's part of the profuse "magic" that py.test does.

Grig

Jul 18 '05 #34
In article <pa****************************@jerf.org>,
Jeremy Bowers <je**@jerf.org> wrote:
On Mon, 04 Apr 2005 22:50:35 +0000, John J. Lee wrote:
What I don't understand about py.test (and trying it out seems
unlikely to answer this) is why it uses the assert statement.
unittest used to do that, too, but then it was pointed out that that
breaks when python -O is used, so unittest switched to self.assert_
&c. Does py.test have some way around that?


"Don't use -O because it doesn't do anything significant?"

Is this an issue in practice? (Honest question.) If -O did something
interesting I might use it, but I don't think it does.


The following program produces different output depending on whether you
run it with -O or not:

try:
    assert 0
    print "I am running with -O"
except AssertionError:
    print "I'm not"
Jul 18 '05 #35
On Mon, 04 Apr 2005 19:59:20 -0400, Roy Smith wrote:
The following program produces different output depending on whether you
run it with -O or not:

try:
    assert 0
    print "I am running with -O"
except AssertionError:
    print "I'm not"


But my question is whether it does anything *else*; if all -O does is
remove asserts (and set __debug__), and you're (not you personally)
complaining about -O removing asserts in testing code, the solution is to
not use -O. There's only a real *problem* if -O does something *more*,
such that you want one but not the other.

Up to this point, after years of Python use, I just ignore -O, because
it's not like it accelerates code or anything. (And I don't work on
embedded systems where my docstrings are a memory problem.) If everybody
else is like me, then if we even do have "real" optimizations, then we'll
actually need to use another switch.
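For the record, a short sketch of what -O observably changes (assuming CPython semantics: asserts are stripped and __debug__ becomes False; -OO additionally drops docstrings). Python 3 spelling:

```python
# Run this once plainly and once with -O to see the difference.
# Under -O, __debug__ is False and the assert below is compiled away.
def mode():
    if __debug__:
        return "normal run: asserts are active"
    return "-O run: asserts were stripped"

assert True, "this statement disappears entirely under -O"
print(mode())
```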
Jul 18 '05 #36
Roy Smith <ro*@panix.com> writes:
In article <42****************@news.oz.net>, bo**@oz.net (Bengt Richter)
Is there a package that is accessible without svn?


That seems to be its weak point right now.

Fortunately, you can get pre-built svn clients for many platforms
(http://subversion.tigris.org/project...nary-packages), and
from there you just have to run a single command (svn checkout URL).


wget -r should work fine!

Cheers,
mwh

--
All obscurity will buy you is time enough to contract venereal
diseases. -- Tim Peters, python-dev
Jul 18 '05 #37
Peter Hansen <pe***@engcorp.com> writes:
Raymond Hettinger wrote:
[Peter Hansen]
This is pretty, but I *want* my tests to be contained
in separate functions or methods.

In py.test, those would read:
def test1():
    assert a == b
def test2():
    raises(Error, func, args)
Enclosing classes are optional.


So basically py.test skips the import statement,
near as I can tell, at the cost of requiring a
utility to be installed in the PATH.

Where was all that weight that unittest supposedly
has?


For PyPy we wanted to do some things that the designers of unittest
obviously hadn't expected[1], such as formatting tracebacks
differently. This was pretty tedious to do[2], involving things like
accessing __variables, defining subclasses of certain classes,
defining subclasses of other classes so the previous subclasses would
actually get used, etc. I've not programmed in Java, but I imagine
this is what it feels like all the time...

(Not to knock unittest too much, we did manage to get the
customizations we needed done, but it wasn't fun).

Cheers,
mwh

[1] this in itself is hardly a criticism: there are many special
things about PyPy.
[2] *this* is the criticism.

--
Well, you pretty much need Microsoft stuff to get misbehaviours
bad enough to actually tear the time-space continuum. Luckily
for you, MS Internet Explorer is available for Solaris.
-- Calle Dybedahl, alt.sysadmin.recovery
Jul 18 '05 #38
Michael Hudson wrote:
Peter Hansen <pe***@engcorp.com> writes:
Where was all that weight that unittest supposedly
has?


For PyPy we wanted to do some things that the designers of unittest
obviously hadn't expected[1], such as formatting tracebacks
differently. This was pretty tedious to do[2],


I'd agree with that! The guts of unittest I find to
be somewhat opaque, and not straightforward to extend.
Trying to do so here has also proven tedious, though
ultimately feasible, as you say the PyPy folks have
found.

Which, in the end, is precisely why I just use the
simplest vanilla approach that I can, no different
than anything done, for example, in the excellent
chapter on testing in http://www.diveintopython.org
or other examples of using unittest.

unittest is surely not the be all and end all of
Python unit testing frameworks... but it's one of
the batteries included in the standard distribution,
and it's pretty trivial to get started using it,
unless maybe you try to go by the documentation instead
of by the examples...
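For reference, the vanilla approach amounts to something like this (class and test names are illustrative); Python 3 spelling, with an explicit runner instead of unittest.main() so the module can also be imported without calling sys.exit():

```python
import unittest

class TestArithmetic(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

    def test_expected_exception(self):
        # assertRaises keeps working under -O, unlike a bare assert
        self.assertRaises(ZeroDivisionError, lambda: 1 / 0)

# Run the suite explicitly so the example is importable as well as runnable.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArithmetic)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```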

-Peter
Jul 18 '05 #39
On Tue, 5 Apr 2005 14:02:49 GMT, Michael Hudson <mw*@python.net> wrote:
Roy Smith <ro*@panix.com> writes:
In article <42****************@news.oz.net>, bo**@oz.net (Bengt Richter)
> Is there a package that is accessible without svn?


That seems to be its weak point right now.

Fortunately, you can get pre-built svn clients for many platforms
(http://subversion.tigris.org/project...nary-packages), and
from there you just have to run a single command (svn checkout URL).


wget -r should work fine!

If you'll bear with my laziness, what should replace "should work fine!" in that?
TIA

Regards,
Bengt Richter
Jul 18 '05 #40
py.test is awesome, but there is one slight flaw in it. It produces too
much output. All I want to see when all tests pass is "All X tests
succeeded!" (or something similar). py.test's output can be
distracting.

--
mvh Björn
Jul 18 '05 #41
<bj*****@gmail.com> wrote:
py.test is awesome, but there is one slight flaw in it. It produces too
much output. All I want to see when all tests pass is "All X tests
succeeded!" (or something similar). py.test's output can be
distracting.


I agree.
Jul 18 '05 #42