Re: Code correctness, and testing strategies

Hi list.

Do test-driven development or behaviour-driven development advocate
how to do higher-level testing than unit testing?
From reading up on the subject, I see that there are formally these
types of testing:

unit
integration
system
acceptance

I'm asking about this, because as suggested by various posters, I have
written my latest (small) app by following a Behaviour-Driven
Development style.

The small app consists of about 5 relatively simple classes, each with
behaviour tests, and mocks for the other objects it uses. I wrote the
tests before any code (I cheated a bit here and there, like adding
logging before adding logger mocks and tests to check that the logging
took place).
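
To give a flavour, a test like the logging ones might look roughly
like this (a minimal sketch using the standard unittest.mock module;
the Fetcher class and its collaborators are invented for illustration,
not the real code):

```python
# Sketch of a behaviour test with a mocked logger; Fetcher and its
# collaborators are invented stand-ins.
import unittest
from unittest import mock


class Fetcher:
    """Invented stand-in for one of the app's classes."""
    def __init__(self, source, logger):
        self.source = source
        self.logger = logger

    def fetch(self):
        data = self.source.read()
        self.logger.info("fetched %d bytes", len(data))
        return data


class FetcherBehaviourTest(unittest.TestCase):
    def test_fetch_logs_how_much_was_read(self):
        source = mock.Mock()
        source.read.return_value = b"hello"
        logger = mock.Mock()
        Fetcher(source, logger).fetch()
        # Check that the logging took place, with the right details.
        logger.info.assert_called_once_with("fetched %d bytes", 5)


if __name__ == "__main__":
    unittest.main()
```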

That went well, and the code ended up much more modular than if I
hadn't followed BDD. And I feel more confident about the code quality
than before ;-) The testing module has about 2x the lines of code as
the code being tested.

My problem is that I haven't run the app once yet during development :-/

It looks like I've fallen into the trap described here:

http://en.wikipedia.org/wiki/Test-dr...egration_tests

Should I go ahead and start manually testing (like I would have from
the beginning if I wasn't following TDD), or should I start writing
automated integration tests?

Is it worth the time to write integration tests for small apps, or
should I leave that for larger apps?

I've tried Googling for integration testing in the context of TDD or
BDD and haven't found anything. Mostly information about integration
testing in general.

When following BDD or TDD, should one write integration tests first
(like the unit tests), or later? Or don't those practices cover
anything besides unit testing? Which integration test process should
one use? (top down, bottom up, big bang, etc).

Thanks in advance for any tips.

David.
Jun 27 '08 #1
David <wi******@gmail.com> writes:
> I'm asking about this, because as suggested by various posters, I
> have written my latest (small) app by following a Behaviour-Driven
> Development style.
Congratulations on taking this step.
> That went well, and the code ended up much more modular than if I
> hadn't followed BDD. And I feel more confident about the code
> quality than before ;-) The testing module has about 2x the lines of
> code as the code being tested.
This ratio isn't unusual, and is in fact a little on the low side in
my experience. If you get decent (well-founded!) confidence in the
resulting application code, then it's certainly a good thing.
> My problem is that I haven't run the app once yet during development
> :-/
That might be an artifact of doing bottom-up implementation
exclusively, leading to a system with working parts that are only
integrated into a whole late in the process.

I prefer to alternate between bottom-up implementation and top-down
implementation.

I usually start by implementing (through BDD) a skeleton of the entire
application, and get it to the point where a single trivial user story
can be satisfied by running this minimally-functional application.

Then, I make an automated acceptance test for that case, and ensure
that it is run automatically by a build infrastructure (often running
on a separate machine) that:

- exports the latest working tree from the version control system

- builds the system

- runs all acceptance tests, recording each result

- makes those results available in a summary report for the build
run, with a way to drill down to the details of any individual
steps

That automated build is then set up to run either as a scheduled job
periodically (e.g. four times a day), or as triggered by every commit
to the version control system branch nominated for "integration" code.
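
As a rough sketch, the whole cycle can be driven by something as
simple as the following (the repository URL, build command and test
layout are assumptions, not a prescription; a dedicated tool handles
scheduling, reporting and history far better):

```python
#!/usr/bin/env python
# Minimal sketch of one automated build-and-test run.  The repository
# URL, build command and test directory are illustrative assumptions.
import subprocess
import tempfile


def run_build_cycle(repo_url="https://example.org/project.git"):
    workdir = tempfile.mkdtemp(prefix="build-")
    # Export the latest working tree from the version control system.
    subprocess.run(["git", "clone", "--depth", "1", repo_url, workdir],
                   check=True)
    # Build the system.
    subprocess.run(["python", "setup.py", "build"], cwd=workdir, check=True)
    # Run all acceptance tests, recording the results for the summary
    # report to drill into later.
    with open("acceptance-report.txt", "w") as report:
        result = subprocess.run(
            ["python", "-m", "unittest", "discover", "-s",
             "tests/acceptance"],
            cwd=workdir, stdout=report, stderr=subprocess.STDOUT)
    return result.returncode == 0


if __name__ == "__main__":
    raise SystemExit(0 if run_build_cycle() else 1)
```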
> Should I go ahead and start manually testing (like I would have from
> the beginning if I wasn't following TDD), or should I start writing
> automated integration tests?
In my experience, small applications often form the foundation for
larger systems.

Time spent ensuring their build success is automatically determined at
every point in the development process pays off tremendously, in the
form of flexibility in being able to use that small application with
confidence, and time saved not needing to guess about how ready the
small app is for the nominated role in the larger system.
> Is it worth the time to write integration tests for small apps, or
> should I leave that for larger apps?
There is a threshold below which setting up automated build
infrastructure is too much overhead for the value of the system being
tested.

However, this needs to be honestly appraised: can you *know*, with
omniscient certainty, that this "small app" isn't going to be pressed
into service in a larger system where its reliability will be
paramount to the success of that larger system?

If there's any suspicion that this "small app" could end up being used
in some larger role, the smart way to bet would be that it's worth the
effort of setting up automated build testing.
> I've tried Googling for integration testing in the context of TDD or
> BDD and haven't found anything. Mostly information about integration
> testing in general.
I've had success using buildbot <URL:http://buildbot.net/> (which is
packaged as 'buildbot' in Debian GNU/Linux) for automated build and
integration testing and reporting.
> When following BDD or TDD, should one write integration tests first
> (like the unit tests), or later?
All the tests should proceed in parallel, in line with the evolving
understanding of the desired behaviour of the system. This is why the
term "behaviour driven development" provide more guidance: the tests
are subordinate to the real goal, which is to get the developers, the
customers, and the system all converging on agreement about what the
behaviour is meant to be :-)

Your customers and your developers will value frequent feedback on
progress, so:

- satisfying your automated unit tests will allow you to

- satisfy your automated build tests, which will allow you to

- satisfy automated user stories ("acceptance tests"), which will
allow the customer to

- view an automatically-deployed working system with new behaviour
(and automated reports for behaviour that is less amenable to
direct human tinkering), which will result in

- the customers giving feedback on that behaviour, which will inform

- the next iteration of behaviour changes to make, which will inform

- the next iteration of tests at all levels :-)

--
\ "[W]e are still the first generation of users, and for all that |
`\ we may have invented the net, we still don't really get it." |
_o__) -- Douglas Adams |
Ben Finney
Jun 27 '08 #2
Thanks for your informative reply.

On Sun, Jun 8, 2008 at 12:28 PM, Ben Finney
<bi****************@benfinney.id.au> wrote:
> David <wi******@gmail.com> writes:
> [...]
>> My problem is that I haven't run the app once yet during development
>> :-/

> That might be an artifact of doing bottom-up implementation
> exclusively, leading to a system with working parts that are only
> integrated into a whole late in the process.
I did do it in a mostly top-down way, but didn't stop the BDD process
to actually run the app :-)

It sounds like what you are suggesting is something like this:

1) Following BDD, get a skeleton app working

Then, your BDD process gets a few extra steps:

Old steps:

1) Write a test which fails for [new feature]
2) Write code for [new feature] to pass the test
3) Refactor if needed

New steps:

4) Run the app like an end-user, and see that it works for the [new feature]
5) Write an automated test which does (4), and verifies the [new
feature] is working correctly
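
I imagine step 5 would end up as something like this sketch (the
program name, options and expected output are all invented):

```python
# Guess at an automated version of step 4: run the app exactly as an
# end-user would, and inspect only its visible output.
import subprocess
import unittest


class NewFeatureAcceptanceTest(unittest.TestCase):
    def test_report_command_prints_summary(self):
        result = subprocess.run(
            ["./myapp", "report", "--since", "2008-06-01"],
            capture_output=True, text=True)
        self.assertEqual(result.returncode, 0)
        self.assertIn("Summary:", result.stdout)


if __name__ == "__main__":
    unittest.main()
```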

Does this mean that you leave out the formal 'integration' and
'systems' testing steps? By actually running the app you are doing
those things more or less.

Could you also leave out the unit tests, and just write automated
acceptance tests? I guess that would have problems if you wanted to
re-use code in other apps. Or, if acceptance tests break then it's
harder to see which code is causing the problem.

Also, if you had to implement a few "user stories" to get your app
into a skeleton state, do you need to go back and write all the
missing acceptance tests?

I have a few problems understanding how to write automated acceptance
tests. Perhaps you can reply with a few URLs where I can read more
about this :-)

1) Services

If your app starts, and keeps running indefinitely, then how do you
write acceptance tests for it? Do your acceptance tests need to
interact with it from the outside, by manipulating databases, system
time, restarting the service, etc?

I presume also that acceptance tests need to treat your app as a black
box, so they can only check your app's output (log files, database
changes, etc), and not the state of objects etc directly.
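
In other words, is the idea something like this sketch (service name,
log location and log contents all invented)?

```python
# Black-box test for a long-running service: start it, give it time
# for one iteration, then check only its observable output.
import subprocess
import time
import unittest


class ServiceAcceptanceTest(unittest.TestCase):
    def test_service_writes_heartbeat_to_log(self):
        proc = subprocess.Popen(
            ["./myservice", "--logfile", "/tmp/myservice.log"])
        try:
            time.sleep(2)  # Allow at least one iteration to complete.
            with open("/tmp/myservice.log") as logfile:
                self.assertIn("heartbeat", logfile.read())
        finally:
            proc.terminate()
            proc.wait()


if __name__ == "__main__":
    unittest.main()
```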

2) User interfaces

How do you write an acceptance test for user interfaces? For unit
tests you can mock wx or gtk, but for the 'real' app, that has to be
harder. Would you use specialised testing frameworks that understand X
events or use accessibility/dcop/etc interaction?

3) Hard-to-reproduce cases.

How do you write acceptance tests for hard-to-reproduce cases where
you had to use mock objects for your unit tests?

....

In cases like the above, would you instead:

- Have a doc with instructions for yourself/testers/qa to manually
check features that can't be automatically tested
- Use 'top-down' integration tests, where you mock parts of the system
so that the features can be automatically tested.
- Some combination of the above

> [...]
>> Is it worth the time to write integration tests for small apps, or
>> should I leave that for larger apps?

> There is a threshold below which setting up automated build
> infrastructure is too much overhead for the value of the system being
> tested.
There is no 'build' process (yet), since the app is 100% Python. But I
will be making a Debian installer a bit later.

My current 'build' setup is something like this:

1) Make an app (usually C++, shell-script, Python, or mixed)

2) Debianise it (add a debian subdirectory, with control files so
Debian build tools know how to build binaries from my source, and how
they should be installed & uninstalled).

3) When there are new versions, manually test the new version, build a
binary debian installer (usually in a Debian Stable chroot with debian
tools), on my Debian Unstable dev box, and upload the deb file (and
updated Debian repo listing files) to a 'development' or 'unstable'
branch on our internal Debian mirror.

4) Install the new app on a Debian Stable testing box, run it, and
manually check that the new logic works

5) Move the new version to our Debian repo's live release, from where
it will be installed into production.

If I adopt BDD, my updated plan was to use it during app development
and maintenance, but not for later testing. Do you suggest that after
building a .deb in the chroot, the app should also be automatically
installed under a chroot & acceptance tests run on my dev machine? Or
should I package the acceptance tests along with the app, so that they
can be (manually) run on test servers before going into production? Or
do both?

I've considered setting up a centralised build server at work, but
currently I'm the only dev who actually builds & packages software, so
so it wouldn't be very useful. We do have other devs (PHP mostly), but
they don't even use version control :-/. When they have new versions
(on their shared PHP dev & testing servers), I copy it into my version
control, confirm the changed files with them, build an installer, and
upload onto our mirror, so it can be installed onto other boxes.

David.
Jun 27 '08 #3
David <wi******@gmail.com> writes:
> On Sun, Jun 8, 2008 at 12:28 PM, Ben Finney
> <bi****************@benfinney.id.au> wrote:

> I did do it in a mostly top-down way, but didn't stop the BDD
> process to actually run the app :-)

> It sounds like what you are suggesting is something like this:
>
> 1) Following BDD, get a skeleton app working
Specifically, BDD requires that the behaviour be asserted by automated
tests. So, this skeleton should have automated tests for the behaviour
currently implemented.
> Then, your BDD process gets a few extra steps:
>
> Old steps:
>
> 1) Write a test which fails for [new feature]
> 2) Write code for [new feature] to pass the test
> 3) Refactor if needed
>
> New steps:
>
> 4) Run the app like an end-user, and see that it works for the [new feature]
> 5) Write an automated test which does (4), and verifies the [new
> feature] is working correctly
Rather, the test for the new feature should be written first, just
like any other test-implement-refactor cycle.
> Does this mean that you leave out the formal 'integration' and
> 'systems' testing steps? By actually running the app you are doing
> those things more or less.
I'm not sure why you think one would leave out those steps. The
integration and systems tests should be automated, and part of some
test suite that is run automatically on some trigger (such as every
commit to the integration branch of the version control system).
> Could you also leave out the unit tests, and just write automated
> acceptance tests?
They serve very different, complementary purposes, so you should be
writing both (and automating them in a test suite that runs without
manual intervention).
> Also, if you had to implement a few "user stories" to get your app
> into a skeleton state, do you need to go back and write all the
> missing acceptance tests?
Treat them as new behaviour. Find a way to express each one as a set
of "given this environment, and subject to this event, such-and-such
happens", where "such-and-such" is subject to a true/false assertion
by the test code.
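
For instance (a made-up story; the Account class is a hypothetical
stand-in for the real system):

```python
# A made-up user story in the "given/when/then" shape described above.
import unittest


class Account:
    """Hypothetical system under test."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount


class WithdrawalStory(unittest.TestCase):
    def test_withdrawal_reduces_balance(self):
        # Given an account holding 100 units,
        account = Account(balance=100)
        # when the holder withdraws 30,
        account.withdraw(30)
        # then such-and-such happens: the balance is 70.
        self.assertEqual(account.balance, 70)


if __name__ == "__main__":
    unittest.main()
```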
> I have a few problems understanding how to write automated
> acceptance tests.
This is just another way of saying that determining requirements in
sufficient detail to implement them is hard.

True, and inescapable; but with a BDD approach, you only put in that
hard work for the behaviour you're *actually* implementing, avoiding
it for many false starts and dead-ends that get pruned during the
development process.

You also ensure the requirements are specified in at least one way: in
automated tests that are run as a suite by your build tools. Not
exactly perfect documentation, but it has the significant properties
that it will *exist*, and that divergence of the system from the
specification will be noticed very quickly.
> Perhaps you can reply with a few URLs where I can read more about
> this :-)
This weblog posting discusses unit tests and acceptance tests in agile
development
<URL:http://blog.objectmentor.com/articles/2007/10/17/tdd-with-acceptance-tests-and-unit-tests>.
I disagree with the author that there are *only* two types of tests
(as the comments discuss, there are other valuable types of automated
tests), but it does address important issues.
> There is no 'build' process (yet), since the app is 100% Python.
The "build" process does whatever is needed to prove that the
application is ready to run. For applications written in compiled
languages, it's usually enough to compile and link the stand-alone
executable.

For a Python application, "the build process" usually means preparing
it to be installed into the system, or some testing environment that
simulates such an environment.

A Python distutils 'setup.py' is a very common way to set up the build
parameters for an application.
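
A minimal one can be as small as this (the names, version and file
layout are placeholders):

```python
# Minimal distutils setup.py sketch; names and paths are placeholders.
from distutils.core import setup

setup(
    name="smallapp",
    version="0.1",
    description="A small example application",
    packages=["smallapp"],
    scripts=["bin/smallapp"],
)
```

With that in place, "python setup.py build" (or "install") becomes the
step your automated build infrastructure runs.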
> But I will be making a Debian installer a bit later.
That's an important part of the build process, but you should write
(and test via your automated build process) a 'setup.py' before doing
that.
> If I adopt BDD, my updated plan was to use it during app development
> and maintenance, but not for later testing. Do you suggest that
> after building a .deb in the chroot, the app should also be
> automatically installed under a chroot & acceptance tests run on my
> dev machine? Or should I package the acceptance tests along with the
> app, so that they can be (manually) run on test servers before going
> into production? Or do both?
BDD doesn't change in such a situation. The goal is to ensure that
desired behaviour of some part of the system is specified by automated
tests, that are all run as a suite triggered automatically by
significant changes to the system.

That description applies at any level of "part of the system", from
some small unit of code in a function to entire programs or larger
subsystems. The only things that change are which tools one uses to
implement and automate the tests.
> I've considered setting up a centralised build server at work, but
> currently I'm the only dev who actually builds & packages
> software, so it wouldn't be very useful.
It's extremely useful to ensure that the automated application build
infrastructure is in place early, so that it is easy to set up and
automate. This ensures that it actually gets done, rather than put in
the "too hard basket".

The benefits are many even for a single-developer project: you have
confidence that the application is simple to deploy somewhere other
than your development workstation, you get early notification when
this is not the case, you are encouraged from the start to keep
external dependencies low, your overall project design benefits
because you are thinking of the needs of a deployer of your
application, and so on.

While the project is small and the infrastructure simple is exactly
the right time to set up automated builds. When the time comes to make
the system and process more complicated, you'll have far more things
to do, and it's likely that you won't take the time at that point to
implement automated build systems that should have been in place from
the beginning.
> We do have other devs (PHP mostly), but they don't even use version
> control :-/.
Fix that problem first. Seriously.

--
\ "The Stones, I love the Stones; I can't believe they're still |
`\ doing it after all these years. I watch them whenever I can: |
_o__) Fred, Barney, ..." -- Steven Wright |
Ben Finney
Jun 27 '08 #4
Thanks again for an informative reply :-)

I finished that small app I mentioned last time (before reading the
last reply to this thread). A few points (apologies for the
length):

I added a few integration tests, to test features which unit tests
weren't appropriate for. The main thing they do is call my main
object's 'run_once()' method (which iterates once) instead of 'run()'
(which runs forever, calling run_once() endlessly), and then check
that a few expected things took place during the iteration.
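
Simplified, one of them looks roughly like the sketch below (MainApp
and its collaborators are invented stand-ins, not my actual classes):

```python
# Simplified sketch of one such integration test: drive a single
# iteration via run_once() and check its side effects.
import unittest
from unittest import mock


class MainApp:
    """Invented stand-in for the real application object."""
    def __init__(self, source, store):
        self.source = source
        self.store = store

    def run_once(self):
        for job in self.source.poll():
            self.store.record(job)


class RunOnceIntegrationTest(unittest.TestCase):
    def test_one_iteration_polls_and_records(self):
        source = mock.Mock()
        source.poll.return_value = ["job-1"]
        store = mock.Mock()
        MainApp(source=source, store=store).run_once()
        # Check that the expected things took place during the iteration.
        source.poll.assert_called_once_with()
        store.record.assert_called_once_with("job-1")


if __name__ == "__main__":
    unittest.main()
```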

Those integration tests helped me to find a few issues. There were a
few 'integration' tests which were actually acceptance/functional
tests because integration tests weren't good enough (e.g. run script in
unix daemon mode and check for pid/lock/etc files). I left those under
integration because it's annoying to change to 3 different directories
to run all of the tests. I should probably split those off and
automate the testing a bit more.
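
Maybe a tiny top-level runner would do it, something like this (the
directory names are assumptions about my layout):

```python
#!/usr/bin/env python
# Tiny top-level runner that collects the tests from all three
# directories so that none of them gets skipped.
import unittest

SUITE_DIRS = ["tests/unit", "tests/integration", "tests/acceptance"]


def main():
    loader = unittest.TestLoader()
    suite = unittest.TestSuite(
        loader.discover(start_dir=d) for d in SUITE_DIRS)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    raise SystemExit(0 if result.wasSuccessful() else 1)


if __name__ == "__main__":
    main()
```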

After doing all the automated tests, I did some manual testing (added
'raise UNTESTED' lines, etc), and found a few more things, e.g. that
one of my (important) functions was being unit tested but never
being run by the main app. Fixed it with BDD, but it is something I
need to be aware of when building bottom-up with BDD :-) Even better:
develop top down also, and have acceptance tests from the start, like
you mentioned before.

Another thing I found, was that after installing on a test system
there were more failures (as expected): missing dependencies which had
to be installed. Also Python import errors because the files get
installed to and run from different directories than on my
workstation. Not sure how I would use BDD to catch those.

Finally - I didn't use BDD for the debianization (checking that all
the shell scripts & control files work correctly, that the debian
package had all the files in the correct places etc). I figured it
would be too much trouble to mock the Debian package management
system, and all of the linux utilities.

Should a BDD process also check 'production' (as opposed to 'under
development') artifacts? (installation/removal/not being run under a
version control checkout directory/missing dependencies/etc)?

In the future I'll probably create 'production' acceptance tests (and
supporting utilities) following BDD before debianization. Something
automated like this:

1) Build the installer package from source
2) Inspect the package and check that files are in the expected places
3) Package should pass various lint tests
4) Install the package under a chroot (reset to clear out old deps, etc)
5) Check that files etc are in the expected locations
6) Run functional tests (installed with the package) under the chroot
7) Test package upgrades/removal/purges/etc under the chroot.

There are a few existing Debian utilities that can help with this.
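
Roughly this kind of driver, perhaps (dpkg-deb, lintian and piuparts
are real Debian utilities, but the arguments, paths and expected file
list here are only illustrative):

```python
#!/usr/bin/env python
# Sketch of the package-level checks above, driven from Python.
import subprocess


def check_package(deb_path):
    # 2) Inspect the package and check files are in the expected places.
    listing = subprocess.run(["dpkg-deb", "--contents", deb_path],
                             capture_output=True, text=True, check=True)
    assert "./usr/bin/smallapp" in listing.stdout, "binary not packaged"
    # 3) The package should pass various lint tests.
    subprocess.run(["lintian", deb_path], check=True)
    # 4)-7) piuparts installs, upgrades and purges the package inside
    # a throwaway chroot.
    subprocess.run(["piuparts", deb_path], check=True)


if __name__ == "__main__":
    check_package("../smallapp_0.1-1_all.deb")
```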

On Wed, Jun 11, 2008 at 4:36 AM, Ben Finney
<bi****************@benfinney.id.au> wrote:
> David <wi******@gmail.com> writes:
> [...]
>> Does this mean that you leave out the formal 'integration' and
>> 'systems' testing steps? By actually running the app you are doing
>> those things more or less.

> I'm not sure why you think one would leave out those steps. The
> integration and systems tests should be automated, and part of some
> test suite that is run automatically on some trigger (such as every
> commit to the integration branch of the version control system).
It sounded that way because when I asked about integration tests
originally, you said to use approval testing, which seems to
completely skip the automated 'integration' and 'systems' testing
steps I was expecting.
> A Python distutils 'setup.py' is a very common way to set up the build
> parameters for an application.
I don't make setup.py, because my work projects are always installed
via apt-get onto Debian servers. setup.py is a bit redundant and less
functional for me :-)

Might be a bad practice on my part. Usually you start with a
non-Debian-specific install method, which then gets run to install the
files into a directory that gets packaged into a Debian installer. I
cheat with Python apps because I'm both the upstream author and the
Debian maintainer. I set up my Debian control files so they will copy
the .py files directly into the packaged directory, without running
any non-Debian-specific install logic.

I should probably have a setup.py anyway... :-)
>> But I will be making a Debian installer a bit later.

> That's an important part of the build process, but you should write
> (and test via your automated build process) a 'setup.py' before doing
> that.
Should I have a setup.py if it won't ever be used in production?
>> I've considered setting up a centralised build server at work, but
>> currently I'm the only dev who actually builds & packages
>> software, so it wouldn't be very useful.

> It's extremely useful to ensure that the automated application build
> infrastructure is in place early, so that it is easy to set up and
> automate. This ensures that it actually gets done, rather than put in
> the "too hard basket".
Thanks for the advice. I'll look into setting it up :-)

At the moment I have ad-hoc projects in my home directory, each with
its own git repo. There are at least 100 of them. Usually I run a local
script to build deb files from one of those dirs, and push them to our
internal debian repo. I'll setup a system where public git repos (with
hook scripts) get setup on the build server. Then I'll push my changes
to the server when I want a package to be built & uploaded. I also
want the same system on my workstation for testing, and in case there
are network problems.
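
For example, a post-receive hook along these lines (the work tree path
and the build-deb/upload-to-repo helpers are assumptions about my own
setup, not standard tools):

```python
#!/usr/bin/env python
# Sketch of a post-receive hook for the build server: on every push,
# check out the new revision, rebuild the package and upload it.
import subprocess

WORKTREE = "/srv/build/worktrees/myproject"


def main():
    # Check the pushed revision out into the build work tree.
    subprocess.run(["git", "--work-tree", WORKTREE, "checkout", "-f"],
                   check=True)
    # Build the .deb and push it to the internal repository (these are
    # assumed local helper scripts, not standard commands).
    subprocess.run(["build-deb", WORKTREE], check=True)
    subprocess.run(["upload-to-repo", WORKTREE], check=True)


if __name__ == "__main__":
    main()
```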
> The benefits are many even for a single-developer project: you have
> confidence that the application is simple to deploy somewhere other
> than your development workstation, you get early notification when
> this is not the case, you are encouraged from the start to keep
> external dependencies low, your overall project design benefits
> because you are thinking of the needs of a deployer of your
> application, and so on.
I already think this way because I manage our installation
infrastructure (I have a developer hat, a package maintainer hat, and
a release manager hat). But it might be useful for other devs in the
future, if I can get them to use the system ;-) Also so they can make
new packages etc while I'm on leave without having to use my
workstation :-) Hasn't happened yet. Usually they would just SCP
updated PHP files into production without using Debian installers.
Their work is important, but it doesn't perform any critical function
:-) (as in bugs = lost mega bucks per minute).
>> We do have other devs (PHP mostly), but they don't even use version
>> control :-/.

> Fix that problem first. Seriously.
I've tried a few times, but haven't succeeded yet. It's not an easy
concept to sell to people who aren't interested, and there is more
work involved than simply hacking away at source and making occasional
copies to a new 'bak' or 'old' sub-directory.

There have been a few cases where the devs were working on code, when
clients came in and needed to see an earlier version (also not in
production yet). I've saved their bacon by restoring the correct
version from our daily backups. We didn't have much in the way of
backups either before I set them up :-)

David.
Jun 27 '08 #5
