Creating a culture of quality: Part 1

Test scripts... for your CEO

This column was contributed by Huddle.net, a London-based start-up offering unified collaboration with a social networking twist.

What does a test script look like?

There are three kinds of test scripts in use here: automated test scripts for load testing and acceptance testing; formal test scripts used to QA a User Story; and higher-level test scripts used for Regression Testing at the end of an iteration. The last of these is what I meant by “all-office bug hunt”.

A QA test script might look like this:

On selecting a package, a red outline should appear together with some text:

  • If the user clicks on ‘Cancel’, they should be taken back to the page they were on before entering Huddle
  • If the user clicks on ‘Go to upgrade page’, they should be taken to the package details page on Huddle
  • The package details must already contain the information about the requested package changes
  • The confirm package and add-on section must display the information about the package the user is upgrading to
  • On clicking the ‘Cancel’ button, the user should be taken back to the upgrade page
  • On clicking ‘Confirm’, the page should reload with a form to upgrade the package
  • Verify that once the user has upgraded their account, the account has the capabilities of the upgraded package
  • Verify that the account settings display the user’s upgraded package

Writing a test script is the first thing that we do when we start on a new story. This helps to shake out any bits of functionality that haven’t been properly thought through.

When we’re writing the code, we’ll work against the test plan, making sure that we cover each point in turn. If we’re feeling truly virtuous, we’ll write unit tests to cover each point so that we have an automated suite of acceptance tests for the story. This is something that we will do for every single story once we’ve got our codebase to the point where it becomes possible. TDD and testability will be the focus of the next few articles.
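To give a flavour of what “a unit test per point” might look like, here is a minimal sketch in Python’s unittest. Huddle’s real stack isn’t shown in this article, so the `Account` class, its methods, and the capability table are all hypothetical stand-ins, not Huddle’s actual code:

```python
import unittest


class Account:
    """Hypothetical stand-in for the real account model (not Huddle's code)."""

    # Hypothetical capability table keyed by package name.
    CAPABILITIES = {"basic": ["files"], "pro": ["files", "whiteboards"]}

    def __init__(self, package="basic"):
        self.package = package

    def upgrade(self, package):
        self.package = package

    def capabilities(self):
        return self.CAPABILITIES[self.package]


class UpgradePackageTests(unittest.TestCase):
    """Roughly one test per point in the QA script above.

    Run with: python -m unittest <this_file>
    """

    def test_upgrade_changes_package(self):
        account = Account()
        account.upgrade("pro")
        self.assertEqual(account.package, "pro")

    def test_upgraded_account_has_new_capabilities(self):
        account = Account()
        account.upgrade("pro")
        self.assertIn("whiteboards", account.capabilities())
```

The point of the shape, rather than the details: each bullet in the QA script maps to one small, named test, so when a test fails you know exactly which acceptance criterion broke.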

How long do you spend?

The first time we did a bug-stomp, it took a full day and a half of banging on things to see if they broke. From that session we drew up a huge list of bugs, and began prioritising them. A lot of those bugs have now been fixed, or are obsolete because we’re replacing the functionality.

Seriously - we've been mad busy!

If you do a major release (like we will in a couple of months when we release Super Top Secret Project II) you should do a full bug-hunt.

You can increase the effectiveness of a bug hunt by giving people some direction.

The best way to run a hunt in a collaboration tool like Huddle would be to use Huddle to run a small project. Something fun works best: the office party perhaps. We can create a new workspace; send out invitations, and set up new users; upload some PDFs with venue ideas; create some possible dates on the calendar; set up a whiteboard and a discussion forum; upload some fake “party photos” and generally make sure that we hit all the major areas of functionality.

Giving people a real use case will yield much better results than just telling them to go and push buttons for an hour or two.

How often do you run an office-wide bug hunt?

At the end of each two week iteration Medhavi – our tester – sends out an Excel spreadsheet with a set of use cases. These are divided amongst the non-technical members of the office – salespeople, PR, our office manager, our CEO … the little people. We run a full set of manual tests to make sure that we didn’t break anything while we were fixing bugs and adding new functionality.

The schedule for the testing looks like this:

Week 1

Monday - Last day of development for the iteration
Tuesday - Iteration demo and retrospective
Thursday - All regression tests must be finished and any bugs filed

Week 2

Monday - All regression bugs must be fixed
Tuesday - Andy decides whether the iteration is good for release based on the regression tests
Wednesday - We release the latest version of the code

The test cases themselves look like this:

  • View Account Settings
  • Edit Account Settings
    • Edit visual preferences (customisation)
    • Create a team template
    • Edit team template
    • Add member to team template
    • Remove member from team template
    • Delete team template
  • View Workspaces
  • Edit Workspace
    • change name
    • change description
    • allow rss feeds on
    • allow rss feeds off
    • users can email on
    • users can email off
    • anyone can invite users on
    • anyone can invite users off

Notice that we’re not verifying anything specific here; the regression tests are just a to-do list to make sure that we cover each area of the site. The testers are keeping an eye out for anything that looks wrong or falls over. A full set of test cases takes two to three hours of clicking, typing and monkeying around. It’s a good idea to give people different sets of use cases each time, or they’ll get bored and start doing things on auto-pilot.
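Rotating the sets doesn’t need anything fancier than shifting the assignment by one position each iteration. A quick sketch of the idea in Python; the tester names and use-case sets here are made up, since the real ones live in the spreadsheet:

```python
# Hypothetical testers and use-case sets, for illustration only.
testers = ["sales", "PR", "office manager", "CEO"]
use_case_sets = ["account settings", "workspaces", "files", "whiteboards"]


def assign_sets(testers, sets, iteration):
    """Shift the use-case sets by one position per iteration, then pair
    each tester with a set, so nobody tests the same area twice in a row."""
    offset = iteration % len(sets)
    rotated = sets[offset:] + sets[:offset]
    return dict(zip(testers, rotated))
```

With that, `assign_sets(testers, use_case_sets, 0)` hands sales the account-settings cases, and on the next iteration they get workspaces instead.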

More specific tests should be automated, so that you don’t drive all your testers insane with the sheer drudgery of their drab little lives.
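For instance, the “change name” item from the list above could be pinned down as an automated check instead of a manual click-through. A sketch, with a toy `Workspace` class standing in for the real server-side model (this is not Huddle’s actual code):

```python
class Workspace:
    """Toy stand-in for a workspace; the real one lives server-side."""

    def __init__(self, name):
        if not name:
            raise ValueError("workspace name is required")
        self.name = name

    def rename(self, new_name):
        if not new_name:
            raise ValueError("workspace name is required")
        self.name = new_name


def test_rename_changes_name():
    ws = Workspace("Office party")
    ws.rename("Summer party")
    assert ws.name == "Summer party"


def test_rename_rejects_empty_name():
    ws = Workspace("Office party")
    try:
        ws.rename("")
    except ValueError:
        pass  # expected: this is the behaviour we want to pin down
    else:
        raise AssertionError("empty name should have been rejected")
```

Once checks like these run on every build, the humans can spend their manual-testing hours on the judgement calls a script can’t make.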

tl;dr: I like Thai food. Test often. Test in-depth. Test with a range of users. Test every single bit of your app, every time you release new code. I LIKE THAI FOOD.

To read more stories from Bob, check out the Huddle blog


Bob Gregory is a senior developer at huddle.net, a London-based start-up offering online project management and document collaboration tools.
