This is the first in a series featuring stories "from the trenches" (so to speak) of developers working at technology start-ups. Kudos and many thanks to the team at Huddle for taking the plunge and being the first! So without further ado, over to Bob, senior software developer at Huddle...
- James C, Editor.
Introduction from Bob...
I got involved in a conversation about testing when I was at DrinkTank [a great networking event in London for tech companies - Ed.]. I’m not entirely sure how coherent I was because I’d been lapping up the free alcohol, and I wanted to flesh out some of the ideas I’d been spouting. I need to start with a disclaimer: I’m not a testing guru.
These articles are not intended for gurus; they're intended for geek-ish people running, or thinking about running, a dev team at a start-up.
I know testing gurus. I used to live with a Perl programmer who from what I could gather would literally never write a line of production code that wasn’t backed by a unit test. He would sometimes walk around the house in his boxers all weekend muttering about tests and test engines and test techniques and text-based test protocols, stopping only to make coffee or Tex-Mex food. This fiendish obsession with testing made him one of the best engineers I have ever met.
Me? I’m weak-minded, and I test when I feel the urge. Over the years, though, I find that I have to scratch the testing itch more and more often. Maybe I’ll end up the same way.
For now I’m not using real test-driven development as a rule, even though I find it an incredibly useful tool in the right situation. There are technical reasons that make it difficult to use day in, day out, and we’re taking BIG steps to improve our testability, which I ALSO want to talk about, but here I want to start with the real basics.
How many testers do I need per developer?
System complexity scales non-linearly. Big systems are really hard to test. Microsoft have a tester-to-developer ratio of 1:1 and look how far it gets them.
While your app is small, it’s easy to test, but as you add more features, you create dependencies between them. Those dependencies let changes in one part of the system ripple outward in subtle ways, and bugs can surface in what seems to be a completely unrelated area.
At Huddle, we’ve got one full-time tester, and we could really use another. Regression testing is done en masse by the whole office in bug-hunts. The development team all pitch in with testing, and we all work to a test plan that covers the functionality we’re building. If you aren’t Absolutely Convinced that your code will get through the QA tests, then you haven’t finished yet.
If we get a second full-time QA, then we’ll have a ratio of 3 developers to each tester, but at least 30% of each developer’s time is already spent on nothing but testing, whether as peer review or writing some kind of automated tests. 3:1 is often touted as the magic number for small companies. Of course, three is also a magic number for wishes and rescuing princesses and clicking your heels together, so take it with a pinch of salt.
Our current ratio of 5:1 is too high, and we’ve recently been picking up some of the slack by pulling people in to test random things. I used to do this a LOT when I was at Cogentic. If you think something is working, go and fetch Barney from accounts – he loves breaking software, and he’ll push buttons that you’d forgotten existed. If you’re getting people to do testing part-time, make sure you have a formal test plan, and that they know how to use your bug tracker.
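To put some back-of-envelope numbers on those ratios (this is just an illustrative sketch I'm adding for the article, not anything we formally compute – the helper function is made up):

```python
# Back-of-envelope: the headcount ratio undersells real testing effort
# once you count the slice of developer time spent on testing.
# Illustrative only -- the function name and model are invented for this post.

def effective_ratio(devs: int, testers: int, dev_test_fraction: float) -> float:
    """Ratio of pure development effort to total testing effort,
    treating developer testing time as tester-equivalents."""
    dev_effort = devs * (1 - dev_test_fraction)
    test_effort = testers + devs * dev_test_fraction
    return dev_effort / test_effort

# Today: roughly 5 developers per tester, devs spending ~30% of time testing.
print(round(effective_ratio(5, 1, 0.3), 2))  # -> 1.4

# With a second full-time QA at a 3:1 headcount ratio (6 devs, 2 testers).
print(round(effective_ratio(6, 2, 0.3), 2))  # -> 1.11
```

In other words, once you count everyone's testing time, a 3:1 headcount ratio is closer to one tester-equivalent per developer than the raw number suggests.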
Can developers test their own code?
Short answer: no.
Writing code is a bit like map-making. Bear with me here, it’s a terrible simile, but it’ll do.
You point your developers in the direction of some uncharted wasteland, supply them with coffee, and let them go hacking through the undergrowth. If you’re lucky, they’ll pop back up in a few weeks’ time, and they’ll have brought geometric exactitude to Terra Incognita. If you’re unlucky, they’ll go native and it’ll end up with heads-on-spikes and tears-before-bedtime.
Anyway: you want to test your new maps. The WORST people to test the maps are the people who wrote them. They know, without thinking about it, that you have to veer West when you reach The Swamp of Unicode Misery or you’ll drown. They know that the distances are slightly unreliable because they contracted malaria near the Caves of Ui and the resulting fever made the cartography kinda hazy.
Stretching the metaphor beyond breaking point, they also have a tendency to say “Here Be Dragons And all Manner of Phantastic Beasts” and leave it at that.
Good testers will wander off from the beaten path, get lost in the wilderness, fight off bandits and ague, and return to tell the story. Good testers will bring a fresh pair of eyes and an enviable sense of naivety to the task.
Good testers won’t make allowances for minor errors and niggles, which is great because NOR WILL YOUR CUSTOMERS.
How can I get cheap testers?
Short answer: You can’t. Good testers will cost at least as much as good developers. Most people who are smart enough to be really good at testing are smart enough to find it really boring.
The absolute worst thing you can do is to employ developers and make them do a stint in the QA department. Developers don’t want to be full-time testers, and they’ll suck at it. If you do this, you’ll find that your Quality Assurance has become Quick Assessment.
Pay peanuts: hire monkeys
If you really want to find cheap labour, then look at graduates. You can pick up two smart graduates for the price of one good developer. There are lots of people who start programming or CompSci degrees and then realise it isn’t for them after all.
Those people, the smart people who don’t want to be programmers, but know a bit about how computers work, are solid gold as far as testing is concerned. The only problem is that if they’re GOOD at it, they’ll end up wanting to be paid well for it, and then you’re right back to square one, but at least you got a year’s slave labour.
tl;dr: hire more testers, spend more money.