In my Test Infecting talk, I do my best to counter a number of myths that I (and others) have encountered when introducing testing into an organization. One of the most persistent misconceptions revolves around time - or rather the lack thereof. Many a developer has claimed they don’t have time to test, to which I generally reply with a Pat Parelli quote from this post on Kathy Sierra’s blog:
“Take the time it takes so it takes less time.”
Kathy was talking about multitasking, but my point is simple: forgo testing and you’ll pay that price plus more later when the defects start rolling in. While I *think* this is persuasive, Dean Wampler went one better by using charts, which we all know make for a better argument. Dean makes some great points in Why you have time for TDD (but may not know it yet…), though the part about moving the unscheduled project end time earlier into the project really hit home.
Ranges are fine, and the key to success is frequent milestones: as we learn more about the problem domain and the technology we are using, our estimates become more accurate. But most organizations take a random guess (with, I’d say, scarcely a shred of support) and turn that into a concrete date around which the world turns. They then ignore all the little milestones (if they track them at all), or they green-shift the project status. The result is failure, though sometimes we redefine that word to mean something else entirely…