It's never too early to think about performance and resiliency testing
Many believe that there is no need to perform technical tests, such as performance and failover tests, until very late in the development cycle. The reasoning is that there is no sense in making something fast and resilient if it doesn't actually perform the required function.
That's true, up to a point. However, if you aren't looking at performance until late in the project cycle, you have lost an incredible amount of information about when performance changed. If you're well within your non-functional parameters, great job. Admittedly, this axiom doesn't apply if there is no real concern about meeting the performance or load requirements. In many cases, however, performance is going to be an important architectural and design criterion. If it is, performance testing should begin as soon as possible. If you're using an Agile methodology based on two-week iterations, I'd say performance testing should be included in the process no later than the third iteration.
Why is this so important? The biggest reason is that, when performance falls off a cliff, you know which changes were made at that point. Instead of having to think about the entire development effort, you can focus on the most recent changes. Just as with functional defect resolution triggered by continuous integration, doing performance testing early and often gives you a narrow range of changes on which to focus. In early testing, you may not even try to diagnose performance problems, but you do have a baseline of performance figures to work from.
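One lightweight way to get that baseline into a continuous-integration run is a check that records a timing on its first execution and fails when later runs regress past a tolerance. The sketch below is illustrative, not a prescribed tool: the function names, the JSON baseline file, and the `operation_under_test` stand-in are all hypothetical.

```python
import json
import time
from pathlib import Path

def operation_under_test():
    # Hypothetical stand-in; in a real project this would exercise
    # a performance-sensitive path through your own code.
    return sum(i * i for i in range(100_000))

def measure(repeats=5):
    """Return the best-of-N wall-clock time for the operation, in seconds.

    Best-of-N damps scheduler noise better than a single sample."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        operation_under_test()
        best = min(best, time.perf_counter() - start)
    return best

def check_against_baseline(baseline_file="perf_baseline.json", tolerance=1.5):
    """Record a baseline on the first run; afterwards, pass only if the
    current timing is within `tolerance` of it (1.5 = up to 50% slower)."""
    current = measure()
    path = Path(baseline_file)
    if not path.exists():
        path.write_text(json.dumps({"seconds": current}))
        return True  # first run just establishes the baseline
    baseline = json.loads(path.read_text())["seconds"]
    return current <= baseline * tolerance
```

Run as part of every build, a check like this won't diagnose anything, but when it starts failing, the commits since the last green build are exactly the narrow range of changes the text describes.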
Another reason to start early is that technical testing is notoriously difficult to get going. Setting up appropriate environments, generating the proper data sets, and defining the necessary test cases all take a lot of time.
This work is licensed under a Creative Commons Attribution 3.0 license.