Contribution 13


Many believe that there is no need to perform technical tests, such as performance and failover tests, until very late in the development cycle. The reasoning goes that there is no sense in making something fast and resilient if it doesn't actually perform the required function.

That's true, up to a point. However, if you aren't even looking at, say, performance until late in the cycle, you have lost an incredible amount of information about when things changed. Now, if you're well within your non-functional parameters, great job; indeed, this axiom likely doesn't apply if there isn't at least some concern about meeting the performance or load requirements. In many cases, however, performance is going to be an important architectural and design criterion. If it is, performance testing should begin as soon as possible. If you're using an Agile methodology with two-week iterations, I'd say it should begin no later than iteration three without a really good reason.

Why is this so important? The biggest reason is that at least you know what kinds of changes you've made if, or more likely when, you notice performance has fallen off a cliff. Instead of having to think about the entire development effort, you can focus on the recent changes. Just as with functional defect resolution triggered by continuous integration, doing performance testing early and often gives you a narrow range of changes on which to focus your attention. In early testing, you might not even try to diagnose performance problems, but you do establish a baseline of performance figures to work from.
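As a rough sketch of what such a baseline check might look like, the following Python snippet times a hypothetical process_order function against an assumed 200 ms budget. The function name and the budget are illustrative, not from the original text; the point is only that each build records a figure you can compare against the last known-good one.

 # Minimal early-performance baseline check (sketch; names and budget assumed).
 import time
 
 def process_order(order):
     # Stand-in for the real code under test.
     return sum(order) * 1.2
 
 def measure(fn, arg, runs=100):
     """Return the best wall-clock time in seconds over several runs."""
     best = float("inf")
     for _ in range(runs):
         start = time.perf_counter()
         fn(arg)
         best = min(best, time.perf_counter() - start)
     return best
 
 if __name__ == "__main__":
     sample_order = list(range(1000))
     elapsed = measure(process_order, sample_order)
     budget = 0.200  # assumed baseline carried forward from an earlier iteration
     print("process_order: %.2f ms (budget %.0f ms)" % (elapsed * 1000, budget * 1000))
     assert elapsed < budget, "performance regression against baseline"

Run as part of the build, even a crude check like this turns "performance fell off a cliff at some point" into "performance fell off a cliff in this iteration's changes."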

Another reason to start early is that technical testing is notoriously difficult to get going. Setting up appropriate environments, generating the proper data sets, and defining the necessary test cases all take a lot of time. Starting early spreads that cost out instead of leaving it for the end, as the sketch below suggests.
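To give a feel for the data-set side of that effort, here is a small Python sketch that generates a synthetic customer file for load testing. The field names, row count, and file name are assumptions made for illustration only.

 # Sketch: generate a synthetic CSV data set for load testing (all names assumed).
 import csv
 import random
 import string
 
 def random_name(length=8):
     return "".join(random.choices(string.ascii_lowercase, k=length))
 
 def write_customers(path, rows=100000):
     with open(path, "w", newline="") as f:
         writer = csv.writer(f)
         writer.writerow(["id", "name", "balance"])
         for i in range(rows):
             writer.writerow([i, random_name(), round(random.uniform(0, 10000), 2)])
 
 if __name__ == "__main__":
     write_customers("customers.csv")

Even a toy generator like this takes iteration to get realistic distributions and volumes, which is exactly why the setup work is worth starting in the first few iterations.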

(Work in progress)
