Planning for performance is not a premature optimization

Tony Hoare's comments on premature optimization have been with us since 1974. And while it is true that we shouldn't try to solve performance problems we don't have, planning for performance is something we should do. So when is a performance-related activity premature, and when is it necessary? The answer depends on what phase your project is in.

Project development may have moved from a waterfall approach to a more iterative development cycle. Even so, projects still go through distinct phases. Early on the focus is on discovery, while later it will have shifted to deployment issues. Naturally, the performance-related activities should also shift as the project progresses. In the discovery phases, we need to focus on defining performance requirements and benchmarking. If performance is a requirement that matters to your users, it needs to be treated like any other non-functional or functional requirement. In addition, working with users to define performance requirements sets expectations and gives us a chance to adjust them up front.
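
As a concrete illustration, a performance requirement can be written down as explicitly as a functional one. The following is only a sketch: the class name, fields, and figures are hypothetical and would come out of the conversations with your users.

    import java.time.Duration;

    // Hypothetical value type that records an agreed performance requirement
    // alongside the functional requirements, so it can be reviewed with users
    // and checked against later in performance tests.
    public final class PerformanceRequirement {
        private final String operation;       // e.g. "search the catalogue"
        private final Duration responseTime;  // agreed response-time budget
        private final int concurrentUsers;    // load under which the budget must hold

        public PerformanceRequirement(String operation, Duration responseTime, int concurrentUsers) {
            this.operation = operation;
            this.responseTime = responseTime;
            this.concurrentUsers = concurrentUsers;
        }

        @Override
        public String toString() {
            return operation + ": at most " + responseTime.toMillis()
                    + " ms with " + concurrentUsers + " concurrent users";
        }
    }

Stating the expectation as, say, new PerformanceRequirement("search the catalogue", Duration.ofMillis(500), 250) makes the negotiated target visible instead of leaving it implicit.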

The next phase of discovery comes when we define our architecture. The performance tool of choice in this case is benchmarking. We want to benchmark our architectural choices to ensure that they will allow us to meet our performance requirements. This is not much different from validating our architectural choices to ensure they will meet our functional requirements.
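
A benchmark at this level doesn't need a sophisticated harness to be useful. In the minimal sketch below, the two candidate workloads are hypothetical stand-ins for prototypes of competing architectural options, and the warm-up and iteration counts are illustrative only.

    // Time the same end-to-end scenario against two candidate architectures
    // and compare the results against the performance requirements.
    public final class ArchitectureBenchmark {

        static long timeMillis(Runnable workload, int warmup, int iterations) {
            for (int i = 0; i < warmup; i++) {
                workload.run();               // let caches, pools, and the JIT settle
            }
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                workload.run();
            }
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) {
            Runnable candidateA = () -> { /* drive prototype A end to end */ };
            Runnable candidateB = () -> { /* drive prototype B end to end */ };

            System.out.println("Candidate A: " + timeMillis(candidateA, 100, 1_000) + " ms");
            System.out.println("Candidate B: " + timeMillis(candidateB, 100, 1_000) + " ms");
        }
    }

Even a rough comparison like this can rule an architectural option in or out before we have committed to it.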

After the architecture has been defined, we will move to a phase where coding dominates our activities. Here we should be performance testing our components to ensure that they will support our performance requirements. If this sounds a bit like unit testing, it is. The key is to focus on the right level of granularity. While unit testing focuses on small bits of functionality, performance testing cannot: performance testing small units of code is known as micro-benchmarking, and micro-benchmarking is notoriously difficult, often requiring that we contort our code in unnatural ways to get meaningful results. This is why we need to focus on larger units of code, at the component level or greater.
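
For example, a component-level performance test can ride on the same infrastructure as your unit tests. The sketch below assumes JUnit 5; OrderService, its in-memory collaborator, and the 200 ms budget are hypothetical stand-ins for one of your components and its agreed requirement.

    import static org.junit.jupiter.api.Assertions.assertTimeout;

    import java.time.Duration;
    import org.junit.jupiter.api.Test;

    class OrderServicePerformanceTest {

        // Hypothetical component under test, wired with an in-memory collaborator.
        private final OrderService orders = new OrderService(new InMemoryCatalogue());

        @Test
        void placesAnOrderWithinItsResponseTimeBudget() {
            // Exercise the component through its public interface rather than a
            // tiny code fragment, avoiding the pitfalls of micro-benchmarking.
            assertTimeout(Duration.ofMillis(200), () -> {
                orders.place("customer-42", "sku-1", 3);
            });
        }
    }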

Finally, our code will be deployed. With the performance testing we've already done, we should be fairly confident that the deployed application will meet our performance requirements. That said, there are things that happen as a result of integration that we couldn't test for or predict. To catch these integration-level performance bugs, we should conduct a full round of performance testing as part of the acceptance testing process.
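
At this stage the tests exercise the integrated, deployed system rather than individual components, and a dedicated load-testing tool would normally do the heavy lifting. Still, the idea fits in a short sketch, assuming Java 11+; the URL, user count, and request count below are illustrative placeholders.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Drive the deployed test environment with a handful of concurrent users
    // and report the worst response time observed across all requests.
    public final class AcceptanceLoadTest {

        public static void main(String[] args) throws Exception {
            int users = 25;                   // concurrent virtual users (illustrative)
            int requestsPerUser = 40;         // requests sent by each user (illustrative)
            URI target = URI.create("https://test-env.example.com/search?q=widget");

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(target).GET().build();
            ExecutorService pool = Executors.newFixedThreadPool(users);
            List<Future<Long>> worstTimes = new ArrayList<>();

            for (int u = 0; u < users; u++) {
                worstTimes.add(pool.submit(() -> {
                    long worstNanos = 0;
                    for (int i = 0; i < requestsPerUser; i++) {
                        long start = System.nanoTime();
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                        worstNanos = Math.max(worstNanos, System.nanoTime() - start);
                    }
                    return worstNanos / 1_000_000;   // worst response time in milliseconds
                }));
            }

            long worstOverallMillis = 0;
            for (Future<Long> worst : worstTimes) {
                worstOverallMillis = Math.max(worstOverallMillis, worst.get());
            }
            pool.shutdown();
            System.out.println("Worst observed response time: " + worstOverallMillis + " ms");
        }
    }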

If all of this sounds like a lot of extra work, rest assured: it is more work, but one shouldn't consider it extra. After all, you are only doing what is necessary to meet the requirements specified by the project's sponsors. Also, if you are practicing continuous integration, piggybacking continuous performance testing on it shouldn't add much effort. We should also note that, just as it is cheaper to catch a functional bug before deployment, it is also cheaper to catch a performance bug before deployment. With proper planning, performance won't be premature.
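
As a closing illustration, such a piggybacked check can be as simple as running a representative scenario during the build and failing the build when the agreed budget is exceeded. In the sketch below, runCheckoutScenario() and the 500 ms budget are hypothetical placeholders; the non-zero exit code is what fails the CI step.

    import java.time.Duration;

    // Run a representative scenario during the continuous integration build and
    // fail the build (non-zero exit) if the agreed performance budget is exceeded.
    public final class ContinuousPerformanceCheck {

        static Duration averageTime(Runnable scenario, int iterations) {
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                scenario.run();
            }
            return Duration.ofNanos((System.nanoTime() - start) / iterations);
        }

        public static void main(String[] args) {
            Duration budget = Duration.ofMillis(500);   // from the performance requirements
            Duration observed = averageTime(ContinuousPerformanceCheck::runCheckoutScenario, 50);

            if (observed.compareTo(budget) > 0) {
                System.err.println("Performance regression: " + observed.toMillis()
                        + " ms exceeds the " + budget.toMillis() + " ms budget");
                System.exit(1);                         // a non-zero exit fails the CI step
            }
            System.out.println("Within budget: " + observed.toMillis() + " ms");
        }

        private static void runCheckoutScenario() {
            // Placeholder: drive a representative, component-level scenario here.
        }
    }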
