The Road to Performance Is Littered with Dirty Code Bombs
Improving the performance of your code will almost certainly involve refactoring it. Whilst we may know how to avoid the messy parts of our own code, the same will not be true for others, or for the times when we must tune code written by others. In those cases, every overly complex or highly coupled chunk of code is a dirty code bomb lying in wait to derail the effort. The first casualty of dirty code will be your schedule. If the road is smooth, it's easy to predict when you'll reach your destination. Line the road with dirty code, and it becomes very difficult to make a sane prediction.
Consider the case where you find an execution hot spot. The normal course of action is to reduce the strength of the underlying algorithm. Let's say you respond to your manager's request for an estimate with an answer of 3-4 hours. As you apply the fix, you quickly realize that you've broken a dependent part. Since closely related things are often necessarily coupled, this breakage is most likely expected and accounted for. But what happens if fixing that dependency breaks other dependent parts in turn? Moreover, the farther a dependency is from the source, the less likely you are to recognize it and account for it in your estimate. It is these unexpected breakages that put your schedule at risk. All of a sudden, your 3-4 hour estimate has ballooned to 3-4 weeks; often this expansion happens one or two days at a time. It is not uncommon to see "quick" refactorings end up taking several months to complete. In these instances, the damage to the credibility and political capital of the responsible team ranges from severe to terminal. If only we had a tool to help us identify and measure this risk.
In fact, we have many ways of measuring the degree and depth of coupling and complexity in our code. These include the Law of Demeter, Coupling Between Objects (CBO), fan in, fan out, efferent and afferent couplings, McCabe's cyclomatic complexity, and so on. These metrics describe features in our code that we can look for and count, and the magnitudes of those counts speak to code quality. Consider fan out, for example. Fan out is defined as the number of classes referenced, directly or indirectly, by a class of interest. Think of it as the number of classes that must be compiled before the class of interest can be compiled. If this value is small for every class in an application, you can conclude that the couplings are shallow. Conclusion: couplings pose a minimal risk were you to refactor.
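To make the metric concrete, fan out can be counted mechanically once you have a dependency map. The sketch below is illustrative only: it assumes a precomputed map from each class to the classes it references directly (a real metrics tool would extract this from source or bytecode), and the class names are hypothetical.

```python
from collections import deque

def fan_out(deps, cls):
    """Direct fan out: the number of classes cls references directly."""
    return len(deps.get(cls, set()))

def transitive_fan_out(deps, cls):
    """Indirect fan out: every class that must be compiled before cls,
    found by walking the reference graph breadth-first."""
    seen = set()
    queue = deque(deps.get(cls, set()))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(deps.get(dep, set()))
    return len(seen)

# Hypothetical dependency map: class -> classes it references directly.
deps = {
    "OrderService": {"Order", "PricingEngine"},
    "PricingEngine": {"TaxTable"},
    "Order": set(),
    "TaxTable": set(),
}

print(fan_out(deps, "OrderService"))             # 2 direct references
print(transitive_fan_out(deps, "OrderService"))  # 3 including TaxTable
```

A class with a large transitive fan out is exactly the kind of dirty code bomb the metric is meant to flag: touching it risks a long cascade of recompilation and breakage.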
A downside of software metrics is that the huge array of numbers that metrics tools produce can be intimidating to the uninitiated. That said, software metrics are a powerful tool in our fight for clean code, one that can help us eliminate dirty code bombs before they become a serious risk to a performance tuning exercise.
This work is licensed under a Creative Commons Attribution 3.0 license.