The Road to Performance Is Littered with Dirty Code Bombs

From WikiContent

Revision as of 11:41, 6 July 2009 by Kcpeppe (Talk | contribs)

More often than not, performance tuning a system requires you to alter code. When we need to alter code, every chunk that is overly complex or highly coupled is a dirty code bomb lying in wait to derail the effort. The first casualty of dirty code will be your schedule. If the way forward is smooth, it will be easy to predict when you'll finish. Unexpected encounters with dirty code make it very difficult to offer a sane prediction.

Consider the case where you find an execution hot spot. The normal course of action is to reduce the strength of the underlying algorithm. Let's say you respond to your manager's request for an estimate with an answer of 3-4 hours. As you apply the fix, you quickly realize that you've broken a dependent part. Since closely related things are often necessarily coupled, this breakage is most likely expected and accounted for. But what happens if fixing that dependency results in other dependent parts breaking? Furthermore, the farther away a dependency is from the origin, the less likely you are to recognize it as such and account for it in your estimate. All of a sudden your 3-4 hour estimate can easily balloon to 3-4 weeks. Often this unexpected inflation in the schedule happens one or two days at a time. It is not uncommon to see "quick" refactorings eventually take several months to complete. In these instances, the damage to the credibility and political capital of the responsible team will range from severe to terminal. If only we had a tool to help us identify and measure this risk.

In fact, we have many ways of measuring and controlling the degree and depth of coupling and the complexity of our code. Included are the Law of Demeter, Coupling Between Objects (CBO), and fan-in and fan-out (counts of afferent and efferent couplings, respectively), among others. These metrics describe features in our code that we can look for and count, and the magnitudes of these counts do correlate with code quality. Consider the fan-out metric, for example, where couplings "fan out" from the class of interest. You can think of this as a count of all the classes that must be compiled before your class can be compiled. Fan-in, in contrast, counts the dependencies that "fan in" from classes that reference your class. If for every class in an application these counts are small, you can safely conclude that couplings are shallow and, therefore, pose a minimal risk to refactoring. That said, maintaining low fan-in runs the danger of repeating units of logic in independent modules of your application. A guideline to follow is to fan in locally but violate DRY globally.
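As a rough illustration of the idea, fan-out and fan-in can be approximated at the module level by counting import statements: each module's imports are its fan-out, and inverting that map gives each module's fan-in. The sketch below does this in Python with the standard `ast` module; the three-module "application" at the bottom is purely hypothetical.

```python
# Sketch: approximate fan-out (what a module depends on) and fan-in
# (what depends on a module) by parsing import statements.
import ast
from collections import defaultdict

def fan_out(source: str) -> set:
    """Return the set of module names this source imports: its fan-out."""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module)
    return deps

def fan_in(modules: dict) -> dict:
    """Invert the fan-out map: for each module, who references it."""
    callers = defaultdict(set)
    for name, source in modules.items():
        for dep in fan_out(source):
            callers[dep].add(name)
    return dict(callers)

# Hypothetical three-module application.
app = {
    "billing": "import orders\nimport customers",
    "orders": "import customers",
    "customers": "",
}
print({m: sorted(fan_out(src)) for m, src in app.items()})
print({m: sorted(cs) for m, cs in fan_in(app).items()})
```

Here `customers` has the highest fan-in (both `billing` and `orders` reference it), which is exactly the signal the essay describes: touching that module during a tuning exercise carries the most refactoring risk.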

A downside of software metrics is that the huge array of numbers that metrics tools produce can be intimidating to the uninitiated. That said, software metrics can be a powerful tool in our fight for clean code. They can help us to identify and eliminate dirty code bombs before they become a serious risk to a performance tuning exercise.


By Kirk Pepperdine

This work is licensed under a Creative Commons Attribution 3.0 license


Back to 97 Things Every Programmer Should Know home page
