The Road to Performance Is Littered with Dirty Code Bombs


More often than not, performance tuning a system requires you to alter code. When we need to alter code, every chunk that is overly complex or highly coupled is a dirty code bomb lying in wait to derail the effort. The first casualty of dirty code will be your schedule. If the way forward is smooth, it will be easy to predict when you'll finish. Unexpected encounters with dirty code will make it very difficult to make a sane prediction.

Consider the case where you find an execution hot spot. The normal course of action is to reduce the strength of the underlying algorithm. Let's say you respond to your manager's request for an estimate with an answer of 3-4 hours. As you apply the fix, you quickly realize that you've broken a dependent part. Since closely related things are often necessarily coupled, this breakage is most likely expected and accounted for. But what happens if fixing that dependency results in other dependent parts breaking? Furthermore, the farther away the dependency is from the origin, the less likely you are to recognize it as such and account for it in your estimate. All of a sudden your 3-4 hour estimate can easily balloon to 3-4 weeks. Often this unexpected inflation of the schedule happens one or two days at a time. It is not uncommon to see "quick" refactorings eventually take several months to complete. In these instances, the damage to the credibility and political capital of the responsible team will range from severe to terminal. If only we had a tool to help us identify and measure this risk.
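What "reducing the strength" looks like depends entirely on the hot spot, but here is a minimal sketch of the idea, assuming a hypothetical membership test that scans a list inside a loop (the names are made up for illustration):

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class HotSpotFix {

        // Before: every membership test scans the whole list, so a caller
        // checking m items against n ids pays O(n * m) in total.
        static boolean isKnownSlow(List<String> ids, String id) {
            return ids.contains(id); // O(n) linear scan
        }

        // After: one O(n) conversion up front, then O(1) lookups on average.
        // Any caller that quietly relied on the List's ordering or duplicates
        // is exactly the kind of dependent part this change can break.
        static boolean isKnownFast(Set<String> ids, String id) {
            return ids.contains(id);
        }

        public static void main(String[] args) {
            List<String> ids = List.of("a", "b", "c");
            Set<String> idSet = new HashSet<>(ids);
            System.out.println(isKnownSlow(ids, "b"));   // true
            System.out.println(isKnownFast(idSet, "b")); // true
        }
    }

The two-line change is what the 3-4 hour estimate covers; it is the ripple through the callers that the estimate cannot see.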

In fact, we have many ways of measuring and controlling the degree and depth of coupling and the complexity of our code. Software metrics can be used to count the occurrences of specific features in our code, and the values of these counts do correlate with code quality. Two of the many metrics that measure coupling are fan-in and fan-out. Consider fan-out: it is defined as the number of classes referenced either directly or indirectly from a class of interest. You can think of this as a count of all the classes that must be compiled before your class can be compiled. Fan-in, on the other hand, is a count of all the classes that depend upon the class of interest. Knowing fan-out and fan-in, we can calculate an instability factor using I = fo / (fi + fo). As I approaches 0, the package becomes more stable. As I approaches 1, the package becomes more unstable. Packages that are stable are low-risk targets for recoding, whereas unstable packages are more likely to be filled with dirty code bombs. The goal in refactoring is to move I closer to 0.
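The computation itself is trivial; the counts are the hard part and typically come from a metrics tool. A minimal sketch in Java, with the class names and counts invented for illustration:

    public class Instability {

        // I = fo / (fi + fo): values near 0 are stable, values near 1 are unstable.
        static double instability(int fanIn, int fanOut) {
            if (fanIn + fanOut == 0) {
                return 0.0; // no couplings at all, so nothing to destabilize
            }
            return (double) fanOut / (fanIn + fanOut);
        }

        public static void main(String[] args) {
            // Hypothetical counts, as a metrics tool might report them.
            System.out.printf("OrderValidator: I = %.2f%n", instability(12, 2)); // 0.14, stable
            System.out.printf("ReportBuilder:  I = %.2f%n", instability(2, 12)); // 0.86, a likely dirty code bomb
        }
    }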

When using metrics, one must remember that they are only rules of thumb. Purely on the maths, we can see that increasing fi without changing fo will move I closer to 0. There is, however, a downside to a very large fan-in value: such classes will be more difficult to alter without breaking dependents. Also, without addressing fan-out you're not really reducing your risks, so some balance must be applied.
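To make that concrete: a class with fi = 2 and fo = 6 has I = 6 / (2 + 6) = 0.75, firmly on the unstable end of the scale. Attract sixteen more dependents and I drops to 6 / (18 + 6) = 0.25, yet nothing about the class's own six outbound couplings has improved, and there are now eighteen dependents that can break whenever it changes.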

One downside to software metrics is that the huge array of numbers that metrics tools produce can be intimidating to the uninitiated. That said, software metrics can be a powerful tool in our fight for clean code. They can help us to identify and eliminate dirty code bombs before they are a serious risk to a performance tuning exercise.


By Kirk Pepperdine

This work is licensed under a Creative Commons Attribution 3.0 License.


Back to 97 Things Every Programmer Should Know home page
