From Requirements to Tables to Code and Tests


It is generally reckoned that the process of getting from requirements to implementation is error-prone and expensive. Many reasons are given for this: a lack of clarity on the part of the user; incomplete or inadequate requirements; misunderstandings on the part of the developer; catch-all else statements; and so on. We then test like crazy to show that what we have done at least satisfies the law of least surprise! I want to focus here on the business logic, because it is the execution of the business logic that delivers the value of an application.

Let's abstract business logic a little. We can imagine that for some circumstance (say, evaluation of a client or a risk, or simply the next step in a process) there are some criteria that, when satisfied, determine one or more actions to be performed. It would be convenient to capture the requirements in this form: a list of the criteria, C1, C2, ..., Cn, and a list of corresponding actions. Then for each combination of criteria we have a selection of actions, Am. If each of the criteria is simply a condition that evaluates to a Boolean, we know that there are up to 2^n combinations of criteria to be addressed. Given such a requirements model we can prove completeness!
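To make this concrete, here is a minimal sketch in Python. The criteria, actions, and table entries are invented purely for illustration; the point is that the decision table is nothing more than a mapping from a tuple of Boolean outcomes to a list of actions.

    from itertools import product

    # Three hypothetical criteria for evaluating an order, so 2^3 = 8 combinations.
    CRITERIA = ("is_existing_client", "credit_ok", "order_over_limit")

    # Decision table: each key is one combination of criteria outcomes,
    # each value is the list of actions to perform for that combination.
    DECISION_TABLE = {
        (True,  True,  True):  ["refer_to_manager"],
        (True,  True,  False): ["accept_order"],
        (True,  False, True):  ["reject_order", "notify_sales"],
        (True,  False, False): ["accept_order", "flag_account"],
        (False, True,  True):  ["refer_to_manager"],
        (False, True,  False): ["accept_order", "open_account"],
        (False, False, True):  ["reject_order"],
        (False, False, False): ["reject_order"],
    }

    # All 2^n combinations, used for the completeness checks below.
    ALL_COMBINATIONS = set(product((True, False), repeat=len(CRITERIA)))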

If the requirements are captured in this form, we have an opportunity to (largely) automate the code generation process. In doing this we have massively reduced the gap between the business community and the developer. Some readers may recognise what I am describing: decision tables, which have been around for at least 40 years! Although there are processors around for decision tables, they are not necessary to get many of the benefits. For example, the enumeration of the criteria and the actions is itself a significant step in the abstraction and understanding of the problem. The formalism of the decision table representation also gives the developer the opportunity to analyse the requirements and detect logical nonsense in them. Feeding this back to the user, we can overcome some of the tension in the relationship.
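The formalism makes that analysis mechanical. Continuing the sketch above (check_completeness is an invented helper, not part of any decision table product), completeness is simply the question of whether every combination of outcomes has an entry, and any gap can be reported back to the user in their own vocabulary:

    from itertools import product

    def check_completeness(table, criteria):
        """Report any combinations of criteria outcomes the table fails to cover."""
        all_combinations = set(product((True, False), repeat=len(criteria)))
        missing = all_combinations - set(table)
        for combo in sorted(missing, reverse=True):
            readable = ", ".join(f"{name}={value}"
                                 for name, value in zip(criteria, combo))
            print(f"No actions specified for: {readable}")
        return not missing

    # Any hole in DECISION_TABLE shows up immediately, in the user's terms.
    assert check_completeness(DECISION_TABLE, CRITERIA)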

For implementation, one choice would be to write (or generate) conventional if-then-else logic. We could also use the combination of criteria to index into an in-memory table representation of the decision table. This preserves the clear relationship between the decision table used for requirements specification and its implementation, narrowing the gap between specification and implementation even further. Of course, nothing says the table need be in memory: we could store it in a relational database and index directly to the appropriate set of actions. The general idea is to capture the logic in a tabular form and interpret it at runtime. You can even allow runtime modification of the table content.
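Continuing the running sketch, a minimal in-memory interpreter (the predicates and thresholds are invented for illustration) evaluates each criterion against the case at hand and uses the resulting tuple as the index, so the table itself drives execution:

    def decide(table, predicates, case):
        """Evaluate each criterion against the case, then look up the actions."""
        key = tuple(predicate(case) for predicate in predicates)
        return table[key]

    # Hypothetical predicates matching the three criteria in the sketch above.
    PREDICATES = (
        lambda case: case["client_id"] is not None,  # is_existing_client
        lambda case: case["credit_score"] >= 600,    # credit_ok
        lambda case: case["order_value"] > 10_000,   # order_over_limit
    )

    case = {"client_id": 42, "credit_score": 710, "order_value": 2_500}
    print(decide(DECISION_TABLE, PREDICATES, case))  # ['accept_order']

Swapping the dictionary lookup for a SELECT against a rules table is the relational variant of the same idea.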

And so to testing. If the code generation process from the table has been largely automated, there is little need to test every possible path through the table: the code is the table; the table is the code. Certain tests still need to be done, but these are more along the lines of configuration management and reality checking than path testing. The total test effort is significantly reduced because we have moved the work into unambiguous requirements specification.
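A sketch of what those remaining checks might look like, continuing the example above (IMPLEMENTED_ACTIONS is an invented registry): they probe the table's configuration, not the paths through generated code.

    # Reality checks on the table itself, not path tests through code.
    IMPLEMENTED_ACTIONS = {"accept_order", "reject_order", "refer_to_manager",
                           "notify_sales", "flag_account", "open_account"}

    def test_table_sanity():
        # Completeness: every combination of criteria outcomes is covered.
        assert set(DECISION_TABLE) == ALL_COMBINATIONS
        # Reality check: every action the table names is actually implemented.
        for actions in DECISION_TABLE.values():
            assert set(actions) <= IMPLEMENTED_ACTIONS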

Implementations of this type of system range from the humble programmer using the technique as a private coding method all the way to full runtime environments with natural language translation to generate rule tables for interpretation.


By George Brooke

This work is licensed under a Creative Commons Attribution 3.0 United States License (http://creativecommons.org/licenses/by/3.0/us/).

Back to 97 Things Every Programmer Should Know home page
