Application Architecture Determines Application Performance
Application architecture determines application performance. That might seem rather obvious. But apparently it is not obvious to some people involved in the software development industry.
Unenlightened personnel in information technology management, for example, may believe that simply switching from one brand of software infrastructure to another will be sufficient to solve an application’s performance challenges.
Such beliefs may be based on a vendor’s benchmark trumpeting, say, 25% better performance than the closest competition’s. But consider the context: if the vendor’s product performs an operation in three milliseconds while the competitor’s takes four, that 25% (one-millisecond) advantage matters little when a highly inefficient architecture is the root cause of the application’s performance problems.
In addition to IT managers and vendor benchmarking teams, other groups – vendor support departments and authors of application performance management literature – recommend simply “tuning” the software infrastructure by fiddling with memory allocations, connection pool sizes, thread pool sizes, and the like.
But if the deployment of an application is insufficiently architected for the expected load, or if the application’s functional architecture is too inefficient in its utilization of computing resources, then no amount of “tuning” will bring about the desired performance and scalability characteristics. Instead, a re-architecting of internal logic, or deployment strategy, or both, will be required.
Examples abound. In the world of domain-driven design [Evans DDD]-based enterprise applications, response time is improved by caching objects mapped from relational databases. But the scalability of a middle-tier application server process is limited by the extent of caching in that process, due to the memory consumption of the cached objects. It matters little to the application’s performance which application server is used, or how generously the thread and connection pools are sized. Once the maximum heap allocation is exhausted, the application has to be re-architected either to cache less and query more, at the cost of performance, or to leverage a larger clustered deployment architecture, with the attendant concurrency-conflict and cache-coherence consequences, in order to continue scaling.
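The “cache less, query more” trade-off can be sketched with a bounded LRU cache. This is an illustrative example, not any particular product’s API: the class name and entry limit are hypothetical, and the bound is on entry count rather than bytes, but the principle is the same – once the limit is reached, the least-recently-used object is evicted and must be re-queried from the database on its next access, trading response time for a predictable heap footprint.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical bounded entity cache built on java.util.LinkedHashMap.
// With access-order iteration enabled, the eldest entry is the
// least-recently-used one, so overriding removeEldestEntry yields
// a simple LRU eviction policy that caps heap consumption.
public class BoundedEntityCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedEntityCache(int maxEntries) {
        super(16, 0.75f, true); // true = access-order, needed for LRU
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict once the cap is exceeded
    }

    public static void main(String[] args) {
        BoundedEntityCache<Long, String> cache = new BoundedEntityCache<>(2);
        cache.put(1L, "customer-1");
        cache.put(2L, "customer-2");
        cache.get(1L);               // touch 1 so it is most recently used
        cache.put(3L, "customer-3"); // evicts 2, the least recently used
        System.out.println(cache.containsKey(2L)); // false: must re-query
        System.out.println(cache.size());          // 2: heap use is bounded
    }
}
```

The architectural point is that the bound itself is the decision: a smaller cache keeps one process scalable at the cost of more database round trips, while removing the bound merely postpones heap exhaustion.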
In the SOA world, some BPEL products install as Java Platform Enterprise Edition applications deployed into a JEE container, and in turn support their own deployment semantics in which BPEL processes are deployed to the installed BPEL server. Naïve architects of BPEL systems set up single-container installations, deploy many tens of BPEL processes to the server, subject the system to a load that instantiates thousands of concurrent BPEL process instances, then suspect product defects when performance tanks from the heap and CPU consumption of all the XML processing. But this is not the fault of any product, and it can’t be solved with “tuning”. Rather, the application’s deployment and functional architecture needs to recognize that computing resources are finite and limited, and ensure sufficient resources and computing efficiency for the expected load, by leveraging architecture patterns for SOA applications.
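Why thousands of concurrent BPEL instances overwhelm a single container can be seen with a back-of-envelope capacity check. All the numbers below are assumptions chosen for illustration (peak concurrency, per-instance XML state, and heap size vary widely by product and process design); the point is only that the arithmetic, not any product defect, predicts the failure.

```java
// Back-of-envelope capacity sketch with illustrative (assumed) numbers:
// the XML state of all concurrent process instances must fit in the heap.
public class CapacitySketch {
    public static void main(String[] args) {
        long concurrentInstances = 5_000; // assumed peak concurrent BPEL instances
        long xmlStatePerInstanceMB = 2;   // assumed DOM trees + process variables
        long maxHeapMB = 4_096;           // assumed single-JVM heap allocation

        long requiredMB = concurrentInstances * xmlStatePerInstanceMB;
        System.out.println(requiredMB);             // 10000 MB of state needed
        System.out.println(requiredMB > maxHeapMB); // true: one JVM cannot hold it
    }
}
```

Under these assumptions the expected load needs roughly 10 GB of instance state against a 4 GB heap, so the remedy is architectural: partition processes across containers, reduce per-instance state, or throttle concurrency – not “tuning”.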
In the data grid / extreme transaction processing world, there is a whole raft of new considerations for application architects to ponder: the amount of grid interaction by the application in one response; the location of processing in the application and grid; the ratio of cluster members to processor cores in the grid tier of architecture. If a grid-based application performs or scales poorly because of too much grid interaction in one response, or because of overloaded processors in the deployment architecture, this again cannot be solved by simply tweaking the “configuration” of the grid product. Instead, the application architect has to be cognizant of the response time breakdown and resource utilization characteristics of the application’s architecture, and improve performance and scalability by eliminating response time contributions and resource contention.
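The cost of excessive grid interaction in one response can be sketched with a deliberately crude latency model. Nothing here is a real grid product’s API – the class, method names, and the 2 ms per-call figure are assumptions – and the model ignores serialization and payload-size effects; it only shows why round trips, not configuration settings, dominate the response time.

```java
// Hypothetical latency model: each call to the grid pays a fixed network
// cost, so N individual gets cost roughly N * latency while one bulk
// getAll-style request costs roughly a single latency.
public class GridRoundTrips {
    static final int LATENCY_MS = 2; // assumed per-call network round-trip cost

    static int costOfIndividualGets(int keyCount) {
        return keyCount * LATENCY_MS; // one round trip per key
    }

    static int costOfBulkGet(int keyCount) {
        return LATENCY_MS; // one round trip for the whole key set (payload ignored)
    }

    public static void main(String[] args) {
        int keys = 100;
        System.out.println(costOfIndividualGets(keys)); // 200 ms of pure latency
        System.out.println(costOfBulkGet(keys));        // 2 ms of pure latency
    }
}
```

Reducing a hundred round trips to one, or moving the processing to where the data lives, changes the response time by two orders of magnitude in this model; no amount of tweaking the grid product’s “configuration” can do that.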
In the end, all vendor products and application architectures are constrained by the same fundamental principles of distributed computing and underlying physics: applications, and the products they use, run as processes on computers of limited capacity, communicating with each other via protocol stacks and links of non-zero latency. Therefore people need to appreciate that application architecture is the primary determinant of application performance and scalability. Those quality attributes cannot be miraculously improved by some silver-bullet switch of software brands, or by infrastructure “tuning”. Instead, improvements in those areas require the hard work of carefully considered (re-)architecting.
This work is licensed under a Creative Commons Attribution 3.0 License.