Message Passing Leads to Better Scalability in Parallel Systems

From WikiContent

Revision as of 19:04, 7 February 2009 by Russel (Talk | contribs)

Programmers are taught from the very outset of their study of computing that concurrency and parallelism are hard, that only the very best can ever hope to get them right, and that even they get them wrong. There is invariably great focus on threads, semaphores, monitors, and how hard it is to make concurrent access to variables thread-safe. True, there are many difficult problems, and they can be very hard to solve. But what is at the core of the problem? Shared memory. Almost all the problems of concurrency and parallelism that people go on and on about relate to the use of shared, mutable memory. The answer seems obvious: either forgo concurrency and parallelism, or eschew shared memory!

Forgoing concurrency and parallelism is almost certainly not an option. Computers gain more and more cores with each hardware generation, so harnessing parallelism becomes more and more important. We can no longer rely on ever-increasing processor clock speeds to improve application performance; only by applying parallelism will the performance of applications improve. Obviously, not improving performance is an option, but it is unlikely to be acceptable to users.

So can we eschew shared memory? Definitely.

Instead of using threads and shared memory as our programming model, we can use processes and message passing. Languages such as Erlang (and occam before it) have shown that processes are a very successful way of programming concurrent and parallel systems. Such systems do not have all the synchronization stresses that shared memory, multi-threaded systems have. Moreover, there is a mathematical theory, Communicating Sequential Processes (CSP), that can be applied as part of the engineering of concurrent and parallel systems.
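To make this concrete, here is a minimal Python sketch of the process-and-message-passing style, using the standard library's multiprocessing module. The producer/consumer names and the None end-of-stream sentinel are illustrative conventions, not part of any framework; the only communication between the two processes is via the queues, with no shared mutable state.

```python
from multiprocessing import Process, Queue

def producer(out):
    # Send messages down the channel; no shared mutable state.
    for i in range(5):
        out.put(i * i)
    out.put(None)  # sentinel: end of stream

def consumer(inp, results):
    # Receive messages until the sentinel arrives, then report.
    total = 0
    while True:
        item = inp.get()
        if item is None:
            break
        total += item
    results.put(total)

if __name__ == "__main__":
    channel, results = Queue(), Queue()
    p = Process(target=producer, args=(channel,))
    c = Process(target=consumer, args=(channel, results))
    p.start(); c.start()
    print(results.get())  # 0 + 1 + 4 + 9 + 16 = 30
    p.join(); c.join()
```

The same two functions would work unchanged with threads and a thread-safe queue; it is the discipline of communicating only by message that matters, not the mechanism underneath.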

We can go further and introduce dataflow systems as a way of computing. In dataflow, there is no explicitly programmed control flow. Instead, a directed graph of operators connected by data paths is set up, and data is then fed into the system. Evaluation is controlled by the readiness of data within the system. Definitely no synchronization problems.
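A dataflow graph can be sketched in a few lines: each operator is a worker connected to its neighbours by queues, and it fires as soon as a value is ready on every input edge. The node helper below is a hypothetical illustration of the idea, not a real dataflow library.

```python
import threading, queue

def node(fn, inputs, output):
    # An operator fires when one value is ready on each input edge.
    while True:
        args = [q.get() for q in inputs]
        if any(a is None for a in args):
            output.put(None)  # propagate end-of-stream and stop
            return
        output.put(fn(*args))

# Graph: (a, b) -> add -> double -> results
a, b, s1, s2 = (queue.Queue() for _ in range(4))
threading.Thread(target=node, args=(lambda x, y: x + y, [a, b], s1)).start()
threading.Thread(target=node, args=(lambda x: 2 * x, [s1], s2)).start()

# Feed data in; evaluation is driven purely by data readiness.
for x, y in [(1, 2), (3, 4)]:
    a.put(x); b.put(y)
a.put(None); b.put(None)

outputs = list(iter(s2.get, None))
print(outputs)  # [6, 14]
```

Note that the main program never coordinates the operators directly: each result appears on the final queue as soon as its inputs have flowed through the graph.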

Having said all this, C, C++, Java, Python, and Groovy are the principal languages of systems development, and all of them are presented to programmers as languages for developing shared-memory, multi-threaded systems. So what can be done? The answer is to use (or, if they don't exist, create) libraries and frameworks that provide processes and message passing, and to avoid all use of shared, mutable memory.

In the C and C++ (also Fortran) worlds, MPI (Message Passing Interface) is a popular tool. Its problem is that it provides no direct support for non-primitive data types: programmers must either work with primitive data types or build a lot of extra infrastructure.

Various people have tried to apply MPI in Java, but it has failed, and this is probably a good thing. Instead, there is a growing and increasingly sophisticated infrastructure based on using special data structures to replace all access to shared memory: abstraction is used correctly to raise the level at which programmers work when developing concurrent and parallel systems. These data structures are tools for sending and receiving messages.

Python is going a different route because of an implementation issue in the standard system: the Global Interpreter Lock in CPython prevents threads from executing Python bytecode in parallel. Parallel Python systems therefore make use of multiple instances of the Python interpreter passing messages between them.
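For example, the standard library's multiprocessing module runs callables in separate Python interpreter processes, exchanging arguments and results as messages; a minimal sketch:

```python
from multiprocessing import Pool

def square(n):
    # Runs in a separate Python interpreter process; arguments and
    # results cross the process boundary as serialized messages.
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

From the programmer's point of view this looks like an ordinary map over a sequence; the message passing between interpreter instances is handled entirely by the library.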

All in all, not programming with shared memory, but instead using message passing, is likely to be the most successful way of implementing systems that harness the parallelism that is now endemic in computer hardware. Bizarrely perhaps, although processes predate threads as a unit of concurrency, the future seems to be in using threads to implement processes.
