
For years, CPU innovations focused on increasing the speed at which a single sequence of instructions gets executed (mainly by raising the clock frequency). In recent years this trend has shifted from increasing sequential execution speed to parallel execution (multiple CPUs, hyper-threading, multiple cores). If the trend continues, the number of processor cores will no longer be measured in dual or quad, but in K, M or G.
Today's popular programming languages (both object-oriented and procedural) encourage, or force, us to write software as a sequence of statements that get executed in a specific order. Such a sequence is called an execution thread, and it maps very nicely onto the traditional CPU model with its single instruction pointer. Executing tasks in parallel is made possible by calling a system API that creates a new execution thread. This is a platform feature rather than a language feature, and it requires yet more system APIs to do the required locking and synchronization.
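To illustrate the point (a minimal Python sketch, not from the original post): spawning a thread is an API call rather than a language construct, and any shared state immediately needs an explicit lock from that same platform API.

```python
import threading

counter = 0
lock = threading.Lock()  # synchronization primitive from the platform API

def work(n):
    global counter
    for _ in range(n):
        with lock:  # explicit locking around the shared counter
            counter += 1

# Creating a thread is a library/API call, not part of the language's
# sequential execution model.
threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4 threads x 1000 increments = 4000
```

Forget the lock and the increments race; the language itself offers no help, which is exactly why multithreading is treated as an expert feature.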
Multithreading is generally considered an advanced and error-prone technique. This is why it is mostly used only where parallel execution is an explicit requirement: in server processes to handle multiple client requests, and in GUI applications to allow background processing while the UI keeps handling user events. In situations where tasks could be executed in parallel but there is no direct need to do so, we usually stick to our traditional sequential programming model. This is a great waste of the hardware's parallel processing ability.
My personal experience with non-sequential languages is pretty much limited to SQL and XSLT. Maybe it is time to broaden the horizon and take a look at some of the modern functional languages like F# and Fortress. Who is with me?
3 comments
Check out this article: http://www.defmacro.org/ramblings/fp.html
I too believe that functional languages will be the future of multi-processor/core programming. The article explains why. So far I have been playing with F# and L# on the .NET platform.
ernow
People tend to think sequentially as well; a task usually consists of a series of steps where most steps require previous steps to be completed. At least, that's how they describe a task.
By using techniques like use cases, where flows are described sequentially as well, they are probably reinforced in this way of thinking. Parallel steps could be described with use cases too, but that would make them pretty unreadable and harder to understand for the average functional specifier or end user.
Therefore I think it will probably always be the technician who decides (‘is able to decide’) whether parallel execution is possible.
By the way: using Workflow Foundation, it’s very easy to use parallel execution. You just use a Parallel activity and don’t have to be a hard-core programmer with detailed knowledge of threads and processes. (Technical detail: the workflow host may still decide to execute one parallel task at a time, depending on the scheduler service that is used.)
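Workflow Foundation itself is .NET, but the underlying idea — declaring parallel branches while a scheduler decides how they actually run — can be sketched with Python's `concurrent.futures` (a hedged analogue, not Workflow Foundation code; `step_a`/`step_b` are made-up task names):

```python
from concurrent.futures import ThreadPoolExecutor

def step_a():
    return "a done"

def step_b():
    return "b done"

# Declare the two branches; the executor (like a workflow host's
# scheduler service) decides whether they truly run in parallel or
# one at a time.
with ThreadPoolExecutor() as pool:
    fa = pool.submit(step_a)
    fb = pool.submit(step_b)
    results = [fa.result(), fb.result()]

print(results)  # ['a done', 'b done']
```

The programmer only declares that the branches are independent; no locks or thread management appear in the code.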
“This is a great waste of the hardware’s parallel processing ability”
– Driving 120 km/h on the Dutch highways, while most cars can do at least 180 km/h, can be considered a waste as well 😉
robertka
I agree with Robert. Most tasks we have to program are fixed sequential steps. You can express them in a functional language too, but then you have to “translate” them into a solution-oriented plan, which is very different from writing simple fixed steps; otherwise they cannot be executed in parallel either.
Programming in functional languages also requires a different level of abstraction. On the whole it is considered harder to learn (although for some problems it is much easier). Traditional languages also get more API support, like several grid computing initiatives: http://sun.java.net/sungrid/. This solves a lot of the complexity without having to find programmers smart enough for functional languages.
Note that parallel execution happens anyway. On a server-side machine you handle many short executions in parallel. On the client side, you (or the browser) are doing background execution, running multiple programs at the same time, and so on. Products like workflow software, databases, etc. already do parallel work when it makes sense. On the whole, parallelizing to utilize more of the processor only pays off when something really computationally intensive has to happen. For your average application, that will only be a small part of the code you have to write yourself.
That being said, the new functional languages like F# and Fortress are interesting: broad industry support, lots of libraries available because they are based on popular VMs and the promise to be faster for the same reason. Combined with the promise of Intel providing 32 cores on a processor in the near future, Sun processors that do that already, and 3D graphics cards that are even more parallel (128 vector processors), I do imagine functional languages will be much more popular. But I think the real revolution for “mainstream” languages is beyond that, maybe something that will emerge in the next five years or so.
peterhe