1. Check out this article: http://www.defmacro.org/ramblings/fp.html
    I too believe that functional languages will be the future of multiprocessor/multicore programming. In the article you can see why. So far I have been playing with F# and L# on the .NET platform.


  2. People tend to think sequentially as well; a task usually consists of a series of steps where most steps require previous steps to be completed. At least, that is how they describe a task.
    By using techniques like use cases, where flows are described sequentially as well, they are probably forced to think this way. Parallel steps could be described with use cases too, but this would make them pretty unreadable and harder to understand for the average functional specifier/end user.
    Therefore I think it will probably always be the technician who decides (‘is able to decide’) whether parallel execution is possible.

    By the way: using Workflow Foundation, it’s very easy to use parallel execution. You just use a Parallel-activity and don’t have to be a hard-core programmer with detailed knowledge of threads and processes. (Technical detail: the workflow host may still decide to execute one parallel task at a time, depending on the scheduler service that is used.)

    “This is a great waste of the hardware’s parallel processing ability”
    – Driving 120 km/h on the Dutch highways, while most cars can do at least 180 km/h, can be considered a waste as well 😉
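    The Parallel-activity idea — declaring that steps may run concurrently and letting a scheduler decide how to run them — is not .NET-specific. A minimal sketch of the same pattern in Python (the three step functions are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Three independent "activities"; the names are hypothetical examples.
def fetch_orders():
    return "orders"

def fetch_stock():
    return "stock"

def fetch_prices():
    return "prices"

# Declare the steps as parallel and hand them to an executor. Like a
# workflow scheduler service, the executor decides how many actually run
# at the same time; with max_workers=1 they would still run one at a
# time, just as the workflow host may do.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(fetch_orders),
               pool.submit(fetch_stock),
               pool.submit(fetch_prices)]
    results = [f.result() for f in futures]  # collected in submission order
```

    The programmer only states which steps are independent; no thread or process bookkeeping is needed.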


  3. I agree with Robert. Most tasks we have to program are fixed sequential steps. You can express this in a functional language too, but then you have to “translate” it into a solution-oriented plan, which is very different from writing simple fixed steps; otherwise it cannot be executed in parallel either.
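    To make that distinction concrete, here is a sketch (in Python rather than a functional language, purely for illustration). Dependent steps force an order no matter what language you use, while a solution-oriented formulation over independent data leaves a runtime free to parallelize:

```python
from functools import reduce

# Fixed sequential steps: each one needs the previous result,
# so no parallelism is possible regardless of language.
def step1(x): return x + 1
def step2(x): return x * 2
def step3(x): return x - 3

sequential = step3(step2(step1(10)))   # 10 -> 11 -> 22 -> 19

# Solution-oriented formulation: the per-item work is independent,
# so a runtime could map the items across cores without changing
# the program's meaning.
items = [10, 20, 30]
independent = [step1(i) for i in items]          # each call independent
total = reduce(lambda a, b: a + b, independent)  # order-insensitive combine
```

    The “translation” is exactly the step from the first form to the second, and it is not always possible.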

    Programming in functional languages also requires a different level of abstraction. On the whole, it is considered harder to learn (although for some problems it is much easier). Traditional languages also get more API support, like several grid computing initiatives: http://sun.java.net/sungrid/. This solves a lot of the complexity without having to find programmers smart enough to do functional languages.

    Note that parallel execution happens anyway. On a server-side machine, you will do short executions in parallel. On the client side, you (or the browser) are doing background execution, running multiple programs at the same time, etc. Products like workflow software, databases, etc. already do parallel work when it makes sense. On the whole, executing things in parallel to utilize more of the processor only makes sense when something really computationally intensive has to happen. For your average application, this will only be a small part of the code you have to write yourself.
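    The “only a small part of the code” point is essentially Amdahl’s law: the sequential fraction of a program caps the overall speedup, however many cores you throw at it. A quick back-of-the-envelope sketch:

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / n),
# where s is the serial fraction and n the number of cores.
def speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# If 90% of an average application is inherently sequential,
# even 32 cores barely help:
mostly_serial = speedup(0.9, 32)    # about 1.11x

# Only when the compute-intensive part dominates does the
# hardware's parallelism pay off:
mostly_parallel = speedup(0.1, 32)  # about 7.8x
```

    So for typical applications the parallel-friendly part is small, and the win from writing it functionally is correspondingly modest.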

    That being said, the new functional languages like F# and Fortress are interesting: broad industry support, lots of libraries available because they are based on popular VMs, and the promise to be faster for the same reason. Combined with the promise of Intel providing 32 cores on a processor in the near future, Sun processors that do that already, and 3D graphics cards that are even more parallel (128 vector processors), I do imagine functional languages will become much more popular. But I think the real revolution for “mainstream” languages is beyond that, maybe something that will emerge in the next five years or so.

