ManyCoreEra refers to a post on the Parrot VM and various concurrency mechanisms.
These shared-memory mechanism discussions continue to miss the point about the Many Core Era...
The many-core era will also be a many-node era. You will not have a C: drive except for those legacy systems running in their little jars of formaldehyde.
You will have a lot of computing power "out there" and several kinds of displays that are not directly attached to "your computer". You probably will not be able to locate an "application" as being installed on your C: drive and running on the CPU on the motherboard that has a ribbon cable running out to the disk that has C: formatted on it.
Hardware will be too cheap to be organized this way. We need to begin reorganizing our software now, or five years from now we'll be really badly off, without any excuses.
If your current programming models distinguish between "these threads here" and "those other processes over there", then it's almost certainly the wrong model for today, not to mention tomorrow.
Update:
The Java Server Side site has a lengthy discussion based on this lil'post of mine. A number of different views there.
5 comments:
"The many-core era will also be a many-node era."
Yes - but surely the point is 'will also be'... we have to find some way of getting from here to there - and effective concurrency on a single multi-cored machine is still not a 'solved problem'...
Oh - and you also have a 'leaky abstraction' problem if you attempt to abstract the network out of concurrency over a network.
Concurrency with multiple processes (or threads or transactions or whatever) on a single machine doesn't need to know about latency - concurrency across a network probably does need to know about it - and pretending it's not there is surely a recipe for pain...
"we have to find some way of getting from here to there"
Start writing as-if. Don't write shared-memory concurrency even in a language like Java. Use other communication mechanisms, and avoid the shared-memory constructs at the application level.
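A minimal sketch of what "writing as-if" could look like in Java: a worker that communicates only through queues, never through shared mutable state. The `Worker` structure and the poison-pill shutdown convention here are illustrative choices, not from any particular library; `BlockingQueue` is standard `java.util.concurrent`.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessagePassingSketch {
    public static void main(String[] args) throws InterruptedException {
        // All communication goes through these queues; the worker's
        // state is never touched by another thread directly.
        BlockingQueue<Integer> inbox = new LinkedBlockingQueue<>();
        BlockingQueue<Integer> outbox = new LinkedBlockingQueue<>();

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    int n = inbox.take();   // receive a message
                    if (n < 0) break;       // poison pill: shut down
                    outbox.put(n * n);      // reply with the result
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        for (int i = 1; i <= 3; i++) inbox.put(i);
        inbox.put(-1); // ask the worker to stop

        for (int i = 0; i < 3; i++) System.out.println(outbox.take());
        worker.join();
    }
}
```

Because the worker is a black box reached only by messages, the same shape survives a move to another process or another node; swapping the queue for a socket or a broker changes the transport, not the design.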
"you also have a 'leaky abstraction' problem if you attempt to abstract the network out of concurrency over a network."
The nice thing about Erlang is that it doesn't abstract the network out. You simply won't have as many failures when running in the same node, but the code looks the same and should still assume things will fail.
Even running a single-threaded program on a single core you need to know about latency; 15 years ago you could assume that an FP division took tens of times longer than a memory fetch, and that iterating over a linked list was only a few times slower than iterating over an array. You can't ignore it anywhere now.
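The array-versus-linked-list point can be demonstrated directly. This sketch sums a million integers both ways; the absolute timings are machine-dependent and illustrative only, but on modern hardware the contiguous, cache-friendly array traversal typically wins by a wide margin over pointer-chasing through `LinkedList` nodes.

```java
import java.util.LinkedList;
import java.util.List;

public class TraversalLatencySketch {
    static final int N = 1_000_000;

    static long sumArray(int[] a) {
        long s = 0;
        for (int x : a) s += x;   // sequential, cache-friendly access
        return s;
    }

    static long sumList(List<Integer> l) {
        long s = 0;
        for (int x : l) s += x;   // one pointer dereference per element
        return s;
    }

    public static void main(String[] args) {
        int[] array = new int[N];
        List<Integer> list = new LinkedList<>();
        for (int i = 0; i < N; i++) { array[i] = i; list.add(i); }

        long t0 = System.nanoTime();
        long s1 = sumArray(array);
        long t1 = System.nanoTime();
        long s2 = sumList(list);
        long t2 = System.nanoTime();

        // Both sums must agree; only the time differs.
        System.out.printf("array: %d us, linked list: %d us (sums %d / %d)%n",
                (t1 - t0) / 1_000, (t2 - t1) / 1_000, s1, s2);
    }
}
```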