Or: "I just dropped in to see what condition my condition was in."
Tim Bray ponders more cores, hardware (and software -- cannot forget the software) transactional memory, as well as Erlang, or some sort of transmogrification of Erlang into Java or something less weird.
Ralph Johnson addressed that idea, probably accurately.
These are fascinatingly intertwined topics. Dan Creswell sent a link to me the other day: this Sun research paper on Hybrid Transactional Memory (pdf). I hope he blogs about it. He's got a good handle on the situation.
Unlike apparently many smart people at Sun, Microsoft, Intel, and elsewhere, I'm still unconvinced that transactional memory makes the overall problem any easier or better.
I do know that focusing on transactional memory is a short-term solution at best. Erlang addresses the true problem: a simple programming model that can scale out beyond SMP and yet scale down to a single core just as well.
Tim suggests transactional memory will remain well out of sight for application programmers. But these programmers need better tools, no matter how HTM affects them in the small (and eight, even sixteen, cores should be considered small over the next decade). The results of systems programmers using transactional memory in low-level SMP code are a drop in the bucket compared to today's and tomorrow's application development problems. Those problems have little to do with a single piece of silicon and everything to do with networks and complex concurrent collaboration among disparate subsystems.
Not so many years from now we will be awash in more, cheaper, hardware than our current application development languages and tools can reasonably accommodate. We should have a simple model for addressing all that hardware with less developer effort. We need simple languages and tools for concurrency and *distribution* so that we can waste cheap hardware in exchange for leveraging valuable developer ergs.
Today we are wasting hardware running garbage collectors in order to save developer ergs. Increasingly we need to be wasting hardware running large numbers of small processes.
Transactional memory is not even close to the support we need. I am not sure why so many people find it shiny. Maybe I'll be surprised.
Update: some URLs from the comments, made easier to find here:
- Bob Warfield: You may have already encountered a multicore crisis and just not known it
- Bob Warfield: 70% of the Software You Build is Wasted
- Brit Butler: The state of state
Tim's example for TM is coordinating a lot of objects quickly in shared memory for some game scenarios. Fair enough - I am unable to compare the complexity of transactional memory vs. traditional "critical section" mechanisms for this. Off the top of my head I would agree that a shared-nothing message-passing mechanism does not really address this problem, but I would imagine it would still be useful for other aspects of that kind of game system. My bigger point is this: there are relatively few people with that kind of problem. Most of us have the kinds of problems that are far better addressed by shared-nothing message passing.
So what frightens me as much as the transactional memory hardware and/or software itself is the *attention* it is receiving as any sort of general solution to developing software. Is the cost worth the benefit?
3 comments:
Here's the conversation with Sweeney I mentioned earlier in which he "finds it shiny": http://www.redlinernotes.com/blog/?p=811
The multicore crisis is here today and has been for some time. It's only a thing of the future for desktops. For application programmers, the old term scalability says it all. You may have already encountered a multicore crisis and just not known it:
http://smoothspan.wordpress.com/2007/08/30/youve-already-had-a-multicore-crisis-and-just-didnt-realize-it/
Included are some anecdotes about real world multicore crises that have already hit.
Not to mention the tremendous waste that traditional curly brace languages hath wrought:
http://smoothspan.wordpress.com/2007/09/04/70-of-the-software-you-build-is-wasted-part-1-of-series-of-toolplatform-rants/
I tend to agree on the TM front--I think TM is destined to rediscover all the problems database designers gave up on 20 years ago. Simon Peyton-Jones gave a talk on TM implementations in Haskell at OSCON which seemed to rest heavily on domain-specific compiler tricks. I cornered him afterward and asked whether he saw a parallel between those optimizations and relational join optimization (which was proven NP-hard ages ago). His answer was along the lines of, "we're a long way away from even thinking about those issues--we'll cross those bridges when we come to them." So I'm not holding my breath for TM to make the leap from specific to generic anytime soon.