Update: This is turning into a lot of work. Why did I start this?
Oh, I remember. Because this is a really bad feature that could screw up a lot of software for years to come.
Quote of the Day
...from Phil Dawes... "Blimey, that’s gonna cause a bit of a ruckus."
Wrong Programming Model
Not much love over at LtU. So LtU readers -- here's the gist of my diatribe -- the programming model is simply not needed and will lead to shared memory code at least as complex as the stuff we're writing today. Should everyone switch to Erlang? Well, that is the general direction general-purpose languages should go. Switching from today's monitors to transactional memory is not going to be a baby step either! It's a huge step. In the wrong direction. So, yeah, as long as we have to step this way or that way to get into a multiprocessor, multinode, concurrent world -- let's step in a direction that is a good bit simpler than the one we're in today *and* far simpler than the one proposed by the transactional memory folks.

Meanwhile back at the comments...
Guillaume Germain, the author of Termite, a really nice Scheme-like, pretty much shared-nothing, concurrent programming system, comments...
I think STM has some very attractive aspects, most notably efficiency and composability. I see it as a better replacement for other low-level concurrency constructs, but not for large scale systems.

The efficiency of a mechanism may not translate into efficient *use* of the mechanism. As for composability, I think that could very well turn out to be an academic attribute. Is this something most programmers should be composing concurrent threads with? No. Absolutely not. Most programmers should not be using shared memory threads at all.
Then Guillaume comes to his senses, 8^)...
I have a few concerns about it... The first concern is that I'm not sure how much the composability of STM will scale. If layers upon layers of transactions are built, I fear some dependencies between layers might start surfacing at upper levels, causing surprising conflicts. It could become hard to get it right. I can see misguided programmers starting to sprinkle their code with 'atomic' statements ("just in case"), a bit like one would do with 'yield' statements in a non-preemptive concurrent system. Also, 'atomic' statements could be forgotten, causing nasty bugs...
But my main concern with it is that after all, STM is still shared-state concurrency. It mixes together the control flows of programs in ways that can be hard to visualize and comprehend. Erlang doesn't have that problem, because every "connection point" of a process with the exterior is obvious and well-defined...
in the end, it seems to only solve a small part of the problem, and it doesn't really help with the actual design of concurrent systems.
rektide then comments...

the one thing i really like about STM is that good implementations are exactly like ZFS, copy on update

And hey, if you want to build a persistent file system, by all means the mechanisms in ZFS are neat. But does this automatically transfer over to main-memory, shared-everything concurrent programming as a desired programming model for most applications? No way. I can't make that leap so easily. At all, really.
And he writes...
currently, any place where data over time is relevant requires user implementation to store history, which usually means debugging tools and printf(). to me, history is just a series of transactional states, and being able to lookup old states seems like second nature. i would really like to see more Saga like transactions available in the mainstream.

And I am all for making the history of state available and first-class. Yet there are many ways to do this within a sequential process, not requiring main-memory, shared-everything concurrent programming.
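For instance, a minimal sketch in Haskell of doing exactly that inside one sequential process -- the names History, record, and stateAt are illustrative, not from any library: every update conses the old state onto a log, so old states stay addressable without any shared-memory machinery.

data History s = History { current :: s, past :: [s] }

record :: (s -> s) -> History s -> History s
record f (History cur old) = History (f cur) (cur : old)

-- Look up the state as it was n updates ago (0 = the current state).
stateAt :: Int -> History s -> Maybe s
stateAt 0 h = Just (current h)
stateAt n h = case drop (n - 1) (past h) of
                (s : _) -> Just s
                []      -> Nothing

main :: IO ()
main = do
  let h = record (+ 1) (record (* 2) (History 3 []))
  print (current h, stateAt 1 h, stateAt 2 h)   -- (7,Just 6,Just 3)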
if you've got magic bullets for [distributed shared state], the world is always buying.

No. Avoid distributed shared state. That doesn't mean all concurrent, distributed problems go away. It just means they are that much more manageable. This shared-everything transaction thingy makes the problem much worse than it has to be. I think the starry-eyed admirers see how hard shared-memory, monitor-based programming is. But that programming model should *never* have been admitted into the Java language to begin with. There are already better solutions than this proposed garble.
End
From the ACM Queue, "Multicore Programming With Transactional Memory"...
Transactional memory brings to mainstream... programming proven concurrency-control concepts used for decades by the database community... Under the hood, a combination of software and hardware must guarantee that concurrent transactions from multiple threads execute atomically and in isolation. The key mechanisms for a transactional memory system are data versioning and conflict detection.

By the way, there are a few folks at Gemstone Systems who've done this far better than anyone else. They've been doing it for over two decades and it's in production in heavy use at financial institutions, factories, shipping, and so on. They've got it working in distributed, shared, multi-user transactions with efficient distributed garbage collection. I pointed some researchers that way. Not the ones in this paper.
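To make the quoted mechanisms concrete, here is a toy sketch of my own -- not the paper's design and certainly not Gemstone's -- of the two ideas: each cell carries a version number recorded at read time (data versioning), and a commit succeeds only if that version is still current (conflict detection); otherwise the update is retried.

import Data.IORef
import Control.Concurrent.MVar

-- One shared cell: a commit lock plus a (version, value) pair.
data Versioned a = Versioned { lock :: MVar (), cell :: IORef (Int, a) }

newVersioned :: a -> IO (Versioned a)
newVersioned x = Versioned <$> newMVar () <*> newIORef (0, x)

-- Read and compute optimistically outside the lock, then commit only if the
-- version is unchanged; a mismatch means another writer won, so start over.
atomicUpdate :: Versioned a -> (a -> a) -> IO ()
atomicUpdate v f = do
  (ver, x) <- readIORef (cell v)              -- speculative, versioned read
  let x' = f x                                -- work done without holding the lock
  committed <- withMVar (lock v) $ \_ -> do
    (ver', _) <- readIORef (cell v)
    if ver' == ver
      then writeIORef (cell v) (ver + 1, x') >> return True
      else return False                       -- conflict detected
  if committed then return () else atomicUpdate v f

main :: IO ()
main = do
  v <- newVersioned (0 :: Int)
  atomicUpdate v (+ 1)
  readIORef (cell v) >>= print   -- (1,1)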
That said, this would be the most tragic turn imaginable for programming in the 21st century. There is no way I would want to do this in the small on one machine or in the large in a Gemstone-like system. That is the wrong way to program concurrency for most systems.
Very wrong. And it is scaring me how shiny this thingy looks in so many people's eyes right now. I don't think it will be so long before this shows up in Java and C#.
Wrong, Wrong, Misguided, and So Wrong
We need to be headed primarily toward shared *nothing*. Sharing at the level prescribed in this paper, whether with locks or transactions, is simply uncalled for 99% of the time. Sequential processes with shared-nothing message passing should be the direction.

Get better isolation mechanisms for conceptually multiple JVMs and dotnet runtimes in a single OS process. Better yet, turn these systems into legacies and just move on to something better for future work.
You can hold me to this: transactional memory, if turned loose in the wild, will turn out to be a *mess*.
Damien Katz writes in the comments (and since he agrees with me so well, I promote it here)...
I *completely* agree. Shared state threading is a hack built on to existing languages. A useful and fairly efficient one, but a hack nonetheless. And like raw pointers and unprotected memory, it's hard to justify its use for most programming tasks. Compared to languages like Erlang, it's nearly impossible to justify it on efficiency or performance concerns.

Transaction memory is another hack bolted on to all the previous concurrency hacks, and somehow it's going to make all the other concurrency hacks work reliably. Yeah right.

In another comment worth displaying up front, Dan Creswell expresses, well, let's have Dan's quote. It even deserves its own subtitles...
It's Deadly Shine'y
You're looking at premium car-wax, retina burning shine'y
I read the same article and it just frightened me. All that magic hidden under the covers, bleuch.

8^)

It's deadly shine'y because at the surface it appears like it just makes all the concurrency stuff such as locks "go away", in line with many a programmer's desire to ignore such details. Couple that with the familiarity most have with transactions and you're looking at premium car-wax, retina burning shine'y.
I don't even want to imagine debugging one of these systems. You're going to be confronted with some strange behaviors due to transaction conflict or whatever, and because it's all supposed to be done by magic under the covers, wading in there to understand what's broken will be a nightmare. It'll make debugging from Java thread-dumps look like a holiday.
More comments. This is great.
Here's part of one from Nat Pryce...
Does software transactional memory support coordination? Or must that be supported by other primitives, such as semaphores or monitor condition variables?

From what I can tell you still have to build up from the basic transaction mechanism and the shared variables. But it is worse than that.
Most systems today should be ignorant of the distinctions among "inside my OS process", "on my same node", and "on some other node". The benefits of that can be seen in Erlang, which Damien pointed out. This mechanism is still an "inside my OS process" mechanism.
Well, at the lowest level of runtime implementation of an OS process, you need some mechanisms like this. I would argue all the machinery, hard and soft, for transactions, is way overkill for that level of systems programming.
But to continue to have application programmers deal with this mess for the next umpteen years is nothing but ludicrous. As Damien also wrote, it's like continuing to have today's programmers deal with raw pointers and memory. No way -- very few programmers should be dealing with that level of complexity.
Nat also writes...
The interaction between distributed (and therefore concurrent) activities involves two things: data transfer and coordination. Synchronisation to avoid race conditions is just one form of coordination. Systems also need to coordinate activities that don't share data.

Yes, absolutely. Let's focus on the real problem for software development. Transactional memory is *not* it.
And now sigfpe (Dan Piponi?) writes...
it's nice to know that there are other people in the world having issues with STM.

Yes, now's the time to raise a ruckus.
And Cale Gibbard comes to the defense of the dang thing...
The major advantage of STM as a system is that it gives you certain guarantees about composability of already working systems. It's not magical, it doesn't always ensure that things will play nicely together, but it does give you far more guarantees about the correctness of compositions than previous systems have.

But Cale, there's not that much Haskell code out there yet anyway. Let's not have the Haskell people start in with transactional memory just because they're still trying to demonstrate they can do imperative programming better than the rest of us.
The world is getting ready for shared nothing, semi-functional programming. Get out of the backwater and catch up to your audience.
And if some group is going to retrofit transactional memory into some significant Java or C# system, well, they would be far better off investing that time into a simpler, shared-nothing rewrite into a language like Erlang, a simpler language like Smalltalk, or even a better shared-nothing coordination mechanism like Javaspaces.
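For contrast, here is a minimal sketch of that shared-nothing style, in Haskell terms rather than Erlang itself, and with a purely hypothetical counter process as the example: one thread owns the state, and everything else talks to it only by sending messages over a channel -- no locks, no transactions over shared memory.

import Control.Concurrent
import Control.Concurrent.Chan
import Control.Monad (replicateM_)

data Msg = Increment | Get (Chan Int)

-- The counter "process" owns its state; no other thread ever touches it.
counter :: Chan Msg -> Int -> IO ()
counter inbox n = do
  msg <- readChan inbox
  case msg of
    Increment -> counter inbox (n + 1)
    Get reply -> writeChan reply n >> counter inbox n

main :: IO ()
main = do
  inbox <- newChan
  _ <- forkIO (counter inbox 0)
  replicateM_ 3 (writeChan inbox Increment)
  reply <- newChan
  writeChan inbox (Get reply)
  readChan reply >>= print   -- 3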
All that low-level monitor-based garble? Just leave it where it is now. Walk away from it slowly. Turn your back, and run.
Cale again...
That being said, if you understand the additional compositionality guarantees, what exactly is it that you find lacking?

Em, simplicity? Em, elimination of totally unnecessary mechanisms too far removed from the domain problem?
And on...
Limiting shared state is obviously good -- there's nothing in this system which prevents that. However, it's a system for cleanly sharing any state which should be shared.

Yeah, right. People would abuse that mechanism all to bloody hell. If a relatively small group of programmers can develop the various soft real-time Erlang systems we've seen over the last several years, then there is no indication in my mind that the effort required to implement and teach this transactional memory thingy would benefit anywhere near 90-some percent of the programmers on this earth. No evidence whatsoever.
Other forms of communication, including those in Erlang, still have this problem to deal with.

Yes, complex problems remain with implementing concurrent systems. But mechanisms like Erlang's raise them to a higher level, closer to the problem domain. This transactional memory is a false hope covering a bottomless pit that could have easily been walked around in the first place. Stay on the path.
Suppose you have a bank server with clients happily communicating deposits and withdrawals to it, and everything is working. How do clients implement a transfer between accounts safely?

First, let's not turn every little data structure into a bank account transfer problem. That's just not the case. Second, the cases that really do exist should be well isolated and the mechanisms not put in every programmer's hands. Third, account transfers have been occurring quite a bit over the years without this new transactional main-memory thingy. Why complicate everything for everybody, even if this were the best way to transfer funds???
Sorry, that is just a *really* unconvincing example.
Anyway, this is getting long and I probably can't do as good of a job of it as the paper can, so everyone please check it out before writing more gibberish about transactional memory.

Yep, read 'em. This is not the first time I've commented on it. Just now it seems like it is gaining momentum. The people writing the gibberish are those inventing these things without comprehending the damage they will do.
17 comments:
I *completely* agree.
Shared state threading is a hack built on to existing languages. A useful and fairly efficient one, but a hack nonetheless. And like raw pointers and unprotected memory, it's hard to justify its use for most programming tasks. Compared to languages like Erlang, it's nearly impossible to justify it on efficiency or performance concerns.
Transaction memory is another hack bolted on to all the previous concurrency hacks, and somehow it's going to make all the other concurrency hacks work reliably. Yeah right.
I read the same article and it just frightened me. All that magic hidden under the covers, bleuch.
It's deadly shine'y because at the surface it appears like it just makes all the concurrency stuff such as locks "go away", in line with many a programmer's desire to ignore such details. Couple that with the familiarity most have with transactions and you're looking at premium car-wax, retina burning shine'y.
I don't even want to imagine debugging one of these systems. You're going to be confronted with some strange behaviors due to transaction conflict or whatever, and because it's all supposed to be done by magic under the covers, wading in there to understand what's broken will be a nightmare. It'll make debugging from Java thread-dumps look like a holiday.
Dan
http://www.dancres.org
The interaction between distributed (and therefore concurrent) activities involves two things: data transfer and coordination. Synchronisation to avoid race conditions is just one form of coordination. Systems also need to coordinate activities that don't share data. Message-passing covers all forms of coordination. Does software transactional memory support coordination? Or must that be supported by other primitives, such as semaphores or monitor condition variables?
"Transaction memory is another hack bolted on to all the previous concurrency hacks, and somehow it's going to make all the other concurrency hacks work reliably. Yeah right."
I don't get this remark at all. How is it "bolted-on"? Perhaps the compiler will implement transactions in terms of locks and condition variables, perhaps it won't.
In order to work properly, a software transactional memory system essentially has to control what things may occur in transactions. In particular, anything with side effects cannot occur within a transaction, because those could not be rolled back should the rest of the transaction fail. (Provisions could be made for queuing side-effecting computations for when a transaction completes, but that is inessential, and could be written in terms of the core functionality.)
This is not something which you can just "bolt-on" to existing systems, unless you have very good static control over side effects in your language. For the time being, that restricts you to one of a handful of languages, Haskell being the foremost of those. Even in Haskell, STM isn't just a library, because there are provisions to be made in the garbage collector for things to work properly.
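A minimal sketch of that point using GHC's stm package (the counter is just a toy example of mine): the STM type is what statically keeps arbitrary effects out of a transaction, so an IO action such as putStrLn simply will not type-check inside one.

import Control.Concurrent.STM

-- Increment a shared counter inside a transaction.
bump :: TVar Int -> STM ()
bump tv = do
  n <- readTVar tv
  writeTVar tv (n + 1)
  -- putStrLn "bumped"   -- would be rejected by the type checker: IO, not STM

main :: IO ()
main = do
  tv <- newTVarIO 0
  atomically (bump tv)
  readTVarIO tv >>= print   -- 1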
The major advantage of STM as a system is that it gives you certain guarantees about composability of already working systems. It's not magical, it doesn't always ensure that things will play nicely together, but it does give you far more guarantees about the correctness of compositions than previous systems have. (And in analysis, it lets you decompose code into subsystems, the correctness of which guarantees the correctness of the system as a whole via those guarantees.)
Please read the paper Composable Memory Transactions from here. The Beautiful Concurrency article there is also a good introduction. Neither article assumes that you know all that much about Haskell, but Beautiful Concurrency is probably gentler if you don't.
That being said, if you understand the additional compositionality guarantees, what exactly is it that you find lacking? Limiting shared state is obviously good -- there's nothing in this system which prevents that. However, it's a system for cleanly sharing any state which should be shared. Other forms of communication, including those in Erlang still have this problem to deal with.
Suppose you have a bank server with clients happily communicating deposits and withdrawals to it, and everything is working. How do clients implement a transfer between accounts safely? You have to be really careful, because a client which observes the accounts involved in the transfer in between the withdrawal from one and deposit into the other will observe the money in neither location, or worse, in both locations. You would have to go back to the bank server code and write your own ad-hoc transaction mechanics for this, or implement a middle-man process which handles forming the transactions and ordering the communication to the original server, or worse still, you'd end up with some sort of mess of a system of locks.
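For reference, a sketch of roughly what such a transfer looks like with GHC's stm package; the Account alias and the toy balances are mine, not anyone's production banking code.

import Control.Concurrent.STM

type Account = TVar Integer

withdraw, deposit :: Account -> Integer -> STM ()
withdraw acc amount = modifyTVar' acc (subtract amount)
deposit  acc amount = modifyTVar' acc (+ amount)

-- Composing the two inside one atomically means no other thread can ever
-- observe the money in neither account, or in both.
transfer :: Account -> Account -> Integer -> IO ()
transfer from to amount = atomically (withdraw from amount >> deposit to amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  transfer a b 40
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances   -- (60,40)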
STM is at some basic level about bundling working thread communications together such that they occur as-if-atomically with regard to the rest of the system. They might not actually occur atomically, it just has to behave as if they do.
Consider the "orElse" operation. It joins two transactions such that if the first succeeds, it gets to commit, and if it retries, then the second is attempted. If the second succeeds, it commits, and if it retries, the entire transaction retries. For a transaction to retry means that it behaves as if nothing has actually happened, and it's attempted at a later time, generally after one of the inputs it had read has been updated. In this way, arbitrarily many options may be tried in turn.
Also, if an exception is thrown, the transaction it occurred within behaves as if it never happened, and the exception bubbles upward in a sane fashion until it is caught. There's no scrambling to replace all the locks that were taken before throwing the exception, up to where it will eventually be caught, like you'd get in a lock-based system.
Anyway, this is getting long and I probably can't do as good of a job of it as the paper can, so everyone please check it out before writing more gibberish about transactional memory.
I have to admit, I have a horrible gut feeling about STM. I really don't like it. It's one of those wrinkles in a carpet things. It seems to have smoothed out a wrinkle but all it's done is push the wrinkle somewhere you can't easily see it. It makes for good looking publications but I have major doubts about how it will work in practice. But as I say, this is just a gut feeling, and it'll take longer for me to articulate exactly why I think there are problems. But it's nice to know that there are other people in the world having issues with STM.
Cale, I actually think we are in agreement.
The thing is, in a functional language like Haskell, the STM semantics are both easily abstracted away and useful because almost everything else about the physical machine has also been abstracted away, including the CPU.
In imperative languages, the CPU isn't abstracted away like it is in Haskell. This is often a good thing as it allows us to better predict and optimize performance bottlenecks. However, shared state threading now voids our traditional abstractions and assumptions about the data we are operating on. To take advantage of it, we have to tread very carefully and be sure to NOT do lots of things we can otherwise take for granted.
I don't see that adding STM to conventional imperative languages will improve the situation significantly, if at all. For declarative languages, like Haskell however it is a win because it fits well with the language model (same thing with SQL).
your grievances seem to be much more with distributed systems than STM in particular.
the one thing i really like about STM is that good implementations are exactly like ZFS, copy on update. instead of simply obliterating old state, STM simply forks out new states. currently, any place where data over time is relevant requires user implementation to store history, which usually means debugging tools and printf(). to me, history is just a series of transactional states, and being able to lookup old states seems like second nature. i would really like to see more Saga like transactions available in the mainstream.
ubiquitous computing is, almost explicitly, the art of computing in distributed shared state. that's my own personal chief interest. if you've got magic bullets for this, the world is always buying.
I think STM has some very attractive aspects, most notably efficiency and composability. I see it as a better replacement for other low-level concurrency constructs, but not for large scale systems.
I admit not to have any hands-on experience with STM (though I do have a fair amount of experience with Erlang-style concurrency), but I have a few concerns about it.
The first concern is that I'm not sure how much the composability of STM will scale. If layers upon layers of transactions are built, I fear some dependencies between layers might start surfacing at upper levels, causing surprising conflicts. It could become hard to get it right. I can see misguided programmers starting to sprinkle their code with 'atomic' statements ("just in case"), a bit like one would do with 'yield' statements in a non-preemptive concurrent system. Also, 'atomic' statements could be forgotten, causing nasty bugs (Haskell neatly avoids that problem, though).
But my main concern with it is that after all, STM is still shared-state concurrency. It mixes together the control flows of programs in ways that can be hard to visualize and comprehend. Erlang doesn't have that problem, because every "connection point" of a process with the exterior is obvious and well-defined. I also don't think STM is well adapted to distributed programming.
For low-level programming, I think STM could be very useful. But in the end, it seems to only solve a small part of the problem, and it doesn't really help with the actual design of concurrent systems.
"Suppose you have a bank server with clients happily communicating deposits and withdrawals to it, and everything is working. How do clients implement a transfer between accounts safely?"
In practice banking systems do this with asynchronous messaging and post-hoc reconciliation. STM doesn't help because:
(a) account information must be persistent
(b) banks must store audit trails and coordinate different in-bank business processes so distributed transactions are needed to coordinate the different systems
(c) clients must be able to transfer money between different banks and the systems involved cannot be coupled as tightly as ACID transactions allow.
Having written several implementations of both STM and various message-passing-based concurrency models for Ruby lately, I'm a lot less sunny on STM than I used to be even a few weeks ago.
I was having fun implementing STM until I realized that I was able to implement the Actor model correctly in a few hours, versus several weeks for getting the fiddly aspects of STM down.
The biggest immediate problem for STM is starvation -- a large transaction can just keep retrying, and I'm not sure there is a way to address that without breaking composability. ...and composability is the whole raison d'etre of STM in the first place.
I just wanted to chime in because this is a rare position to hear articulated, and I agree. I'm deeply uncomfortable with the universal push toward STM. It's very similar to the universal push toward GC (which I will refrain from criticizing here). Real programs spend much of their time performing non-transactional updates to local variables and doing IO. STM does not help them. Rather, it introduces a whole family of abstractions, complexity and cost and imposes it all over the language and runtime design. This cost is real cognitive load that every programmer and implementor must pay for.
Moreover, in the few cases where some form of transactional behavior is desired, there's no reason to believe a generic STM implementation from a programming language will provide the sort of transaction your problem domain demands. It may not write to stable storage, it may have incorrect scope or priority rules, it may lack auditing or logging that you want, and it may have undesirable policy wrt. resource starvation.
I'm all for process isolation. Message passing and STM both demand that. It's true that any generic message passing service may similarly fail to meet problem-domain needs and does impose its own implementation and language-design costs. But my gut feeling is that the transaction abstractions will be more complicated to understand, more expensive to implement and more often useless in the field than the messaging abstractions.
STM (at least as normally proposed) is shared-state, optimistic transactions. Its complexity and performance problems come from the shared state and the optimistic scheduling, not the transactions. It's possible to support transactions in a shared-nothing model, and that still provides the same (actually better) composability.
Thanks for the nice post!
The bit that amuses me most here is that you are treating the Actor model and the STM model as if they are in competition.
The truth of the matter is that Erlang has STM implemented on top of Actors (Mnesia), while Clojure has Actors implemented on top of STM.
They are different abstractions, each with different advantages and disadvantages.
Oh, yeah. I remember this convoluted post of mine.
"They are different abstractions, each with different advantages and disadvantages."
Of course they are. Then the issue is how do they stack up against each other.
"The bit that amuses me most here is that you are treating the Actor model and the STM model as if they are in competition."
That's how I've seen it presented, so I would suggest I am responding to that.
"The truth of the matter is that Erlang has STM implemented on top of Actors (Mnesia), while Clojure has Actors implemented on top of STM."
No. Mnesia is a database in the traditional sense that the shared data is outside of the address space of the process/thread/vm runtime. STM is transactional over some or all of the address space of the process/thread/vm. Having programmed for a number of years with a multi-user STM (in particular, Gemstone Smalltalk and Gemstone Java), I prefer Erlang's shared-nothing message passing + Mnesia database combination.
Afaics, STM does not solve the algorithm problem for massively parallel games anyway:
http://lambda-the-ultimate.org/node/2048#comment-51596