"I have a mind like a steel... uh... thingy." Patrick Logan's weblog.

Thursday, February 08, 2007

Memorablies

Among the several artifacts from my 24-hour rant that I will always treasure are the following quotes...

  • "Refactoring for factuality, I think he means something more like..."
  • "I've seen synchronized markers employed like pixie dust until the problem seems to go away."
  • "I have a hard time telling what's really going on on that blog page."
  • "Blimey, that’s gonna cause a bit of a ruckus."
  • "It's an apples and rhinos comparison."
  • "Lots of dodgy assertions and straw-man--bashing, but there are some lucid moments."
  • "You're looking at premium car-wax, retina burning shine'y"
  • "Right now the post is so heavily edited that it looks worse than a wiki page on thread mode during a flame war."
Yes, and I edited that page by myself. No help. 8-)

Classics. Thank you for the quotes and the arguments one way or the other.

On Complexity

The question is asked over on LtU about my transactional memory rant...

why is the under-the-covers complexity of STM bad, when the under-the-covers complexity of garbage collection is good?
There is a glaringly obvious answer to this...

A garbage collector eliminates tons of complexity from the application developer's burden, allowing the app developer to focus more on the true problem domain.

Transactional memory does no such thing. Application developers still have to think about shared memory, potential conflicts, how to express them as transactions, and, as Mental points out, ultimately how to *recover* from a transaction failure.
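
To make that residual burden concrete, here is a minimal Java sketch. It is not STM at all, just java.util.concurrent's optimistic compare-and-set, but it shows the kind of conflict-and-retry reasoning that stays on the application developer's plate no matter how clever the machinery underneath is. The class and method names are mine, purely for illustration.

    import java.util.concurrent.atomic.AtomicInteger;

    // A shared counter updated optimistically. The runtime gives us an
    // atomic compare-and-set, but deciding what to retry, how many times,
    // and what to do when contention never lets up is still the
    // application's problem. No analogous burden exists with a garbage collector.
    public class OptimisticIncrement {
        private static final AtomicInteger shared = new AtomicInteger(0);

        static boolean incrementWithRetry(int maxAttempts) {
            for (int attempt = 0; attempt < maxAttempts; attempt++) {
                int seen = shared.get();            // read the current value
                if (shared.compareAndSet(seen, seen + 1)) {
                    return true;                    // commit succeeded
                }
                // Conflict: someone else committed first. Back-off policy and
                // eventual failure handling are ours to write.
            }
            return false;                           // caller must recover somehow
        }

        public static void main(String[] args) {
            System.out.println("committed: " + incrementWithRetry(10));
        }
    }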

On comments from Mental...

I don't know that you're really forced to deal with conflict resolution -- most of the popular STM implementations deal with conflict for you by terminating the conflicted transactions and automatically retrying (perhaps with backoff). Conflict resolution is taken care of (or, if you want to be pessimistic, taken out of your hands).

Common optimistic concurrency control techniques make matters a little sticky in that regard: you can easily end up in a situation where a long-running transaction is starved by a steady stream of shorter transactions whose successful commits conflict with it.

It's a slippery slope. New kinds of conflict resolution will be thought up and implemented. I've seen this with transactional systems like Gemstone for Smalltalk and Java. The "reduced conflict" classes take various strategies during the commit process to turn bitwise conflicts into logical successes. There is an arbitrary number of retries before the transaction service just gives up and fails the transaction. Then it is back in the app developer's hands to figure out what to do.

This is not unlike the "fallacies of distributed computing," where some system attempts to hide the failures that can arise through distribution. No: that illusion can always be broken somewhere and cannot be fully ignored.

Without the ability to reserve objects to the longer-running transaction (i.e. locking), it's hard to resolve such a one-sided conflict.
And that's another problem. Once this thing is implemented in software (and hardware), as Nat Pryce pointed out earlier, there is still the whole coordination thing.
All that said, STM still has composition properties that other shared-state techniques do not, and there are always going to be situations where shared state is most appropriate. I think it can be a useful technique where shared state is required, and transaction sizes would be modest and relatively consistent.
And at the end of the day I am not arguing against the mechanism per se so much as I am arguing against its widespread use, although I do believe the number of appropriate uses is so small that 90-some percent of us should not even have to pay attention to it. My fear is this will: (1) draw attention away from getting the majority of us running on simple shared-nothing message passing systems, and (2) end up in the average programmer's toolkit as a shiny fob to get out and play with all the time.
Something that may help limit STM to those situations are its practical restrictions on non-transactional effects -- even if you're not in a language where you're forced into an STM monad, any non-transactional effects need to be idempotent because the transaction may be retried arbitrarily many times. (On the other hand, cue novice programmers wondering why the output from their giant transaction occasionally gets duplicated several times...)
Yeah. It's a slippery slope that is just way too shiny. We all get fascinated by it, end up on that slope, and before you know it, smelly messes everywhere. The alternatives are just so much more promising for the majority of cases.
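
Mental's point about non-transactional effects is easy to demonstrate without any STM at all. Here is a hedged Java sketch (the names and numbers are mine) where a println sits inside an optimistic retry loop; under contention the "output" of a single logical update shows up more than once, which is exactly why effects inside a retried region must be idempotent.

    import java.util.concurrent.atomic.AtomicInteger;

    // Two threads race to bump a shared counter with optimistic retries.
    // The println inside the loop stands in for any non-transactional effect:
    // whenever a commit fails and is retried, the effect runs again, so one
    // logical increment can print several lines.
    public class DuplicatedEffects {
        private static final AtomicInteger shared = new AtomicInteger(0);

        static void increment(String who) {
            while (true) {
                int seen = shared.get();
                System.out.println(who + " attempting commit of " + (seen + 1));
                if (shared.compareAndSet(seen, seen + 1)) {
                    return;    // only this last attempt actually committed
                }
                // Lost the race; the side effect above has already happened.
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Runnable worker = () -> {
                for (int i = 0; i < 1000; i++) {
                    increment(Thread.currentThread().getName());
                }
            };
            Thread a = new Thread(worker, "a");
            Thread b = new Thread(worker, "b");
            a.start(); b.start();
            a.join(); b.join();
            System.out.println("final value: " + shared.get()); // always 2000
        }
    }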

It's Fiddly

"Mental" writes in a comment (i've started a new post) to my rant on transactional memory...

I was having fun implementing STM until I realized that I was able to implement the Actor model correctly in a few hours, versus several weeks for getting the fiddly aspects of STM down.
Good tag line... "Transactional Memory: It's Fiddly!!!"

Here's the entire comment because I like it...

Having written several implementations of both STM and various message-passing-based concurrency models for Ruby lately, I'm a lot less sunny on STM than I used to be even a few weeks ago.

I was having fun implementing STM until I realized that I was able to implement the Actor model correctly in a few hours, versus several weeks for getting the fiddly aspects of STM down.

The biggest immediate problem for STM is starvation -- a large transaction can just keep retrying, and I'm not sure there is a way to address that without breaking composability. ...and composability is the whole raison d'etre of STM in the first place.
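
For contrast, here is roughly how small "a few hours of Actor model" can be. This is a hedged Java sketch of my own (not Mental's Ruby code): an actor is just a thread draining a private mailbox, all interaction happens by message, so there is nothing shared to lock, version, or retry.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // A bare-bones actor: one thread, one private mailbox, no shared state.
    // Other threads interact with it only by dropping messages in the mailbox.
    public class CounterActor extends Thread {
        private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();
        private int count = 0;   // private state, touched only by this thread

        public void send(String message) throws InterruptedException {
            mailbox.put(message);
        }

        public void run() {
            try {
                while (true) {
                    String msg = mailbox.take();   // block until a message arrives
                    if (msg.equals("stop")) break;
                    if (msg.equals("inc")) count++;
                    if (msg.equals("report")) System.out.println("count = " + count);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            CounterActor actor = new CounterActor();
            actor.start();
            actor.send("inc");
            actor.send("inc");
            actor.send("report");
            actor.send("stop");
            actor.join();
        }
    }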

Wednesday, February 07, 2007

Misguided: The Road Not To Be Travelled

Update: This is turning into a lot of work. Why did I start this?

Oh, I remember. Because this is a really bad feature that could screw up a lot of software for years to come.

Quote of the Day

...from Phil Dawes...
Blimey, that’s gonna cause a bit of a ruckus.

Wrong Programming Model

Not much love over at LtU. So LtU readers -- here's the gist of my diatribe -- the programming model is simply not needed and will lead to shared memory code at least as complex as the stuff we're writing today. Should everyone switch to Erlang? Well, that is the general direction general-purpose languages should go. Switching from today's monitors to transactional memory is not going to be a baby step either! It's a huge step. In the wrong direction. So, yeah, as long as we have to step this way or that way to get into a multiprocessor, multinode, concurrent world -- let's do step in a direction that is a good bit simpler than the one we're in today *and* far simpler than the one proposed by the transactional memory folks.

Meanwhile back at the comments...

Guillaume Germain, the author of Termite, a really nice Scheme-like, pretty much shared-nothing, concurrent programming system, comments...

I think STM has some very attractive aspects, most notably efficiency and composability. I see it as a better replacement for other low-level concurrency constructs, but not for large scale systems.
The efficiency of a mechanism may not translate into efficient *use* of the mechanism. As for composability, I think that could very well turn out to be an academic attribute. Is this something most programmers should be composing concurrent threads with? No. Absolutely not. Most programmers should not be using shared memory threads at all.

Then Guillaume comes to his senses, 8^)...

I have a few concerns about it...

The first concern is that I'm not sure how much the composability of STM will scale. If layers upon layers of transactions are built, I fear some dependencies between layers might start surfacing at upper levels, causing surprising conflicts. It could become hard to get it right. I can see misguided programmers starting to sprinkle their code with 'atomic' statements ("just in case"), a bit like one would do with 'yield' statements in a non-preemptive concurrent system. Also, 'atomic' statements could be forgotten, causing nasty bugs...

But my main concern with it is that after all, STM is still shared-state concurrency. It mixes together the control flows of programs in ways that can be hard to visualize and comprehend. Erlang doesn't have that problem, because every "connection point" of a process with the exterior is obvious and well-defined...

in the end, it seems to only solve a small part of the problem, and it doesn't really help with the actual design of concurrent systems.

rektide then comments...
the one thing i really like about STM is that good implementations are exactly like ZFS, copy on update
And hey, if you want to build a persistent file system, by all means the mechanisms in ZFS are neat. But does this automatically transfer over to main-memory, shared-everything concurrent programming as a desired programming model for most applications? No way. I can't make that leap so easily. At all, really.

And he writes...

currently, any place where data over time is relevant requires user implementation to store history, which usually means debugging tools and printf(). to me, history is just a series of transactional states, and being able to lookup old states seems like second nature. i would really like to see more Saga like transactions available in the mainstream.
And I am all for making the history of state available and first-class. Yet there are many ways to do this within a sequential process, without requiring main-memory, shared-everything concurrent programming.
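
One hedged sketch of what first-class history can look like without any shared-memory transactions: inside a single sequential process, every update produces an immutable snapshot appended to a list, so old states can be looked up later. All names here are hypothetical.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // State history inside one sequential process: each update yields a new
    // immutable snapshot and the owner keeps the whole series. No sharing,
    // no transactions -- old states are just earlier elements of the list.
    public class AccountHistory {
        static final class Snapshot {
            final long version;
            final int balance;
            Snapshot(long version, int balance) {
                this.version = version;
                this.balance = balance;
            }
            public String toString() { return "v" + version + "=" + balance; }
        }

        private final List<Snapshot> history = new ArrayList<Snapshot>();

        public AccountHistory() { history.add(new Snapshot(0, 0)); }

        public void deposit(int amount) {
            Snapshot now = latest();
            history.add(new Snapshot(now.version + 1, now.balance + amount));
        }

        public Snapshot latest() { return history.get(history.size() - 1); }

        public List<Snapshot> allStates() { return Collections.unmodifiableList(history); }

        public static void main(String[] args) {
            AccountHistory account = new AccountHistory();
            account.deposit(10);
            account.deposit(5);
            System.out.println(account.allStates()); // every past state is still there
        }
    }
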
if you've got magic bullets for [distributed shared state], the world is always buying.
No. Avoid distributed shared state. That doesn't mean all concurrent, distributed problems go away. It just means they are that much more manageable. This shared everything transaction thingy makes the problem much worse than it has to be. I think the starry-eyed admirers see how hard shared memory monitor-based programming is. But that programming model should *never* have been admitted into the Java language to begin with. There are already better solutions than this proposed garble.

End

From the ACM Queue, "Multicore Programming With Transactional Memory"...

Transactional memory brings to mainstream... programming proven concurrency-control concepts used for decades by the database community... Under the hood, a combination of software and hardware must guarantee that concurrent transactions from multiple threads execute atomically and in isolation. The key mechanisms for a transactional memory system are data versioning and conflict detection.
By the way, there are a few folks at Gemstone Systems who've done this far better than anyone else. They've been doing it for over two decades and it's in production in heavy use at financial institutions, factories, shipping, and so on. They've got it working in distributed, shared, multi-user transactions with efficient distributed garbage collection. I pointed some researchers that way. Not the ones in this paper.

That said, this would be the most tragic turn imaginable for programming in the 21st century. There is no way I would want to do this in the small on one machine or in the large in a Gemstone-like system. That is the wrong way to program concurrency for most systems.

Very wrong. And it is scaring me how shiny this thingy looks in so many people's eyes right now. I don't think it will be so long before this shows up in Java and C#.

Wrong, Wrong, Misguided, and So Wrong

We need to be headed primarily toward shared *nothing*. Sharing at the level prescribed in this paper, whether with locks or transactions, is simply uncalled for 99% of the time. Sequential processes with shared-nothing message passing should be the direction.

Get better isolation mechanisms for conceptually multiple JVMs and dotnet runtimes in a single OS process. Better yet, turn these systems into legacies and just move on to something better for future work.

You can hold me to this: transactional memory, if turned loose in the wild, will turn out to be a *mess*.

Damien Katz writes in the comments (and since he agrees with me so well, I promote it here)...

I *completely* agree. Shared state threading is a hack built on to existing languages. A useful and fairly efficient one, but a hack nonetheless. And like raw pointers and unprotected memory, it's hard to justify its use for most programming tasks. Compared to languages like Erlang, it's nearly impossible to justify it on efficiency or performance concerns.

Transactional memory is another hack bolted on to all the previous concurrency hacks, and somehow it's going to make all the other concurrency hacks work reliably. Yeah right.

In another comment worth displaying up front, Dan Creswell expresses, well, let's have Dan's quote. It even deserves its own subtitles...

It's Deadly Shine'y

You're looking at premium car-wax, retina burning shine'y

I read the same article and it just frightened me. All that magic hidden under the covers, bleuch.

It's deadly shine'y because at the surface it appears like it just makes all the concurrency stuff such as locks "go away", in line with many a programmer's desire to ignore such details. Couple that with the familiarity most have with transactions and you're looking at premium car-wax, retina burning shine'y.

I don't even want to imagine debugging one of these systems. You're going to be confronted with some strange behaviors due to transaction conflict or whatever, and because it's all supposed to be done by magic under the covers, wading in there to understand what's broken will be a nightmare. It'll make debugging from Java thread-dumps look like a holiday.

8^)

More comments. This is great.

Here's part of one from Nat Pryce...

Does software transactional memory support coordination? Or must that be supported by other primitives, such as semaphores or monitor condition variables?
From what I can tell you still have to build up from the basic transaction mechanism and the shared variables. But it is worse than that.

Most systems today should be ignorant of the distinctions among "inside my OS process", "on my same node", and "on some other node". The benefits of that can be seen in Erlang, which Damien pointed out. This mechanism is still an "inside my OS process" mechanism.

Well, at the lowest level of runtime implementation of an OS process, you need some mechanisms like this. I would argue that all the machinery, hard and soft, for transactions is way overkill for that level of systems programming.

But to continue to have application programmers deal with this mess for the next umpteen years is nothing but ludicrous. As Damien also wrote, it's like continuing to have today's programmers deal with raw pointers and memory. No way -- very few programmers should be dealing with that level of complexity.

Nat also writes...

The interaction between distributed (and therefore concurrent) activities involves two things: data transfer and coordination. Synchronisation to avoid race conditions is just one form of coordination. Systems also need to coordinate activities that don't share data.
Yes, absolutely. Let's focus on the real problem for software development. Transactional memory is *not* it.

And now sigfpe (Dan Piponi?) writes...

it's nice to know that there are other people in the world having issues with STM.
Yes, now's the time to raise a ruckus.

And Cale Gibbard comes to the defense of the dang thing...

The major advantage of STM as a system is that it gives you certain guarantees about composability of already working systems. It's not magical, it doesn't always ensure that things will play nicely together, but it does give you far more guarantees about the correctness of compositions than previous systems have.
But Cale, there's not that much Haskell code out there yet anyway. Don't let's have the Haskell people start in with transactional memory just because they're still trying to demonstrate they can do imperative programming better than the rest of us.

The world is getting ready for shared nothing, semi-functional programming. Get out of the backwater and catch up to your audience.

And if some group is going to retrofit transactional memory into some significant Java or C# system, well, they would be far better off investing that time in a simpler, shared-nothing rewrite in a language like Erlang, a simpler language like Smalltalk, or even on top of a better shared-nothing coordination mechanism like Javaspaces.

All that low-level monitor-based garble? Just leave it as it is now. Walk away from it slowly. Turn your back, and run.

Cale again...

That being said, if you understand the additional compositionality guarantees, what exactly is it that you find lacking?
Em, simplicity? Em, elimination of totally unnecessary mechanisms too far removed from the domain problem?

And on...

Limiting shared state is obviously good -- there's nothing in this system which prevents that. However, it's a system for cleanly sharing any state which should be shared.
Yeah, right. People would abuse that mechanism all to bloody hell. Given that a relatively small group of programmers can develop the various soft real-time Erlang systems we've seen over the last several years, there is no indication in my mind that the effort required to implement and teach this transactional memory thingy would benefit anywhere near 90-some percent of the programmers on this earth. No evidence whatsoever.
Other forms of communication, including those in Erlang still have this problem to deal with.
Yes, complex problems remain with implementing concurrent systems. But mechanisms like Erlang's raise them to a higher level, closer to the problem domain. This transactional memory is a false hope covering a bottomless pit that could have easily been walked around in the first place. Stay on the path.
Suppose you have a bank server with clients happily communicating deposits and withdrawals to it, and everything is working. How do clients implement a transfer between accounts safely?
First, let's not turn every little data structure into a bank account transfer problem. That's just not the case. Second, the cases that really do exist should be well isolated and the mechanisms not put in every programmer's hands. Third, account transfers have been occurring quite a bit over the years without this new transactional main memory thingy. Why complicate everything for everybody, even if this were the best way to transfer funds???

Sorry, that is just a *really* unconvincing example.

Anyway, this is getting long and I probably can't do as good of a job of it as the paper can, so everyone please check it out before writing more gibberish about transactional memory.
Yep, read 'em. This is not the first time I've commented on it. Just now it seems like it is gaining momentum. The people writing the gibberish are those inventing these things without comprehending the damage they will do.

Monday, February 05, 2007

WS-DCOM

Tim Bray writes...

WS-*? In the real world, it’s about being able to interoperate with WCF, and while that’s a worthwhile thing, that’s all it’s about. It’s not like HTTP or TCP/IP, truly interoperable frameworks, it’s like DCOM; the piece of Windows’ network-facing surface that Microsoft would like you to code to. For now, anyhow; it’ll be at least as easy as DCOM to walk away from when the embarrassment gets too great.
Which is the way it should be, since SOAP essentially exists because DCOM was even worse.

Sunday, February 04, 2007

TSS: Using Javaspaces

There is a so-so article on Javaspaces over at TSS, but it has spawned a long, interesting thread covering multiple topics. After undergoing a substantial signup process and clicking on the url in their email, TSS still refuses to allow me to participate. That sucks, but I'll just put my various responses here. Sites that create barriers to participation irk me.

Going down the comments, pulling out what catches my eye. Some of my responses are clarifications, some are educated guesses. Too bad these can't be in-line for others to correct.

"public fields"

The top level object written to a space implements Entry. Only the public fields are marshalled to/from the space. This strikes people as funny at first. Ken Arnold has a rationale. This is another of those things that should be in the core documentation.

First of all, the objects those public fields refer to have all their data serialized. (They implement Serializable.) It is only the top-level Entry whose marshalling considers just the public fields.

The analogy Ken gives is that an Entry's public fields are like the parameters to an asynchronous procedure call. Read his explanation. It works for me. This choice also ties into keeping things simple for the application developer and the space implementor. More elaborate choices bring more complexity.
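
For anyone who has not seen an Entry, here is a minimal sketch of the shape. The Entry interface is the real Jini one (net.jini.core.entry.Entry); the class and field names are mine. The top-level class exposes public, Object-typed fields and a public no-arg constructor; whatever those fields reference is just ordinary serialized data.

    import java.io.Serializable;
    import net.jini.core.entry.Entry;

    // The data an Entry field points at is ordinary serialized state.
    class OrderDetails implements Serializable {
        final String sku;
        OrderDetails(String sku) { this.sku = sku; }
    }

    // The top-level object written to a space: public fields only, each an
    // Object (wrapper types rather than primitives), plus a public no-arg
    // constructor. Only these top-level fields follow the "public fields"
    // rule; whatever they reference is serialized wholesale.
    public class OrderEntry implements Entry {
        public String customer;        // marshalled individually, usable for matching
        public Integer quantity;       // Integer, not int -- fields must be Objects
        public OrderDetails details;   // referenced data, serialized as a whole

        public OrderEntry() {}         // required public no-arg constructor

        public OrderEntry(String customer, Integer quantity, OrderDetails details) {
            this.customer = customer;
            this.quantity = quantity;
            this.details = details;
        }
    }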

"[spaces] works best when the problem is... 'data-centric'... as opposed to 'process-centric'"

I am not sure what this means. Spaces are good for distributed processes as well as data. An Entry and its referenced data are marshalled along with their codebase so that systems reading and taking them can use their code without it being on that system's classpath a priori. That is very powerful.

"Javaspaces is a poor model for building large-scale... non-holistic data-intensive compute work-loads"

Maybe. I don't know. What the heck does this even mean?

"I'm assuming that when you are altering something in a space, there's some sort of locking on it"

Nothing can be altered while in a space. Those things are not in a JVM per se. (Although a JVM may be used in the implementation.) Each public field is marshalled independently on a write, and then marshalled back into a JVM on a read or take. Object identity is not preserved. So if JVM #1 does a write and then a take of some Entry and its data, there is now the original Entry and data, plus a new deep copy of the Entry and data.

And so you can see clearly that a Javaspace is *not* a cache, and it is *not* an OODB for Java objects. It is something else altogether that can serve many purposes. The way to exclusively modify something is to take it, update it, and write (a copy of it) back to the space.
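
Here is a hedged sketch of that take/update/write cycle using the standard JavaSpace operations. The entry class and the increment scenario are hypothetical, and lookup of the space itself is omitted.

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // Exclusive update, JavaSpaces style: take the entry out (no one else can
    // see it while we hold it), modify our local copy, then write a fresh
    // copy back. Nothing is ever modified "in place" inside the space.
    public class TakeUpdateWrite {
        public static class CounterEntry implements Entry {
            public String name;
            public Integer value;
            public CounterEntry() {}
            public CounterEntry(String name, Integer value) {
                this.name = name;
                this.value = value;
            }
        }

        // 'space' would come from a Jini lookup; how it is found is omitted.
        static void increment(JavaSpace space, String counterName) throws Exception {
            CounterEntry template = new CounterEntry(counterName, null); // null = wildcard

            // take() blocks until a matching entry exists, then removes it.
            CounterEntry current = (CounterEntry) space.take(template, null, Long.MAX_VALUE);

            // Update the local copy and write it back. This is a new entry,
            // not the old one -- identity is not preserved across the space.
            current.value = Integer.valueOf(current.value.intValue() + 1);
            space.write(current, null, Lease.FOREVER);
        }
    }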

"what, if any, value JavaSpaces has vs. messaging"

I think if you want widespread, anonymous publish/subscribe of data across many disparate business processes, then something like JMS or AMQP is a good choice.

JavaSpaces can be used to implement pub/sub-like behavior but that is not its only, or even core, strength. Likewise a space can be used to implement queue-like structures with topics (i.e. the public fields of an Entry acting like topic information as well as payloads).

The big message is that JavaSpaces can be used to more quickly create a wider variety of coordination conventions, like sparse, distributed arrays of objects, hierarchies of objects, and so on. Moreover, those objects are marshalled with their *codebase* urls for other participants to load. JMS has no such capability to my knowledge.

There is less setup, and more options, e.g. a JavaSpace can look more like a simple database with a JVM doing the writing and taking of its own Entry instances.

"[JavaSpaces] guarantees persistent storage"

More accurately, I hope, a JavaSpace *leases* storage. Leases can expire, not be renewed, etc. So there is no guarantee of persistent storage forever, although in some cases this could be provided.
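
A small hedged sketch of what "leased, not guaranteed" looks like in code: write() hands back a Lease that the writer is responsible for renewing or letting lapse. The entry class and the durations are made up.

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // Storage in a space is leased: write() returns a Lease, and if nobody
    // renews it the space may eventually discard the entry.
    public class LeasedWrite {
        public static class NoteEntry implements Entry {
            public String text;
            public NoteEntry() {}
            public NoteEntry(String text) { this.text = text; }
        }

        static void writeForAWhile(JavaSpace space) throws Exception {
            // Ask for ten minutes of storage; the space may grant less.
            Lease lease = space.write(new NoteEntry("hello"), null, 10 * 60 * 1000L);
            System.out.println("granted until " + lease.getExpiration());

            // Keep the entry alive by renewing before it expires...
            lease.renew(10 * 60 * 1000L);

            // ...or cancel it (or just let it lapse) and the entry goes away.
            lease.cancel();
        }
    }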

"How does Coherence relate to this"

I don't know that much about it, but it seems to me Coherence implements various forms of distributed, shared java.util.Map implementations. Based on the choice, more or less of the Map exists in the application JVM, and locks, etc. are used to update entries more or less atomically.

A JavaSpace exists outside of any of the participating JVMs. Locks are used to get an Entry to/from a space, but there is no update in place at all. Neither are there key/value pairs. Just Entry instances with public fields.

"a Java only solution"

Yes, unless you go with Gigaspaces which has support for C/C++ and dotnet.

Or if you can take a "Java in the middle" position then the Jini parts of Jini/Javaspaces allow integration with other languages and protocols. But Java does have to show up in the middle of everything, and everything else is essentially second-class.

"why are relational databases so dominant?"

Query languages and long-lived support for static data that survives multiple generations of programming languages, etc. Not great, but they've been around for the better part of 30 years.

"no booleans, ints, or doubles"

True, the public fields of an Entry have to be Objects. An Entry is used to "query by example" in a very simple way, and null means "don't care" for that field.

This is not such a big deal, especially with Java 5, whose autoboxing gives better support for mixing primitive types and their Object equivalents.
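
A quick hedged sketch of the query-by-example style this enables (the class and field names are hypothetical): the template's non-null fields must match exactly, and the null fields match anything.

    import net.jini.core.entry.Entry;
    import net.jini.space.JavaSpace;

    // Query by example: build a template entry, leave "don't care" fields
    // null, and the space returns a matching entry (or null if none is
    // available within the timeout).
    public class QueryByExample {
        public static class StockTick implements Entry {
            public String symbol;
            public Integer price;    // wrapper type, never a primitive
            public StockTick() {}
        }

        static StockTick findAnyTickFor(JavaSpace space, String symbol) throws Exception {
            StockTick template = new StockTick();
            template.symbol = symbol;   // must match exactly
            template.price = null;      // null means "don't care"
            return (StockTick) space.read(template, null, JavaSpace.NO_WAIT);
        }
    }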

"What Gelernter envisioned..."

I think it is only fair to say that Tuple Spaces was an *influence* on JavaSpaces. I do not believe that the intention was to implement the strict definition of Linda.

"Croquet"

My understanding of Croquet and TeaTime is that the intention is to keep shared objects in sync while allowing concurrent modifications among all the participants. A space is different in that participants may come and go, only one participant can update an object, and moreover the update only occurs on the *copy* that was read or taken from the space. When the update is written back, it will be a new thing, not an update.

"why is open source risk free"

I don't think it is risk free. It reduces some risks, e.g. it reduces the risk of having to update on a vendor's schedule. It also reduces the risk of not being able to deploy as many instances as desired, when desired, i.e. the potential negative results of a combination of licensing structures (e.g. per CPU) and IT budgets (e.g. not wanting to license a lot of development and test environments, or more than a minimum number of production machines).

It is not just about source code. I've worked for vendors and have been a customer of vendors, where source code was part of escrow agreements... if something goes wrong, the code is in escrow and should become available to the customer.

Many commercial vendors, such as Confluence (wiki) and Cincom Smalltalk, provide access to, and even modification of, their source code within limits.

Rabbit, Run: RabbitMQ

In the too cool category, those LShift wizards come up with a doozy...

We’re proud to announce that the project we’ve been working on for the past few months, RabbitMQ, has been released. RabbitMQ is an AMQP server written using Erlang/OTP. Check it out at http://www.rabbitmq.com/ - or you can go straight to the downloads page for sources and binaries.
What a great combination: Erlang is perfect for scaling out, reliability, persistence, and so on. Plus with AMQP's binary message format and Erlang's bit pattern matching, another, em, match made in heaven.

This could end up kicking some enterprise ass. What's Iona going to use? C++? Java?

Here is the RabbitMQ Java API documentation. Erlang itself has good support for integration with Java and C.

Validating Dynamic Systems

Let's pull our heads out of our enterprisey IDEs, code generators, and J2EE containers. Consider this from Gregor Hohpe...

During our talk we mention an "advanced" technique that would not just render an image of the system model, but also examines the model and alerts the user about potential problems. It does so by applying known rules for "do's" and "don'ts" to the model. These rules could be as simple as "circles in your dependency graph are bad" or one of those sophisticated self-learning AI algorithms that we never quite understand...

One of the key messages we are trying to highlight during our talk is the importance of mapping the captured data to a model that is suited to answering the questions you are interested in. This model can be a graph, a process or any other abstract representation of your system. Making a model is important for a number of reasons...

Which reminds me of something I forgot to mention a few weeks ago: the second edition of Concurrency: State Models and Java Programs is available. This is a really nice book whether you are a Java programmer or not. Chances are you are using concurrency if not distribution, or will soon.

The second edition has more support for dynamic events and systems.

Not So Bad

Jim Washington compares Python implementations of JSON, including my now-quite-old json.py. It's slow relative to the fastest implementation, but the intent always was to have a fairly clear implementation without trickery or dependence on other modules.

Bugwise it holds up pretty well to Jim's tests. I've not worked on it since JSON adopted scientific notation, hence all those exceptions about finding an "e" when parsing a number.

There are a few other problems, but it holds up pretty well.

Discouraging Words

There are many things I like about Jini/Javaspaces, but as I wrote over on Dan Creswell's blog recently, the documentation...

...needs some work...

Hopefully the long-time participants understand just how bad it is. It is discouragingly bad.

Especially for software that's been around for the better part of a decade. Do you want more Jini adoption? Get better documentation for it.

I hope to help. I currently need to build software for enterprise Java programmers, and this is some of the best stuff going.

About Me

Portland, Oregon, United States
I'm usually writing from my favorite location on the planet, the pacific northwest of the u.s. I write for myself only and unless otherwise specified my posts here should not be taken as representing an official position of my employer. Contact me at my gee mail account, username patrickdlogan.