"I have a mind like a steel... uh... thingy." Patrick Logan's weblog.

Search This Blog

Sunday, December 30, 2007

Milemeter: Buy Your Insurance By the Mile

Brady Forrest writes about Milemeter, a new insurance startup. First, the words "insurance" and "startup" seem to form an oxymoron. But I don't see them listed, so I read on.

An interesting aspect of insurance to me is that the "product" is completely information-based (most of my career has been in software for the design and manufacture of tangible, electronic products; I currently work in the insurance industry). Unlike some other information-based products in financial services, though, insurance seems to be more heavily regulated, and the period between events is long (days, weeks, months). The most successful sale of a product (i.e. no claims, no changes to the coverage) has one significant event a year, (re-)issue, and some minor billing/payment events.

Compare this to financial portfolios, which can be wide ranging and nearly unregulated (cf. the current sub-prime crisis and obscure bundling as supposedly secure instruments), and the period between trading events can be measured in sub-seconds and/or distributed around worldwide markets.

On the one hand there does not seem to be a lot of pressure to change the way information technology works in insurance. On the other hand all these aspects seem to open up new opportunities for change.

Brady observes...

As you may have guessed, they are built on AWS (you can see a video discussing their usage on their blog). They are also using Ruby on Rails with Postgres...

This is what I want to see, large, black-box industries being taken down and made consumer-friendly. (Can the health system please be next?) I don't really know what I pay for with my current insurance, but with Milemeter I'll have a much better understanding.

The internet will sooner or later affect all these industries. (Amazingly, much of the current B2B transaction traffic in insurance takes place over proprietary networks; that is, when it is automated at all.)

The established insurance IT has to get its cost of change significantly lower. The best way to do this is to copy the way software is developed for the internet. As Milemeter demonstrates, this will come from the "outside" whether or not the "inside" is ready for it.

I had a chance to visit with Steve Loughran and some of his local friends, when Steve was in Oregon last week. We had a good talk about all these changes, where they are trending, and which kinds of organizations are doing what along those trend lines.

There is no doubt the cost of change is the limiting factor keeping established organizations from following those trends as aggressively as possible. The opportunities are there and the pressure to change will increase.

Chris Gay of Milemeter notes in the video, linked above, "Amazon Web Services is a pay-as-you-go infrastructure and Milemeter is a pay-as-you-go insurance provider". The ability to use nimble infrastructure(s) will aid the product itself to remain nimble.

Friday, December 21, 2007

Patterns of Change

Reg Braithwaite knows...

Of course I recommend reading the original. But may I add, please do not get sucked into arguing whether Design Patterns are good, or whether IDE refactorings really work, or any of the other technical points that are so much fun to rehash for the millionth time.

Instead, consider the cultural forces at work. Cultural problems cannot be solved with technology. If you are an advocate for change, ask yourself what sort of cultural change is needed, not what sort of technical problems need to be solved.

Thursday, December 20, 2007

RubyCamp 2008 in Vancouver

This announcement came around...

RubyCamp 2008 in Vancouver on Saturday, January 26th.

RubyCamp is a one-day gathering for Rubyists and Railers.

When and Where:

WorkSpace in downtown Vancouver, B.C., Canada
January 26th, 2008 from 9:00 to 5:00

We've been getting into JRuby here, and a couple days in Vancouver might be nice. This may go on the list of game-time decisions.

Sunday, December 16, 2007

Where's the Beef?

Subbu Allamaraju shows that a significantly more RESTful api than Amazon's SimpleDB is not so difficult to conceive.

The SimpleDB API is neither resource oriented nor HTTP friendly. Having said that, how should such an API be designed in a resource-oriented manner? Here is my take, a version-0.1 of a RESTful SimpleDB. In the design below, I tried to keep the semantics of this version as close as possible to the official SimpleDB API
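To make the contrast concrete, here is a rough sketch in Ruby. The URL shapes below are illustrative only (neither Amazon's actual parameters nor Subbu's exact proposal): the RPC style names the operation in the query string, while the resource-oriented style names a thing and lets the HTTP method carry the verb.

```ruby
# Illustrative only: SimpleDB's real parameters and Subbu's proposed
# URLs may differ; the point is the shape of the two styles.

# RPC style: one endpoint, operation named in the query string.
def rpc_url(operation, domain, item = nil)
  query = "Action=#{operation}&DomainName=#{domain}"
  query += "&ItemName=#{item}" if item
  "https://sdb.example.com/?#{query}"
end

# Resource-oriented style: the URL names a resource, not an action;
# GET/PUT/DELETE against the same URL express the operations.
def resource_url(domain, item = nil)
  path = "/domains/#{domain}"
  path += "/items/#{item}" if item
  "https://sdb.example.com#{path}"
end

puts rpc_url("DeleteItem", "books", "item123")
# one URL per operation, even destructive ones
puts resource_url("books", "item123")
# one URL per resource; DELETE on it removes the item
```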

The Wayback Machine: Internet Archive

James Robertson and Robert Scoble lament the loss of one's data on old sites they themselves do not own. But the Internet Archive's Wayback Machine does a reasonable job of rescuing this. (Could the Wayback Machine rescue your Facebook data if that were to disappear?)

e.g. my old Radio Weblog is no longer around: http://radio.weblogs.com/0100812/

However the Wayback Machine can find it: http://web.archive.org/web/*/http://radio.weblogs.com/0100812/

Note that the Wayback Machine continues to archive the "not found" page. Actually, whatever service is running the old Radio site is returning a 300, "multiple choices", rather than a 404, "not found".

So using the archive you can actually see when my Radio went off the air, sometime between December 7, 2003 and April 5, 2004.

It looks like the archive has not been able to retrieve at least all the longer essays, which Radio stored under a "story" URL.

Also without exhaustively searching the archive, the earliest criticism of WS-* that I have found of mine is August 25, 2002. See "Protocols, Documents, and Transactions"


How did Amazon allow this design out into the wild? Apparently they've already had a private beta period with non-Amazon developers. No one suspected this use of HTTP GET to be a poor choice?

It's 2007. There are all kinds of basic REST reference and getting-started materials. Didn't the whole net go through that "Google is pre-caching everything it can GET" episode a couple of years ago? That's the episode where the Rails folks learned that GET should not have side effects.
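A toy sketch of why this matters, with entirely hypothetical routes: a prefetcher or crawler follows every link it sees with GET, so a service that puts destructive operations behind GET links loses data to any well-behaved crawler.

```ruby
# A toy in-memory "service" with hypothetical routes, illustrating
# why GET must be free of side effects: a prefetcher blindly GETs
# every link it can find.
class ToyService
  attr_reader :items

  def initialize
    @items = { "a" => 1, "b" => 2 }
  end

  # Safe: GET only reads.
  def get(path)
    key = path.split("/").last
    @items[key]
  end

  # Unsafe design: a "GET /delete?item=a" style link destroys data.
  def get_with_side_effect(path)
    key = path[/item=(\w+)/, 1]
    @items.delete(key)
  end
end

service = ToyService.new
# A prefetcher crawling safe GETs changes nothing:
%w[/items/a /items/b].each { |p| service.get(p) }
service.items.size   # still 2

# The same crawler hitting side-effecting GETs deletes everything:
%w[/delete?item=a /delete?item=b].each { |p| service.get_with_side_effect(p) }
service.items.size   # now 0
```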

Wow. Hopefully Amazon will POST a fix asap.

Subbu Allamaraju calls this "SOAPy REST". Didn't even the SOAP folks learn to avoid these kinds of GETs a couple years ago?

IBM published an article recently on using HTTP to access DB2 and IDS. The URLs they provide need improvement; they embed the operation in the URL. This despite the fact that they do seem to distinguish between GET and POST. From the information at that site and another bit I could get behind their developerWorks login, it's not clear to me yet whether they allow the use of GET with a URL that includes a destructive operation.

Maybe the RESTful Web Services book will begin to turn these things around. But that Amazon did this in 2007 is a bit of a shock.

Update: Steve Loughran nominates SimpleDB for a new award...

I nominate it for the 2007 Restless awards, in the much contested category of

"things that claim to be RESTful but do side effects in their GETs" along with the ever popular "SOAP endpoint in disguise" category

I know this mailing list has not, historically, had such awards, but now is as good a time to start as any.

Yeah, maybe there should be annual (or, on Internet time, more frequent) awards for "RESTless" and "RESTful" web services that degrade or enhance, RESTpectfully, the web.

Friday, December 14, 2007

Column-Oriented RDF Storage

A couple years ago it occurred to me that a database with column-oriented storage, such as Sybase IQ, might make a reasonable database for storing RDF data, where a star schema can be seen as a way to represent related tuples together.

Now it turns out some folks have been working on such a thing... C-Store is an open source, column-oriented database. The paper "Scalable Semantic Web Data Management Using Vertical Partitioning" (pdf) discusses using C-Store for RDF...

Efficient management of RDF data is an important factor in realizing the Semantic Web vision. Performance and scalability issues are becoming increasingly pressing as Semantic Web technology is applied to real-world applications. In this paper, we examine the reasons why current data management solutions for RDF data scale poorly, and explore the fundamental scalability limitations of these approaches. We review the state of the art for improving performance for RDF databases and consider a recent suggestion, “property tables.” We then discuss practically and empirically why this solution has undesirable features. As an improvement, we propose an alternative solution: vertically partitioning the RDF data. We compare the performance of vertical partitioning with prior art on queries generated by a Web-based RDF browser over a large-scale (more than 50 million triples) catalog of library data. Our results show that a vertical partitioned schema achieves similar performance to the property table technique while being much simpler to design. Further, if a column-oriented DBMS (a database architected specially for the vertically partitioned case) is used instead of a row-oriented DBMS, another order of magnitude performance improvement is observed, with query times dropping from minutes to several seconds.
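The vertical-partitioning idea itself fits in a few lines of Ruby: take a flat list of (subject, property, object) triples and split it into one two-column (subject, object) table per property, which is exactly the layout a column store handles well. The sample triples here are made up.

```ruby
# Hedged sketch of vertical partitioning: one (subject, object)
# table per property, built from a flat triples list. Data made up.
triples = [
  ["book1", "title",  "RDF Basics"],
  ["book1", "author", "Smith"],
  ["book2", "title",  "Column Stores"],
  ["book2", "author", "Jones"],
]

# Flat triple-store layout: every query scans one big table.
# Vertically partitioned layout: group the triples by property.
tables = Hash.new { |h, k| h[k] = [] }
triples.each { |s, p, o| tables[p] << [s, o] }

tables["title"]
# => [["book1", "RDF Basics"], ["book2", "Column Stores"]]
# A query touching only "author" now reads only the author table,
# the access pattern column-oriented DBMSs are built for.
```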

Thursday, December 13, 2007

Release It Again

Pete Lacey -- what he said, about Release It and Michael Nygard's blog.

If you find yourself in a panic over some centralized resource, ask whether the full costs are accounted for. What alternatives might exist for decentralizing, and how do the cost/benefits really add up over time?

The cost of operations is dropping. The cost of change is still too high for many to take advantage of that though. By the time we can get our systems onto more budget-friendly architectures... well, I guess heading in that direction puts you on the path toward even better things.

Depending on the business, if one extrapolates out from one's current position, through the point where more open/available/scalable systems are in use... well, then is this evidence that for most of us, our ultimate position is out in "software as a service/utility" land? Exactly who should be in the data center business five to ten years from now?

Another data point shows up. (Oh, and speaking of cost of change, it's in Erlang and it provides a REST api available from any language.)

From the SimpleDB FAQ...

Q: Where is my data stored?

Amazon SimpleDB stores your data in our multiple data centers in the United States. We anticipate adding other geographies over time.

Q: Does Amazon store its own data in Amazon SimpleDB?

Yes. Developers within Amazon use Amazon SimpleDB for a wide variety of projects. Many of these projects use Amazon SimpleDB as their authoritative data and query store and rely on it for business-critical operations.

So, "business-critical" seems kind of reliable. Not just scaled out databases, but scaled out data centers. Those are even more expensive to operate on your own.

Yahoo Flex Skin, Other Flex News

Among other recent Flash/Flex/Air/Adobe news, Yahoo released an open source skin for Flex.

Elsewhere new versions of Flash/Flex/Air from Adobe have been released and in some cases newly opened up.

I don't feel compelled to use the Flex Data Services, but for those who do, or when I do, today's news should be encouraging. You can use their open source implementation, or use their open specification for an alternative, e.g. in some other non-JVM language.

Saturday, December 08, 2007

Embedded Lightness

I was out with a long-time friend and former co-worker last night. He's been working the last year or so on software for managing blade servers. Technically this is an "embedded system". Some veteran embedded system developers on his team have been learning just how much their field has changed.

Essentially the system consists of linux, lighttpd, php, and sqlite. Where the veterans were set on writing a lot of C and custom protocols for handling events in the server (blades coming and going, for example), they've been able to do everything using the web's architecture. They wrote some C to enhance the php interface to snmp.

They had some pre-existing C code that has a memory problem after some number of days. No problem, they just kill and restart the process well before that. When that code becomes the most important problem, they'll look at it.
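The kill-and-restart tactic can be sketched as a watchdog loop. This is a simulation, not their code: the worker, its leak rate, and the threshold are all made up, and the "process" is an in-memory object rather than a real forked process.

```ruby
# Watchdog sketch: restart a leaky worker well before its
# (simulated) memory use becomes a problem. Numbers are made up.
class LeakyWorker
  attr_reader :memory_mb, :restarts

  def initialize
    @memory_mb = 10
    @restarts = 0
  end

  def work
    @memory_mb += 5        # simulated leak: grows every cycle
  end

  def restart
    @memory_mb = 10        # kill-and-restart resets the leak
    @restarts += 1
  end
end

LIMIT_MB = 50
worker = LeakyWorker.new
20.times do
  worker.work
  worker.restart if worker.memory_mb >= LIMIT_MB
end
worker.memory_mb < LIMIT_MB   # true: it never runs long enough to blow up
```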

Job Opportunity

Mike's got a job posting up at his new gig. You'd be working with a smart, funny, curious (in the good sense!) project manager/developer, on an interesting problem, with a high-powered executive team.

You don't have to live in Portland, although why would you want to live anywhere else?

Monday, December 03, 2007

Safety Dance

Douglas Crockford at XML Conference 2007. You can dance if you want to...

The current browser implementations have problems because they share all of the information between the current sessions (problems with cookie stealing, replay attacks, and chrome changes). That’s the dangerous web 1.0. Now, we’re intentionally mashing stuff up (which we’d always tried to prevent when unintentional).

Meet the new Reg...

Paul Fremantle on WSO2's registry work...

For a while I've been thinking that the SOA registry space has been a little overcomplicated...

So fundamentally the approach we have taken is to build a registry/repository based on REST concepts. And as we looked at the REST space, we kept noticing how close the Atom Publishing Protocol (APP) is to our needs, so we've made that the public remote API to access the repository. Of course, if you are just browsing the registry, you only need a browser - APP is mainly there to support updating resources.

Of course, using Atom and APP gives some really nice benefits too - like being able to subscribe to a feed of new resources that meet your search criteria.

Glen Daniels was talking about this registry in the hallway at QCon a few weeks ago. Nice. You could definitely see Glen's gears turning during the sessions and discussions.

Sunday, December 02, 2007


Some folks have been quoting Reg Braithwaite lately. (Steve Vinoski did just a bit ago.) Here's another...

I have never met someone who desperately wanted to be great but failed to be at least decent.
That's a good observation. And almost Goethe-like.

Saturday, December 01, 2007

The name of this band is Talking Heads

I was poking around youtube for the Late Night with David Letterman show (the one back on NBC), the episode back around 1983-84, where the entire screen rotated clockwise over the course of the hour.

I've not found a clip yet. But I've been watching other clips, like this one...

Amazing. There was a time as an early 20-something when we *had* to watch Late Night and talk about it the next day. The current Letterman is more mild, but still bits like, "Is this anything" and "Will it float" provide glimpses back to the original show, which at the time seemed like "anything goes".

I wish I could find a clip of that episode with the screen rotating slowly over the hour.

Andy Kaufman and Elayne Boosler

Andy Kaufman and Elayne Boosler...

Boosler, I think, has gone underappreciated over the years. She keeps up with Kaufman here. Painfully hilarious improv.

Friday, November 30, 2007

"http is like air"

Quote of the day from an IBM exec on the importance they are placing on rest, http, atom being everywhere.

Wednesday, November 28, 2007

And Then There Were Three... Then Five Again

Mike? He gone.

He formed the band, a little over a year ago, chartered to launch an open source project within a large "vertical". That was four charters, a CIO, one viking, and Joanne ago.

Don't ask. That's the nature of large "verticals" and being a small team, unlike any other, weaving in and out of the established structures. We got pretty good at writing charters on the wiki.

But in between we've done some pretty good work across a couple tables of old Dells running the Ubuntu server. And we're still being asked to be creative, agile, and infectious, which is not often the nature of large "verticals".

This fourth charter could stick. We're back to a full complement of five. And a real budget. And simple tools for restful web services.

Good luck, Mike. You made a lot of good things happen and made them a lot of fun. And took more than one for the team. Now your original charter has come around... nice.

Bird of prey.

Learning Scalable Internet Services

Via Planet Trapexit and this interesting post from RightScale on the network performance of Amazon EC2 and S3, I found this fascinating page from a UC Santa Barbara course...

"CS290F - Scalable Internet Services"

The project consisted of building a transactional dynamic web site in Ruby on Rails and running on Amazon's Elastic Compute Cloud (EC2). Each site had to hold >100'000 database records that could be searched and explored, have user accounts, and include some form of transaction, such as a shopping cart check-out.

Each project then had to be deployed on multiple servers on EC2 and the groups had to use httperf to demonstrate that they could scale the performance of their site by running a front-end load balancer server, a database server, a memcached server, and up to 10 application servers. All this had to fit into a 10-week quarter, with none of the students knowing either Ruby or Rails at the outset!
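The front-end piece of that setup can be sketched with a toy round-robin balancer. Server names are made up, and a real deployment would use a dedicated balancer in front of the app servers rather than anything like this.

```ruby
# Toy round-robin load balancer in the spirit of the course setup:
# one front-end spreading requests over N app servers. Names made up.
class RoundRobin
  def initialize(servers)
    @servers = servers
    @next = 0
  end

  def pick
    server = @servers[@next % @servers.size]
    @next += 1
    server
  end
end

lb = RoundRobin.new(%w[app1 app2 app3])
6.times.map { lb.pick }
# => ["app1", "app2", "app3", "app1", "app2", "app3"]
```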

More information from the instructor's RightScale blog. The wiki link there is broken, but at least pieces are still around, including the course material.

Sunday, November 18, 2007


From Jean-Jacques Dubray...

I am a bit saddened by the Open Source community mainly siding behind REST. REST will not get you there, I hope you guys know what you are doing, because this could be a historical mistake. I don't see any serious open source Composite Application Platform.
BTW, many of the links are broken on your site.


JRuby and JXPath

Final Update:

Bottom line: with the current implementation of JRuby, implementing this does not seem possible. A near-future release is supposed to improve the Java/Ruby integration. In this case what is needed:

  • A Java class that is the superclass of all JRuby classes. Similar to the PyInstance class (a Java class) in the Jython implementation.
  • A Java class that corresponds to a class defined in JRuby such that when instantiated via newInstance() in Java also creates a corresponding JRuby instance. Similar to the JythonPropertyHandler class I defined in Jython, corresponding to the JRubyPropertyHandler class I attempted to define in JRuby, below.
End Update

Sometimes I drink to forget, and sometimes I just forget. Several years ago I blogged about using JXPath to query Jython objects. Sometime since then I forgot all about JXPath. I remember it now being fairly simple, and working with all kinds of structures. (And so JXPath should be a fairly simple way to use XPath to query JSON objects in Java, BTW.)
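As a rough pure-Ruby analogue of what JXPath offers (this is not JXPath, and it handles only child steps and numeric indexes, not full XPath), a slash-separated path can walk a nested structure the way JXPath walks beans or JSON:

```ruby
# Rough analogue of JXPath's idea in plain Ruby: evaluate a simple
# slash-separated path against nested hashes and arrays.
# (Real JXPath supports full XPath; this handles only child steps
# and numeric indexes.)
def xpath_lite(obj, path)
  path.split("/").reject(&:empty?).reduce(obj) do |node, step|
    case node
    when Array then node[step.to_i]
    when Hash  then node[step]
    else nil
    end
  end
end

doc = { "vendor" => { "name" => "Acme",
                      "locations" => [{ "city" => "Portland" }] } }

xpath_lite(doc, "/vendor/name")               # => "Acme"
xpath_lite(doc, "/vendor/locations/0/city")   # => "Portland"
```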

I started playing with JRuby recently to see how it's doing. When I came across the Jython code I just mentioned, I thought I'd try the equivalent in Jruby. Here's what I have so far, but I have two question marks preventing it from running. I sent them to the JRuby users list, but if anyone reading this has an answer or a hint, I'd appreciate that.

class JRubyPropertyHandler
  include org.apache.commons.jxpath.DynamicPropertyHandler

  def getPropertyNames(robject)
    robject.instance_variables.to_java
  end

  def getProperty(robject, property)
    return nil unless robject.instance_variables.include?(property)
    robject.instance_variable_get(property)
  end

  def setProperty(robject, property, value)
    robject.instance_variable_set(property, value)
  end

  def JRubyPropertyHandler.register()
    introspector = org.apache.commons.jxpath.JXPathIntrospector
    introspector.registerDynamicClass(???, ???)
  end
end
It's that call to registerDynamicClass (javadoc) that has me stumped. The first argument is a class of all the instances that should be handled by JRubyPropertyHandler.

The second argument ideally should just be JRubyPropertyHandler. i.e. not an instance of the handler, but the handler class itself.

I tried calling...

    introspector = org.apache.commons.jxpath.JXPathIntrospector
    introspector.registerDynamicClass(Object, JRubyPropertyHandler)
But these two arguments are each instances of org.jruby.RubyClass, not java.lang.Class as needed.

In Jython the two arguments to register a handler are PyInstance, the Java class that is the superclass of all Jython instances, and the handler class itself. The code for my working Jython implementation is on my original blog post.

Update: from the jruby user's mail list, Nick Sieger drops a good hint, and I have more questions until I can get back to a jirb prompt...

> Perhaps this snippet helps?
> $ bin/jruby -S irb
> irb(main):001:0> class Foo
> irb(main):002:1> include java.lang.Runnable
> irb(main):003:1> end
> => Foo
> irb(main):004:0> Foo.new.java_class
> => $Proxy7

Thanks. Maybe. I'll look into it when I can get back to my jirb prompt.

That will work for the second argument *if* it implies that when the
jxpath java code creates a new instance of $Proxy7 then what actually
happens is a new jruby instance of Foo is created.

But what does this say about all jruby objects?

i.e. in jruby, they are all instances of the class Object, but if I do
the following...

o = Object.new
j = o.java_class

...then can I assume that all jruby objects are (in java-land)
instances of a java class that inherits from "j"?

Or is the java_class $ProxyN class more of an on-demand, dynamically
generated, bridge to the other world?

VW: An Efficient Dynamic Runtime

Cincom VisualWorks Smalltalk is an extremely mature, efficient implementation of a dynamic language runtime. James Robertson points to one of the payoffs, running Seaside, on VW Smalltalk...

I tested: Seaside 2.8 in Squeak (using the "one click experience" image), Seaside 2.7 on VW, and Seaside 2.8 on VW - the latter required the under development release that's coming. Here's a summary of what I got:

Platform                 Sessions   Avg Sessions per Second   Avg Pages per Second
Seaside on VW 7.5        97         1.62                      17.9
Seaside 2.8 on Squeak    370        6.17                      48
Seaside 2.8 on VW 7.6    582        9.7                       79.4


Also relevant is this: In Seaside 2.7 on VW, pages per second started off at 34, and then dropped to 10 by the end of the one minute test. Squeak dropped from 50 to 45.5, which is pretty stable. VW 7.6 with Seaside 2.8 started at 81, and dropped to 79.5 - which is even more stable.

If you are looking at Ruby on Rails or a similar dynamic language web framework, there should be several reasons to put Seaside and VW Smalltalk on your list of options.

Update: James updates the tests and brings in Ruby on Rails for comparison. Seeing an "engineering shootout" shape up among these variations would be a fun time. Let various teams engineer the bits out of their own stacks on fairly similar functionality. (Of course a Ruby on Rails on the VW virtual machine would be fun too. :-)

Gemstone Seaside guru, Dale Henrichs, reminds me in the comments:

Patrick, don't forget that there's another mature dynamic runtime system out there that provides transparent persistence along with pretty good performance that even scales across multiple cores... benchmark.
Sorry, Dale!

Get hooked on Seaside, then get hooked on Smalltalk.

Resources and the Kimball

I have not brought out the Kimball in a while. Recently Bill de hÓra linked to the collection of Kimball articles, but without context. The thing about Kimball's design approach is that I've found applications of it, or at least of significant aspects of it, to several systems beyond data warehouses. Understanding the essence of Kimball's approach should be a fundamental part of a software developer's education.

One reason for this is that the Kimball approach is a form of domain-driven design. Another is that the technical aspects are relatively simple. The result is not a be-all and end-all solution to everything, but it is a tool with legs that go beyond the original intent.

The Kimball approach has influenced how I think about objects, data, and (from this 2003 blog post) integration...

His keys to an effective virtual database (or data bus architecture in his words) are conformed dimensions, smallest grain facts, etc. Most of these principles apply to the largest data warehouses or the smallest databases, and would benefit any Enterprise Information Integration effort.
What is the web, but a kind of large "virtual database"?
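A miniature of the conformed-dimension idea, with made-up data: two fact tables can be lined up side by side only because they key into the same date dimension.

```ruby
# Miniature of Kimball's "conformed dimension" idea: sales and
# shipment facts both key into the same date dimension, so they can
# be compared side by side. All data is made up.
date_dim  = { 1 => "2007-12-01", 2 => "2007-12-02" }
sales     = [{ date_key: 1, amount: 100 }, { date_key: 2, amount: 150 }]
shipments = [{ date_key: 1, units: 3 },   { date_key: 2, units: 5 }]

report = date_dim.map do |key, date|
  {
    date:    date,
    sold:    sales.select { |f| f[:date_key] == key }.sum { |f| f[:amount] },
    shipped: shipments.select { |f| f[:date_key] == key }.sum { |f| f[:units] },
  }
end

report.first
# => {:date=>"2007-12-01", :sold=>100, :shipped=>3}
```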

At the time I was in the middle of a large project integrating several systems with a new installation of the SAP FI-CO general ledger module and those systems with a new Teradata data warehouse. We more directly applied these ideas to the warehouse, but generally to information exchange.

Today with groups looking at Restful web services centered around resources, representations, and relationships/links among them, the Kimball approach also applies. It is domain driven, concerned about the entities and activities of the enterprise.

Until we get more analysis patterns from real-world, machine-machine Restful web service applications, the Kimball approach to, and examples of, information management should probably be a key ingredient of this kind of design. We should, of course, expect some similarities in the domain view of information across systems integration, data analysis, and the web of documents and events.

Tuesday, November 13, 2007


Erik Johnson (via Stefan Tilkov) wonders what's so bad about the "process this" notion. Jim Webber, a coiner of Mest, spent time in his recent QCon talk on this idea before moving on to a nice exposition of HTTP hyperstuff.

I think I have a reasonable handle on what the HTTP methods generally mean. I've not applied them to a wide variety of situations, though, and I sometimes wonder where that line is and whether some specific use crosses it too far.

Beyond that, I've been playing with XMPP a bit again. Where is the line between pushing HTTP too far and jumping over into XMPP for "Mest", ad hoc, "process this" exchanges, where I'm not counting on any of the benefits of HTTPness?

These lines are probably fairly blurry on close inspection, when the programmer does not have a clear set of constraints leaning one way (e.g. GET's benefits) or the other (e.g. presence/status). But I've certainly found running an XMPP server and connecting from various systems at least as easy as doing the same with an HTTP server. And XMPP (over HTTP) runs through firewalls, for better or worse.

Anyway... if you are interested in that "one method to process them all" then why not use XMPP for that, and leave HTTP for more specific duties? Or not. If you are in a Mest then are you saying you have no clear understanding of your resources and constraints and so HTTP POST *is* as good as anything?

Just noodling...

Sunday, November 11, 2007



Making It Real

James Snell's story highlights the point I just made in the previous post. The conversation about SOA / WS-* / REST should move forward by experience reports not from vendors or implementors but from people developing real applications...

Those who are familiar with my history with IBM should know that I was once a *major* proponent of the WS-* approach. I was one of the original members of the IBM Emerging Technologies Toolkit team, I wrote so many articles on the subject during my first year with IBM that I was able to pay a down payment on my house without touching a dime of savings or regular paycheck, and I was involved in most of the internal efforts to design and prototype nearly all of the WS-* specifications. However, over the last two years I haven’t written a single line of code that has anything to do with WS-*. The reason for this change is simple: when I was working on WS-*, I never once worked on an application that solved a real business need. Everything I wrote back then were demos.

Now that I’m working for IBM’s WebAhead group, building and supporting applications that are being used by tens of thousands of my fellow IBMers, I haven’t come across a single use case where WS-* would be a suitable fit. In contrast, during that same period of time, I’ve implemented no fewer than 10 Atom Publishing Protocol implementations, have helped a number of IBM products implement Atom and Atompub support, published thousands of Atom feeds within the firewall, etc. In every application we’re working on, there is an obvious need to apply the fundamental principles of the REST architectural style. The applications I build today are fundamentally based on HTTP, XML, Atom, JSON and XHTML...

Are average developers and architects able to design ANY system correctly? I think if you look at the history of software development as a whole, you’d really have to stop and wonder about the answer to this question. The fundamental challenge comes down to this: developers get paid for coming up with solutions that work; doing so means learning just enough about a technology to address the immediate need so they can move on to the next line item in their list in order to meet the deadline; this will quite often mean that average developers and architects aren’t even going to bother designing and implementing solutions “correctly”; nor should we ever actually think that they will. The best we can do as tool providers is educate users better and provide excellent tooling that makes it easier to do the right thing. Most of the time the tool developers can’t even get it right tho.


Steve Vinoski puts out a request for comments, of sorts...

I personally know of nobody who has ditched REST for WS like this, but if you have, or if you know of someone who has, I’d love to hear the whole tale, so feel free to leave it in a comment.
Maybe the next SOA / WS-* / REST confab should be restricted to experience reports about building real applications.

Saturday, November 10, 2007

More on QCon

The couple of posts about QCon so far were part of my experiment with the email-to-the-blog capability. It's simple and it works, and I did not have to lug my laptop around the conference.

However, blogging at a conference wound up following my usual pattern: start out with some items on the first couple of sessions, then the inevitable happens. I meet people on the breaks, the conversations fill up the available time, and they are more interesting than anything I could blog about.

So to catch up on some thoughts...

Stefan Tilkov kept a pretty good log of the sessions he attended. He put together the agenda for Thursday's SOA/REST track. That whole day was great fun and engaging. The combination of speakers was pretty much a full complement of who you'd like to hear address the issues from any of the perspectives.

The RESTful presenters overlapped little in their content. Anyone attending (or viewing the videos after -- need a link here) should have come away with a good sense of what it means to work with HTTP rather than against it. I mentioned Steve Vinoski's introduction to REST that launched the day (and subsequent back-and-forth salvos with the WS-* folks.) Pete Lacey demo'd a RESTful expense report example using a browser, command-line scripts, Excel, and Word. Well done. He had a lot more to show -- his suite of examples would make a good hands-on tutorial.

Dan Diephouse (who really knows how to order wine, but doesn't quite know when to stop :-) talked about atompub in particular. The Q&A got into some back and forth on "batch" and other topics that kind of stretch the current atompub specification, leading me to wonder: what does anyone *mean* by "batch" -- I can think of several variations -- and why do we try to squeeze any of these activities into atompub per se? Certainly the feed format is a useful one for learning about the results of batch activity. Finding some RESTful but not necessarily atompub mechanisms useful for "batching" may make some good experiments.
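One way the feed format earns its keep for batches can be sketched with Ruby's stdlib REXML: after a batch request, return a feed with one entry per item outcome. The entry fields and the idea of returning this from a batch POST are my assumptions here, not anything the atompub spec prescribes.

```ruby
require "rexml/document"

# Sketch of "the feed format is useful for reporting batch results":
# build a minimal Atom-like feed, one entry per batch item outcome.
# The entry fields chosen here are hypothetical. (The real Atom
# namespace and required elements are omitted for brevity.)
results = [
  { id: "urn:item:1", title: "created" },
  { id: "urn:item:2", title: "failed: duplicate" },
]

feed = REXML::Element.new("feed")
feed.add_element("title").text = "batch results"
results.each do |r|
  entry = feed.add_element("entry")
  entry.add_element("id").text = r[:id]
  entry.add_element("title").text = r[:title]
end

feed.elements.to_a("entry").size   # => 2
```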

Earlier in the day Sanjiva Weerawarana's session included valid explanations of the complexity of HTTP and atompub, and the effort expended getting atompub settled. My thought was that this observation does not really speak well of the WS-* specifications. Sanjiva repeated that WS-* are mired in vendor politics and poor implementations, and that we end users should not have to understand much about them. These are arguments I've heard more than once before, but they still make me uncomfortable. Are they supposed to put me at ease?

Well, we went out and had fun later anyway. The WSO2 folks are looking, learning, and supporting REST just like the rest of us. So to speak.

I like that atompub has not attempted to specify too much (yet). I fear things could get out of hand, but at least we have something far simpler and easier to use right now. Technical details of HTTP and atompub were discussed all day because doing so is *practical*. They are fairly easy to understand even if there are devils in some of the details. There is little hope or encouragement for taking a similar approach to WS-*; the best advice we are given is not to worry about them, let the implementors handle it for you, rely on the "tooling".

Which gets back to Pete's presentation: his use of Excel, Word, Firefox, and Ruby was a great demonstration of leveraging the web. The web has won, of course. The InfoWorld SOA conference, held a year and a half prior to and two blocks away from QCon, seems light years ago, with vendors pushing all kinds of proprietary nonsense on the unsuspecting.

As Sanjiva joked, the new "RESTful Web Services" book is the New Testament. But REST is more of a science to WS-*'s religion. The WS-* leaders ask us application developers to take a leap of faith following their proprietary tooling. REST / HTTP is simple enough to prove to yourself with the tools at hand.

Oh and Jim Webber's presentation is not to be missed, when you can get to the video. I laughed throughout. (Paul Hammant was at QCon the day before but could not attend Thursday. What I'd give to catch Jim and Paul in the same room!)

Jim's presentation rounded out the REST / HTTP agenda -- his illustration of following a workflow via HTTP makes the whole "engines of hypermedia" or whatever you want to call it very clear.

Thanks, Stefan, for organizing a really great day. The audience discussions were great too, with Stu Charlton and others saying stuff.

Friday, November 09, 2007

Can't you just use the web?

Someone posted a link recently to an apparently interesting video. Clicking on the link, this is what I get. I'm sorry, I have to join facebook to see your interesting video?

What about just putting it on the web? Gaws.

Patrick Mueller adds in a comment...

Here's the sad part. The movie is already on the web; FaceBook doesn't actually store anything at its site, it's all pass-through. The movie might well be at YouTube. All FaceBook does in this case is serve as a bottleneck.

Capability-Based Security and Javascript

Via Ted Leung...

Ben Laurie has posted some initial information about the Caja (Capability Javascript) project that he is leading at Google.
About a month ago I came across some information that Mark Miller is at Google working on capability-based security. Turns out he is on Ben's team. This will be useful stuff for moving web and application security forward.

And it is open, as you might expect.

Thursday, November 08, 2007

Quote of the day

Jim Webber to Sanjiva Weerawarana, after several expletives re: Sanjiva being an author of WSDL...

"I will hug you later... In a kind of Borat style."

Steve Vinoski's session

Steve is up in the QCon track on SOA and REST. His presentation is set up as a dialog between himself (as REST guy) with a hypothetical SOA guy. That guy's dialog is derived from actual correspondence Steve's had over the years. (He also stated that for the last 15 years he's been constrained by "buy my product" but he has no such ties today).

The session is good, a good intro to REST, but the interaction with real SOA guys in the audience is frighteningly familiar to those we've all seen on the web over the years. As he said, he's taking some arrows for later REST speakers today. (Those SOA arrows are pretty weak, thankfully. :-)

Lego or Negotiation

James Noble's keynote at QCon this morning. A well-done performance on two theories of software in circulation around 1968. The dominant one: software as building Lego cathedrals. The other: software as accumulating small, negotiated victories. The latter is the one we've got and need to embrace. Hopefully there's video of the talk.

Wednesday, November 07, 2007

Made it to SFO

Meetings ended early. Got on standby flights. So much better than getting in at 11:30.

Testing with this to see if my blog-by-mail is working.

Tuesday, November 06, 2007

Global Security

From Douglas Crockford's The Department of Style...

There is one problem in JavaScript that is bigger than all of the others put together: The Global Object. All compilation units are thrown into a shared global container. This gives each unit full access to all of the other units. All units get exactly the same rights and privileges. This turns out to be a huge mistake. It is the root cause of most of the security problems in the browser...

In the long term, I want to replace JavaScript and the DOM with a smarter, safer design. In the medium term, I want to use something like Google Gears to give us vats with which we can have safe mashups. But in the short term, I recommend that you be using Firefox with No Script. Until we get things right, it seems to be the best we can do.

Understanding OpenSocial

Here's my initial, superficial, and brief take on OpenSocial, as I understand it so far. There are several aspects to OpenSocial, the top two being a client-side Javascript / "gadgets" part and a server-side atompub / gdata part.

The part that interests me most, and that has the wider implications in the long run, is the atompub / gdata part. Facebook can make a proprietary api "more truly open," as Mark Cuban says. But that just becomes the means for more easily having "third party" support for an OpenSocial "gateway", if you will, for Facebook.

In that same article Tim O'Reilly focuses too much (as I read it, solely) on the Javascript / "gadgets" part of OpenSocial. That's missing what will, or should, become the key to OpenSocial having a bigger impact on the web, in the long run, than either Facebook or MySpace. Combined, in my opinion.

There are all kinds of "innovation happens elsewhere" implications of the OpenSocial atompub / gdata part that overshadow "gadgetry". From the OpenSocial docs...

The OpenSocial API is a set of common APIs for building social applications on many websites. There are two ways to access the OpenSocial API: client-side using the JavaScript API and server-side using RESTful data APIs.
And so, there is no reason the OpenSocial RESTful APIs, even the Persistence API, have to be served by Google. It seems to me from these docs that Google assumes many sites will support these APIs.

DSL: Better than you think

Phil Windley writes about creating a domain-specific language...

I'm a big believer in notation. Using the right notation to describe and think about a problem is a powerful tool--one that we're too eager to give up it seems. People seem to believe that (a) all languages are pretty much the same and (b) the world has enough notations. While (a) is true in theory (they're all Turing complete, after all) the power of a notation isn't in what it can accomplish, but the ways in which it allows you to think. I'll deal with (b) in what follows.

Monday, November 05, 2007

Android and the Open Handset Alliance

Google continues their "open play" if you will, into the mobile world.

Despite all of the very interesting speculation over the last few months, we're not announcing a Gphone. However, we think what we are announcing -- the Open Handset Alliance and Android -- is more significant and ambitious than a single phone. In fact, through the joint efforts of the members of the Open Handset Alliance, we hope Android will be the foundation for many new phones and will create an entirely new mobile experience for users, with new applications and new capabilities we can’t imagine today.

Openness. Hmm. What a concept.

Saturday, November 03, 2007

The "Web or Facebook" Bet

I sure don't understand Facebook. I never even got up to the MySpace stage. Joe Wilcox watches Microsoft...

As I've repeatedly asserted, Facebook is more like an operating system in the cloud than a Web 2.0 service. No one should understand a "walled garden" better than Microsoft's GM of platform strategies. After all, what is Windows but a walled garden?
It's autumn. The garden is turning brown. Joe paraphrases Microsoft...
In a blog post earlier today, Fitzgerald asserts that the "sound and fury" of the OpenSocial announcement "actually underscores the weakness of the hand held by Google and their fellow travelers. While nominally about making it easier for developers to write widget applications that can be hosted across multiple sites, it really shows how few options Google has to try to deflate the twin nightmares that Facebook poses to Google."
Probably some truth is in there somewhere. The sound and fury is more Microsoft's, though. I think they're doing more bluffing, and OpenSocial is more like calling the bluff.

But like I said... I don't understand Facebook. I know some people who use it reluctantly rather than enthusiastically. This also happens to be the characterization of most of the Windows users I know.

Don't bet against the web. Google has its own agenda, but it is simultaneously helping to build out the web.

Microsoft continues to make many more billions of USD than I can fathom. But I don't see how their walls and Facebooks can hold up in the long run.

Rest at QCon SFO Next Week

Pete Lacey will be there. If his talk is anywhere near as good as some of his online presentations, that should be a highlight. The Rest agenda for next Thursday in SFO looks so good. I'm almost all packed. Well, that packing is for the trip to Indianapolis tomorrow through Wednesday.

So the flight to SFO has been routed through the midwest. Sigh. We'll get into SFO about 11:30pm on Wednesday.

Hopefully I will be awake for the 9am Thursday talk. I've been a fan of James Noble for years, esp. his classic Notes on Postmodern Programming.

Friday, November 02, 2007

Documents and Schemas

James Strachan writes about atom format and how to indicate schemas...

One idea is to use content types...

...another could be to add a new kind of link to the feed.

This has some appeal. We're building some feeds that will have specific kinds of content, and validation of the specific content will be beneficial.

However, an aspect of RelaxNG that is appealing for our project is not tying a specific schema to a specific document. Certain consumers of a document want to validate just the parts of the document that are of interest to them. There may be more than one schema applied to a document.

Some publisher may simply want to ensure a document is a good Atom entry. Some other publisher may want that and to ensure the content meets some other schema and associated rules. Some consumer may just care about the content or a subset.

See "duck typing", below. :-S

I don't have any good ideas, but I would hesitate to build in a 1:1 association out of fear that some producers and consumers may become over-dependent on that association. That's my barely formed 2 cents.

Optional "Type" Declarations

I realize ActionScript, like some other dynamic languages, has optional type declarations.

I don't care for static types, optional or not, declared or not. If you want speed, there's other ways to get it. If you want safety, there's other ways to get better safety. If you want documentation, they document the wrong thing.

Alan Kay pointed out a long time ago the benefit of message passing and "duck typing" if you will. I am a huge fan.
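The message-passing point can be sketched in a few lines; the shapes below are hypothetical examples, not anyone's actual API:

```javascript
// Hypothetical sketch: no declared types, just messages. Anything that
// responds to area() works, whatever it happens to be.
function totalArea(shapes) {
  return shapes.reduce(function (sum, s) { return sum + s.area(); }, 0);
}

var square = { side: 3, area: function () { return this.side * this.side; } };
var circle = { r: 2, area: function () { return Math.PI * this.r * this.r; } };

var total = totalArea([square, circle]);  // about 21.57
```

The caller documents what it needs (an `area` message) by using it; a declared type would document something else.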

Thursday, November 01, 2007

XMPP and Variegated Instant Messaging

Bill de hÓra...

XMPP itself is a no brainer as the backbone protocol. I can't imagine it not being used for everything that HTTP is unsuited for a few years from now. Although we might have to go through a wasteful EDA-* cycle first before everyone "gets it", a la what has happened with WS-*.
cf. Erlang's supervisor hierarchy.


Oh dear...

Static typing will also allow compiler based unboxing, will it not? That would lead to increased performance along with safety.
The browser has a lot of problems. This is not one of them.

Reliability with Erlang

Steve Vinoski announces his latest column...

My latest column, Reliability with Erlang, first describes some of the problems that highly-reliable systems face, and then explains some of Erlang’s core primitives that provide a solid foundation for reliable systems...

BTW, I can’t recommend enough that you pick up a copy of Joe Armstrong’s Programming Erlang. It’s a truly excellent book that has deeply and positively affected the way I think about designing, building, and implementing software systems. As I mentioned in my columns, my only disappointment with Erlang is that I didn’t discover it 10 years ago when it was first open-sourced, as it could have saved me a ton of time and trouble in my middleware development efforts over the years.

From the column itself...
Layered on top of the Erlang language is a framework called the Open Telecom Platform (OTP), which uses these features to help enable reliable systems. Despite the word “telecom” in its name, OTP is a general-purpose framework that’s useful for applications in a variety of domains.

I want to make it clear that Erlang and OTP aren’t magical — they won’t automatically make your software extremely reliable. Creating reliable systems with Erlang/OTP still requires knowledge, experience, solid code, thorough testing, and general attention to detail. Nevertheless, because the language was designed with reliability as a foremost concern, the combination of Erlang and OTP definitely has advantages over other common languages when it comes to reliable systems...

If you develop enterprise-integration or middleware applications that require high reliability, I’ll offer the same advice I gave last time: go get yourself a copy of Joe Armstrong’s book, Programming Erlang.1 This book is very readable and is suitable for both beginners and experts alike. It will open your eyes to a better way of building reliable software.

Wednesday, October 31, 2007

Help Me. I've Fallen.

An often repeated erroneous assumption is that static type checking is necessary for all the good IDE features like code completion and refactoring. Well, no.

The first and still best refactoring tool is for Smalltalk. Lisp and Smalltalk have had code completion mechanisms for decades.

James Robertson feeds off the recent "dynamic languages are too something or not enough something else" brouhaha and provides a Smalltalk Daily on code completion.

Tuesday, October 30, 2007

Mozilla Labs announces Prism

I told you so...

Flex / Apollo is useful for pushing the current browsers to the next level.

Air (nee Apollo) still appeals to me more than Prism. Neither are perfect, yet everyone is heading in a good direction.

Unlike Adobe AIR and Microsoft Silverlight, we’re not building a proprietary platform to replace the web. We think the web is a powerful and open platform for this sort of innovation, so our goal is to identify and facilitate the development of enhancements that bring the advantages of desktop apps to the web platform.
I don't know much about silverspoon, but air is not intended "to replace the web". You can build very-much-on-the-web systems with it.

Dynamic Languages, Compilation

A few interesting points from a recent post from Steve Vinoski...

On the dynamic language front, the worst code I have seen in my career has always, always, always been in compiled imperative languages, most often Java and C++.
This probably depends on the developers and their culture at least as much as the languages themselves. Here's my observations based on having worked directly with handfuls of programmers with hundreds of thousands of lines of Common Lisp, Smalltalk, Pascal (and Modula-like extensions), C, C++, and Java; tens of thousands of lines of Scheme; and thousands of lines of Python and Ruby.

How should we measure "worst"? I'm not sure. I've encountered large amounts of bad code in each of these languages. The bad code I see in each of these typically has these characteristics:

  • Hardly tested
  • Hardly factored - "run on" code that bleeds implementation details all over
  • Poorly named - "unreadable" code that does not communicate its purpose
  • Overly coded - way more code than is necessary to do its job
The worst of these, in each of these languages, has significantly lowered the rate of change, often down to nearly unchangeable.

All else being equal, I'd rather work with good or bad code in a dynamic language. The rate of change will almost always be faster. (But don't ask me to work with bad code: I am way past any level of tolerance to work with bad code. Don't even show it to me, I am no longer amused. Urp.)

I would much rather let an average developer loose with a dynamic language... they’ll fail way faster and thus allow much more time for recovery. The fact that dynamic language programs are usually smaller than their compiled counterparts means that they’re easier to read and review, and statistically, they’re likely to have fewer bugs.
Yeah, I would not disagree, as I said, all else being equal. I realize Steve's point is about languages and not the full spectrum of becoming a better programmer. However, my silver bullet would be having them pair with someone who can help improve their tests (which in itself is learning to fail way faster), factoring, and naming, and who pushes doing the simplest thing first. That would help more than switching languages.

Along the way, sure, help them pick up a better language or two. Everything else just gets that much easier.

Furthermore, counting on the static language compiler to save you is simply wishful thinking. To paraphrase Tim Ewald from a conversation he and I had during lunch a week or so ago, compilation really amounts to just another unit test.
Oh, god. I am sure to get more comments from the static type checked folks. I will ignore you!!!

"Compilation amounts to a unit test" - I'm not sure about the point. Maybe it means that static checks are essentially redundant in well-tested code. Here's a twist on the unit testing thing I wrote some time ago... (Test-first is a command line interpreter for non-interactive languages)...

Test-driven design tools are "command line interpreters" for Java, C#, and other "non-interactive" programming languages.

Programmers using Lisp, Smalltalk, APL, and other "interpreted" languages have been doing test-driven design for decades. The pattern here is to type little snippets of code for an idea, get immediate feedback, then gradually incorporate those snippets into a whole program.

Lisp has its read-eval-print loop. So does Python. Smalltalk traditionally has "workspaces" which are text editors that also evaluate highlighted code.

In either case, the end of a programming episode results in a transcript of code that is then allocated to two destinations... some of the code in the transcript becomes the objects or functions that go into the program itself. The rest of the code is edited into regression tests.

Looking at a test-driven incremental programming session using junit or nunit you see the very same result... the shell is the command line and xUnit is the "interpreter". You get a small idea, test it, and repeat.

The tests themselves are the "transcript" of the session, and are also the preserved regression tests.
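As a sketch of that pattern, here's a made-up snippet-plus-transcript session; `slugify` and its tests are hypothetical, just to show the shape:

```javascript
// Hypothetical sketch: a small idea explored interactively, a snippet
// at a time, then kept as the program...
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
}

// ...and the "transcript" of the session, preserved as regression tests.
console.assert(slugify("Hello, World!") === "hello-world");
console.assert(slugify("  REST at QCon  ") === "rest-at-qcon");
```

Run through a REPL or through xUnit, the residue is the same: the code and its transcript.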

Monday, October 29, 2007

Viking Calls BS

Erik Onnen (rightly) calls BS on the Apache vs. Yaws comparison. Maybe his call-out will dig up more details or lead to better comparisons.

But likening the Erlang community to Microsoft is going too far. :-/

Friday, October 26, 2007

Imagine That

Steve Loughran writes...

Slowly but surely, our applications are moving to a world where they persist state automatically, and can skip over a crash as a normal event.
Kind of like what Smalltalk provides for free using an "image". Um-hmm.
Most popular programming systems separate program code (in the form of class definitions, functions or procedures) from program state (such as objects or other forms of application data). They load the program code when an application is started, and any previous application state has to be recreated explicitly from configuration files or other data sources. Any settings the application programmer doesn't explicitly save, you have to set back up whenever you restart...

Smalltalk systems store the entire application state (including both Class and non-Class objects) in an image file. The image can then be loaded by the Smalltalk interpreter to restore a Smalltalk system to a previous state.

Imagine that.
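Outside an image-based system you end up writing that bookkeeping by hand. A hypothetical sketch of the explicit save/recreate chore the quote describes:

```javascript
// Hypothetical sketch: explicitly snapshotting and restoring application
// state -- the chore a Smalltalk image does for you automatically.
function snapshot(state) {
  return JSON.stringify(state);   // in practice, written to disk
}
function restore(saved) {
  return JSON.parse(saved);       // rebuilt explicitly on the next startup
}

var appState = { openDocs: ["notes.txt"], cursor: 17, theme: "dark" };
var restored = restore(snapshot(appState));  // survives the "crash"
```

Anything not explicitly serialized here is simply lost on restart, which is exactly the gap the image model closes.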

Monday, October 22, 2007

1958 / 1964 - So What

Update: shmul recommends in the comments Bill Evans' "Sunday at the Village Vanguard". Oh. My. Yeah, that is a fantastic recording. It's probably another heavy link to Evans' addictions. Evans was always in search of his ideal bassist, and found Scott LaFaro. They developed their sound together, just as Evans did with Davis. A few days after these recordings, LaFaro was killed in a car accident. Evans was devastated.

Previously: Now with the correct links.

Compare a 1958 recording of So What (via Steve Dekorte) with a 1964 recording. What happened? Free jazz. And the Beatles.

Coltrane was on the edge of 1964 already in 1958. Miles Davis' sound for "Kind of Blue" was settling in. They could not have recorded it earlier or later than 1959. There's nothing else like it.

It could not have been recorded without Davis of course. Also indispensable: Bill Evans and Coltrane. Probably indispensable: Cannonball Adderley for the way he and Coltrane interact.

Bill Evans, in 1979, was still playing the sound he found around 1959, just about dead from addictions that began in Miles's group, where he played under incredible pressure from black audiences pissed off by a white pianist in the group. Davis was an icon in the black community in the 1950s, incredibly successful and refusing to take any shit from powerful white men. By standing up for Evans, Davis showed he'd take no shit from black fans either: Davis was in awe of Evans' sound and made that sound the centerpiece of Kind of Blue even though the piano itself is not a centerpiece. The two of them came together, perfectly influencing each other's sound.

(Aside: unlike Evans, Davis was able to hole himself up at his parents' place and kick his habit before this time. Evans would hole himself up at his family home subsequently, but never could kick all of his addictions. Over 20 years he slowly eroded his body's chance of ever recovering. The pain from earlier damage fueled more drug use. Read his biography -- not as dramatic as other addicted musicians' -- just incredibly slow, painful erosion.)

Anyway, Evans played on every piece on Kind of Blue except Freddie Freeloader. Listen to his notes on those pieces... unbelievably the right note in the right place at the right time. But Kind of Blue would be a snoozer if the only sounds were Davis' and Evans'. The saxophones and everything else keep the whole thing from falling over from inertia.

Kind of Blue is still my favorite album from any genre, any era. I bought my first copy as a teenager about 1977. I don't know how many hours I've listened to it, but it is a lot.


Fuzzy and I are all signed up for QCon in San Francisco early November. In particular we'll be at the track: "Connecting SOA and the Web: How much REST do we need?".

What a great line-up of people to talk with. Just in time with the work we're doing. Ed couldn't make it, something about a class for the baby on the way that Thursday.

Saturday, October 20, 2007

Programming with Streams

Michael Lucas-Smith illustrates programming with streams and a helpful "functional" protocol added by the ComputingStreams package from Cincom Smalltalk's public repository. Michael makes the case that more code should be based on streams instead of collections, and use a common protocol like ComputingStreams instead of custom, one-off methods that won't play well with others. Streams are a better abstraction than collections because they inherently "delay" and scale up to and including infinitely long streams.

Streams are interesting creatures, fundamental to programming if you've got them available. See Structure and Interpretation of Computer Programs ("Streams Are Delayed Lists") and Richard Waters' Series package for Common Lisp for more examples.

This Smalltalk package is also similar to Avi Bryant's Relational Object Expressions. ROE uses objects that implement the standard Smalltalk collections protocol to delay "realizing" complete, in-memory collections, enabling the original collections to be enormous numbers of rows in tables on disk. In the case of ROE the objects delay the eventual composition and execution of SQL, including the conditional expressions that may have narrowed the size of the selection.
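The delayed-list idea can be sketched with closures. This is a toy version for illustration, not the ComputingStreams protocol itself:

```javascript
// Hypothetical sketch: a stream as a head plus a thunk for the delayed tail.
function stream(head, tailThunk) {
  return { head: head, tail: tailThunk };
}

function integersFrom(n) {               // an infinite stream
  return stream(n, function () { return integersFrom(n + 1); });
}

function map(s, f) {                     // stays lazy: only the head is computed
  return stream(f(s.head), function () { return map(s.tail(), f); });
}

function take(s, n) {                    // force only the first n elements
  var out = [];
  while (n-- > 0) { out.push(s.head); s = s.tail(); }
  return out;
}

var squares = map(integersFrom(1), function (x) { return x * x; });
var firstFive = take(squares, 5);        // [1, 4, 9, 16, 25]
```

`squares` is conceptually infinite, yet only five elements are ever computed; that delay is what lets stream code scale where collection code cannot.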

Tuesday, October 16, 2007

Closures and Objects

I am not sure what Steve Dekorte is saying...

A language that uses objects on the bottom can use them for everything, but a language with closures on the bottom needs other types or "atoms" for things like numbers, lists, etc and then scatter around functions for operating on those other types. If you care about simplicity, consistency and organization, this is a big difference.
If you have objects, you still need compiler support for closures.

But if you have closures, you need no compiler support for objects.

Either way you need compiler support for other literals like numbers.

Unless you want to program numbers in the lambda calculus. And then too closures win.
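The "closures need no compiler support for objects" direction is easy to sketch; here's a hypothetical message-passing counter:

```javascript
// Hypothetical sketch: an "object" built from nothing but a closure.
// The state lives in the enclosing scope; the object is its dispatcher.
function makeCounter(start) {
  var count = start;                      // private state, no field syntax
  return function (message) {
    if (message === "increment") { count += 1; return count; }
    if (message === "value") { return count; }
    throw new Error("does not understand: " + message);
  };
}

var c = makeCounter(10);
c("increment");
c("increment");
var v = c("value");   // 12 -- nothing outside can touch count directly
```

No classes, no `this`, no compiler support beyond the closure itself.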

Your name is different, but you're really a Unix system too, aren't you?

Microsoft's Unix from back in the day, Xenix...

Sunday, October 14, 2007

Simplified Javascript: Cruft Reduced

A bunch of comments just showed up, June 21, 2008.

Like it or not (and FWIW, I like it quite a bit), JSON has made an impression on networked information exchange. Douglas Crockford, who started the whole JSON thing, has a chapter in Beautiful Code on Pratt-style parsing. His example parser is written in, and happens to parse, something he calls "simplified javascript".

Simplified JavaScript is just the good stuff, including:
  • Functions as first class objects. Functions in Simplified JavaScript are lambdas with lexical scoping.
  • Dynamic objects with prototypal inheritance. Objects are class-free. We can add a new member to any object by ordinary assignment. An object can inherit members from another object.
  • Object literals and array literals. This is a very convenient notation for creating new objects and arrays. JavaScript literals were the inspiration for the JSON data interchange format.
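All three of those bullet points fit in a few lines of plain Javascript; a sketch (the names are made up):

```javascript
// Lexically scoped lambdas: the inner function closes over `greeting`.
var greeter = function (greeting) {
  return function (name) { return greeting + ", " + name; };
};
var hello = greeter("Hello");

// Object literals and class-free objects: add members by plain assignment.
var point = { x: 3, y: 4 };
point.label = "start";

// Prototypal inheritance: point3d inherits x and y from point.
// (Object.create was standardized later; Crockford published an
// equivalent helper long before.)
var point3d = Object.create(point);
point3d.z = 5;

var g = hello("QCon");        // "Hello, QCon"
var inheritedX = point3d.x;   // 3, found on the prototype
```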
Implementing some of the currently popular "scripting" languages, in particular "full" Javascript, Python, and Ruby, will make evident the amount of cruft they have for no apparent reason. Lisp traditionally has had some cruft, but, arguably, nothing like what can be found in these languages. Such cruft makes these languages significantly more difficult to implement than Lisp or Smalltalk (two amazingly "cruft-reduced" languages given their age).

Simplified Javascript would make a decent base for a scripting language. I pointed this out to a couple people looking at full Javascript over the last couple of months for an Erlang-based server and now for Croquet.

People already in the more or less "end user scripting" space tend to have some Javascript knowledge. This subset has good functional and OO capabilities, comes with a simple parser implementation already(!), and, from writing a bit of interpretation code, seems amenable to an elisp/emacs level of performance, which is more than sufficient for an interactive GUI system. Compiling to native or byte code would not be too difficult for better performance.

Simplified javascript not only eliminates the uglier and more difficult to implement bits. It also eliminates a lot of bad security problems.

And so, in designing a new scripting capability, combining Simplified Javascript with a capability-based approach to authority, as in the E programming language, could have some benefits. And JSON for "data"... go ahead and use "eval" to implement the JSON parser in such an environment.

Well, that would be a heck of a lot better than the current browser/javascript situation, which is horrendous. Now we've got a truly undesirable "legacy" situation in the browser. Apparently Mark Miller or one of those E/Capability folks is at Google now, and they're looking at something like the browser those folks built for DARPA. (Found in this powerpoint, if you don't mind opening those up in OpenOffice.)


From Business Week on improving what already works vs. finding new things that work better...

As once-bloated U.S. manufacturers have shaped up and become profitable global competitors, the onus shifts to growth and innovation, especially in today's idea-based, design-obsessed economy. While process excellence demands precision, consistency, and repetition, innovation calls for variation, failure, and serendipity.

Indeed, the very factors that make Six Sigma effective in one context can make it ineffective in another. Traditionally, it uses rigorous statistical analysis to produce unambiguous data that help produce better quality, lower costs, and more efficiency. That all sounds great when you know what outcomes you'd like to control. But what about when there are few facts to go on—or you don't even know the nature of the problem you're trying to define?

"New things look very bad on this scale," says MIT Sloan School of Management professor Eric von Hippel, who has worked with 3M on innovation projects that he says "took a backseat" once Six Sigma settled in. "The more you hardwire a company on total quality management, [the more] it is going to hurt breakthrough innovation," adds Vijay Govindarajan, a management professor at Dartmouth's Tuck School of Business. "The mindset that is needed, the capabilities that are needed, the metrics that are needed, the whole culture that is needed for discontinuous innovation, are fundamentally different."

Planned RESTful Services

Stu says stuff about reuse...

From this viewpoint, "build it and maybe people will use it later" is a bad thing. SOA proponents really dislike this approach, where one exposes thousands of services in hopes of serendipity -- because it never actually happens.

Yet, on the Web, we do this all the time. The Web architecture is all about serendipity, and letting a thousand information flowers bloom, regardless of whether it serves some greater, over arching, aligned need. We expose resources based on use, but the constraints on the architecture enables reuse without planning. Serendipity seems to result from good linking habits, stable URIs, a clear indication of the meaning of a particular resource, and good search algorithms to harvest & rank this meaning.

This difference is one major hurdle to overcome if we are to unify these two important schools of thought, and build better information systems out of it.

The early "mashups" were certainly serendipitous. I would expect most service providers on the web today are planning for machine-to-machine to a much greater extent than even a couple of years ago.

Is it really "serendipitous" when an organization follows those good web practices and enters formal-enough agreements with participants, say in an intra-enterprise-services situation? No, that seems very much "planned". The biggest hurdle may be the added demand to negotiate terms for support and upgrades.

This is necessary for any kind of service, networked or otherwise, that poses significant risks to stakeholders. If you buy SAP systems and bring them in-house, or if you network out to Salesforce, you negotiate terms of service with in-house IT, with the vendors, with the ISPs, etc.

Saturday, October 13, 2007

Scalability Requirements

Stefan Tilkov and Dare Obasanjo debate the number of big sites that would benefit from giving up on the relational database as a centerpiece of their systems. The parameter used in the discussion is scalability.

A number of the lessons learned from these big web sites could apply to smaller data center designs as well. The biggest problem I've seen in the typical data center (and, from comparing notes, it seems fairly common) is the inability to evolve components relatively independently.

Relational databases can be in the mix and still evolve, just as more supposedly "loosely coupled" mechanisms can actually tie components tightly together. For example some systems may use asynchronous messaging to decouple with respect to time, but then pass to each other data that exposes implementation details. And so these systems are coupled to each other with respect to maintenance and enhancements: changing the implementation of component C demands corresponding changes to every other component that consumes component C's implementation-specific messages.
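To make that coupling concrete, here is a minimal sketch (the message shapes, field names, and table names are all invented for illustration): both messages travel over the same asynchronous channel, but the first leaks component C's storage layout into every consumer, so a schema change in C ripples outward.

```python
# Hypothetical sketch of "loosely coupled" messaging that still couples
# consumers to component C's implementation. All names here are invented.

# Implementation-leaking message: consumers must know C stores customers
# in a table called CUST_MASTER with cryptic column names.
leaky_message = {
    "table": "CUST_MASTER",
    "row": {"CST_NM": "Alice", "CST_ST_CD": "OR"},
}

# Contract-based message: consumers depend only on a published event
# shape, so C's storage layout can change without breaking them.
contract_message = {
    "type": "customer-updated",
    "customer": {"name": "Alice", "state": "OR"},
}

def consumer_reads_leaky(msg):
    # Any rename of C's table or columns breaks this line.
    return msg["row"]["CST_NM"]

def consumer_reads_contract(msg):
    # Only the agreed contract matters here.
    return msg["customer"]["name"]

print(consumer_reads_leaky(leaky_message))
print(consumer_reads_contract(contract_message))
```

The messaging is asynchronous either way; only the second style is decoupled with respect to maintenance and enhancements.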

But the lessons are worth following for more reasons than just scalability.

Content Unbecoming

Sam Ruby comments in his own blog...

I will merely point out that I’ve dealt with formats and protocols which are “explicitly not intended for human readability” and I’ve learned to avoid them.
And Bill de hÓra comments further down...
Atompub/Atom shines light on the fact we have serious issues around sharing and “understanding” structured content. Complaining about the source of light isn’t sensible.

So, to me, this is like the web services debates rehashed half a decade later. The only interesting difference in this thread is the level at which the argument is happening: around the content/payload/mediatype instead of wire/transfer...

Maybe it’s not as obviously random at this time to argue that per silo data access formats are a good idea...

Fwiw, “technically”, I don’t see why facebook don’t serve class laden XHTML and document the attributes.

And along the way James Snell compares to Yaron's example...
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Some User's Profile</title>
  <author><name>Profile System</name></author>
  <content type="xhtml">
    <div xmlns="http://www.w3.org/1999/xhtml">
      <div class="profile">
        <div class="section" id="professional">...</div>
        <div class="section" id="personal">...</div>
        <div class="section" id="clothingPreferences">...</div>
      </div>
    </div>
  </content>
</entry>
Looks good. Damn. Even *this* xml-hater (yours truly) has had trouble realizing all that one-off stuff is bogus.

Don't Fear the Programmer

Not to beat a dead horse, but Robert Cooper adds an important point to the enterprise development tool conversation...

And here is where it breaks down. All these unusable drag and drop tools, and “easy” XML programming languages aren’t targeted at programmers. They are targeted to suits who can buy into the idea that some non-techy is going to orchestrate these services and modify business rules. These products are unworkable because they are based on the idea that “You won’t need programmers anymore!” at least at a core level. Once you make that assumption you start building things that get in programmers way, and still include enough abstract programming concepts that no non-programmer is ever going to be able to work with it proficiently
I've not heard this lately, but in previous situations this point was widely held: BPM tools should be used to get the programmer out of the loop. I always enjoyed bringing testing into the conversation. The current crop of BPM tools I've seen up close is terrible at incremental development, testing, and release. And yet at least some IT shops have seen them as *the* way to get out from under long software development cycles.

Those shops I know of also happen to be historically lacking good automated testing and release practices. Most of their causes for long development cycles have almost *nothing* to do with programmers, save for the fact that most programmers historically have not paid much attention to good testing either. Of course that's changed dramatically over the last five years or so.

Managers: if you have long development cycles, backlogs of feature requests and bugs, and "legacy code" that seems impenetrable, then please embrace your programmers' desires to be more productive and effective. There are many things you can do today to make big improvements. Buying tools that will reduce the need for programmers is a pipe dream for now.

Friday, October 12, 2007

Sad IP

So let me get this straight -- Novell and Redhat are being sued over a patent some mangy IP shop bought that originated at Xerox PARC (I suppose Rooms?). And the patent is about making the same window show up in multiple workspaces?

This is the best IP some mangy techno-lawyer can pin on *Linux*?

Em, wow. Crawl back under your rock until you can make it *really* interesting... Next.

And note to Steve Ballmer: stop being such a wuss. You're really just appearing pretty sad these days. Can't you get a ride in space or something?

Monday, October 08, 2007

Friday, October 05, 2007

Bungie Cuts the Cord, So to Speak

Wow. This is huge. This speaks volumes, between the lines and in the lines themselves, just as Halo 3's $300MM USD in sales is ballyhooed all over the press.

Just when there's all this buzz about how even enterprise applications should become more game-like, and just as Microsoft must be getting desperate to show they can innovate, Bungie obviously takes off in order to be more innovative with game development than they could be as part of Microsoft.

Todd Bishop's Q&A is great. Maybe he could interview U.S. presidential candidates?

Say it Once Again

Pete Lacey: What is SOA?. Nicely done.

The definition of SOA is still an argument after all these years. This is a darn good sign that we should not care to use the term at all.

Equivocate: To avoid making an explicit statement. See Synonyms at "lie".

Mark Baker wisely points out...

There are, of course, a very large number of automated solutions to a particular problem.

...if you don’t feel disdain towards a lot (97%?) of these solutions, then you simply don’t understand what it means to be a software architect.

There is too much equivocation in our field. It's hard enough building software when you've got good tools at hand.
e·quiv·o·cal (ĭ-kwĭv′ə-kəl)
  1. Open to two or more interpretations and often intended to mislead; ambiguous. See Synonyms at ambiguous.
  2. Of uncertain significance.
  3. Of a doubtful or uncertain nature.

Thursday, October 04, 2007

Oh Dear, Part VII

Arnon Rotem-Gal-Oz assumes the position...

One thing that is missing from "HTTP variety of REST" implementation is reliable messaging. Location transparency is harder to solve with HTTP etc...

The solution should match the problem; that's probably the primary reason why we need architects after all.
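For what it's worth, the usual RESTful answer to the reliable-messaging complaint is retry against idempotent resources. A rough sketch in Python, where an in-memory dict and invented function names stand in for a real HTTP endpoint:

```python
# Sketch: at-least-once delivery via retried, idempotent PUT.
# The "server" is an in-memory dict; put() simulates one transient failure.
import uuid

server_store = {}  # stand-in for the server's resource store, keyed by URI

def put(uri, body, _fail_once=[True]):
    # Simulate a single network timeout, then succeed on later attempts.
    if _fail_once[0]:
        _fail_once[0] = False
        raise ConnectionError("simulated timeout")
    server_store[uri] = body
    return 200

def reliable_put(uri, body, attempts=3):
    for _ in range(attempts):
        try:
            return put(uri, body)
        except ConnectionError:
            continue  # repeating an idempotent PUT is harmless
    raise RuntimeError("gave up after retries")

# The client mints the URI, so even a repeated create targets the same resource.
order_uri = "/orders/" + str(uuid.uuid4())
status = reliable_put(order_uri, {"item": "book", "qty": 1})
print(status)  # 200, despite the first attempt failing
```

Because PUT to a client-chosen URI is idempotent, at-least-once retries collapse to exactly-once effect, which is most of what "reliable messaging" buys.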

Please Don't Pick on the Donkey

More Steve...

REST is unproven. Sigh. I can’t decide if people say this because they’re just trying to stir up an argument, or they’re so heavily biased toward their non-REST approaches that they just can’t even consider that there might be viable alternatives, or they really have no clue about what REST is, how it works, and why it works, and they’re not interested in learning it, so they just react badly whenever they hear it, or all of the above. If you’re anti-REST or REST-ignorant, and you haven’t read RESTful Web Services, then don’t even talk to me about REST. The book is absolutely wonderful, and its explanations and answers are extremely clear. If you consider yourself informed and capable when it comes to distributed systems and integration, but you don’t know REST, then there’s simply no way you can read that book and not have it lead you to seriously question your core beliefs regarding how you think such systems should be built, unless you’re completely close-minded of course...

What do mono-language programmers have to do with ESBs? It’s all part of the same culture, the “one size fits all” approach, where you have answers looking for problems rather than the other way around, and where people intentionally wear blinders to less expensive, more productive, and far more flexible and agile approaches because “it’s just not the way we do things around here.”...

At the end of the day, if you want to ignore my advice on using REST and dynamic languages, that’s your own problem.

And where his previous post received a comment, apparently from a Neudesic employee (ever heard of Neudesic???)...
Sonic Software, BEA, IBM, IONA, TIBCO, webMethods, Cape Clear, Fiorano, Oracle, Software AG, Neudesic, And how many open source projects????

All deliver ESB’s. It seems highly unlikely that all of these organizations, with plenty of smart folks, could be incorrect. Moreover, Gartner, Forester, ZapThink, and Burton, all speak loudly about ESB’s being an enabling technology on the road to building workable SOA’s.

Oh, dear. Steve responds best...
Making a list of smart companies to prove a point, well, doesn’t actually prove anything. I bet there are some actual smart people out there who have locked themselves out of their car, for example, or deleted whole directories they really wanted to keep but never backed up, or even paid actual money for a Zune. What does that prove?...

Unlike most of the commenters on this posting, I’ve personally built at least one of everything I’m talking about, on the ESB side and on the REST side, so I’m extremely confident in my opinion. I’d therefore much rather get specifics from you.

The View From Here Is... Grisly?

Joe Wilcox on MSFT Vista, speaking of big software the world is leaving behind...

The results, while arguably anecdotal, are grisly...

"Everyone—every single person—that I put on Vista has switched back to XP," he said. "It's too complicated."

From an administrator's perspective, my VAR buddy likes some Vista deployment tools, but he viciously complained about poor driver support, networking changes and end user complaints about UAC (User Account Control) and Internet Explorer 7 security popups.

In this ringing Vista endorsement, one Microsoft Watch commenter claimed: "I'm an IT consultant and I'm proud to announce I've formatted 450 Windows Vista machines back to Windows XP to date. I have also prevented at least 1,000 Windows Vista sales."

...the relevance of Windows is in decline. Microsoft's desktop operating system is rapidly becoming a commodity.


Her royal pingdom lists...

OS: Linux 7 - Windows 2
Web server: Apache 7 - IIS 2 - Lighttpd 2
Scripting: PHP 4 - Perl 4 - ASP.NET 2 - Python 1 - Java 1
Database: MySQL 7 - SQL Server 1 (possibly 2)
And you should not be investing in some reasonable approximation to LAMP, again, because...?

Protected Game Preserves

That's how Bob Warfield refers to how Adobe, Microsoft, and SAP are approaching Software as a Service (SaaS). There's something different about Adobe relative to the other two.

Microsoft and SAP are figuring out how to transition gigantic cash cows into a world that is rapidly leaving them behind. Not so much for Adobe.

Adobe bought BuzzWord. They now have an early beta of an interesting word processor. They also have a few more apparently kickass flash/flex developers and creative software designers.

As opposed to MSFT's and SAP's situations, the fact that BuzzWord doesn't compete with other current Adobe products is probably an advantage. BuzzWord as a product may or may not ever matter.

Adobe's strengths in the new world appear to be (1) graphics, especially as they become more web-friendly, e.g. Thermo is hot.

And (2) their own decent runtime from Macromedia, something relatively easy to build upon compared to java or dotnet, from a company with decent developer relationships.

I think considering the company as a whole, Adobe has some good stuff to take into the new world. MSFT and SAP have bigger transitions to make, product-wise and company culture-wise. Not that I would bet much against them, because they are f'ing big and know how to sell software to vice presidents.

Properly Striking a Balance Between Shared Agreement and Decentralized Execution

Stu Charlton of BEA says stuff on the ESB brouhaha, but then he says some really crucial stuff we all need to consider...

I like ESBs if they're used properly -- to me, a good ESB is actually just a scripting language with supporting infrastructure. AquaLogic Service Bus, to me, is a Pipeline language, an XQuery language, and the supporting management & interoperability infrastructure, for example. But an ESB is not mandatory, and sometimes can slow you down if you're not looking to make legacy systems interoperate.
Wow. Equating Aqualogic with a scripting language. That's bold.
I agree with Steve Vinoski's The ESB Question. Don't use an ESB if it's going to slow you down. ESBs are a decent, productive tool for SOA (and even REST!) when applied properly, but so are Python or Ruby. ;-)
I want to like what Stu's saying, but his sprinkling of "properlies" just seems like equivocation.
BEA tries really hard to make Jython supported in WebLogic and AquaLogic, for example. The whole JMX object model is scriptable, and you can do some amazing things with it. And we have PHP running on WebLogic Server.
OK. I kicked the tires on Aqualogic over a year ago. I did not appreciate it any more than other ESBs, and found Jini/Javaspaces significantly more productive.

But I like the parts about Jython and PHP.

What gives? Perhaps two things:
  1. software has long term residual value but our investment decisions are always short term
  2. we intertwingle what changes from what doesn't change
We've fixed #2 with hardware platforms. SOA was supposed to fix it for software, but perhaps not (yet).
Yeah, across the industry we pretty much have #2 at the level of a high art.

As for SOA, well. It's well known...

SOA is the only thing Chuck Norris can't kill...

It's too bad you can't afford it.

Now we get to where I really like what Stu writes (and it has nothing to do with ESBs, unsurprisingly)...
I've seen many fast-moving IT departments, particularly in investment banking, but even some in telecom, continue to use the "build it to throw it away" mantra with reasonable success. This works when your organization is compartmentalized. But when information sharing is of primary importance, such as the intelligence or defense establishment, for example, we get to a case where these huge organizations need a different approach. Layers upon layers of bureaucracy make it hard to abandon any pet project.

This is partly why I think Enterprise Social Computing, the Web's architecture (or some successor of REST), etc. are very exciting; they seem much more likely to strike a balance between shared agreement and decentralized execution.

Every branch of software I know of needs organizations that are good at building software that is easy to throw away. If your organization can do this, it is at the master level and should be highly valued. Very few organizations can throw away much of anything in their data centers. This is *horrible*.

Stu nails that. But moreover he latches onto what should remain relatively constant in the data center: the information (web) architecture. That seems to be a good way, as he writes, "to strike a balance between shared agreement and decentralized execution". If you do that properly, you win.

That BEA Enterprise Social Computing thing looks interesting. It's encouraging to see BEA reaching out in that direction.

Making Great Decisions

From the Lean Development yahoo group...

Ford has spent the last thirty years moving all its factories out of the US, claiming they can't make money paying American wages. Toyota has spent the last thirty years building more than a dozen plants inside the US .

The last quarter's results: Toyota makes 4 billion in profits while Ford racked up 9 billion in losses. Ford folks are still scratching their heads

I viewed a Google Tech Talk recently on "Making Great Decisions". The presenters have a book by the same name. The talk and the book are very similar. The book has more content, of course, and the format works better for me as a book: it's the kind of book I can pick up here and there and read a few of their stories. Nothing truly astounding has jumped out to grab me, which may be good: the stories are mostly common-sense reminders, with a few fundamentals of analysis thrown in, e.g. considering your own biased viewpoint when analyzing a trend.

By and large we tend to stop thinking even when we appear to be "doing".


I've enjoyed reading Steve's posts and articles for a while. His latest is somewhat of a shockwave. An entertaining shockwave, the best kind...

The ESB becomes like one of those tools you see on late-night TV, where it’s a screwdriver and a hammer and a wrench and fishing reel and a paint brush, and plus you can flip it over and it can make you a grilled cheese sandwich. Of course, such tools always end up not doing any of those things particularly well.
But more than that he's former Apollo, which is way up on my list of lost treasures. Aegis / Domain/OS was a joy...
AEGIS was distinctive mainly for being designed for the networked computer, as distinct from its competitors, which were essentially standalone systems with added network features. The prime examples of this were the file system, which was fully integrated across machines, as opposed to Unix which even now draws a distinction between file systems on the host system and on others, and the user administration system, which was fundamentally network-based.

Wednesday, October 03, 2007

Yes, IM

From "discipline and punish" this...

Now every application isn't complete until it can send/receive IMs.
For whatever reason this came to mind the other day...
Send an instant comment to me,
Initial it with loving care
Don't surround

'Cause it's time, it's time in time with your time and its news is captured...
I'll have to go back and listen to the recording. I always thought it was "instant comment" but most lyric sites have "send an Instant Karma to me". Hmm.

OK. So in case anyone is interested, I did listen to the music this morning, and it does sound like he's saying "send an Instant Karma to me". Oh well. That's probably more Yes anyway.

Delay Tactics

See over on OreillyNet...

Well-intentioned people often add risk to their projects when they make hard decisions too early.

Carry On

The comments to Cedric Beust's "verdict" on Erlang are a better read than the post itself. A smattering of choice words...

It is amusing to watch people attempt to tear apart and deconstruct Erlang, trying to refute its benefits, while Erlang coders continue on their merry way creating incredible software. Carry on, Java programmers!
What you're saying in effect is that despite the fact that the Erlang guys have already proven this stuff in practice time and again with actual, working, reliable, and long-running telecommunication systems (and systems in other fields as well), your gut feelings are a much better proof and indicator of what actually works in practice and what doesn't, so the reader should believe you, not them. Wow.
The real difference here is that it is much simpler (i.e. you don't even have to think about it) to run the supervisor process on an entirely separate node in your cluster. This is paramount for real fault tolerance in a running system, and you don't even have to think about it in erlang. I don't know how easy or hard this would be to accomplish in Java, but it is stupid simple in erlang.
In fact, the "post message to mailbox" operation has historically been the most tightly tuned hand-written algorithm in the entire erlang VM, for exactly that reason... but go ahead and assume it sucks. :)
Someone suggested Java can't compute 4 digit factorials. Use BigInteger - and it comes back in a snap for the factorial of 1000.
(You gotta love *that* one, don't you!) And...
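The BigInteger jab checks out: with any arbitrary-precision integer type, four-digit factorials are trivial. A quick check in Python, whose unbounded int plays the role of Java's BigInteger:

```python
# 1000! is a big number, but arbitrary-precision arithmetic handles it
# instantly; Python ints, like Java's BigInteger, have no fixed width.
import math

f = math.factorial(1000)
print(len(str(f)))  # 2568 -- the number of decimal digits in 1000!
```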
Man, I'm sooo over language wars.
Finally from the post itself...
The only way a process can modify the state of or talk to another process is by sending a message to that process. Regardless of how lightweight the implementation of this message passing is, there has to be a lock on all these inboxes to guarantee that when dozens of processes send messages to your own process, these messages are delivered and treated in the order they are received from.
Erlang makes no such ordering guarantees. The messages from any process A to any process B will arrive in order, but independently of how they interleave with messages to B from processes C through Z or beyond.
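The distinction can be sketched outside Erlang. A rough Python analogue, with threads and a Queue standing in for processes and a mailbox (this illustrates the guarantee, not how the Erlang VM implements it):

```python
# Messages from each sender land in B's mailbox in that sender's order,
# but nothing constrains how A's and C's messages interleave overall.
import queue
import threading

mailbox = queue.Queue()  # plays the role of process B's mailbox

def sender(name, count):
    # Each sender posts its own messages in sequence.
    for i in range(count):
        mailbox.put((name, i))

threads = [threading.Thread(target=sender, args=(n, 100)) for n in ("A", "C")]
for t in threads:
    t.start()
for t in threads:
    t.join()

received = []
while not mailbox.empty():
    received.append(mailbox.get())

# Per-sender order is preserved even though the two streams interleave.
for name in ("A", "C"):
    seq = [i for (n, i) in received if n == name]
    assert seq == list(range(100))
print("per-sender FIFO holds; the global interleaving is unspecified")
```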

And like Mark Baker has said about designing RESTful HTTP systems: nobody said it would be *easy*, just simpler and more rational than other approaches. I don't know anyone who said Erlang made complex, distributed programs *easy*. But Erlang makes some aspects simple, gets you thinking in the right way, and has a body of knowledge and a community readily available to help you build large systems.

Earlier in the year the team I am on took on building the kernel of a potentially large system using Jini/Javaspaces. That is the most "erlang-like" tool for building large, distributed systems in Java. (No, ESBs are not even in the running.)

The software was fine. The mechanisms worked. The biggest lack was the community. It would take a good deal of effort to build up our own knowledge base. There were bits and fragments and people like Dan Creswell we could call on to get advice here and there. But Dan moved on from J/J for the most part into another full-time gig. The rest of the community just wasn't there.

That community and knowledge base seems to be pretty much in place for Erlang development. That in itself is a huge advantage. Carry on.

About Me

Portland, Oregon, United States
I'm usually writing from my favorite location on the planet, the pacific northwest of the u.s. I write for myself only and unless otherwise specified my posts here should not be taken as representing an official position of my employer. Contact me at my gee mail account, username patrickdlogan.