Update: Jon Stewart on the Daily Show made a similar statement (almost word for word) about the definition of an activist court. He apparently said it first in New York, but I didn't get the show until an hour+ later on the west coast. I was hoping his staff reads my blog. Obviously a vain idea, considering the timing factor. 8^)
BTW, couldn't the claim realistically be made that the most "activist" decision any US court has made in recent memory is the Supreme Court decision declaring Bush the winner in 2000? Water under the bridge, right? But I don't recall constitutional scholars ever countenancing that decision.
From one book on the matter: "Digging deeply into their earlier writings and rulings, Dershowitz proves beyond a reasonable doubt that the justices who gave George W. Bush the presidency contradicted their previous positions to do so."
Reinforcing the point. "Activism" is clearly in the eye of the beholder.
The Defense of Whitey...
Simply brilliant.
By the way, an "activist" court is one that makes decisions *you* don't like. And a "state's right" is a decision that you *do* like. (For any definition of "you", left or right.)
The automated french translation of "Making it stick" comes out something like "He's making the stick".
Well, that's not what I intended for the title of this blog. A better translation might be...
"Rendant quelque chose de dernier de mani?re permanente."
...which is not as short as "Making it stick" but means something like "Making something last permanently."
The challenge is expressed this way...
try explaining to someone how Smalltalk's ifTrue:ifFalse: works sometime
The message looks like this...

    someObject ifTrue: [someCode] ifFalse: [otherCode]
See if my attempt makes sense to you:
If someObject is true, then someCode is executed. Otherwise otherCode is executed.
Is that basic enough for non-Smalltalkers?
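If it helps, here is a toy analogy in Python (my own sketch with invented names, not Smalltalk's actual classes): in Smalltalk, true and false are objects, and each one's class implements ifTrue:ifFalse: by evaluating one of the two blocks handed to it.

    # Toy analogy in Python. In Smalltalk, true and false are objects whose
    # classes each implement ifTrue:ifFalse: by running one of two blocks.

    class SmalltalkTrue:
        def if_true_if_false(self, some_code, other_code):
            return some_code()       # the true object runs the first block

    class SmalltalkFalse:
        def if_true_if_false(self, some_code, other_code):
            return other_code()      # the false object runs the second block

    some_object = SmalltalkTrue()
    print(some_object.if_true_if_false(
        lambda: "someCode executed",
        lambda: "otherCode executed"))   # -> someCode executed

The point is that there is no built-in conditional statement at all; "if" is just a message send, which is exactly what makes it hard to explain to people expecting special syntax.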
Fred Brooks, via Chad Dickerson, displaying ancient industry wisdom...
I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared with the conceptual errors in most systems.
Yes!!! Consider the recent Mars Spirit Rover problem... it was an unconsidered scenario, a conceptual error rather than an error of representation, that disabled the Rover.
We need *simple* tools that stay out of the way, so we have more time to consider problematic scenarios.
And we need better simulation and conceptual analysis tools. For example, was Spirit's flash file system modeled at any level as a concurrent state machine? What kind of analysis would have caught the missing transition from a failed transmission to expunging the leftover files?
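To make that concrete, here is a minimal sketch of the kind of check I mean, with entirely hypothetical states and events (the real Spirit file system was far more involved): model the workflow as an explicit transition table, then mechanically enumerate every state/event pair the table fails to cover.

    # Hypothetical sketch: an explicit state machine for a transmit-then-cleanup
    # workflow. States, events, and transitions are invented for illustration.
    STATES = {"idle", "transmitting", "transmit_failed", "cleaned_up"}
    EVENTS = {"send", "ack", "nak", "cleanup"}

    TRANSITIONS = {
        ("idle", "send"): "transmitting",
        ("transmitting", "ack"): "cleaned_up",       # success expunges the files
        ("transmitting", "nak"): "transmit_failed",
        # Missing on purpose: ("transmit_failed", "cleanup") -> "cleaned_up".
        # A failed transmission leaves its files behind forever.
    }

    def unhandled(states, events, transitions):
        """Enumerate every (state, event) pair the model does not cover."""
        return sorted((s, e) for s in states for e in events
                      if (s, e) not in transitions)

    for state, event in unhandled(STATES, EVENTS, TRANSITIONS):
        print("unhandled:", event, "in state", state)

The gap at ("transmit_failed", "cleanup") shows up in that list, along with other pairs a designer must consciously mark as impossible or irrelevant rather than silently ignore.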
Note that this is also what Longhorn architects should fear the most... the unknown unknowns, as Rumsfeld would put it.
Has the nature of data modeling changed in the era of the Semantic Web?
All around the world, as I write this, developers are struggling to fit models of invoices and other "simple" business documents into IT systems. All around the world, multiple efforts, new and old, continue trying to zoom in on a definitive model of what it is to be an invoice...
There is *no such thing as an invoice* in the classical data modelling sense... There seems to me to be a fundamental mismatch here between the classical software model of an invoice and the reality of real world invoices.
Think back to how you recognized that invoice as an invoice. You found a bunch of attributes which you associate with invoices. You found enough of them in your analysis of the piece of paper to conclude that it was, statistically speaking, more likely than not to be an invoice. If it walks like a duck...
I think this is a case of both/and rather than either/or. Financial systems should not be expected to "scan a document looking for invoice-y-ness".
Yes, there are umpteen definitions of an invoice. No, formally modelling any one of these is not an endless exercise.
You see, any given accounts payable system only has to understand *one* of these. So the one I use understands SAP's definition of an invoice. The one you use may understand Great Plains' definition.
Will there ever be a universal definition of "Invoice" for the Internet? Or will the Semantic Web just allow us to send bits that resemble invoice-y-ness?
I'm not sure we'll need universality. I kind of anticipate the arrival of a marketplace for "adapters" and "translators" for documents, and some of those will employ fuzziness. There is room for formal documents to become more "semi-formal", making them more "social". But most of today's formal bits for formal systems, like the subledgers of purchasing systems, must remain formal.
My AP system does not need a universal model, nor does it need to understand umpteen invoice models, but neither does it need a neural net or a rule base. It does need at least one formal definition of an invoice, which obviously is not insurmountable.
Maybe I'm a quack.
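To make the both/and point concrete, here is a minimal sketch (all field names and hint strings are hypothetical; no claim of resembling SAP's or Great Plains' actual models): one formal invoice definition inside the AP system, with a fuzzy "invoice-y-ness" scorer out front for loose documents.

    # Hypothetical sketch of both/and: one formal model inside, fuzziness outside.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Invoice:                  # the *one* formal definition my AP system accepts
        vendor_id: str
        invoice_number: str
        invoice_date: date
        total: float

    HINTS = ["invoice", "amount due", "remit to", "bill to", "invoice no"]

    def invoiceyness(text):
        """Fuzzy front-end scoring: what fraction of invoice hints appear?"""
        lowered = text.lower()
        return sum(hint in lowered for hint in HINTS) / len(HINTS)

    doc = "INVOICE NO 42 ... Amount Due: $100.00 ... Remit To: ACME"
    if invoiceyness(doc) > 0.5:     # walks like a duck: hand it to a translator
        invoice = Invoice("ACME", "42", date.today(), 100.00)  # translator's output
        print(invoice)

The fuzziness lives entirely in the adapter; the formal subledger never has to scan anything for invoice-y-ness.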
The good news is that people inside MSFT are considering the complexities of WinFS.
Dare Obasanjo writes...
things aren't as easy as one suspects at first glance at WinFS
Insert appropriate "I told you so" from here and here.
Unfortunately, the notion that a solution could in any way resort to "even if it is just coming up with application design guidelines" is *completely* unacceptable for a 21st century file system or database.
There is a philosophical schizophrenia taking place inside the head of WinFS. Is it a file system? Is it a database? Is it cheap and simple? Is it transactional and robust?
I personally do not believe WinFS should be cast as a file system at all. Rather, WinFS should be a complete rethinking of what a database is. WinFS should eat the lunch of SQL Server, not NTFS. But the technical problems are deep, and the business challenges of introducing a replacement for SQL Server are enormous compared to those of replacing NTFS.
However, any practical use for WinFS by *definition* makes it also a replacement for many current uses of SQL Server. Unless you just want a way to store pictures. That's trivial, and that alone is not the heart and soul of the WinFS being presented to us.
As for the security problems, the answer there is also obvious yet painful. A real innovation in Longhorn would be to use "trusted computing" in the hardware and software to implement a capability-based operating system.
XAML, camel. Don't just play at being engineers. Create something really useful.
Gordon Weakliem speculates on dynamic loading in dotnet...
I don't know of anything like this for .NET, though AppCenter may do that for you. On .NET, you can replace assemblies in a server application, but that forces an application restart, so you lose any state stored on the server. It sounds like in James' scenario, he relies on using loadable components along with an administrative API to get the server to reload components, which makes me wonder if you could do something similar for a .NET web application, where the application itself is just a driver that loads components from elsewhere. It certainly wouldn't be a standard design though.
I am sure I know less about dynamic loading in dotnet than almost anyone, and MSFT may well have the issues in hand. I wonder, though, how complications in the dotnet language model may wreak havoc on a simple solution to the problem.
One issue that comes to mind is stack-allocated objects...
Imagine module A defines a stack-allocated object, and some system loads a newer version of A. Now imagine the newer version of A redefines the stack-allocated object, e.g. gives it a new "shape". What does this do to the objects already allocated on the stack, and to the code in other modules that manipulates them?
This is one of the key differences between the "everything is an object" as defined by dotnet, and the "everything is an object" as defined by Smalltalk.
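As a loose analogy only (Python here; Python, like Smalltalk, heap-allocates everything behind references, so this merely gestures at the harder dotnet value-type case): even hot-reloading a module in a dynamic language leaves previously allocated objects carrying the old shape.

    # Hypothetical sketch: hot-reload a module whose class changes "shape".
    import importlib, pathlib

    pathlib.Path("mod_a.py").write_text(
        "class Point:\n"
        "    def __init__(self):\n"
        "        self.x, self.y = 0, 0\n")
    import mod_a
    old = mod_a.Point()                    # allocated against the old shape

    pathlib.Path("mod_a.py").write_text(
        "class Point:\n"
        "    def __init__(self):\n"
        "        self.pos = (0, 0)\n")     # version 2: a new shape
    importlib.reload(mod_a)

    new = mod_a.Point()
    print(hasattr(old, "x"), hasattr(new, "x"))  # True False: old keeps old shape
    print(isinstance(old, mod_a.Point))          # False: old's class is now stale

In dotnet the situation is presumably worse, because a value type's layout is compiled into the stack frames and field offsets of every module that uses it.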
Wesner Moise asks this question. Clearly MSFT is a force to be reckoned with. They are making a huge investment to bring even more developers more solidly into their API camp, which is not easily emulated.
What I hope happens with Linux is that significant developers continue to realize that the Linux core is solid, and that good systems should still be built in layers. Microsoft is bundling so much into their "core" for business reasons that it's a bet that may not pay off as well as hoped.
It's not good software, but it is great business.
And speaking as a software developer, as dispassionately as I can speak about something I am passionate about, there are precious few gems in what I have seen so far.
The two real gems to me seem to be Indigo and the Business Framework class library. Ironically these are two components that are the most independent of the "core".
The two primary losers seem to be WinFS and the whole XAML/Avalon stuff. Not that they are not aimed in a worthy direction; I just don't see a lot of bang for the buck.
The Linux and the Java worlds (they are still separate from each other) have the benefit of more freedom to innovate on *top* of those platforms.
The Longhorn core is too big, and just beginning to grok the complexity of that singular vision will suck up too much energy.
But I have never been good at business math. The strategy from a business perspective is sound. It has obviously worked in the past.
Economic forces to get or keep a piece of the rest of the pie should remain strong enough to promote innovation on the non-MSFT platforms. I don't see the others being plowed under so easily. There is too much wrong with Longhorn and too much incentive to compete.
Phil Windley makes a case about SOA and complexity. I'll buy the conclusion, but the argument needs more realism. Here goes...
First, no one's IT architecture looks like the first figure. That figure is too neat. The real picture is somewhere between the first and the second, but the integration lines are supported by an evolution of technologies, from previous-generation message queues to file transfer to re-entering data by hand from one application into another (or worse, by hand from a printout).
Second, a future SOA architecture is unlikely to look like the second figure, for several reasons. One reason Phil makes indirectly: that picture is so complex as to be unmaintainable. So while *someone's* IT architecture may end up this bad, the typical architecture almost certainly won't.
Other reasons that figure is unrealistic: business processes provide a natural organizing force for large grained services. I have trouble believing the architecture of the future will be made of hundreds of interacting components without some large grained (logical, if not physical) hubs of service organized by purchasing, inventory, manufacturing, sales, general ledger, etc.
There is a third reason I have not seen mentioned much in the past year, but one I believe will hold even as the business cycle improves: vendor consolidation, in two forms. On the IT side, even Phil's argument points out that an SOA benefits from organizing forces. One of those forces is the drive toward fewer vendors supplying products into the architecture. On the vendor side, well, vendors will want to be on those select lists, and so will continue to merge and make overall sense of the resulting product strategy.
On to the three problems Phil enumerates for an SOA...
First, "No one team understands or controls all the moving parts". But that is true today without a doubt.
Second, "Change management is more complex". Maybe. But almost by definition a useful SOA has to evolve toward fully embracing the parameters of being "loosely coupled". Change management is complex today, to a large degree because there has been little enterprise architectural guidance and pieces thrown together have resulted from little attention to "loose coupling" principles.
Third, "Separation of concerns is more difficult". Unlikely, for a few reasons. Concurrent, even ahead, of the movement to an SOA, is the movement of IT organizations themselves to an Enterprise Architecture alignment. More attention than in the past is being paid to enterprise concerns prior to deployment, as far ahead as the capability roadmap, the project concept, or at least the project planning stages. The people who support the IT architecture, pay for it, and even those who design it, are sensitive already to the problems inherent in such a picture and are already showing signs of addressing "separation of concerns" at the enterprise level through architecture and through budget (Enterprise Program Management). A challenge is to find enough balance between avoiding constant interaction and indirect impacts of minimizing communication. (Plug goes here for new collaboration tools like Wikis and RSS to lower the effort of finding "just enough communication".)
Lastly, a default argument could be made that a large movement toward an SOA is inherently dependent on SOA vendors addressing these problems effectively. This leads to somewhat of a Catch-22, but we can expect the usual curve of early adopters to begin working out the kinks.
I don't believe many of these problems will be addressed by "web service intermediaries" per se. Although the infrastructure is still immature, the real problems for the enterprise are at the boundary where the services architecture meets the business architecture. The real problem in heading toward a world of rich services is how to get them to play together at the business level. How do you keep them agreeing on a correct chart of accounts, a correct part and product hierarchy, and correct roles and responsibilities, and keep them up to date with engineering change notices, etc.?
These problems are not insurmountable. To reiterate, I believe the movement toward an SOA will be preceded by a movement to realign IT around the enterprise architecture, and will go hand in hand with vendor and product consolidation and less custom development. Without these steps, then yes, the future looks unmanageably complex. *With* these steps the future is still complex, because the changes will be gradual and IT will continue to deal with the as-is and the to-be concurrently. But at least there is hope, if the SOA products mature and rational principles are in place for their adoption.
WS-xxx is supposed to be composable. Great. How do we tell the potentially solid connections that *work* from the illogical, or simply broken, connections that don't work?
I suppose that is where Indigo will give you Microsoft's working combinations based on some declarative attributes. (And those attributes themselves may work in Microsoft's C#, but not in some other language provider's dotnet compiler.)
How do I figure out in some Java implementation of WS-xxx what to provide to line behaviors up with the Indigo implementation? This is probably more of a mess than necessary.
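What seems to be missing is any machine-checkable statement of which combinations a given stack supports. A hypothetical sketch of what I mean (every name and combination here is invented; no real WS-xxx stack works this way, as far as I know):

    # Hypothetical: a stack publishes the WS-xxx combinations it has actually
    # tested end to end, and callers can check a composition before relying on it.
    SUPPORTED_COMBINATIONS = {
        frozenset({"WS-Security", "WS-Addressing"}),
        frozenset({"WS-Security", "WS-ReliableMessaging", "WS-Addressing"}),
    }

    def composable(*specs):
        """Is this exact combination on the stack's tested-and-working list?"""
        return frozenset(specs) in SUPPORTED_COMBINATIONS

    print(composable("WS-Security", "WS-Addressing"))            # True
    print(composable("WS-ReliableMessaging", "WS-Transactions")) # False: who knows?

Two stacks could then at least compare lists before trying to interoperate, instead of discovering the broken combinations in production.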
And where are all those web services, anyway? Does their apparent absence have anything to do with the WS-xxx soup? Shouldn't one just use HTTP[S] after all?