"I have a mind like a steel... uh... thingy." Patrick Logan's weblog.


Wednesday, December 31, 2003

One last post for 2003... Happy New Year!

Signing off for 2003. One last post...

Dan writes about good news for organic beef farmers. "They're not allowed to feed animal remains to their cows."

Here in Portland, Oregon we get Painted Hills beef.

Tuesday, December 30, 2003

Thinking Good Thoughts

Another break in the holiday festivities to note that Don Box is asking us to think good thoughts...

Think HyperCard. Think VB 1.0. Think classic ASP.

I think it's going to matter big time going forward as the industry wakes up from its C++/Java-induced haze and starts thinking about making computers programmable again.

Monday, December 29, 2003

A break in holiday festivities for a technology wish worth wishing...

Via Wired, Rael Dornfest, author of Google Hacks and the mobilewhack weblog, with a wish worth wishing for 2004...

"I'd like to see consumer mobile devices -- palmtops, hiptops and handsets --scriptable. It was scripting that drove the Web, taking it from a static online catalog of content to an operating system. Gaining simpler programmatic access to the contacts, calendars and other assorted user data; Bluetooth; messaging; image capture and manipulation on the phone will open up the mobile to the people prototyping the next generation of applications."

Wednesday, December 24, 2003

Thoughts heading into 2004

Whatever else Jesus was, he was almost certainly a radical.

He challenged the mainstream religious authority.
He challenged the mainstream governing authority.

He challenged the mainstream attitudes toward those who are not in the mainstream.

My wish for us all is peace and true prosperity.
May we all take one radical step in that direction in 2004 and we'll be more than a billion steps closer.

Tuesday, December 23, 2003

REST and Linda for Distributed Coordination: Elaboration vs. Layering

Mark Baker makes an interesting distinction in response to my wRESTling with tuple spaces...

Patrick seems stuck with how to reconcile his position that generic abstractions are a good thing, but that systems should be built independent of the protocol. Note to Patrick; this is all well and good for transport protocols, but application protocols define the abstraction; for them, protocol independence requires that you disregard that abstraction.

This distinction of transport protocols vs. application protocols is exactly what I am wondering about REST. As I read the definition of REST, the architectural style being described is for a transport protocol rather than an application protocol. Not much is said really about the behaviors of the client or the server. Even when you bring HTTP per se into the definition of REST, Fielding makes a somewhat confusing statement about transport protocols...

HTTP is not designed to be a transport protocol. It is a transfer protocol in which the messages reflect the semantics of the Web architecture by performing actions on resources through the transfer and manipulation of representations of those resources.

Is it a transport protocol or not? Let's ignore that and pursue the part about "performing actions on resources" because that *does* seem to be about an application protocol. Fielding continues...

It is possible to achieve a wide range of functionality using this very simple interface, but following the interface is required in order for HTTP semantics to remain visible to intermediaries.

And so this is where I begin to have problems with REST, as I read it, as an application protocol for distributed system coordination. The problem is not that it is inappropriate, but rather that it is too vague.

I don't mean "vague" in a derogatory manner. What I mean is exactly what Fielding writes, i.e. it is possible to implement a wide range of functionality using this very simple interface.

How is this different from the tuple space interface? I have written, and many others have written better than I, that it is possible to implement a wide range of features using the very simple tuple space interface.

The difference is this: the HTTP interface is vague and the Linda interface is specific. Linda has precise, simple semantics. The possible range of behaviors exhibited in Linda-based systems benefits from being layered on *top* of those precise, simple semantics.
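
To make "precise, simple semantics" concrete, here's a minimal, in-process sketch of the classic Linda operations in Python. The operation names and the blocking behavior follow the usual Linda definitions; the class and the matching rule are illustration only, not any particular product's API.

    import threading

    class TupleSpace:
        # out() adds a tuple; in_() removes a matching tuple; rd()
        # reads one without removing it. in_() and rd() block until
        # a match exists. None in a template acts as a wildcard.
        def __init__(self):
            self._tuples = []
            self._cond = threading.Condition()

        def out(self, tup):
            with self._cond:
                self._tuples.append(tup)
                self._cond.notify_all()

        def _find(self, template):
            for tup in self._tuples:
                if len(tup) == len(template) and all(
                        t is None or t == v
                        for t, v in zip(template, tup)):
                    return tup
            return None

        def in_(self, template):      # 'in' is a Python keyword
            with self._cond:
                tup = self._find(template)
                while tup is None:
                    self._cond.wait()
                    tup = self._find(template)
                self._tuples.remove(tup)
                return tup

        def rd(self, template):
            with self._cond:
                tup = self._find(template)
                while tup is None:
                    self._cond.wait()
                    tup = self._find(template)
                return tup

Everything else (master/worker queues, barriers, result collection) layers on top of those few operations without changing them.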

HTTP, on the other hand, has to be *elaborated* into something more specific in order to have a useful meaning as an application protocol. WebDAV is an example of such an elaboration.

Every web site that implements custom behavior using forms with GET or POST is an example of the open-ended nature of HTTP per se. The architectural style of REST supports the HTTP transport protocol underlying these forms as they move across the web, but the application protocol, that is, the behavior of the forms on the client and especially on the server, is defined (at least in code) by each specific instance.

Distributed systems wishing to use HTTP, or more generally REST, to perform coordinated work will therefore require some more specifically defined application interface than that provided by REST, or that provided by HTTP.

WebDAV is one option as stated already, and it has proven viable in some specific cases. I don't believe the full range of systems that can be usefully built with WebDAV has been exhausted. By the same token, neither do I see much evidence of that range being nearly as broad as that of Linda tuple spaces.

Vanessa Williams provides an elaboration of HTTP for a tuple space application protocol. As I understand REST this should therefore provide the application protocol of a tuple space on the architectural style of REST using the HTTP transport/application protocol mix. In this case the advantage of using REST and HTTP is supposed to be found in the hardware and software that would already be in place between the client and the server.

I think and hope this is fairly accurate. I am eager to be clued in further by Mark and others. I am still unsure that this advantage is significant over a less pure elaboration of HTTP, as in XML-RPC or the arguably more RESTian SOAP. I think there is a lot to be said for something else altogether as a transport for tuple spaces, in particular Jabber or perhaps Spread. The bottom line is the usage models of distributed systems coordination would benefit from a well defined, simple, axiomatic application protocol, but the best transport protocols *will* have to evolve just because the usage models themselves will have to evolve. For all but a handful of services (e.g. Google), they just may act nothing like today's web.

What Kinds of Queries?

Queries: reportedly, Adam Bosworth said at the XML conference...

"I don't know how we're ever going to truly optimize these queries."

Before we worry about that, though, we should consider how the average user is going to form those queries. The last twenty years have seen great strides in forming SQL queries for numerical data analysis. The models underlying those queries fortunately support both the relatively non-technical users who wish to form them as well as the highly technical administrators who wish to optimize them.

What kinds of queries do these users wish to form? What are we trying to optimize? What relationships will these have with current typical business analysis?

Will we wait twenty years for the evolution to settle into a widely used winner?

These Changeable Things Avalon Has

Update: I almost forgot Doug Lea's thoughtful design and implementation of a collection package, which predates the java.util collections. That specific link will take you directly to his wisdom on designing type checked immutability. (Which makes me wonder if Microsoft has hired him yet too?!)

Greg Schechter writes about the Changeable class. Not knowing much about the implementation, this seems like a reasonable concept. (EmbeddedChangeableReader sounds like it might be a bit over the top, but I don't know.)

Compare this to Object.freeze in Ruby. The Ruby implementation appears to be simpler, but the overall lesson I think is this: what we have in the Changeable class is more evidence of Java-like languages struggling with their underlying dynamic tendencies.

Inside every statically type checked language there is a dynamic language struggling to get out.

The simple statically checked approach is to create a class without mutation, e.g. Brush, and then add mutation in a subclass, e.g. MutableBrush. Another approach is to simply throw an exception in any method that would otherwise cause a mutation, without any system-wide designation in the code. See the Java Collections API for example, in particular the unmodifiableCollection static method and its siblings for List, Set, etc.
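
A rough Python sketch of both approaches, using the hypothetical Brush and MutableBrush names from above; the wrapper at the end mirrors the spirit of unmodifiableCollection, raising only when a mutation is attempted at run time.

    class Brush:
        # Immutable base: state is fixed at construction.
        def __init__(self, color):
            self._color = color

        @property
        def color(self):
            return self._color

    class MutableBrush(Brush):
        # Mutation lives only in the subclass.
        def set_color(self, color):
            self._color = color

    def unmodifiable(obj):
        # The unmodifiableCollection style: same interface, but
        # any mutating call (here, any set_* method) raises at
        # run time rather than being ruled out by the type system.
        class Unmodifiable(object):
            def __getattr__(self, name):
                if name.startswith('set_'):
                    raise TypeError('unmodifiable object')
                return getattr(obj, name)
        return Unmodifiable()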

Then again there is the option of just allowing mutation for all instances even when it's not expected. One of the first actions I took in my first Smalltalk system (the Tek 4404) was to change Black to White. Oops. Fortunately although Smalltalk has a persistent image, it's not difficult to back out of the change.

Wanted: Comparison of XML-based Query and Table-based SQL

Jon Udell writes about XML for the Rest of Us wherein Adam Bosworth writes...

"The relational database is designed to serve up rows and columns," said BEA's Adam Bosworth in his keynote talk. "But our model of the world is documents. It's 'Tell me everything I want to know about this person or this clinical trial.' And those things are not flat, they're complex.

I agree with the idea of semi-structured searching and manipulations. But I don't expect anyone would deny we still need traditional (e.g. business) calculations, and those will be well served by more structure rather than less. I'd like to see a more direct XML Query and SQL comparison. As it stands, I'm being led to believe I should use XML Query (i.e. something with XPath-like stuff in it) for doing non-numerical property-tree searches over data that has been or could be expressed as an XML text; but I should use SQL for doing relationally flat table-column calculations over data that has been entered into a relational database.

There is a merger out there somewhere that I'm not seeing, or maybe just an example of what such a merger might look like.
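
Here's the kind of side-by-side I mean, as a small sketch using Python's standard library. The patient/trial data and names are invented for illustration, following Bosworth's example: the same question asked of a property tree via XPath-like matching and of a flat table via SQL.

    import sqlite3
    import xml.etree.ElementTree as ET

    doc = ET.fromstring("""
    <patients>
      <patient name="Ada"><trial id="T1"/></patient>
      <patient name="Bob"/>
    </patients>""")

    # The property-tree version: which patients are in trial T1?
    enrolled = [p.get("name")
                for p in doc.findall(".//patient")
                if p.find("trial[@id='T1']") is not None]

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE enrollment (name TEXT, trial TEXT)")
    db.execute("INSERT INTO enrollment VALUES ('Ada', 'T1')")

    # The relational version of the same question.
    rows = [name for (name,) in
            db.execute("SELECT name FROM enrollment"
                       " WHERE trial = 'T1'")]

    assert enrolled == rows == ["Ada"]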

Useful InfoPath Session at PDC

Watching this PDC session on InfoPath is worth the time. It's better than previous demonstrations and white papers I've read, especially if you're looking for somewhat technical information on InfoPath in the context of a modern Microsoft-based IT shop.

Monday, December 22, 2003

My First Computer: the IBM 5100

A collection of first computer stories relayed by Dan Gillmor.

My first computer was an IBM 5100. Flip the switch up, you're running BASIC. Flip the switch down, you're running APL.

I'd hardly call it a portable computer though. The box was not even as mobile as those "luggable" computers like the Kaypro from a few years later.

What's On Public Radio

http://www.publicradiofan.com has a listing of, well, audio links to what's on public radio right now.

Teddy Bear's Picnic on DVD in February

Great news from Harry Shearer...

Now, some real Xmas cheer: many of you have been asking when you can buy my film "Teddy Bears' Picnic" on DVD. The answer, apparently, is this February. Be warned, you may have some trouble finding it, since, as the result of a monthlong battle lost this week, the cover art on the box will more closely resemble Porky's 4.

This is merely proof that, while it's now easier to make a movie outside the Hollywood mainstream, to get it seen one still has to run the usual gauntlet of Visigoths. These particular barbarians, in a nice twist, are Canadian, and they appear to believe that the target audience for this movie is teenage boys. I'd share the art with you, but I promised a low-impact blast, so take my word for it--the bikini-clad blonde who dominates the cover art appears nowhere in the film.

If you'd like to send a greeting to the primate who insists on this approach, his address is: rmanis@thinkfilmcompany.com. Yes, thinkfilm. When your business is irony, even that comes back to haunt you.

More on tuple spaces

Phil Windley expresses an interest in tuple spaces and points to an item that spells out in some detail why I am still unsure of what REST really means at a deep semantic, implementation, and performance level. I wrote about this in a couple of (admittedly superficial, and this is more of the same) items in April 2003.

Whether using "pure" HTTP, or SOAP, XML-RPC, or Jabber, (or SOAP over Jabber, XML-RPC over SMTP, SOAP over BEEP over HTTP, or... even RSS, RSS-Data, and polling or cloud APIs as part of the transport) the key is to distinguish transports from actions.

A tuple space (or XML space, where the tuples are represented as XML text) for distributed, asynchronous computing can be (and should be) implemented on top of multiple transports. The important aspect for applications is the actions (tuple space semantics). The implementation and performance of the transports can (and should) evolve independently of the simple semantics of tuple space actions.

Above those simple actions, and independent of the transport implementations, tuple space operations can be combined into the various kinds of databases, queues, exchanges, and marketplaces that we really want to focus on evolving in the first place.
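
In code, the separation I'm after looks something like this sketch (all names invented): the action vocabulary is pinned down once, and the transport underneath stays pluggable.

    class Transport:
        # Anything that can move a small message and return a
        # reply: an HTTP POST, a Jabber message, SMTP, Spread...
        def send(self, message):
            raise NotImplementedError

    class TupleSpaceClient:
        # The fixed action vocabulary, independent of the wire.
        def __init__(self, transport):
            self._transport = transport

        def out(self, tup):
            return self._transport.send(('out', tup))

        def in_(self, template):
            return self._transport.send(('in', template))

        def rd(self, template):
            return self._transport.send(('rd', template))

Swapping HTTP for Jabber then touches only the Transport implementation; the applications built on out, in, and rd don't change.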

Saturday, December 20, 2003

Where's IBM's Linux Desktop?

From consultingtimes...

Have you noticed that the most likely source of technology expertise, IBM has simply refused to provide a Linux Desktop? With all of their Lotus applications neatly running on their own UNIX products, they won't let you have them on Linux. Instead, they suggest you purchase Windows XP Professional.

The internal strife existing at IBM over producing a Linux desktop has the potential to hurt IBM's business model. A powerful internal software organization wants to grab server market share from Microsoft without disturbing Microsoft's desktop. Anyone inside IBM that mentions a Linux desktop has the potential for losing their job. While few people at IBM know what exists outside the company, the powerful software group may have top executives walking the same plank as the rest of us if Microsoft remains the only Intel desktop platform.

While the software people at IBM have their heels dug in, they may find out that their Web Services strategy based on Java has no place to go. Sun may not have put IBM in "Check," as Scott McNealy has put it, but Sun's Java Desktop System definitely places IBM in a Microsoft dilemma. IBM will have to decide if they'll continue to provide Java Web Services and find a desktop to accommodate it, or watch their Java developers transfer their code to Microsoft's Java Language Conversion Assistant.

I used Don Box

It's true, and in the morning I felt kind of bad. But it was for the good. I've done it in the past, but this time it was coldly calculated. Please read on...

Actually when it comes right down to it, I don't care what happens to Visual Studio. I also don't care what happens to Emacs. VS will continue to improve, but moreover it will continue to be hugely popular no matter what. Emacs has the audience it does, and will probably not improve beyond its current state, because it is in itself an axiom. Probably its appeal and usage characteristics will remain about where they've been for the last twenty years.

So why did I use Don Box?

More people than I could ever hope to draw on my own have had a chance at least to read the story about Emacs and the secretaries in the 1970s. Was this story really intended to promote Emacs and to benefit VS?

Not really. I saw the opening and ran for it. Here's the message: the Longhorn preview takes over five gigabytes to install. How much of that is for the typical user?

Very good arguments could be made that all of it will eventually trickle down to the non-technical user. There is no way I could or would argue against that.

But in the 1970s a few typical secretaries had a simple tool for helping themselves, the same tool most programmers have intimidated each other from using even as an influence. In the 1980s typical non-technical users were building multimedia applications using HyperCard. Emacs and HyperCard together take a minuscule fraction of the installation space and still a small fraction of the intellectual power required for computing with XML, DOMs, XAML, and WS-xxx. Are the secretaries going to be doing this in InfoPath?

In all of these five plus gigabytes of impending computations, what are we doing for the typical user or even the non-technical MBA? Maybe this was an inappropriate way to use blogspace.

Friday, December 19, 2003

On just one difference between Emacs and Visual Studio Dot Net (besides the length of the name)

Don Box wishes he could be disagreeing with James Robertson's observations on Visual Studio Dot Net, but apparently can't. Here's what makes the difference.

I would consider any advice Don brings from Emacs an improvement for VSDN. But the fundamental difference is also the fundamental failing of not just VSDN but practically every IDE I have seen including the vaunted IDE for Java using Eclipse.

The irony is the term "Visual" because VSDN is a visual nightmare. The beauty of *most* uses of Emacs (remember Emacs is a flexible tool-building platform like Eclipse, except simpler and more expressive) is the visual simplicity and just-in-time functionality. Where VSDN gives you panels, panels, everywhere panels of things to do and be concerned about, Emacs gives you an editing buffer. Everything else is a keystroke or menu click away. All the power of VSDN and more is waiting for your call to action, but visually you are "just editing".

Let's recall this story from the 1970s about secretaries (as they were called then) using Emacs, essentially the same Emacs you're using today. (You *are* using Emacs, aren't you? For shame!)

...programming new editing commands was so convenient that even the secretaries in his office started learning how to use it. They used a manual someone had written which showed how to extend Emacs, but didn't say it was a programming. So the secretaries, who believed they couldn't do programming, weren't scared off. They read the manual, discovered they could do useful things and they learned to program.

Would we ever read a similar story about VSDN? For want! Not by the 2070s.

One ring to bind them all. Emacs Semper Virens.

Wednesday, December 17, 2003

Game Programming with Python and PyUI

I just picked up Game Programming with Python by Sean Riley. Sean is also the author of the PyUI user interface framework. He explains and uses PyUI in the book as well.

Just thumbing through the book, I would give it a thumbs up. It looks good and I hope it pans out. No pun intended.

Sunday, December 14, 2003

Are we moral relativists?

Before we get too caught up in the capture of a dictator we (the U.S.) supported for decades, let's take a tally of the others still in our (the U.S.) favor.

In two fine speeches recently, President Bush made it clear that autocratic regimes in the Middle East, including U.S. allies Egypt and Saudi Arabia, need internal reforms to stop churning out terrorists. Somehow, though, he forgot to mention Azerbaijan and Uzbekistan.

If the president's ratings go up based on the recent capture of a former ally now out of favor, should that be considered a mandate to terminate relations with these others? Or does the administration itself suffer from "moral relativism"?

Thursday, December 04, 2003

Peer-to-Peer Sockets

From Brad Neuberg, OnJava, an insightful abstraction of sockets on p2p on sockets...

P2P Sockets effectively hides JXTA by creating a thin illusion that the peer-to-peer network is actually a standard TCP/IP network. If peers wish to become servers, they simply create a P2P server socket with the domain name they want, and the port other peers should use to contact them. P2P clients open socket connections to hosts that are running services on given ports. Hosts can be resolved either by domain name, such as www.nike.laborpolicy, or by IP address, such as 44.22.33.22. Behind the scenes, these resolve to JXTA primitives, rather than being resolved through DNS or TCP/IP....

The P2P Sockets project already includes a large amount of software ported to use the peer-to-peer network, including a web server (Jetty) that can receive requests and serve content over the peer-to-peer network; a servlet and JSP engine (Jetty and Jasper) that allows existing servlets and JSPs to serve P2P clients; an XML-RPC client and server (Apache XML-RPC) for accessing and exposing P2P XML-RPC endpoints; an HTTP/1.1 client (Apache Commons HTTP-Client) that can access P2P web servers; a gateway (Smart Cache) to make it possible for existing browsers to access P2P web sites; and a WikiWiki (JSPWiki) that can be used to host WikiWikis on your local machine that other peers can access and edit through the P2P network. Even better, all of this software works and looks exactly as it did before being ported. The P2P Sockets abstraction is so strong that porting each of these pieces of software took as little as 30 minutes...
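
The "thin illusion" idea can be sketched language-independently; here's a hypothetical Python rendering (these class names are mine, not the project's Java API). The socket interface stays familiar while name resolution goes through the overlay instead of DNS.

    class PeerResolver:
        # Stands in for JXTA: maps peer "domain names" to peer
        # endpoints instead of asking DNS.
        def __init__(self, registry):
            self._registry = registry   # name -> peer endpoint

        def resolve(self, name):
            return self._registry[name]

    class PeerSocket:
        # Looks like an ordinary client socket; connect() resolves
        # through the peer network rather than TCP/IP plus DNS.
        def __init__(self, resolver):
            self._resolver = resolver

        def connect(self, name, port):
            endpoint = self._resolver.resolve(name)
            # ...open a channel to 'endpoint' over the overlay...
            return endpoint, port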

The SOA Antidote

Russell Levine writes in the Business Integration Journal about the Myth of the Disappearing Interfaces. If you work in IT, have been involved in some EAI projects, and are a relatively critical thinker, then there probably is not a lot of new information for you. However the piece serves nicely as an antidote to the run-of-the-mill "Service Oriented Architecture", well, pablum.

More good information can be found at Doug Barry's site. Almost too much at once without a trail guide. Better than run-of-the-mill, without a doubt.

From Russell:

  • n^2 vs. n comparisons should be considered harmful.
  • A clue that there might be a problem with this argument is that these pictures often have applications with names such as A, B, and C.
  • Applications along a value chain often have many different connections.
  • Ultimately you need to understand data flows to assess the complexity of the integration challenge.
  • Data mapping requires intimate knowledge of the data and how it's used.
  • You must understand every data relationship. That hard work is unavoidable.
  • Any benefit must be balanced against the effort of creating an intermediate, or "canonical", model.
  • Portfolios with the critical mass to justify such efforts don't emerge overnight.
  • Focus on business benefits.
  • Estimate costs with and without the integration technology.
  • Be conservative!

There you go.

Tuesday, December 02, 2003

So what's all this got to do with XML?

Jon Udell...

So what's all this got to do with XML? If you buy the notion that we are projecting ourselves into networked information systems, then we can't only focus on how processes and data interact in these increasingly XML-based systems. The quality and transparency of our direct interaction with XML processes and data -- and with one another as mediated by those processes and data -- has to be a central concern too.

When I think of XML, two things come to mind. First, I think of the movie Brazil, because XML is still this grab bag of stuff that happens to share one thing in common, angle brackets.

Second, I think of David Letterman's bit he calls "Is this anything?"* --- We expect XML to be something, anything, more than a grab bag of stuff that shares something in common beyond angle brackets.

A third thing comes to mind: the black knight in the Holy Grail, after all his limbs have been cut off. XML is utterly helpless in and of itself. It's everything *around* XML that has value, most of which are hindered by XML per se, not aided.


*(David Letterman's latest zany recurring bit is something he calls "Is this anything?" It consists of a setup of the bit followed by the pulling open of a curtain where a performer or, sometimes, a nonperformer, is doing something that may or may not be worth seeing or even worth "anything." David and his band-leader cohort, Paul Shaffer, then discuss what they've just seen and decide whether it amounts to "anything." They don't always agree, but if the action behind the curtain exhibits creativity and talent it's usually declared "something," and if it's showy but pointless it will garner a "not anything." When Letterman and Shaffer disagree, it's because they have different perceptions of what constitutes "anything.")

OpenAugment

One of the more interesting projects I have come across in a while, OpenAugment...

The OpenAugment Consortium is a not-for-profit open source corporation dedicated to the preservation of the Augment legacy. Founded in 2002, the consortium is comprised of a small dedicated staff and a number of research partners and associates.

Created in the 1960's by Dr. Douglas Engelbart and his imaginative team at Stanford Research Labs (SRI), Augment is one of the most groundbreaking and important historical artifacts of the software industry. Many of today's desktop and network computing innovations can be traced back to the original Augment system.

Today, the OpenAugment Consortium is taking steps to ensure that future generations will have access to the Augment legacy through this open source initiative. Please explore the rest of this site to find out more about Augment, the OpenAugment Consortium and how you can play a part in preserving this vital piece of computing history.

Answer me this

Today I am listening to one of the local "classic rock" stations. The DJ announces it's "Two for Tuesday". He also announces "We're in the middle of a 25 song classic song salute."

So does that mean for the 13th artist "two-fer" they're gonna play a classic and then a flop?

I'm just thinkin'.

Wednesday, November 26, 2003

Why Unix?

Better Living's take on software that doesn't stink is fine, but this is a controversial paragraph that caught my attention...

For what it's worth, I think that open-source is no panacea, and in fact is one of the biggest black-holes sucking away human talent needlessly these days. How many man-hours have been spent building a clone of the 30 year-old Unix operating system? There are many better areas for us to be applying talent. And I don't mean to diminish the professionalism of Microsoft developers. The product teams here are some of the most well-tuned machines I have ever seen, but "best" is not the same as "perfect" or even "as good as possible".

(BTW --- What's the difference between "perfect" and "as good as possible"? Nevermind.)

I agree open source is no panacea, but I think it's a better Cambrian Explosion than the Procrustean Bed that is the Microsoft platform.

And why should one develop an open source Unix clone? For one thing, because one can! The ideas are well known and successful, which makes for the best patterns and lowest risk. That's what's known and now accepted as pattern-oriented software development, and so most software development in general should be like this. See Eric Raymond's book, The Art of Unix Programming.

There are many better areas for us to be applying talent.

But going all the way back to Stallman's instantiation of GNU, clearly (in hindsight!), there is a need for a platform for innovation. The platform should be well known and successful, but also unencumbered by proprietariness or the unspoken requirement to abide the vendor's cash cows.

Why try to innovate on a platform whose vendor's not-so-implicit intention is ultimately to own any and every idea that succeeds? Enough said there. Even so, "Why Unix?" should be *obvious* to a software developer for many reasons. None of which are, or need to be, "Because it rocks!"

Tuesday, November 25, 2003

Why PythonNet?

Gordon says Python is complete, so why PythonNet? I agree that Python 2.3 has a lot to offer out of the box. But I'm using PythonNet for the same reason I'm using Jython, as Gordon suspects, to get to the libraries in dotnet and the JVM, respectively.

In particular, in this moment, I want to get to SWT, Java2D, and GDI+ for drawing. I could just use the GDI+ DLL without dotnet, but the dotnet API is better. I also want to use other C# code from Python and vice versa.

By the way, I am also starting to use Cocoa via PyObjC for the Macintosh. Other than a thin layer of low level GUI and graphics, I have a growing capability to do "fully native" write once, run anywhere, from Python. Of course with Jython in the mix you have to be careful, since it is not up to the 2.3 definition of C Python. That's OK for now.

Why not just use WxPython? Because I want to use the native APIs and integrate with other native code and I don't want a layer as big as WxWindows in between. It's really not that hard. When you structure your system like this...

  1. Domain Model
  2. UI and Drawing Model
  3. Native UI and Drawing
...you can make the drawing model fairly rich because all the brushes, pens, clipping regions, and graphics contexts are at roughly the same level of detail and capability. But for the UI model I'm finding it's better to keep the abstract UI model very simple, essentially just a set of Commands, Tools, and basic Layouts that map to many more detailed UI objects at the lowest native layer.

That is, don't try to create a WxPython. That's too much work with little payoff. Who wants to program at that level of detail even if it is cross-platform? Let the high level UI objects *generate* all the low level GUI objects in a platform-specific way. Details to come, when I have more of it figured out!
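
For example (a sketch with invented names, not a working framework): the abstract model stays at the level of commands and layouts, and one small generator per platform maps it down to native widgets.

    class Command:
        # Abstract UI model: a label and a callable action.
        def __init__(self, label, action):
            self.label = label
            self.action = action

    class Layout:
        # Just an ordered set of commands; no pixel-level detail.
        def __init__(self, commands):
            self.commands = commands

    class NativeGenerator:
        # One subclass per platform (SWT, GDI+, Cocoa) supplies
        # button(); generate() fans the simple model out into
        # many detailed platform-specific widgets.
        def button(self, label, action):
            raise NotImplementedError

        def generate(self, layout):
            return [self.button(c.label, c.action)
                    for c in layout.commands]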

Not on the Up and Up: Oil in the Caspian region

Also from Fresh Air on the 12th, listen to journalist Lutz Kleveman talk about his new book, The New Great Game. Pay attention to which side we're (the USA) on in the "game" for Caspian oil, then tell me we're in Iraq to liberate the people from an evil dictator.

Worse is to come: disgusted with the US's cynical alliances with their corrupt and despotic rulers, the region's impoverished populaces increasingly embrace virulent anti-Americanism and militant Islam. As in Iraq, America's brazen energy imperialism in Central Asia jeopardizes the few successes in the war on terror because the resentment it causes makes it ever easier for terrorist groups to recruit angry young men. It is all very well to pursue oil interests, but is it worth mortgaging our security to do so?

On the Up and Up: More evidence against the Semantic Web

Listen to the segment on this page with the linguist Geoff Nunberg. He addresses misunderstanding in conversations (the human-human kind). This is interesting unto itself, but throw in a machine and see what happens.

Sunday, November 23, 2003

640k: This memory needs error correction

Update: Jon Udell addresses this today from another angle, i.e. the query language instead of the data model. Here's his conclusion: It's about smooth interop between the next-gen Windows filesystem and the larger ecosystem it will play in. If Microsoft will be getting into the role of "schema provider" they'll have to do better than their recent Office XML provisions. The rest of us want multiple platforms and an unencumbered standard.

Ray Ozzie writes glowingly about the officially years-away WinFS file system...

Microsoft will obviously drive the initial schemas required by the core system - such as Contact - but where will it go from there?

Nothing two years away in the computer industry is obvious to me. One thing that is not obvious at all to me is why wait two years?

For one thing, Macromedia has a pretty good story with Central, which is based on Flash, is in beta already, and already includes a growing list of initial schemas, such as Contact. A second example is Chandler, which will also have a repository of arbitrary types of Items, including an initial set for the functionality that will come out of the box. Both Central and Chandler are attempts at a multi-application framework for "rich clients" based on semi-structured information.

I could guess how such a set of schemas could be aided by WinFS, but is Microsoft making this point themselves? How do we know they'll get it right? Is there a convincing reason to wait for WinFS to begin?

Consider instead Google, Google Sets, and Syncato. These are all ways of structuring, searching, and organizing information in various bits and pieces. None of them require a new kind of file system.

I think we have what we need for the back end and front end of a new ecosystem of semi-structured information. I don't see how waiting two years or so for Microsoft is going to help. The key will be evolving these multiple attempts to be more aware of each other, not to wait for Microsoft to eventually get around to duplicating these ideas in some singular vision.

Wednesday, November 19, 2003

From James Robertson's Smalltalk blog...

Java mostly lives on the server - it's been a roaring success there, but it's failed on the client for the same reason that our product, VisualWorks didn't get that much traction on the desktop - end users really, really want apps to look and feel the same. We are addressing this by moving towards Pollock

The irony in this is that with Longhorn, the GUI is becoming more variable in appearance and layout, as well as lighter weight. The Longhorn GUI is going to be *more* like the emulated (i.e. "drawn") widgets of VisualWorks, but with a more sophisticated drawing model.

By and large people have been successful with the variety offered by DHTML user interfaces and game interfaces. One of the most appealing user interfaces that I am aware of (HyperCard) is also notorious for breaking the user interface guidelines established by the same vendor.

Still, Pollock is a good thing for Smalltalk, hopefully providing the flexibility to access all of Avalon when or if Longhorn finally arrives.

Sunday, November 16, 2003

My day just got a little brighter: Write Once, Run Anywhere!

The rain has let up and I can see the sun is out over the coastal range, but that's not the real source of sunshine in my day. I had been working on translating a little bit of Java into C#. I gave the automatic translator a quick try, but a strange error was not motivating enough to push through. Hand translation soon gave way last weekend to another project.

I had started to doubt the value of my time vs. the effort of getting up to speed at all. I installed NUnit (not nearly as much documentation as with jUnit) and NAnt (ditto vs. Ant). I managed to piece together enough of a .build file to compile a partially translated .DLL and run a few tests. (Using the unobvious NUnit2 task as opposed to the NUnit task!)

Then the light came from above. What's the state of C Python for .Net scripting? Production ready. Great news! What does it take to run it? Download and click on python.exe. Greater news! I can program using the simple, interactive, (and familiar to me) Python environment.

In just a couple of minutes my whole day, and project, turned from dread to desire. Rather than translating from Java to C# and maintaining two source paths, I'm translating from Java to Python and maintaining one source path that has a little glue into the Java library via Jython and a little glue into the dotnet library via CPython.Net.

Every indication is that the CLR will eventually support Python and other dynamic languages much better, as first class members of the CLR, the way Jython works in the JVM. Meanwhile CPython.Net will do fine; consider this example of handling events...

          # 'obj' below stands for any CLR object exposing a .NET
          # event named SomeEvent (placeholder names from the example).
          def handler(source, args):
              print 'handler called!'

          # register event handler (delegate += semantics)
          obj.SomeEvent += handler

          # unregister event handler
          obj.SomeEvent -= handler

          # fire the event
          result = obj.SomeEvent(...)

C Python is the best way to program in dotnet, and Jython is (one of) the best ways to program the JVM. (The JVM still rules on my personal list of interesting languages.)

Friday, November 14, 2003

Even on the desktop, what's changed?

I would expect that Microsoft will be finding ways to move out from under IIS. I believe Indigo is a path for that, but I am not thoroughly versed in this stuff yet. I am less certain they even see a need to move out from under SQL Server. I am less certain of that myself. It's getting better, it's competitive, and I expect it makes some money. I don't know much about that either. I'd expect some people somewhere inside that large company are finding alternate paths, though.

But even on the desktop, what's changed?

I have a friend who developed some significant client/server software ten years ago, that had a long fruitful life. He's been considering a rewrite, and lately wondered if much of Longhorn obviated the basis for that rewrite. Was there any value in this software above Longhorn, or would it be trivialized?

We discussed Avalon primarily, but considered most of what was presented at the PDC. And we looked at a picture of what his software does. Clearly Longhorn is not a step backward for his kind of application. But just as clearly it was not a giant leap forward.

A few of the grungiest bits would be easier with Longhorn. The bulk of the application would still require good engineering, and with that would be not significantly more difficult to build, nor less functional to run, on any other major OS platform.

So even on the desktop, what has changed? Not as much as might first appear, if you're already building significantly sized applications in a "managed runtime", i.e. Java, Smalltalk, or Python.

This one is a bit of a ramble...

We need to find a way to lead, not just follow.

Yeah. I am not convinced that what Linux needs is a WinFS-like file system.

But one of the open source world's strengths is the number of programmers who are free to pursue many ideas. A large corporation with a significant cash cow like Windows or SQL Server will always make decisions that leverage those legacies.

I am convinced the hardware, wireless, and Internet platforms are slowly entering their own new level. And there are already object models that run on the most significant platforms. One is Java, which is as good as it needs to be. The innovation needed for the future is above the basic object model.

The next most significant object models are Python and Smalltalk. I don't think we need one object model. And above a narrow band of componentry, the future is almost certainly not to be based in an object model. A simple message passing model will do fine.

I don't think the competition or the worldwide customer base will capitulate completely to the news out of Redmond. But to be sure the dominant platform is finally taking a few significant steps forward.

Tuesday, November 11, 2003

A Heap O'Wha?

Reading about Reaps reminds me just a little of this cool thing unfortunately called Stalin...

Stalin also does global static life-time analysis for all allocated data. This allows much temporary allocated storage to be reclaimed without garbage collection.

Stalin does "whole program optimization". One of the results is that complete lifetimes of allocated objects very often are known ahead of time. Rather than force manual allocation, and rather than use a general garbage collector for all objects, Stalin can explicitly free objects at the point where it can prove the object is no longer referenced.

Because this is "whole program optimization" the software become inflexible to updates without re-analyzing the whole program. That's not too bad in many cases, especially considering the "whole program" may just be one or a set of components but not everything.

Stalin can also do interesting representation analyses, "unbox" values like numbers, and generally eliminate run-time type checking and dispatching. Perhaps the GNU Java Compiler can move toward these optimizations, and perhaps researchers will continue to look at ideas beyond the currently popular virtual machines; ideas that balance the need for dynamic programming and updates with transparent optimizations that get the most out of the hardware.

Saturday, November 01, 2003

MIT World Videos

When you get tired of TV (can't every show just be called "Crime Scene Law and Orderly, Miami style"?), you can turn to a new source of video fun and education at MIT World. A friend turned me on to this site this week, and I've been enjoying several very different sessions.

Oh, and they have an RSS feed.

Snow

OK... it was cold enough to snow here, at least over night. But we haven't had a decent (sleddable) snow in the valley for several years. Will this be the year we get to spend the day with the neighborhood closed, everyone on their sleds and other contraptions gliding downhill then trudging back up for someone else's turn?

*Nothing* would make this winter better. We can drive the hour to the mountains anytime, but snow in the valley is what makes a winter successful.

Favorite time of year

Fall and spring are my favorite seasons because of the activity in the weather the change brings. My birthday is in March, but Halloween marks the beginning of the transition to a new year for me. Halloween, Thanksgiving (in the U.S., end of November), and then the winter solstice mark an active stretch of weather for the Pacific Northwest in North America.

January and February are relatively steady in their own ways. January can be steadily wet, and if we're lucky, February can steadily dry. Then the bulbs and buds emerge in the warmth of the February sun. (As opposed to the northeast coast of North America, where winter lasts well into March and April, even May! Been there, done that.)

Then around my birthday the winds pick up again, but more often drier than in the wet fall. More change on the way toward the mostly sunny yet mild summers of the northwest that we don't like to advertise to outsiders. (Yet you come anyway.)

Those pesky little buggers

My cable Internet was out for a couple of days, and my cable TV was out on several channels as well as poor picture quality on many more.

Squirrels ate through the cable up on the pole.

Drool

I am sensing some drool in the corner of my mouth for Linux tablets. Although a friend recently bought a Zaurus and the drool per buck is very enticing.

Monday, October 27, 2003

Don't Fidget With Widgets, Draw!

If I had to pick just one system as a candidate for a modern, rich, smart client, service-oriented GUI, then Joel Bartlett's EZDraw, from 1991, which runs on X/Unix in his wonderful Scheme->C system, would be the one.

Some updating of the programming model would be necessary. The current model is much like NeWS but uses Scheme and X, where there is a single application somewhere using EZDraw as a GUI server.

XUL, SVG, and XAML are kind of in this space today, but like an updated EZDraw, they also need a more "conglomerate" approach to being an integration-oriented GUI device.

EZDraw, like SVG, offers a 2+ dimensional interactive drawing environment, and so offers the graphical freedom of a web page or PDF with the interactivity of a traditional GUI. I'm hoping that XAML will be just as interactive, since it may become the de facto standard.

Friday, October 24, 2003

The outsiders win again. Ha Ha.

From Harry Shearer's Le Show's Le Blast email list (subscribe)...

Arnold S's first appointee, his chief of staff, is currently a lobbyist for the HMO industry, and was deputy chief of staff to former Republican gov. Pete Wilson. The same PW who finally came out of hiding and gave media interviews after the election. The outsiders win again.

Thursday, October 23, 2003

How Magnetic RAM Will Work

I love the site How Stuff Works. And I saw they have a section on MRAM, which will change everything about computing, hardware and software, over the next few years.

Should Elementary Schools Teach Keyboarding? (was: Computers in Schools)

James writes about computers in schools.

My sense is computers should begin to show up in schools in middle school (grades 6-7, ages 12-14). The applications should be highly interactive, multi-media, and constructive (i.e. use computers to build "things", even ideas).

What should younger kids be doing? Why, keyboarding of course!

No, not computer keyboarding. Piano keyboarding. This would be far better for connecting their brains for future study of math and science than any software I know of.

Java becoming more static, er, dynamic, um, both!

I did not realize this EventHandler capability is in Java 1.4.

This is kind of ugly, since the API is essentially interpreting strings for names. (Think Class.forName only funner.)

This is kind of confusing, since on the one hand Java is heading toward more static notations with generics. Yet on the other, this EventHandler feature is capitulating to the need for dynamic reflection without a bunch of new syntax.
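
For comparison, here's roughly what that string-driven wiring looks like when the language is dynamic to begin with, a small Python analogue (my names) of the reflection EventHandler performs:

    def make_handler(target, method_name):
        # Build a handler that dispatches to a method chosen by
        # name at run time: the string-based reflection that
        # EventHandler bolts onto Java.
        def handler(event):
            return getattr(target, method_name)(event)
        return handler

    class Model:
        def refresh(self, event):
            return 'refreshing after %s' % event

    handler = make_handler(Model(), 'refresh')
    handler('button-click')   # 'refreshing after button-click'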

Mobilized Software

Speaking of mobilized software (aka Occasionally Connected Computing), I will be attending Intel's "Mobilized Software Occasion" on Nov. 4th in San Francisco. I'm not sure yet if I get the shirt (employee) or the PDA (attendee), though I know what I'd like.

Rob Pike's Things to Build

From Rob Pike (Bell Labs/Plan9, now at Google)... Things to Build (in Systems Software Research is Irrelevant (PDF), or in Postscript format):

  • Only one GUI has ever been seriously tried, and its best ideas date from the 1970s...
  • There has been much talk about component architectures, but only one true success: Unix pipes...
  • The future is distributed computation, but the language community has done very little...
  • The Web model... is forced interaction; the user must go get it. Let's go back to having the data come to the user instead.
  • System administration remains a deeply difficult problem...
Obvious connections to recent conversations about rich GUIs, and mobilized software, etc. I think he hits several nails square on the head.

Tuesday, October 21, 2003

It Always Comes Down To This

Erik Meijer writes on his blog:

In principle there is nothing that prevents special list transformers and comprehensions from being introduced into imperative languages as well. We know how to do it. In fact, as is the case for many other features, Python has already taken the lead in this.

Python has a limited special syntax for iteration, plus any number of user-defined classes that can behave as iterators.

The less obvious, more expressive example is Smalltalk, which eschews special iteration syntax altogether. In this so-called "pure" object-oriented language, everything is a message send, even conditionals and iterations.

A simple notation for a "block of code" object and a simple notation for keyword-based parameters/messages give you what you need without hidden machinery or a fixed syntax. Any of Smalltalk's flow of control mechanisms can be defined in Smalltalk itself. And more, to your heart's content.
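
The same trick rendered in Python, with closures standing in for Smalltalk blocks: whileTrue: becomes an ordinary function, defined much the way Smalltalk defines it in itself, rather than syntax.

    def while_true(condition, body):
        # Smalltalk's whileTrue: as a plain function; condition
        # and body are zero-argument closures ("blocks").
        # Defined recursively, with no loop syntax at all.
        if condition():
            body()
            while_true(condition, body)

    counter = {'n': 0}

    def bump():
        counter['n'] += 1

    while_true(lambda: counter['n'] < 3, bump)
    # counter['n'] is now 3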

BTW, Ruby is closer to Smalltalk than Python is to either in this regard.

Joe Morgan: Best Color Commentator... Ever?

Is Joe Morgan the best color commentator (ESPN Radio) in baseball? In any sport? Ever?

I'm leaning in that direction. He's that good, and I am not saying that because my youth was spent in southern Ohio with the Big Red Machine.

Monday, October 20, 2003

Give Aways

The federal government is essentially giving the resources of Alaska away for free...

On Sept. 24, amid the hubbub of Mike Leavitt's confirmation hearings, few journalists and policy makers stopped to notice that the DOI's Minerals Management Service put 9.4 million acres in Alaska's Beaufort Sea on the chopping block at unusually low royalty rates. The area in question is not far from the Arctic Refuge, off the northern shore of Alaska—land of polar bears, bowhead whales and Inupiat Eskimos who still practice maritime hunts...

There are, of course, likely environmental side effects: Last spring, a report by the National Academy of Sciences warned that seismic exploration and offshore drilling in the area would threaten endangered bowhead whales as well as the livelihoods of traditional Inupiat hunters. Needless to say, that report was overlooked.

Although the Beaufort sale troubles many Alaskan wildlife experts, they say it's merely one of many concerns in the region, some of them potentially far more serious. "This is just a small piece of a larger picture in which the federal government is essentially giving the resources of Alaska away for free," said Eleanor Huffines, Alaska regional director for the Wilderness Society.

Huffines says she is realistic about the need to expand drilling, and the Wilderness Society has identified areas in Prudhoe Bay and western Alaska where it is not opposing increased development. "What concerns me is that no matter how reasonable we try to be in balancing commercial and environmental concerns, [the Bush administration's] plans show no balance at all...

Saturday, October 18, 2003

Craving the Rich GUI

Thoughts on Jon Udell's Infoworld column on Rich GUIs... How rich, really, is that rich GUI we nostalgically crave?

What I crave, seeing developers on projects recreating significant portions of windowing systems in HTML and Javascript, what I crave is the ease of developing those rich GUIs.

When tabs appear in multiple rows, they create a problem that Jeff Johnson, author of the wonderful book GUI Bloopers (infoworld.com/453), calls "dancing tabs." Clicking on a tab not in the front row disconcertingly reorders the rows. On a Web page controlled by rows of links, that doesn't happen.

These complex interactions I think are as much of an indictment of the model as of the view. Our systems require a lot of dials, or seem to.

A GUI that doesn't embrace linking can never be truly rich.

When I think of truly rich GUIs, two classics come to mind, HyperCard and Emacs. Each exceeds the restrictions of the Mac 128k GUI, which we are otherwise pretty much bound to for now, albeit on 17 inch monitors.

Both of these systems featured hypertext and flexible navigation well before HTML and HTTP. Both creatively avoid the keyhole problem.

Thursday, October 16, 2003

Thoughts on creating a .NET language targeted at "occupational" programmers

Any thoughts on creating a .NET language targeted at "occupational" programmers, i.e. folks that aren't programmers but need to essentially encode/automate an algorithm, run it repeatedly and store the results? I would think this could be a great addition to VSA where admin assistants with no formal programming training might be able to write a small program to automate a task in an Office app.

Here's a suggestion from the 1970s...

Multics Emacs proved to be a great success -- programming new editing commands was so convenient that even the secretaries in his office started learning how to use it. They used a manual someone had written which showed how to extend Emacs, but didn't say it was a programming. So the secretaries, who believed they couldn't do programming, weren't scared off. They read the manual, discovered they could do useful things and they learned to program.

Here's another idea that was demonstrated in the 1970s...

In 1972 Kay took a job at Xerox's Palo Alto Research Center (Xerox PARC) and began using Smalltalk in an educational context. Young children were exposed to computers and their reactions were analyzed. Kay concluded that children learned more through images and sounds than through plain text and, along with other researchers at PARC, Kay developed a simple computer system which made heavy use of graphics and animation. Some of the children became very adept at using this system; in fact, some developed complicated programs of their own with it!

Yes, I think (no surprise) it would be great if the newest "managed runtime" could manage decent support for the longest-used and most proven "managed runtimes"!

Metadata

Don Park wants a standard place, er, places plural, for a site's metadata. Since I seem to be grouchy already this morning, I will apply my bad mood to this idea...

I think it is absurd to try to agree on where to put "metadata" before agreeing on what the "metadata" is.

Please lead the way to save me the headache. And get the tool builders to build it for me. And then use it for a couple years so I can determine the ROI.

Thanks.

My feeling is if "metadata" is not self-generating then it has relatively little value. For example RSS is valuable "metadata".

Curiously, though, Don writes in the comments section...

I don't think .w3c should know anything about RSS. It should just be a bag of links or inlined metadata along with some useful info to help agents find what they are looking for without wasting bandwidth unnecessarily.

I am all for standardizing worthwhile metadata structure and values, where "worthwhile" is the crux of the matter. So I am more than a little surprised that this vague proposal apparently is to remain ignorant of the most significant self-generated "metadata" on the web today. Not to mention that this kind of metadata (RSS) is fractured already into multiple names, locations, and contents.

IMHO any such metadata effort should be *primarily* focused around RSS and its cousins.

Wednesday, October 15, 2003

Rain from Intel Research: Continuation-based Distributed Messaging in Java

I'm looking through the javadocs for the Rain project from Intel Research. What might be of interest to the growing discussion about continuation-based web servers?

Here's what... Continuation.java

This class provides a mechanism to continue a computation after a reply to a message has arrived. This works by associating a continuation with an outbound message using the associate method. When a reply to this message arrives, the service calls the invoke method on the continuation object. If the continuation was constructed with a timeout and a reply does not arrive within that time, the timeout method will be called. While this class is not abstract, it does very little of interest until either invoke, timeout, or both are overridden.

By default, continuations are "single shot". This means that once a reply has been received, the continuation is removed from the continuation system. To create a continuation that can be invoked by multiple replies to a message, call setMultiShot with true. (Multi-shot continuations are useful in situations where multiple replies may come back for a single sent message.) Note that a multi-shot continuation with no timeout won't ever expire, so do not forget to remove them with the delete method.
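
Not Rain's API itself, but a small Python sketch of the pattern the javadoc describes (names are mine): associate a continuation with an outbound message, invoke it when a reply arrives, and drop single-shot continuations after first use.

    class Continuation:
        # Override invoke() and/or timeout() to do real work.
        def __init__(self, multi_shot=False):
            self.multi_shot = multi_shot

        def invoke(self, reply):
            pass

        def timeout(self):
            pass

    class MessageService:
        def __init__(self):
            self._pending = {}   # message id -> continuation

        def associate(self, msg_id, continuation):
            self._pending[msg_id] = continuation

        def reply_arrived(self, msg_id, reply):
            k = self._pending.get(msg_id)
            if k is None:
                return
            if not k.multi_shot:   # single shot: remove after use
                del self._pending[msg_id]
            k.invoke(reply)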

Tuesday, October 14, 2003

Four Schools of Software Testing

I'm just back from Brett Pettichord's talk on Four Schools of Software Testing at the Portland SPIN meeting.

I don't know much about the context-driven school, but I am curious since Ward Cunningham is enthusiastic about these "young turks of testing". Brian Marick was there and participating in the Q&A too.

I think there is a good book to be written on "Planning Context-Driven Testing". I can see that one could come away from an initial discussion with impressions (prejudice) from the "old school" testers that context-driven means "unplanned". This is based on my observations of "old school" reactions to the agile movement of software development.

After Brett's talk, Brian explained in a small conversation that a way he plans for testing is to look ahead at all the potential hand-offs of software between people. (There are many kinds of hand-offs: from programmer-to-programmer, and so on, including handing off to the end user.) I really liked this approach.

Monday, October 13, 2003

Top 5 OO Books

I was asked for my Top 5 books on object-oriented programming. In no particular order, this is what I came up with...

  • Robert Martin's DOOCAUTBM
  • Design Patterns
  • Refactoring: Improving the Design of Existing Code
  • Kent Beck's Guide to Better Smalltalk: A Sorted Collection
  • Concurrency: State Models & Java Programs

But ask me some other day and there might be other titles in the list.

Sunday, October 12, 2003

Distributed Computing Standards

Phillip Windley writes about early web service adopters, hits many right notes, and offers some wisdom for moving forward. The only note that doesn't really sound right is about the past:

I'm still trying to figure out how this is different from the world of 3-4 years ago when there were no standards for decentralized computing.

Three to four years ago, CORBA was a standard for decentralized computing. The core had matured, interoperability was well underway, firewalls were being tunneled, and less expensive and open source implementations were coming to market. Real-time issues were being overcome.

Moreover, the core remote procedure call mechanism was being enhanced with asynchronous messaging and 3.0's improvements were on the drawing boards if not in the implementations, including a cross-language improvement on the Java-only EJB server architecture.

CORBA is not web services, but they are so much more alike than they are different.

Except when it comes to maturity: CORBA's early adopters are now the greybeards.

Two Sides of the SOAP Coin: Is something missing?

Jon Udell writes about the front-end and Phillip Windley writes about the back-end of web services.

Adam Bosworth also writes about the front end.

There is a client/server feel to this new web services architecture. I don't believe that is anyone's intention; it is just my perception, based on what is not being described.

Is something missing? I have not quite put my finger on what that is. What is the "middle tier" of a web services architecture?

Here are some thoughts on a component architecture, where components are pieced together in many-many-many relationships for some solution...

  • Front - not a browser but browsing/transacting purpose-based component(s), e.g. outline, calendar, form.
  • Middle - application-specific aggregating component(s), e.g. work flows (state machines), sorting, searching, tallying.
  • Back - OLTP and OLAP data service(s), e.g. the transaction engines for and historical sources of domain knowledge.

The other pieces that are still missing for me are even more central... the priority use-cases and funded evolutionary paths that would drive these visions.

Saturday, October 11, 2003

This is a curious position from Sean McGrath. Unfortunately I see no substance for debate, so all I can say is I think I disagree. What are the implications of the "yin/yang" tilting too far in either direction? What if a machine could read such out of balance XML and do more for the people? How can I evaluate a technical issue based on passion alone?

Compromising on standalone readability by *people* is, unfortunately, one of the classic premature optimisations in the XML world. The essence of XML is the yin/yang between people and machines. Tilt too far in either direction and the noosphere weeps.

The rss-data proposal tilts too far away from people in favour of machines. There is no point in teeing up a data structure for the aggregations in terms of dates, strings and integers and calling it XML. It is XML in syntax only, not in spirit. If you really want to do that, use ASN.1 or something.

What is XML if it is not an aggregation of dates, strings, and integers, etc.? What is the "spirit" of XML? Is there just one "spirit" of XML that should bind us all? I really want to know more about his position.

Wednesday, October 08, 2003

Skiplist Cookbook

The skiplist is my favorite data structure (PDF). So simple and efficient.

I have implemented it several times in Smalltalk, Java, and Scheme. The linked cookbook is very straightforward, but even without it, the concept itself is what gives the implementation its quality.

The idea begins with a linked list. But each node may have more than one forwarding pointer. The number of pointers in each node is chosen at random, and together the levels form a tree shape. So the code for searches, adds, and deletes stays easy to write, but you also end up with tree-shaped performance, i.e. O(log n).

Moreover, since the tree shape is based on a random number instead of the data in the nodes, there is no need to "rebalance" the tree on adds or deletes.

The paper does a good job of illustrating this concept and has all the source, so I won't even try to cover the details.
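
Still, the flavor comes through in just the search loop. A minimal sketch in Java (my own names, not the cookbook's code), assuming each node carries an array of forward pointers, with level 0 being the plain linked list:

// A minimal skiplist search sketch. forward[0] is the ordinary
// linked-list pointer; higher levels skip ahead over many nodes.
class SkipList {
    static final int MAX_LEVEL = 16;

    static class Node {
        final int key;
        final Node[] forward;
        Node(int key, int levels) { this.key = key; this.forward = new Node[levels]; }
    }

    final Node head = new Node(Integer.MIN_VALUE, MAX_LEVEL);
    int level = 1; // number of levels currently in use

    boolean contains(int key) {
        Node x = head;
        // Start at the top level; drop down whenever the next key would
        // overshoot. The extra levels are what give O(log n) searches.
        for (int i = level - 1; i >= 0; i--) {
            while (x.forward[i] != null && x.forward[i].key < key) {
                x = x.forward[i];
            }
        }
        x = x.forward[0];
        return x != null && x.key == key;
    }
}

Insertion is the same walk plus a few coin flips to pick the new node's level; since the shape depends only on those flips, there is nothing to rebalance.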

Tuesday, October 07, 2003

A Simple (One Hopes) View of Continuation-Based Web Serving

Daniel von Fange wonders about continuation-based web serving. He adds, "treating a 'page' as a method of an object makes more sense to me for 97% of things I do."

Without getting into how continuations work, you can look at it this way: continuations allow a web page to behave just as you prefer, kind of like a method of an object.

What if methods in your favorite language worked like this: every time a method calls another method, the return value does not go back to the caller. Instead, you have to name some other method to receive the return value.

In this case you have to "name the continuation", that is, provide an explicit place (method or page) for the computation to "continue" following the call (or the user interaction, in the web scenario).

But in a continuation-based web server, just as in most programming languages, the system handles "where to go next" implicitly. That is, in most programming, you do not need to provide an explicit "continuation". The system is happy to return the value right back to the point of the call.

And so just as your method calls another method and the results are returned right there for you to use, your web page "calls the user" and the results are returned right there on the page for you to use. You do not have to name another page to receive the results.

Hope this helps. The mechanics are less important for understanding the benefits.

If you think about it, "continuation-based web servers" should be the expected behavior, since they handle continuations for you just like programming languages do. More typical web servers should be considered the odd balls: they make you do explicit "continuation passing". No one likes to program in "continuation passing style"; it's just that we've gotten used to the burden with most web servers.
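
To make the contrast concrete without a real continuation-based server, here is a toy in plain Java. All the names are invented: the "page logic" runs on its own thread, and ask() suspends it until the "user" replies. Continuation-based servers achieve the same straight-line style with real continuations rather than a dedicated thread per conversation.

import java.util.concurrent.SynchronousQueue;

// The page logic reads straight through -- ask, use the answer, ask
// again -- with no "next page" named anywhere. ask() blocks the handler
// until the user's reply arrives, standing in for a captured continuation.
public class ContinuationToy {
    static final SynchronousQueue<String> toUser = new SynchronousQueue<>();
    static final SynchronousQueue<String> fromUser = new SynchronousQueue<>();

    static String ask(String prompt) throws InterruptedException {
        toUser.put(prompt);     // "render the page"
        return fromUser.take(); // suspend until the form comes back
    }

    public static void main(String[] args) throws Exception {
        new Thread(() -> {
            try {
                String name = ask("What is your name?");
                String drink = ask("Coffee or tea, " + name + "?");
                toUser.put(name + " orders " + drink + ".");
            } catch (InterruptedException e) { /* toy: ignore */ }
        }).start();
        // Simulate the user's two form submissions.
        System.out.println(toUser.take()); fromUser.put("Ada");
        System.out.println(toUser.take()); fromUser.put("tea");
        System.out.println(toUser.take());
    }
}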

Monday, October 06, 2003

Continuous Curiosity

C. Keith Ray explains the benefits of institutionalizing continuous learning in his software development team...

The effort paid off within weeks as we incorporated the new knowledge back into our product. For example, we used our newly acquired JavaMail knowledge to start sending HTML email.

Statice

Ted Leung writes... about notations for programming, databases, and otherwise...

Languages need a way to assimilate these paradigms in a way that makes them look seamless.

I would point out two of my favorite examples.

  • One is Gemstone Smalltalk's OODB "select blocks" (PDF), which use a notation similar to Smalltalk blocks but actually implement a declarative query language with structural access to graphs of collections of objects. Although it was very useful, the feature was never developed nearly as much as it could have been.
  • The other is Symbolics' Statice OODB. Being based on Lisp, the syntax is much richer and fits more seamlessly into the host language. Unfortunately the Lisp market was already heading for the doldrums, and so a lot of the experience from Statice went into the C++ OODB rather than into Lisp. But apparently Statice is still available.

;; Note: the error on the last line aborts the enclosing
;; with-transaction, undoing both balance updates.
(defun transfer-between-accounts (from-name to-name amount)
  (with-database (db *bank-pathname*)
    (with-transaction ()
      (decf (account-balance (account-named from-name)) amount)
      (incf (account-balance (account-named to-name)) amount)
      (when (minusp (account-balance (account-named from-name)))
        (error "Insufficient funds in ~A's account" from-name)))))

Friday, October 03, 2003

What *is* XML anyway?

My somewhat random thoughts, reading through "The Impedance Imperative Tuples + Objects + Infosets = Too Much Stuff!" from Dave Thomas (the ex-OTI/IBM Smalltalk guy, not the Pragmatic Programmer guy) being discussed on Lambda the Ultimate...

"SQL is quite good for simple CRUD applications on normalized tables."

This seems to speak to OLTP. For OLAP, denormalized tables (3NF fact tables and 2NF dimension tables in a star schema) would be preferred. Still, standard SQL does not support all the expressions you'd like in OLAP, such as time series expressions.

For OLTP I am not convinced you want SQL at all. Something like Prevayler might be preferred. When we get large, battery-backed RAMs in a few years, we won't even care about writing transactions to disk.
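
To make "something like Prevayler" concrete, here is a toy of the prevalence idea with invented names, not Prevayler's actual API: the model lives entirely in RAM, and every change is a serializable command appended to a journal before it executes, so the model could be rebuilt by replaying the journal (replay elided here).

import java.io.*;
import java.util.*;

// A toy of object prevalence: journal the command, then apply it to the
// in-memory model. Recovery would replay the journal from the start.
class ToyPrevalence {
    interface Command extends Serializable { void executeOn(Map<String, Long> model); }

    static class Deposit implements Command {
        final String account; final long amount;
        Deposit(String account, long amount) { this.account = account; this.amount = amount; }
        public void executeOn(Map<String, Long> model) {
            model.merge(account, amount, Long::sum);
        }
    }

    private final Map<String, Long> model = new HashMap<>();
    private final ObjectOutputStream journal;

    ToyPrevalence(File file) throws IOException {
        journal = new ObjectOutputStream(new FileOutputStream(file));
    }

    void execute(Command c) throws IOException {
        journal.writeObject(c); // durable first...
        journal.flush();
        c.executeOn(model);     // ...then mutate the in-memory model
    }

    public static void main(String[] args) throws IOException {
        ToyPrevalence bank = new ToyPrevalence(new File("journal.log"));
        bank.execute(new Deposit("alice", 100));
        bank.execute(new Deposit("alice", 50));
        System.out.println(bank.model.get("alice")); // prints 150
    }
}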

"SQL programming often requires an alternative interface using cursors"

This is becoming somewhat less necessary in situations where set-based expressions are the ideal. Some databases like Teradata and Sybase IQ support set-based expressions efficiently. Even SQL Server is better at this than in previous versions.

"after many years of engineering, the relational databases can finally claim the performance and flexibility of keyed files...; network databases..."

Henry Baker has some great thoughts about this. I am kind of in the middle. One thing seems to be true: funding for any kind of database other than relational is almost nothing. Object databases have had commercial funding, but it has been minuscule compared to commercial relational database R&D.

What, for example, could have been done at Gemstone, where indexing, query, and reporting for its OODB had well under one person-year of R&D during its 20 years of development?

This has some applicability to XML too. Is XML a "random access database"? Or a "serialization" (with "includes"? with "pointers"?)?

"Third Generation Database Manifesto... objects... were syntactic extensions on Blobs"

Another approach in PostgreSQL and other DBs is to make tables like a "class" (whatever that is!) and one class/table can inherit from another. This is actually fairly useful for O/R mapping.

"Object databases, it was claimed, solved the impedence mismatch..."

Another note on star schemas: they simplify data models relative to 3NF models, and they partition data into dimensions, facts, and many-many relationships. Dimensions map fairly well into objects; facts map into observations or measurements among networks of objects. If you design your objects and your data with this in mind, the O/R mapping problem can be reduced for many common business (and other) scenarios.
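
A tiny sketch of that mapping, with invented names: each dimension becomes a plain object, and each fact-table row becomes an observation holding references to its dimensions plus the measures.

// Invented example: two dimensions and one fact. The fact's foreign
// keys become object references; its measures stay plain fields.
class Store   { String name; String region; }
class Product { String sku;  String category; }

class Sale { // one row of the fact table
    Store store;               // dimension reference
    Product product;           // dimension reference
    java.time.LocalDate day;   // the date dimension, inlined here
    long unitsSold;            // measure
    long revenueCents;         // measure
}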

"while there are some solutions (AS/400 and Gemstone persistent stores) that have been very successful..."

Dave gave a keynote at a Gemstone company retreat. He tried to marry Gemstone with AS/400, suggesting we could ignore the Java industry and make more money. I tend to believe him, since AS/400 was already a successful niche with persistent data as a feature, and Dave was at the time with IBM (via its OTI subsidiary) and so had to have had some inside understanding of the economics.

This was the point where Gemstone, in "the hopes of becoming the next Oracle", all but abandoned Smalltalk for Java/J2EE. For the next several years the Smalltalk market funded the Java development, with about three times as many developers on Java as ever worked on Smalltalk. I doubt the Java investment ever broke even, while Smalltalk continued to bring in revenue (at least as of a year or so ago).

As mentioned above, Gemstone hardly invested anything in query, indexing, and reporting for either Smalltalk or Java OODBs. Had the numbers assigned to Java been put into this, and perhaps the AS/400 port, not to mention the replication mechanism and servlet-like multiplexor which had just been developed on a shoestring, what could have been the result?

What if these had been developed and Gemstone purchased by IBM, which had been discussed many times even on Gerstner's floor in IBM?

"the brave new world of XML schemas and Infosets"

We'll see. Not too many business systems have been built on these yet. As mentioned above, it is not clear that XML is a random access database or a serialization or something else altogether. Nor is it clear where "includes" and "pointers" fit in. And what is a "relationship" in XML as in the relational database sense? Not entirely clear.

"It can be argued that given the ability to directly query both relational and XML data one can handle lots of problems without needing objects."

Objects are for abstractions. So are functions. So the comprehensiveness of the above statement depends on what "query" means and it depends on the query language.

"the lack of explicit XML values..."

This gets back to what is XML vs. some use of XML. Should there be one "data model" for XML? I doubt it.

"The impedence of incompatible type systems imposes..."

Everything is incompatible (e.g. "computation" and "data model" as well as "type"). An approach to some of the concerns in this paper may be better off *ignoring* XML(!) and going more into left field for potential solutions. Those solutions might then be mapped back into XML for some purposes.

What *is* XML anyway? We have some relatively primitive yet widespread tools "for XML". But should this suggest our future data model, search, and computation problems are best solved "using XML", whatever myriad of mechanisms that means?

Wednesday, October 01, 2003

The Principle of Stability vs. the Principle of Release Early and Often

Over at Hamish's MishMash...

Patrick raises the flag for Smalltalk, and notes that it took ten years to get to the current version, Smalltalk-80. Which is now over 20 years old and substantially unchanged. There's an interesting question in there about how much of that stability is from it being "just right", and how much from the fact that once it's out there, it's harder to change. The balance is well over in favour of the former in Smalltalk's case. So "release early, release often" isn't necessarily the right way to go with language development?

Robert Martin developed the idea of "stability" in OO designs many years ago. The subsequent years brought battles in comp.object over whether this idea of "stability" is good or bad. In fact it is neither; it is just an observation that if many things depend on X, then X is unlikely to change in ways that affect its dependents. X may be "good" or "bad" by some other measures.

The same ideas can be applied to other kinds of design, e.g. language design, as well as to entire frameworks such as the topic of the original messages, the dotnet f/w. In my argument I am just assuming the Smalltalk system is "good"; then I am attempting to explain how low stability for ten years allowed it to become "good" before it became "stable". This is rare.

The original Smalltalk team did release early and often. But they were in the advantageous position to radically alter their design between revisions.

A commercial product, as Hamish says, is hard to change once it is out there. So it depends on what "release early" means.

One product released to a small number of customers can change more easily than a suite of products (like the dotnet f/w) released to the entire world. If you are designing the ultimate in reusable platforms, this is a concern.

RSS-Data

I just started chewing on Jeremy Allaire's item on RSS-Data. The quote below comes from a very constructive comments thread, and is followed by some very preliminary thoughts of my own...

I'd love to see an example showing how RSS-Data is a Good Thing compared to a similar RSS 2.0 w/namespace example. It just seems like we're losing some precious semantic information when we drop down to datatypes in the document.

  • I like the idea, because I like XML-RPC's data definitions, more or less, especially how uncomplicated they are for programmers.
  • I don't like the idea for the same reason, it does not result in domain-specific XML tags and document definitions.
  • The difference between these two points is that in XML-RPC this "tagged data" document is represented as a struct. (A concrete comparison follows below.)
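
To make the difference concrete, here is roughly the same fact written both ways; both snippets are invented for illustration, not taken from either proposal:

<!-- XML-RPC/RSS-Data style: generic datatypes, meaning lives out-of-band -->
<struct>
  <member>
    <name>price</name>
    <value><double>9.99</double></value>
  </member>
</struct>

<!-- Namespaced RSS 2.0 style: the tag itself carries the domain meaning -->
<item xmlns:shop="http://example.com/ns/shop">
  <shop:price>9.99</shop:price>
</item>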

As per Greg and Eric... whether you use this approach or an XML namespace approach, you still have the same need for an out-of-band agreement. In either case you will have nested values and name/value pairs that only "mean" something to the people who write the code that makes it useful.

In short, I could get code working with either approach (and will probably have to). There will be thrash, but congratulations for getting a very important ball rolling.

Thursday, September 25, 2003

Smalltalk Reports

For many years in the early to mid 1990s Smalltalk seemed destined to become the enterprise application programming language. IBM was behind it. Many enterprises were successfully implementing projects in finance, insurance, healthcare, and elsewhere. (Aside: Martin Fowler pointed out recently at JAOO that Smalltalk and Gemstone still appear to be a more technically facile approach to enterprise development than, say, Java or C# frameworks. But enough of that here.) A healthy and knowledgeable consulting community was growing.

On to the point... during Smalltalk's relative heyday (so far), the Smalltalk Report was an indispensable monthly journal. A lot of that experience is just fun reading now, but some of it makes up a body of knowledge ("early" writing on patterns and agile methods, for example) for programmers today who are more likely than not to be using some other language and tools. (Although if you read the right blogs you get the sense of a mild-to-vibrant Smalltalk resurgence.)

Now really on to the point... the Smalltalk Report is now on-line. Note that the initial PDFs on the page are the tables of contents, but those TOC PDFs link to the PDFs of the actual articles. You have to click through in Acrobat Reader or another viewer that supports PDF hypertext links. (From James' blog.)

Wednesday, September 24, 2003

IM in the Enterprise: Why?

Phil writes about IM in the enterprise.

I've not been able to determine the advantage of IM over email in the enterprise. More than nine times out of ten, email performs as quickly as IM, and just as often the message I want to send can be treated asynchronously anyway.

What's the critical scenario for the IM argument over email? The conversation is taking place at Phil's blog.

Tuesday, September 23, 2003

Remember the Momenta? Remember Dynabook?

From Scoble, via James, comes the question of who innovated the tablet PC. (Depends on what the meaning of "is" is, probably. 8*)

Certainly the idea goes back at least to Alan Kay's Dynabook in the 1960s.

Going back 12 years (oh---my---god) we will find the Momenta product, a Smalltalk-based tablet PC (1991, mind you) that was a little ahead of its time (and from a horribly mismanaged start-up as I recall).

Saturday, September 20, 2003

A simple proof that -1*-1 = 1

Carlos Scheidegger sent this proof, which is better than my attempt at why -1 * -1 = 1...

The definition of multiplication for whole numbers is:

x * y = y + y + y + ... + y + y, where y appears x times.

Using this, it is easy to prove that, where (succ x) is the successor of x, 

if x * y = z, then (succ x) * y = z + y, and vice-versa.

By definition, 0 is the successor of -1. Also by definition,

0 * x = 0, 

and so, 0 * -1 = 0.

(succ -1) * -1 = 0          [since succ -1 = 0, and 0 * -1 = 0]
(succ -1) * -1 = 1 + -1     [rewriting 0 as 1 + -1]

Now, we apply the property in its "vice-versa" direction, with z = 1 and y = -1:

(succ -1) * -1 = 1 + -1 ->

-1 * -1 = 1

----

This proof's only assumption is that -n + n = 0, which is easily provable (very easy using Peano arithmetic).

Thursday, September 18, 2003

Testing considered harmful?

A number of wonderful (literally) thoughts are wrapped up in Ian's item on testing, Dijkstra, Turing, and beautiful code.

One at a time...

  1. Dijkstra... would certainly have bristled at the notion that "once your tests pass your code is done"
  2. [By] rereading, I may feel more certain of the correctness of the program, as well as the conceptual integrity. Unit tests are good, but unit tests do not make code *beautiful*.
  3. I believe the Turing Machine is not applicable to real programming, because no useful programs can be reduced to a Turing Machine.

On point 1, I would say that developing in a test-driven way is kind of like developing proofs incrementally. Once your code passes the tests, you have not proven the absence of all bugs, but you have at least proven the absence of all bugs you thought were important at the time. So you are not done, but you are significantly closer to being done.

On point 2, I would say that rereading (and refactoring) to achieve "conceptual integrity" is kind of like making a proof more comprehensible. You have done the messy work, now make sure it is presentable so others can use it, extend it, or carry the "proof-in-progress" further with more tests.

On point 3, I would point out that the lambda calculus is not talked about enough in computer science education generally. Developed by Alonzo Church, a peer of Turing, the calculus is both equivalent to the Turing Machine and closer to modern programming languages than is the Turing Machine. (Of course, it doesn't have I/O either, so I am not sure what to do about that argument.) Programming language semanticists as well as compiler writers actually use properties of the lambda calculus all the time. I am not sure the same could be said for the Turing Machine.
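
As a tiny illustration of that kinship, the K combinator of the lambda calculus, λx.λy.x, transliterates directly into a modern language (Java here):

import java.util.function.Function;

// The lambda-calculus term \x.\y.x as a Java lambda: a function that
// takes x and returns a function that ignores its own argument.
public class K {
    public static void main(String[] args) {
        Function<Integer, Function<Integer, Integer>> k = x -> y -> x;
        System.out.println(k.apply(42).apply(7)); // prints 42
    }
}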

Negative x Negative = Positive

Sean asks for an explanation of why a negative times a negative is a positive.

Here we go...

-x * -y = (-1 * x) * (-1 * y) = (-1 * -1) * (x * y) = 1 * (x * y) = x * y

Obsessive Update: The above assumes -1 * -1 = 1. Why? The only justification I thought of, which really is a general technique, is to create a contradiction. Suppose instead that -1 * -1 = -1. Then...

-1 * (1 - 1) = (-1 * 1) + (-1 * -1)   [distributing -1 over (1 - 1)]
-1 * (0) = (-1) + (-1)                [left: 1 - 1 = 0; right: using the supposition]
0 = -2
...which is the contradiction we're hoping for.

Update: maybe Sean is looking for an applied explanation rather than a proof?

Hmm... that would be a good one. When do we do -x * -y in the real world?

Well, I do x * -y in the real world when I pay my mortgage (-y) for some number of months (x). So maybe I could ask how much I would have had had I not paid my mortgage (-y) the previous number of months (-x).

Clearly the results of these two formulas should be opposites. If paying cost me $N USD, then not paying would have left me $N USD ahead.

Tuesday, September 16, 2003

Car Purchase - SUV = Photovoltaic System

A Maryland resident, who installed a photovoltaic system and now contributes more juice to the grid than he draws, made the point on C-SPAN that, at current PV prices, someone making a car purchase could buy a less expensive car instead of an SUV and have enough left over to install a PV system for their house.

Given current events at home and abroad, would you recommend that your representatives increase incentives and funding for making each of us net contributors to the grid?

Independence and self-sufficiency are the American way, no? This should be a federal priority at least as important as the federal highway system. A solar installation in New York produces 80% of the output it would in the Southwest, so this is more than a regional option, especially combined with insulation and other improvements.

Conversion would make a healthy contribution to the employment situation as well as the environment.

Wednesday, September 10, 2003

Data Warehouse Review has an RSS feed

Data Warehouse Review has an RSS feed. Have at it.

Update: Thanks to Karl Lewin for pointing out a typo in the URL.

Battery-backed RAM closer on the horizon

With both Motorola and IBM firmly lined up behind a single contender, the five-year search for a "universal RAM" technology offering a combination of non-volatility and high-speed random access appears to be all but over.

Among other things, MRAM is designed to eliminate several of the most infuriating artifacts of the computer age: the interminable wait for devices to boot up and power down, and those irritating operating system messages about "loading" and "saving your settings."

"Currently computers need to load information into local memory from the hard disk when the power is turned on, and that data transfer can't even start until after the hard drive has spun up to speed," Way said. "Whenever you shut down, data has to flow back in the other direction from the volatile memory to the hard drive.

"MRAM is designed to allow programs and data to remain in the local memory and may even, someday, allow us to simply reach out and touch an on/off button to turn off Windows in lieu of going through a ritualized shut-down procedure."

BTW, opportunities for software entrepreneurs abound.

Monday, September 08, 2003

Paper models

http://www.nedbatchelder.com/blog/200309.html#e20030905T073200

Wednesday, September 03, 2003

"The Factory" vs. "The Studio"

How many of you are software developers working in a location which is referred to as "the factory" by senior management, sales, and marketing? Maybe even by engineering management, since they tend to align more with the rest of the company?

This label has always (and I have been hearing it and commenting on it for the last 20 years) struck me as odd, indicating more of a misunderstanding, if not a misguided wish, on the part of management. The last time I had a sit-down discussion with someone about it was 10 years ago. Since then I just shrug it off and work on the bigger issue from other angles.

When I had that discussion I was talking with people who came from Tektronix, a manufacturer of oscilloscopes among other things, including then X Window terminals. (Does anyone remember those? Are they still sold?)

The analogy I made went like this...

TV designers get together in the lab and iterate over ideas. Those ideas take shape, and eventually are ready to be produced on an assembly line. The activities in the lab are design-time. The assembly is factory-time.

Software designers get together in the, well, cubicles, actually, and iterate over ideas. Those ideas take shape, and eventually are ready to be copied to tape. (Remember shipping software on tape?)

The cubicle activities are design-time. The copying to tape is factory-time.

What's the significance? Factory-time is something step-by-step repeatable and can be treated and "optimized" as such. The cubicle time is repeatable in the sense that a carpenter uses basically the same tools and materials every time he builds a kitchen cabinet. But each time the kitchen is a different shape, the wood has different features, and the customer has varying tastes. What gets repeated each time is a creative activity.

The TV lab and the software cubicle are really studios. The only organization I know of that treats software as a studio process is Ken Auer's Role Model Software. The XP bull pens aren't talked about in these terms, but they should be.

Do I need to add that Ken comes from the creative Smalltalk community of the 1980s where software has long been seen as a collaborative, creative set of activities?

Software Factory at ASU

Although the name is horrible, the idea is commendable. Arizona State University has an organization called the "Software Factory". Their mission...

Here, as at other universities, software development is becoming ever more crucial to research. Researchers typically develop software by “ad hoc” methods, hiring students to do their programming. This works out in some cases, but more often than not the students lack experience in software development and the faculty lack experience in software engineering. This leads to a sub-optimal learning experience for the students and poor software products for the researchers.

The software factory idea, then, is simple: Gather these part-time student programmers in a common facility, put them under professional management and mentorship, and use sound software engineering techniques in the development process.

Tuesday, September 02, 2003

How to win friends and...

Mark Pilgrim writes of the launching of a new MSFT web service...

I’m going to repeat that, in case you missed it: the documentation for the web service is wrapped in a Windows installer which will only install over Visual Studio .NET.

I'd cry if I wasn't laughing so hard that tears are already streaming down my face.

Straight talk on EAI adapters

Sean McGrath gives us the business on EAI adapters. What a breath of fresh air: a truly useful economics of integration that lines up with the EAI vision, if you're intent on taking proprietariness out of the picture.

Here's how I interpret his hints toward an alternative for the 80% of cases where a proprietary EAI solution might be purchased:

The hard part is already proprietary to your legacy applications and ERPs. Sometimes the term "adapter" seems intended to imply "easy to implement". Unfortunately, customization, vendor quality, and the "stovepipe" nature of EAI products themselves get in the way.

Distributed (and evolutionary) version control

The build instructions make this system seem a bit fresh yet, but Ted Leung points today to a very interesting version control system, especially for distributed open source evolutionary systems. In particular...

  • defining different "acceptance criteria". monotone's update algorithm permits sorting and filtering by certificate; this means you can tell it to ignore changes until someone you trust is willing to attest to their quality, either by code review or test results. simply committing code does not force any other end-users to run it. (this feature is only partly finished -- the UI for it is still wired to one setting).
  • decentralizing trust. monotone's operations are all based on checking RSA certificates. there is no central authority to which you must appeal to participate, or to be granted "commit access". on the other hand, nobody has to trust or accept your certificates, unless they happen to like you.
