"I have a mind like a steel... uh... thingy." Patrick Logan's weblog.

Search This Blog

Saturday, October 08, 2005

Innovation or Standard: Choose One

On BPEL standardization...

the market is fairly immature as to what is the right way to take something like that. What it also shows then is you're getting a lot of innovation in the standards body. Innovations in standards bodies typically aren't good things.

Comments on Processes, Etc.

Le Roux Bodenstein writes in email...

Some comments on some of your recent posts: You were talking a lot about scaling to more processors, app servers becoming the new operating systems, etc.

Aren't app servers just big processes that manage large amounts of threads? Don't multiple processes scale more easily because of "share nothing"? What's wrong with operating systems? They are basically (among a few other things) very efficient process schedulers. (modern unixes, at least)

"Shared nothing" is important. If the multiplexor does not enforce shared nothing then convention should. The new multiplexor (e.g. the web/app server) is lighter weight than the older OS multiplexor. Shared-nothing lightweight processes are the best of both worlds.
I think if anything we should work on making inter-process communication easier. Processes should be tied closer into languages and inter-process communication should be abstracted. Even more than some newer languages did with threads.
I agree. But we need to lose the distinction between local and remote processes. For example, Erlang processes use the same simple communication abstraction whether they are co-located or remote. So do HTTP processes of course, and XMPP, and... Perhaps the OS is the last bastion of the local/remote distinction? (The now defunct Apollo Domain OS being one of the rare exceptions.)
I think "lightweight processes and threads" will lose popularity. Multiple cores and processors effectively make the "heavy-weight overhead of processes" a moot point. There's a lot less context switching when things run on their own processors, and once a process is started (which is as far as I know quite expensive compared to starting a thread, but I am not an expert) it can just keep running until it is needed.
I agree heavyweight processes will perform better, but lightweight processes will perform that much better. The OS process approach can and I am sure will be improved but making improvements there is probably a longer road than taking the Erlang approach in a specific web/app server or language runtime implementation.
Anyway... my argument is I don't think app servers will help much on operating systems with decent kernels. In fact - they will just be in the way. Or an extra layer. Or what little use they have can be added to the os.
I think this is a fine way to approach the problem but as I said I think it is a longer road. Hopefully the OS designers will move in this direction, having taken some best practices from the web/app server and language runtime designers. (For that matter hopefully the web/app server and language runtime designers will take some best practices from their peers.)
I don't think we need new programming languages either. I think Python, Ruby, etc. can probably just be "fixed" to make multiple-process development a lot easier...
We continue to be in agreement.
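For a concrete (if tiny) picture of the shared-nothing, message-passing style discussed in the exchange above, here is a sketch in Python using the standard multiprocessing module. The worker function and queue names are my own invention for illustration:

```python
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    # Shared nothing: this function runs in its own process and
    # communicates only through message queues, never shared memory.
    # None on the inbox is the shutdown sentinel.
    for item in iter(inbox.get, None):
        outbox.put(item * item)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for n in range(5):
        inbox.put(n)
    inbox.put(None)          # tell the worker to stop
    results = [outbox.get() for _ in range(5)]
    p.join()
    print(results)           # [0, 1, 4, 9, 16]
```

Because each process owns its data and shares only messages, there are no locks in the application code; an Erlang-style runtime applies the same discipline with far lighter-weight processes.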

Wednesday, October 05, 2005

The New Ribbon

I have not tried the new MSFT Office Ribbon, so I will withhold judgement for now. Others are noting the amount of real estate the ribbon requires.

The only thing I want to point out here is we have gone full circle from ribbons to menus and back. The first GUI applications I developed were CAD tools. Back in the day, those tools were expensive and ran on expensive hardware. One had to go to a special (usually dimly lit) room to use the CAD systems.

Those systems had very expensive, and large, screens. The GUI was essentially an area to present as much of the schematic as possible surrounded by, yes, ribbons. We developers had ongoing discussions to determine how to minimize the ribbon space and maximize the capabilities offered in the ribbons. CAD designers did not want to waste time clicking through all kinds of icons to get to the right function.

Soon after, the Mac caught on, then X and Motif. Menus and dialog boxes became the thing. Ask me sometime how this specific transition almost brought down one of the top 3 CAD vendors.

Tuesday, October 04, 2005

Language Wars

What do language wars mean in the age of services and post-modern software development?


Tyler Close has been contributing some useful exposition in a series of messages in the REST Yahoo group on using capability-based URLs to implement authority.
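The gist of the capability-URL idea: the URL itself is the credential, carrying an unguessable token that both names a resource and confers authority over it. A minimal sketch, where the path layout and token size are my own choices, not Close's:

```python
import secrets

def mint_capability_url(base: str, resource: str) -> str:
    # The unguessable token IS the authority: anyone holding the
    # full URL can act on the resource, with no separate login step.
    token = secrets.token_urlsafe(32)
    return f"{base}/{resource}/{token}"

url = mint_capability_url("https://example.org", "invoice-42")
```

Revoking the capability amounts to forgetting the token on the server side; delegating it amounts to sharing the URL.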

More on the Future of Multiprocessing

ACM Queue has good explanations of where hardware is going and how to scale software into the multiprocessing era.

The introduction of CMP systems represents a significant opportunity to scale systems in a new dimension. The most significant impact of CMP systems is that the degree of scaling is being increased by an order of magnitude: what was a low-end one- to two-processor entry-level system should now be viewed as a 16- to 32-way system, and soon even midrange systems will be scaling to several hundred ways.
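On the software side, independent tasks are the easiest place to soak up those extra "ways." A hedged Python sketch that fans work across however many cores the machine provides (the simulate function is a stand-in for real per-task work):

```python
from multiprocessing import Pool, cpu_count

def simulate(seed: int) -> int:
    # Stand-in for real per-task work: each task is independent,
    # so the pool can spread the tasks across every available core.
    return sum(i * i for i in range(seed, seed + 1000))

if __name__ == "__main__":
    # One worker process per core; the same code runs unmodified
    # on a 2-way box or a 32-way CMP system.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(simulate, range(32))
```

The harder problem, as the rest of this page argues, is software whose tasks are not so conveniently independent.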

Monday, October 03, 2005

Software Education

Phil Windley writes about software education...

I've determined that I'm no longer convinced that software engineering, at least as it's commonly discussed and taught, is what I want to prepare students to do. I try to focus them on being innovative, entrepreneurial, and working with dynamic languages on networked applications.

The New Application Architecture

Phil Windley looks at new hardware architectures and their forces on future software systems...

Application servers (like jBoss or Weblogic) support a development model in which programmers develop threadless code and the app server manages the threads. That simplifies the point too much, perhaps, but I think having that much parallel processing power on a single chip might make app servers much more important for developing applications. There are continuing debates about whether app servers add more complexity than they're worth, but that might be because we haven't met many problems large enough to require them–yet. In the early '60s programmers scoffed at the idea of operating systems as being "needlessly complex;" that idea is ludicrous today.
I think application servers are the new operating system. (Ten years ago they were the new transaction monitor, but now we know we all need one. We all need more than one.) We need to be able to write smaller "applications" and have them interact more easily. Look at the advice for programming web applications, web services, EJBs, as well as applications in Erlang. They are all very similar. Erlang applications tend to be simpler than the rest because the available framework takes this point to the extreme.

Those other systems are built by and large on heavy-weight process technology. Maybe Erlang is the new Lisp. We need a Lisp for lighter-weight process interaction, as Jon Udell pointed out.

I should point out that I equate "web server" and "app server". The distinction that persists is a historical artifact rather than anything inherent. An application server is inherently a "process monitor" with various drivers: HTTP drivers, SMTP drivers, XMPP drivers, etc. I don't think the new "superplatform" belongs to Microsoft, IBM, or BEA. The superplatform is one based on the standard application protocols. The best of these platforms will combine support for these application protocols (and more... IMAP? WebDAV?) and abstract away the complexity. Most of us should be writing sequential code, rule-based code, constraint-based declarative code, etc. But we'll have to plug into these protocols.
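One way to picture the "process monitor with drivers" claim is a small router that hides which protocol a message arrived on, leaving the application handlers as plain sequential code. Every name in this sketch is invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    protocol: str   # "http", "smtp", "xmpp", ...
    body: str

class ProcessMonitor:
    """Routes messages from protocol drivers to application handlers."""

    def __init__(self) -> None:
        self.handlers: dict[str, Callable[[Message], str]] = {}

    def register(self, protocol: str, handler: Callable[[Message], str]) -> None:
        self.handlers[protocol] = handler

    def dispatch(self, msg: Message) -> str:
        # Application code stays sequential; the monitor hides
        # which wire protocol delivered the message.
        return self.handlers[msg.protocol](msg)

monitor = ProcessMonitor()
monitor.register("http", lambda m: f"200 OK: {m.body}")
monitor.register("smtp", lambda m: f"queued: {m.body}")
```

Adding IMAP or WebDAV support would mean adding a driver that produces Message values, not rewriting the application handlers.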

Phil continues...

It's tough to see how you'll use that much parallelism on the desktop. I just counted the number of processes running on my Powerbook: 76... there are things we've hardly been able to imagine.
This is the result of the languages we have been using to think with. As Phil points out, these new hardware architectures will give us more reasons to think differently.


Roger Sessions is a bit of a character. Witness his back-and-forth with Terry Coatta in ACM Queue around CORBA and Web Services. Sessions continues to point out that CORBA worked when CORBA was on both ends of the wire. Well, what else would you expect? Does HTTP work when HTTP is not on both ends? For some reason Sessions seems to believe that WS-* does not require WS-* on both ends.

For years now Sessions has promoted DCOM and now he promotes WS-*. A continuing theme of these promotions is that CORBA has failed. Certainly DCOM failed worse than CORBA. Moreover, as Coatta points out...

It looks to me like CORBA is more of a success than Web services.
The CORBA community learned many lessons and would have been even more of a success had that organization incorporated the web sooner and more naturally. WS-* usurped CORBA on the premise that the result would be simpler and better, yet years later both claims remain suspect.

Later in the exchange Sessions says something incredible. Upon admitting the WS-* specs are *more* complex than CORBA's, Sessions suggests that is OK...

I agree that the Web services standards are harder to understand than most of the CORBA specifications, but there’s one fundamental difference between these specifications and the CORBA ones. The CORBA specifications had to be understood by developers. The Web services standards don’t. Nobody needs to understand the Web services standards except for Microsoft and IBM because these standards are about how Microsoft and IBM are going to talk together, not about how the developer is going to do anything... These standards have no relevance to Joe or Jane Developer, none whatsoever.
This is ironic since a few minutes previously Sessions complained that these vendors are in danger of making WS-* too transparent...
In some sense, the transparent ability to make something a Web service is not really a good thing, because making an effective Web service requires a much more in-depth understanding of what it means to be a Web service.
Coatta catches this apparent contradiction...
It sounds like what you’re saying is that the tools that automatically supply Web services interfaces are, in fact, absolutely necessary because they’re that insulation between the developers and the underlying protocols. At the same time, they’re the downfall that’s making it possible to generate poorly architected systems. Two-edged sword?
Sessions has little to defend himself with other than to suggest that although these tools don't prevent bad architectures, just think how bad the architectures would be without those tools.

I'd give Coatta the victory in this debate. Too much confusion from Sessions who has not made up his mind whether tools, protocols, or APIs are good or bad...

The big difference between Web services and CORBA is that the Web services people said right from the beginning: there is no API.
So there is no API, because those are bad. But the protocols are damn near impossible to understand. But that is good, because we only need two vendors and they can provide tools. Well, as long as they don't provide APIs, 'cause that would be bad.

The only thing more confusing than the mess of WS-* specs is the explanation Sessions is trying to proffer.

Spiral Staircase: Going Up or Down?

Jon Udell asks a great question...

In the realm of service-oriented design and business-process modeling, what are the modern counterparts to Lisp and Smalltalk?
First, who says Lisp and Smalltalk are not "modern"?

Second, I don't know the answer. Neither Lisp nor Smalltalk offers more than any other tool for these purposes. I don't think we have much in these spaces yet.

An argument could be made that for service-oriented design the counterpart seems to be HTTP. Whether or not HTTP is the best we can do is moot. Lisp and Smalltalk were allowed to evolve in elite laboratories for a decade or more before widespread adoption. HTTP is simple with demonstrated success, but I'm not sure a successor will emerge from an MIT or a PARC anytime soon.

As for process modeling... we are in worse shape. I developed electronic design and manufacturing software (e-CAD, CAM) for many years, 1983-1993. This work included developing schematic editors as well as software that consumed schematic data. In those days there were few standards for this kind of software, interchange formats, or protocols.

I see several parallels to today's BPM software. Standards like BPEL are a start, but there is a long way to go on the front and back ends for these systems.

Back to services... I think we have a long way to go here as well. Most services I am aware of (whether they are REST or WS-*) are built on languages that emphasize the "inward view" of the system (e.g. the arrangement of code and data within the process) rather than the "outward view" (e.g. the service interface and contract). Sure, there are "declarative metadata" schemes in various languages for denoting that some function should be a web service, etc. But these schemes are bolted onto previous-generation languages. Accessing semi-formal data from multiple sources and "mashing" it together appears to be another capability in demand.
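The kind of bolted-on declarative metadata described above might look like this Python sketch, where a decorator records that an ordinary function should be exposed as a service. The decorator and registry names are hypothetical:

```python
SERVICE_REGISTRY = {}

def web_service(route: str):
    # Bolted-on metadata: the decorator merely records that this
    # plain function should be exposed at a route; the language
    # itself knows nothing about service interfaces or contracts.
    def decorate(fn):
        SERVICE_REGISTRY[route] = fn
        return fn
    return decorate

@web_service("/quote")
def get_quote(symbol: str) -> str:
    return f"{symbol}: 42.00"
```

Note how the service contract lives entirely in the registry, invisible to the type system — exactly the "inward view" bias the paragraph above complains about.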

What other capabilities should such a language provide? What about features for security? Availability? Distribution?

Sunday, October 02, 2005

Comments are gone

I have republished the blog without comments. There will be no more comments until I can prevent the spam attacks. I have been all but unaffected until now. The losers lose for us all.

Feel free to send email directly to me in lieu of comments on any topic.

Blog Archive

About Me

Portland, Oregon, United States
I'm usually writing from my favorite location on the planet, the pacific northwest of the u.s. I write for myself only and unless otherwise specified my posts here should not be taken as representing an official position of my employer. Contact me at my gee mail account, username patrickdlogan.