"I have a mind like a steel... uh... thingy." Patrick Logan's weblog.


Saturday, May 03, 2003

A quote from FDR via Mark Johnson's weblog...

For out of this modern civilization economic royalists carved new dynasties. New kingdoms were built upon concentration of control over material things.

From the Victorian Age to the present it could be argued that dynasties were necessary to build the steel mills, the shipping lines, the railroads, the telecommunications systems, and so on. Perhaps the benefits to society outweigh the costs of establishing the industrial dynasties.

But now we're in an age where extreme decentralization is at hand. It will be more of a challenge to get dynastic forces out of the way of progress than it will be to actually establish a decentralized age. Are dynasties necessary for societies to decentralize in the way they were necessary to centralize in the first place? I don't think so.

Dylan told the senators and congressmen not to stand in the doorways or block up the halls. The times are changing. Will the dynasties allow it? Will the people know enough to demand it?

This from Matt Gerrans ties in with the posts on structure and interpretation of computer programs I wrote about this week (and almost every week it seems).

OK. Another blogger to follow.

BTW, when I was writing electronic design automation software way back when, Matt's approach was the popular (and correct) technique. Scheme was the scripting language of choice for DA tools.

More Notable Bloggers - Distributed Systems

Jon Udell points out that Bob Martin, Ward, and Guido van Rossum have entered the blog world. I'll pay particular attention to Ward's; he has a singular style and an uncommonly sensible approach to software.

Moreover, two notable people in distributed systems have blogs as well: Ken Arnold and Jim Waldo.

Both have been instrumental in JINI and JavaSpaces, the two best satellites in the Java universe.

The blogosphere has achieved another level of benefit for software developers with the addition of this new handful. I'll have to check out the others in their sidebars.

Friday, May 02, 2003

Dynamite!

Objective-C has the advantages of simplicity and optional dynamic typing, neither of which is an attribute of C++. A lot of people don't realize that Objective-C is part of GCC, so the compiler and library are available for Linux, Windows, and other Unix systems in addition to Mac OS X.

The language also has a reflection interface in pure C, so integrating scripting languages into Objective-C applications is a snap compared to C++ or even C. You don't have to generate stubs for each API; just call the reflection API dynamically.
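
To make that concrete, here is a minimal sketch in plain C of the kind of dynamic dispatch a scripting-language bridge can do. It assumes the modern Apple-flavored runtime headers and entry points (objc_getClass, sel_registerName, objc_msgSend); the GNU runtime that ships with GCC spells some of these differently (objc_msg_lookup, for instance), but the idea is the same: classes and selectors are looked up by name at run time, so there are no per-API stubs to generate.

    /* Sketch: drive Objective-C purely through its C runtime API.
       Assumes the Apple runtime; on Mac OS X build with `cc demo.c -lobjc`. */
    #include <objc/runtime.h>
    #include <objc/message.h>
    #include <stdio.h>

    int main(void)
    {
        /* Look up the class and selectors by name at run time... */
        Class cls    = objc_getClass("NSObject");
        SEL allocSel = sel_registerName("alloc");
        SEL initSel  = sel_registerName("init");

        /* ...and send messages through the generic dispatcher.
           A bridge only needs the names as strings. */
        id obj = ((id (*)(id, SEL))objc_msgSend)((id)cls, allocSel);
        obj    = ((id (*)(id, SEL))objc_msgSend)(obj, initSel);

        printf("made a %s instance dynamically\n",
               class_getName(object_getClass(obj)));
        return 0;
    }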

Thursday, May 01, 2003

Bob Martin's Weblog and Dynamic Languages

First, I am thrilled to have come across Bob Martin's weblog. His first book is still required reading as far as I am concerned; it is not a C++ book, it is a *design* book. His second is right up there as well, having won a 2002 Jolt award. He has co-authored or edited many others too.

Second, this weblog topic asks: are dynamic languages replacing static languages? He goes on to confirm what dynamic language programmers have been telling themselves for forty years: dynamic languages and their iterative, incremental approach to design obviate the need for static type checking.

The next interesting question addresses the future of type systems: will a more formal yet expressive static type system be more productive than either dynamic languages or the more primitive type systems of Java and C#?

My take on it is this, having done a lot of dynamic language programming and a little modern functional language programming...

There is a convergence out there, maybe ten years out on the road to The Hundred Year Language, where better (less intrusive, more expressive) formal type systems and dynamic programming meet. After all, incrementally building a dynamic system of objects is not unlike incrementally building an expressive system of typed combinators!

Essentially, type systems are theorem provers. The more expressive they become, the closer they are to the problem domain, just as the more expressive dynamic languages allow us to speak more about the problem domain than the "compiling domain". In the future I expect to be able to dynamically build a semi-formal system and then have that system tell me things, i.e. prove theorems about itself. From there I can make adjustments not just through more tests and code, but through direct manipulation of the "theorems" I have stumbled upon.

Talk about refactoring! Organizations with interacting systems will be able to refactor the "theorems" of how they do business. It's one thing to share tests and dynamically update them, but it's another thing to share theorems and dynamically update those.

Wednesday, April 30, 2003

Post-modern Programming 100 Years from Now?

Phil Windley now has a great quote from Paul Graham's "The Hundred Year Language"... (The permalink to the message is broken for April 30 at this point.)

[Paul] Object-oriented programming offers a sustainable way to write spaghetti code. It lets you accrete programs as a series of patches. Large organizations always tend to develop software this way, and I expect this to be as true in a hundred years as it is today.

[Phil] The phrase accrete programs as a series of patches is so deliciously visual that it makes me smile.

So many juicy bits in Paul's keynote. I guess that's what makes a good keynote.

Since Phil will be sending some traffic my way, this is a chance to redirect you to a piece from OOPSLA 2002 related to the above quote...

Noble and Biddle's Notes on Postmodern Programming.

Tuesday, April 29, 2003

Parallel Computing

Paul Graham, in his by now famous "The Hundred Year Language" keynote at PyCon 2003, questions the future of parallel computing. I am a little more optimistic than he is; see if you agree...

[Paul] I expect that, as with the stupendous speed of the underlying hardware, parallelism will be something that is available if you ask for it explicitly, but ordinarily not used.

In a hundred years I expect languages to express opportunities for parallelism better than they do today. I expect language compilers to plan for parallelism more than programmers using those languages.

I would use pH as an example of where this could go more mainstream.

[Paul] This implies that the kind of parallelism we have in a hundred years will not, except in special applications, be massive parallelism. I expect for ordinary programmers it will be more like being able to fork off processes that all end up running in parallel.

  • Certainly programming over the next five years, not to mention a hundred, will be more event driven and concurrent.
  • Languages like pH, mentioned above, will enable some of these concurrent processes to include implicitly parallel computations.
  • Other set-oriented languages like SQL (ignoring sequential stored procedures) also imply parallel computation.
  • The Teradata DB architecture is already parallel and performs best on set-oriented SQL.
  • As concurrent hardware becomes significantly more affordable, we will see parallel implementations of open source databases for OLAP-oriented analysis.

[Paul] Except in special kinds of applications, parallelism won't pervade the programs that are written in a hundred years. It would be premature optimization if it did.

On the contrary, I think we'll become accustomed to parallel and concurrent computing through the use of simpler mechanisms than we have today, by and large.

  • OLTP computing will take place through event-driven concurrent processes coordinated by simple message exchange patterns using simple "autonomic meeting places".
  • OLAP computing will take place through simple notations with mostly implicit parallel capabilities.
  • Inherently parallel structures in either of these scenarios will also take place through simple notations and a compiler's parallel planning capability, analogous to a SQL compiler's query planning component (see the sketch after this list).
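
To ground that last bullet, here is a minimal present-day sketch of "simple notation plus compiler planning", using an OpenMP-annotated loop in C as a stand-in (OpenMP is my example here, not something any of the languages above requires): one pragma expresses the opportunity for parallelism, and the compiler and runtime plan how to split the work across processors, much as a SQL planner decides how to execute a query.

    /* Sketch: "simple notation + compiler planning" today, via OpenMP.
       Build with something like `gcc -fopenmp sum.c`; without the flag
       the pragma is ignored and the loop simply runs sequentially. */
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double values[N];
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            values[i] = (double)i;

        /* One line of notation; the parallel plan is the compiler's
           and runtime's job, not the programmer's. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += values[i];

        printf("sum = %.0f\n", sum);
        return 0;
    }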

So there will be much more parallel computing than we have today, but it will be simpler and mostly implicit. I think Paul Graham's OLTP/Workflow orientation is blinding him to the amount of analysis that takes place in the business world, and how much more can be automated in the next hundred years.

The Benefits of Computer Science for Non-CS Majors

This videotaped lecture demonstrates the benefits of computer science for non-majors, that is, people who simply want to improve their ability to think.

But, as I have pointed out: Computer Science is not a science, and its ultimate significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. I will defend this viewpoint with examples and demonstrations. -Gerald Sussman

A Computer Science Education for Thinking

If you are interested in computing and live near Salt Lake or Provo, then Phil Windley will teach you how to think in this course, which by all appearances is excellent.

I was fortunate to have a similar course with an emphasis on interpretation. Too many comparative courses are presented as a survey of disparate landscapes as opposed to a grounding in the fundamentals.

If you are not in Utah, then look for similar courses near you. Indiana, MIT, Northeastern, Brown, and many others will have them.

If you are too far away, then there are several on-line resources, e.g. on-line texts, Ars Digita University, and some videotaped courses like these from Berkeley, which include Structure and Interpretation of Computer Programs.


About Me

Portland, Oregon, United States
I'm usually writing from my favorite location on the planet, the pacific northwest of the u.s. I write for myself only and unless otherwise specified my posts here should not be taken as representing an official position of my employer. Contact me at my gee mail account, username patrickdlogan.