A quote from FDR via Mark Johnson's weblog...
For out of this modern civilization economic royalists carved new dynasties. New kingdoms were built upon concentration of control over material things.
From the Victorian Age to the present it could be argued that dynasties were necessary to build the steel mills, the shipping lines, the railroads, the telecommunications systems, and so on. Perhaps the benefits for society outweigh the cost of establishing the industrial dynasties.
But now we're in an age where extreme decentralization is at hand. It will be more of a challenge to get dynastic forces out of the way of progress than it will be to establish the decentralized age itself. Are dynasties necessary for societies to decentralize, the way they were necessary to centralize in the first place? I don't think so.
Dylan told the senators and congressmen not to stand in the doorways or block up the halls. The times are changing. Will the dynasties allow it? Will the people know enough to demand it?
OK. Another blogger to follow.
Jon Udell points out that Bob Martin, Ward, and Guido van Rossum have entered the blog world. I'll pay particular attention to Ward's; he has a singular style and an uncommonly sensible approach to software.
Both have been instrumental in Jini and JavaSpaces, the two best satellites in the Java universe.
The blogosphere has achieved another level of benefit for software developers with the addition of this new handful. I'll have to check out the others in their sidebars.
Objective-C has the advantages of simplicity and optional dynamic typing, neither of which is an attribute of C++. A lot of people don't realize that Objective-C is part of GCC, so the compiler and library are available for Linux, Windows, and other Unix systems in addition to Mac OS X.
The language also has a reflection interface in pure C, so integrating scripting languages into Objective-C applications is a snap compared to C++ or even C. You don't have to generate a stub per API; just call the reflection API dynamically.
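To make the "no stubs" point concrete, here is the same idea sketched in Python as an analogy (the actual Objective-C mechanism is the runtime's pure-C reflection calls; the class and names below are hypothetical): one generic dispatcher looks a method up by name at runtime, so a scripting bridge needs a single entry point rather than a generated wrapper per method.

```python
# Analogy for reflection-based bridging: one generic dispatcher
# replaces a generated stub per method. An Objective-C bridge does
# the same thing with the runtime's C-level reflection calls.

class Account:
    """A stand-in for any object the scripting layer wants to drive."""

    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

def send_message(obj, selector, *args):
    # The entire "bridge": look the method up by name at runtime
    # and invoke it. No per-API code generation required.
    method = getattr(obj, selector)
    return method(*args)

account = Account(100)
assert send_message(account, "deposit", 25) == 125
```

The same `send_message` works unchanged for every method on every object, which is exactly why a reflective runtime makes scripting integration so much lighter than the stub-generation approach C++ bindings typically require.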
We examine the effects of various language design decisions on the programming styles available to a user of the language, with particular emphasis on the ability to incrementally construct modular systems. At each step we exhibit an interactive... interpreter for the language under consideration.
First, I am thrilled to have come across Bob Martin's weblog. His first book is still required reading as far as I am concerned; it is not a C++ book, it is a *design* book. His second is right up there as well, having won a 2002 Jolt award. He has co-authored, edited, and contributed to many others too.
Second, this weblog topic asks: are dynamic languages replacing static languages? He goes on to confirm what dynamic language programmers have been confirming to themselves for forty years: dynamic languages and their iterative, incremental approach to design obviate the need for static type checking.
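A minimal Python sketch of the style in question (the shape classes are hypothetical examples of mine, not from either weblog): no declared types anywhere, just objects that respond to the messages sent to them, with an incremental test standing in for the compiler's check.

```python
# The dynamic, test-driven style: no type declarations, no common
# interface -- anything with an area() method will do.

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.141592653589793 * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

def total_area(shapes):
    # Duck typing: the only "contract" is that each shape
    # answers the area() message.
    return sum(shape.area() for shape in shapes)

# The check a static type system would do at compile time is
# covered instead by a test, grown incrementally with the code.
assert total_area([Square(2), Square(3)]) == 13
```

Adding a new shape later requires no change to `total_area` and no recompilation step, which is the iterative, incremental loop the entry describes.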
The next interesting question addresses the future of type systems: will a more formal yet expressive static type system be more productive than either dynamic languages or the more primitive type systems of Java and C#?
My take on it, having done a lot of dynamic language programming and a little modern functional language programming, is this...
There is a convergence out there, maybe ten years out on the road to The Hundred Year Language, where better (less intrusive, more expressive) formal type systems and dynamic programming meet. After all, incrementally building a dynamic system of objects is not unlike incrementally building an expressive system of typed combinators!
Essentially, type systems are theorem provers. The more expressive they become, the closer they come to the problem domain, just as the more expressive dynamic languages let us speak more about the problem domain than the "compiling domain". In the future I expect to be able to dynamically build semi-formal systems and then have the system tell me things, i.e. prove theorems about itself. From there I can make adjustments not just through more tests and code, but through direct manipulation of the "theorems" I have stumbled upon.
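The "type systems are theorem provers" claim has a precise form in the Curry–Howard correspondence: a type is a proposition, and a program of that type is a proof of it. A small sketch in Lean makes the point (the theorem names are mine):

```lean
-- Propositions as types: a program inhabiting A → A
-- is literally a proof that A implies A.
theorem identity_implies (A : Prop) : A → A :=
  fun a => a

-- A slightly richer theorem: from A → B and A we may conclude B.
-- The proof term is nothing more than function application.
theorem modus_ponens (A B : Prop) : (A → B) → A → B :=
  fun f a => f a
```

Under this reading, "incrementally building an expressive system of typed combinators" really is theorem proving: each well-typed program you write is a proof the checker has verified for you.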
Talk about refactoring! Organizations with interacting systems will be able to refactor the "theorems" of how they do business. It's one thing to share tests and dynamically update them, but it's another thing to share theorems and dynamically update those.
Phil Windley now has a great quote from Paul Graham's "The Hundred Year Language"... (The permalink to the message is broken for April 30 at this point.)
[Paul] Object-oriented programming offers a sustainable way to write spaghetti code. It lets you accrete programs as a series of patches. Large organizations always tend to develop software this way, and I expect this to be as true in a hundred years as it is today.
[Phil] The phrase accrete programs as a series of patches is so deliciously visual that it makes me smile.
So many juicy bits in Paul's keynote. I guess that's what makes a good keynote.
Since Phil will be sending some traffic my way, this is a chance to redirect you to a piece from OOPSLA 2002 related to the above quote...
Noble and Biddle's Notes on Postmodern Programming.
I expect that, as with the stupendous speed of the underlying hardware, parallelism will be something that is available if you ask for it explicitly, but ordinarily not used.
In a hundred years I expect languages to express opportunities for parallelism better than they do today. I expect language compilers to plan for parallelism more than programmers using those languages.
I would use pH as an example of where this could go more mainstream.
This implies that the kind of parallelism we have in a hundred years will not, except in special applications, be massive parallelism. I expect for ordinary programmers it will be more like being able to fork off processes that all end up running in parallel.
Except in special kinds of applications, parallelism won't pervade the programs that are written in a hundred years. It would be premature optimization if it did.
On the contrary, I think we'll become accustomed to parallel and concurrent computing through simpler mechanisms than we have today, by and large.
So there will be much more parallel computing than we have today, but it will be simpler and mostly implicit. I think Paul Graham's OLTP/Workflow orientation is blinding him to the amount of analysis that takes place in the business world, and how much more can be automated in the next hundred years.
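The "fork off processes that all end up running in parallel" style already has simple, mostly implicit forms today. A small Python sketch (the workload function is a hypothetical stand-in): the programmer states the opportunity for parallelism with a single `map`, and the runtime decides how to schedule the processes.

```python
# "Simpler and mostly implicit" parallelism: express the shape of
# the computation; let a process pool handle forking and scheduling.

from multiprocessing import Pool

def simulate(seed):
    # Stand-in for an independent unit of analysis work.
    total = 0
    for i in range(1000):
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    with Pool(4) as pool:
        # One map call expresses the parallelism over all inputs;
        # the pool forks the worker processes behind the scenes.
        results = pool.map(simulate, range(8))
    print(len(results))
```

Nothing in `simulate` knows it is running in parallel, which is the point: the parallelism lives in how the work is dispatched, not in the program logic itself.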
This videotaped lecture demonstrates the benefits of computer science for non-majors, just people who want to improve their ability to think.
But, as I have pointed out: Computer Science is not a science, and its ultimate significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. I will defend this viewpoint with examples and demonstrations. -Gerald Sussman
If you are interested in computing and live near Salt Lake or Provo, then Phil Windley will teach you how to think in this course, which by all appearances is excellent.
I was fortunate to have a similar course with an emphasis on interpretation. Too many comparative courses are presented as a survey of disparate landscapes as opposed to a grounding in the fundamentals.
If you are not in Utah, then look for similar courses near you. Indiana, MIT, Northeastern, Brown, and many others will have them.
If you are too far away, then there are several on-line resources, e.g. on-line texts, ArsDigita University, and some videotaped courses like these from Berkeley that include Structure and Interpretation of Computer Programs.