Dave Winer believes...
[U]nless I'm mistaken, [Wikis] don't have a concept of user identity...
Some do. Some don't. Sometimes it's an admin option.
One of the masters of simplicity in software, Graham Glass, writes...
Another awesome bit of software technology is Linda, the tuple spaces architecture by David Gelernter. Like Lisp, it provides a simple set of primitives with great expressive power. But I don't think that the Linda approach scales well. The remove() primitive requires some degree of locking that is not suited for distributed environments.
Voyager itself was a simple and awesome software technology. Congratulations to Graham for that as well as Glue and the upcoming Gaia. I'd be interested in his thoughts on Linda-like systems for the Internet.
The most significant adjustment in scaling Linda to the Internet that comes to my mind is to redefine blocking operations to be non-blocking. So 'in' (i.e. remove) would return immediately rather than block until a match is found. Maybe there would be a timeout, but the window would be fairly short, and so a timeout may just be meaningless at Internet scale.
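To make that concrete, here is a minimal sketch in Python of what a non-blocking 'in' might look like. The names (TupleSpace, out, inp) are my own invention for illustration, not any real Linda implementation.

```python
# A minimal sketch of a tuple space with a non-blocking 'in'.
# All names here are hypothetical; this is not a real Linda library.
from typing import Optional, Tuple

class TupleSpace:
    def __init__(self) -> None:
        self._tuples: list = []

    def out(self, t: Tuple) -> None:
        """Write a tuple into the space."""
        self._tuples.append(t)

    def inp(self, template: Tuple) -> Optional[Tuple]:
        """Non-blocking 'in': remove and return the first match,
        or return None immediately. A None field in the template
        acts as a wildcard."""
        for t in self._tuples:
            if len(t) == len(template) and all(
                f is None or f == v for f, v in zip(template, t)
            ):
                self._tuples.remove(t)
                return t
        return None  # never blocks waiting for a match
```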
For these non-blocking polling operations, some rationale has to go into an Ethernet-like (but larger in distance and time scale) backoff policy. Not unlike the RSS situation with polling. But the surprise with Ethernet is that the pipes continue to get faster and collisions are not the problem people envisioned.
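A sketch of what such a backoff might look like, reusing the hypothetical inp() above; the base window, cap, and attempt count are arbitrary choices, not a recommendation.

```python
import random
import time

def poll_with_backoff(space, template, base=1.0, cap=300.0, attempts=8):
    """Poll the hypothetical non-blocking inp() with randomized,
    Ethernet-style truncated exponential backoff. Returns a match
    or None once the attempts are exhausted."""
    window = base
    for _ in range(attempts):
        match = space.inp(template)
        if match is not None:
            return match
        time.sleep(random.uniform(0, window))  # random slot in the window
        window = min(window * 2, cap)          # double the window, up to a cap
    return None
```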
The alternative is to use a cloud-like mechanism for scalable trigger/notification. Eric Hanson has been designing these for relational databases at the University of Florida.
Another aspect of this is the number of simultaneous users of a single tuple space. Scaling out a tuple space is fairly simple... create more spaces. Work has been done to replicate spaces, but in general I'd look for opportunities to simply partition scenarios and participants across distinct spaces. That must cover 80 percent or more of the situations, at least.
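Partitioning can be as simple as hashing a scenario key to one of several spaces. A sketch, with invented names, assuming the TupleSpace above:

```python
import hashlib

def space_for(scenario_key: str, spaces: list) -> "TupleSpace":
    """Route a scenario (and all its participants) to one of several
    distinct spaces by hashing a key. This is partitioning, not
    replication; the mapping is stable while len(spaces) is fixed."""
    digest = hashlib.sha1(scenario_key.encode("utf-8")).digest()
    return spaces[digest[0] % len(spaces)]

# e.g. every tuple for one auction lands in the same space:
# space_for("auction-1234", spaces).out(("bid", "auction-1234", 42))
```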
Reliability of messaging would have to take into account that the connection itself does not (and probably should not) guarantee exactly-once delivery. This is where some wisdom has to go into message design and higher-level domain-specific protocols. This will be less of an issue as the Internet evolves into its next incarnation, but it is the simplest approach for the next N years.
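One common piece of that message-design wisdom is idempotent receipt: the message carries a unique id and the receiver discards duplicates. A sketch, with the dict layout and 'id' field as my own assumed convention:

```python
class IdempotentReceiver:
    """Sketch: since the connection may deliver duplicate copies,
    the message design carries a unique id and the receiver drops
    copies it has already handled. The 'id' field is an assumption."""

    def __init__(self) -> None:
        self._seen: set = set()

    def handle(self, message: dict) -> bool:
        msg_id = message["id"]
        if msg_id in self._seen:
            return False  # duplicate delivery; safely ignored
        self._seen.add(msg_id)
        # ... domain-specific processing of the document goes here ...
        return True
```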
You can add your thoughts to the TupleSpaceScalability wiki page.
Sam Ruby writes, referring to the original Smalltalk refactoring browser...
[The] key question in my mind is not when such things converge in the laboratory, but when will such things converge in the mainstream?
I know it's a relatively small market, but Lisp and Smalltalk have been out of the lab for years. IBM's direction before Sun introduced Java was clearly aimed at Smalltalk, and IBM was having a good deal of success with it.
There is no doubt that the Smalltalk refactoring browser was first. (Reading Martin Fowler's book will back up the claim.)
But when the RB came out, it was aimed at commercial Smalltalk development, used there, and ultimately became the de facto choice.
A great site for Lisp programmers and other sorts, and not just Lisp-specific wisdom: Chris Riesbeck's programming notes, via Kevin Rosenberg's blog, via Lemonodor.
While Smalltalk doesn't have the macro system Lisp has, generics are a non-issue in Smalltalk, as is boxing/unboxing. Ditto the "simplification" items. One could summarize the whole effort as adding complexity to a language to make up for its glaring flaws.
While Smalltalk doesn't have syntax macros like Lisp, Smalltalk *does* have simpler block syntax and keyword arguments.
The effect is that new keyword messages can take block closures, which delay evaluation until the method behind the message decides to evaluate them. The message send then looks an awful lot like new syntax, very clean.
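A rough Python analogue, with lambdas standing in for Smalltalk blocks and an invented function name playing the role of a keyword message like ifTrue:ifFalse::

```python
def if_true_if_false(condition, true_block, false_block):
    """Rough analogue of a Smalltalk keyword message. The blocks
    are closures; evaluation is delayed until the 'method' behind
    the message decides which one to run."""
    return true_block() if condition else false_block()

# Only the chosen block is ever evaluated:
status = if_true_if_false(
    2 + 2 == 4,
    lambda: "process the order",  # delayed until selected
    lambda: "stay idle",
)
```

Note the lambda keyword sitting in plain view, which is exactly the next point.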
Many uses of macros in Lisp, frankly, exist simply to hide the appearance of the lambda keyword. Of course there are more "legitimate" uses of macros, but these are less common.
So Smalltalk and Lisp are fairly evenly weighted in the syntax category in my book.
Ruby is pretty close, too, but has some weird quirks where they didn't get closures quite right, and very bad silent mistakes can be made. Python is further back in this category.
A strongly typed language merely creates an environment so that the type-checker can automatically check the type constraints for you, at the cost of restricting some genuinely useful things you might want to do.
One thing you want to do that is easier with dynamic type checking is testing. Only time spent developing significant systems in dynamic languages will bear this out. Illustrations don't capture the reality.
People see that static types can catch things early. They don't see that those things aren't important since they get caught early in dynamic tests anyway. Yes, you have to write good tests. Can anyone tell me when that's not the case?
Neither is it apparent on paper that refactoring large systems, or just experimenting with large systems, goes much more quickly when you have to change less code to get a small test to run. You trust that the complete set of tests will catch everything else when the experiment is ready to be integrated back into a complete working system.
But during spike experiments you want to ignore the irrelevant, create mock objects, even entire "mock databases", and play what-if on isolated bits of code, even within large, currently running systems. That doesn't happen easily in Java-like languages, whose compilers have no clue what you really care about in the moment. They make you dot that 'i' over there and cross this 't' over here before you can even get to the bits of interest.
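For a flavor of what that looks like in a dynamic language, here is a tiny sketch with entirely hypothetical names: a hand-rolled mock database that implements only the one method the experiment exercises, with nothing else demanded up front.

```python
class MockOrderDatabase:
    """A hand-rolled 'mock database' for a spike experiment. In a
    dynamic language it needs only the one method the test calls;
    no interface declaration or full schema is required."""

    def __init__(self, canned_rows):
        self.canned_rows = canned_rows
        self.queries = []

    def find_orders(self, customer_id):
        self.queries.append(customer_id)  # record calls for assertions
        return self.canned_rows

def total_due(db, customer_id):
    # the code under experiment; it never knows the db is a mock
    return sum(row["amount"] for row in db.find_orders(customer_id))

def test_total_due():
    db = MockOrderDatabase([{"amount": 10}, {"amount": 5}])
    assert total_due(db, "c42") == 15
    assert db.queries == ["c42"]
```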
However, I think that it makes the student's choice crystal clear: become a computer scientist, not just a programmer, because if the IT locomotive does decide next year that some new thing is better, you'd better be as adaptable or you'll get left in the dust.
You want to learn computer science, or anything really, from people with advice like this. You may even end up contributing to the next new thing.
Stefan Tilkov writes about me...
Patrick Logan has a blog. So what?
Exactly.
Oh, he continues...
Apart from some very interesting posts, I somehow recall the name, and have positive associations with it ... digging ... I found a patricklogan@gemstone.com
Well, thanks. GemStone went through an upheaval at the tail end of the dot-com fiasco. I left before the implosion...
"In less than a year, a 20-year-old [sic] company, with revenues and profits, (though not brilliant, it was a going concern), has been destroyed by another company that is itself only a few years old. That is sad."
So I am wondering: how much should be put into specifications, and what should be left for emergence?
I have had bad experiences participating in the market-driven rush to unproven and premature standardization. So I lean toward the minimal necessary and look for patterns and principles to emerge from experience.
Giulio Piancastelli is very kind and also on the money speculating...
I quoted Patrick's words because I think they are the most original and interesting from the whole bunch. I see Prolog and tuple centers in them, and I am glad for what I see.
Nice connection and thanks for the references to these projects. Indeed I think the future is bright for logic/constraint languages and coordination through blackboard/tuplespace architectures.
There is nothing like a controversial statement to draw attention to an issue, or at least traffic to a little-known blog.
Apparently, Patrick took a peek at WS-Transaction, didn't get any further than the author list and decided it wasn't any good. I agree that two phase commit is traditional... I hold out more hope for compensating transactions via asynchronous messages.
I did read further than the list of authors. But there is nothing like a controversial statement to get people interested in a topic.
I do think the compensating approach is better than the atomic approach at the distributed, business level of "transaction". But I am not sure either should be a part of the connection architecture. Compensation "meta-data", if you will, should be part of the documents exchanged among business partners, no matter how those documents are exchanged.
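For illustration only, a sketch (with invented field names, not any WS-* schema) of a document that carries its own compensation meta-data regardless of how it travels:

```python
# Invented field names, for illustration: the document itself says
# how to undo the action, independent of the connection protocol.
purchase_order = {
    "id": "po-1001",
    "type": "purchase-order",
    "items": [{"sku": "widget-7", "qty": 3}],
    "compensation": {
        "document_type": "purchase-order-cancellation",
        "refers_to": "po-1001",
        "deadline": "30 days",
    },
}
```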
A distributed system across trust boundaries is where I believe we are collectively heading. The Internet shows us the beginnings of what is possible... for the subset of things that are generally always online and publicly accessible. Expanding this to mobile and access-controlled systems is the next big thing.
I love the NeuroTransmitters story. But Sean McGrath illustrates the problems that emerge when you overload your connection architecture over time.
The network is not the computer. My computer is over here and yours is over there. All we need is a simple way to exchange documents.
The documents are the messages. The computers are the cells. (Fortresses, in Roger Sessions's metaphor.)
The connections should be as simple as possible and get out of the way.
Sam Ruby points to WS-Transaction. But look at the list of authors. The problem is IBM and Microsoft have a lot to preserve by maintaining more or less traditional databases and transaction protocols.
Jon Udell wonders about the LAMP vs. J2EE/.NET inflection point. But LAMP is made in the same image. This architecture goes back thirty years or more.
I've written an essay on what I think the next database architectures should look like. If you read through my blogs you'll know it has something to do with tuple spaces and star schemas. But Jon goes on to ask, "How do you preserve the fluidity that the agile enterprise requires?" There is a little bit more to say, since enterprises are not nearly as agile as they need to be.
WS-Transaction is intended to preserve the status quo rather than improve agility. I'll publish more on this in the near future.
Lambda the Ultimate has this piece on a Scheme program that analyzes Excel spreadsheets.
What is most notable about this? The essence of this system represents the future of software development and the need for more dynamic analysis than "compilers" traditionally give.
Note:
This represents the convergence of dynamic model checking and inferred static type checking. The future is getting more dynamic, and so "type checkers" will become more sophisticated (and background) theorem provers than they are now.
Additional features will include symbolic execution, including concurrency model checking, and semi-symbolic partial evaluation.
Over the last 10-15 years electronics design has taken more of a software design flavor. Over the next 10-15 years software design will take on more of a traditional hardware simulation and model checking flavor.