"I have a mind like a steel... uh... thingy." Patrick Logan's weblog.


Saturday, December 13, 2008

Multi Core Programming

The presentation and discussion at pdxfunc this week was all about bytecodes, JITs, and the performance of dynamic languages, in particular JavaScript at Mozilla. Igal took good notes. The presenter, Jim Blandy of Mozilla, made a few corrections to the notes.

Jim's presentation was well done. Future meetings of pdxfunc may very well be video recorded. Too bad this one was not... people with varying levels of experience with compiler internals all expressed satisfaction.

Michael Lucas-Smith was at pdxfunc and arranged for an Industry Misinterpretations podcast with Jim. James Robertson has that up on his blog now. (Aside: MLS is a whirlwind of facts on sci-fi, astro-physics, and other brainy (literally) topics... if you ever have a chance to sit down with him at a pub.)

One of the interesting highlights of the discussion during Jim's presentation was a brief segue into language compiler and runtime characteristics that do or do not play well with modern CPU architectures, especially regarding caching and off-chip accesses. This led to a brief touch on upcoming many-core chips, and MLS mentioned Intel's Larrabee chip, which will have many cores, each essentially a simple, small (but 64-bit) Pentium core plus a vector processor.

Programming languages and implementations that support small, dynamic, shared-nothing (or at least shared-little), asynchronous processes stand a chance.
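As a rough illustration of that shared-nothing style (a sketch of my own, not from the talk), here is a minimal Python example of a process that owns its state privately and is driven only by messages in its mailbox:

```python
import threading
import queue

def worker(inbox, outbox):
    # A shared-nothing "process": it owns its state and communicates
    # only by messages, never by touching another process's memory.
    total = 0  # private state, visible to this worker alone
    while True:
        msg = inbox.get()
        if msg is None:  # shutdown sentinel
            outbox.put(total)
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for n in (1, 2, 3):
    inbox.put(n)  # the only way to influence the worker
inbox.put(None)
t.join()
result = outbox.get()
print(result)  # -> 6
```

Because nothing is shared, there is nothing to lock; scaling to many such processes is mostly a scheduling problem, which is exactly the property that makes this style a fit for many-core chips.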

Friday, December 12, 2008

Inexpressiveness, Yet World Wide Webiness

Joe Gregorio writes...

One of the reasons that I have this feeling is that after programming for the past 25 years the field hasn't really changed.
I also have the feeling nothing about programming has changed. Except when I have the feeling that a *lot* about programming has changed.

I'm now in the frame of mind that 25 or 50 years is really not that long, and already programmers have created a world-wide web of information and computation, used by a significant number of people, that seems capable of outlasting any of its current parts. That's something.

On the other hand programming is still too hard (inexpressive), still too much like it was 25 or so years ago.

a-DOH-bee *doesn't* hurt the web (mostly)

Via Phil Windley, someone (not sure who) continues to get it wrong...

The Web is under attack by proprietary platforms like Air, Silverlight, etc. They have clear and obvious advantages but they have one big disadvantage: their lack of openness. Nick says that we need to accelerate the inclusion of the capabilities of these closed platforms in open standards and browsers. "We need more stuff in the browser faster." Mike says that the competition of Web browsers has led to improvements like Javascript being 10x faster now. Joshua thinks that stories about the threat to the Web are highly overrated. The Web has made great strides over the last few years.
Adobe AIR runs WebKit, JavaScript, etc., just like many other applications. And it has the general ability to connect to HTTP servers, mail servers, XMPP servers, etc. So to claim that "the web is under attack by proprietary platforms like AIR" makes no sense to me.

Sure, AIR is not "a browser" out of the box. That's a good thing. It is an application development platform. Browsers (in the traditional sense) are not good at that.

Clearly the browser implementers are finding ways to become more like AIR, out of necessity. Browsers are not the end of evolution, neither is AIR. They are each points along the way to whatever comes next.

Thursday, December 11, 2008

Everything Becomes Lisp

"Find out how closures and lambda functions make programming easier by
letting you define throwaway functions that can be used in different
contexts. This article details how useful closures are as a functional
programming construct within PHP V5.3 code."
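The closures being described are not PHP-specific; the same idea is expressible in any language with lexical closures. A minimal Python sketch of a "throwaway function" that closes over its defining context:

```python
def make_adder(n):
    # The returned function closes over n, so each call to
    # make_adder produces a small, context-specific function.
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print(add5(10))  # -> 15

# Throwaway closures compose naturally with higher-order functions:
print(list(map(make_adder(1), [1, 2, 3])))  # -> [2, 3, 4]
```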


Wednesday, December 10, 2008

That's some damn hobby you've got there

Via Isaac Ropp...

a-DOH-bee: Please Don't Hurt The Web

From a-DOH-bee...

"Adobe(R) Wave™ is an Adobe AIR application and Adobe hosted service
that work together to enable desktop notifications. It helps
publishers stay connected to your customers and lets users avoid the
email clutter of dozens of newsletters and social network update
messages. Adobe Wave is a single web service call that lets publishers
reach users directly on their desktop: there's no need to make them
download a custom application or build it yourself."


Em, why not use the Atom format and AtomPub?

Monday, December 08, 2008

Sucks - one media downsize too many

Tim Riley is hilarious. And the best newsman in Portland, if not the world.


Sunday, December 07, 2008

Programming Languages and Concurrency

This is in response to some questions from an earlier post...

I remember reading posts of yours criticizing STM in the past. Given some of the link love towards Clojure lately, have you softened on STM a bit? What are your opinions on the state of concurrency and the current crop of language out there now?
I don't know if I have softened on STM. My main concern with STM is that it is experimental, and moving it into widely used languages like Java and C# would be premature, at best. I am happy to see it in languages like Clojure and Haskell, where it can be experimented with.

Look at Clojure's ref mechanisms: there are several kinds, and now "atoms" are a kind of ref that seems intended to be used *instead* of STM. (Or maybe it is considered an alternate, atomic form of STM isolated from the previous, transactional STM?) That's all fine and good - Clojure is an experiment in concurrent, mostly-functional programming.
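For illustration only: an atom is, roughly, a single mutable reference updated by applying a function atomically, outside of any transaction. A hedged Python sketch of that idea (the names Atom, deref, and swap mirror Clojure's; a lock stands in here for Clojure's lock-free compare-and-swap):

```python
import threading

class Atom:
    # Sketch of a Clojure-style atom: uncoordinated, synchronous
    # updates to one reference, with no surrounding transaction.
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def deref(self):
        return self._value

    def swap(self, f, *args):
        # Apply f to the current value atomically, as swap! does.
        with self._lock:
            self._value = f(self._value, *args)
            return self._value

counter = Atom(0)

def bump_many():
    for _ in range(1000):
        counter.swap(lambda v: v + 1)

threads = [threading.Thread(target=bump_many) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.deref())  # -> 4000, no lost updates
```

The point of the sketch is the contrast: a ref participates in coordinated transactions, while an atom is a standalone update, which is why it can be used instead of STM for many cases.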

But where will STM bottom out? Will we end up with five kinds of refs? Ten? How many rules and exceptions to rules will we need to use them all effectively?

Having more people program more real applications with Clojure, Scala, Haskell, Erlang, and so on is great. Clearly we are just at the very beginning of a long evolution of concurrency mechanisms and how they are expressed in programming languages.

Thank goodness this is happening in widely varying languages, and not just in Java and C#. We'll certainly need multiple languages and mechanisms for addressing different kinds of concurrency problems. And many more than that to explore with enough variety to determine the better ones.

I am personally interested currently in how simple a language and its concurrency mechanisms can be for the widest variety of problems. While Haskell, Scala, Clojure, etc. are much better than Java or C#, they are still more complex than necessary for many applications.

Moreover, the "client" aspects of concurrency problems and opportunities continue to be largely neglected. However small and mobile or rich and graphical, we have to get away from the awful browser/ajax model. Yet even more clearly, there is no reason to retreat to the old desktop model per se. I'm afraid we're going to be stuck with crap for a long time here.


Clojure takes another step away from Lisp. Clojure now has "atoms" which are not to be confused with Lisp atoms. Not necessarily a bad thing. I'm just saying.

Since they are a kind of "shared reference" maybe they could have been called "arefs". Oops. No. That is a ubiquitous Common Lisp function for arrays. I guess there are only so many terms to be applied. So to speak.

Lisp is a great vehicle for language experiments. Clojure is a vehicle for that on the JVM, specifically in the area of transactional memory and coordination via shared references. I'm not a big, big fan of this scheme so far, but I'm interested to see where it goes.

Down On Main St.(ream)

Mark Watson lines them up...

Gambit-C Scheme does have the Termite package for concurrency but something more main-stream like Scala or Haskell seemed like a better idea.
Ouch. Maybe lisp won't be our overlord.

Actually Gambit does not support multiple cores in one OS process either. It has Erlang-level threading, and almost-as-good mailboxing, but only on one CPU.

So then Clojure? A good bit simpler than Scala or Haskell, and much more expressive for symbolic programming as well.

Mark points to an impressive comparison favoring Haskell over Scala (assuming all else is equal). I wonder what the learning curve is to get really good numbers for various kinds of programs in Haskell, with its laziness, etc. Especially good memory usage numbers.


About Me

Portland, Oregon, United States
I'm usually writing from my favorite location on the planet, the pacific northwest of the u.s. I write for myself only and unless otherwise specified my posts here should not be taken as representing an official position of my employer. Contact me at my gee mail account, username patrickdlogan.