From Shahin Khan in The Register...
A major shift is coming. Over the next few years, your ordinary applications will be able to tap into systems with, say, 7,000 CPUs, 50 terabytes of memory, and 20 petabytes of storage. In 2005, Azul Systems will ship compute pools with as many as 1,200 CPUs in a single standard rack... It may take more than a few years for most of us to get to these numbers, but I think at that point the change will be even more dramatic. Unless the software industry (you know, the "agile" industry) once again is too slow to keep up with hardware innovation.
What would change about application design if you could do this? Well, think back to what applications were like when you had just 128K of memory in your PC and a 512KB hard drive. The difference between the capabilities and flexibility of applications in those days and now is the level of improvement that we are talking about.
That's one reason why I'm getting back into Erlang after tinkering with it several years ago. I want to understand how to think in terms of processes that are about as cheap to create as objects are in other languages. One neat thing about Erlang is that the sequential aspects of the language are so spartan, the programmer is forced to think in terms of processes.
A single Erlang node on a single CPU today can comfortably get into the tens of thousands of dynamic processes. What would your system look like running hundreds of thousands or a million dynamic processes, with much of the activity spent collaborating with other systems also running at that scale?
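To make "processes about as cheap as objects" concrete, here's a minimal sketch (the module name `swarm` and the counts are my own invention, not from the post) that spawns N lightweight processes, each holding a bit of state and answering messages, the way you'd hold objects in another language:

```erlang
-module(swarm).
-export([start/1, ping_all/1]).

%% Spawn N lightweight processes; each is a tiny stateful "object".
start(N) ->
    [spawn(fun() -> cell(I) end) || I <- lists:seq(1, N)].

%% Each process loops forever, holding its id as state and
%% replying to pings until told to stop.
cell(Id) ->
    receive
        {ping, From} ->
            From ! {pong, Id},
            cell(Id);
        stop ->
            ok
    end.

%% Ping every process and count the replies that come back.
ping_all(Pids) ->
    Self = self(),
    [Pid ! {ping, Self} || Pid <- Pids],
    length([receive {pong, _} -> ok end || _ <- Pids]).
```

Calling `swarm:start(100000)` spawns a hundred thousand processes in well under a second on an ordinary machine; each one costs a few hundred words of memory, which is what makes process-per-entity designs thinkable at all.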
Yaws is still functioning at over 80,000 parallel connections.
The Java community is discussing a new protected-private keyword. This reminds me of the saying about rearranging the Titanic's deck chairs.