"I have a mind like a steel... uh... thingy." Patrick Logan's weblog.


Friday, December 22, 2006

Intelligent Design?

In a comment on an earlier post, Jerzy points to an Economist article on why Windows, and particularly Vista, is continuing to win the OS war. An excerpt from the Economist...

But unlike Windows, downloading applications to run on Linux and ensuring all the necessary “libraries” are in place is most certainly not for novices.

But the real difference between Unix-like operating systems and Windows is their design philosophies. Windows may squander computing power through its clumsy architecture. But by favouring simplicity of use over simplicity of design, Microsoft has been able to leverage cheap but powerful commodity hardware, to provide cost-effective software solutions.

On the first point about dependencies, I have found recent releases of Suse and Ubuntu, particularly the latter, to handle this well, including updates to previously installed software. I have certainly encountered at least my share of problems with Windows components, but then I have also used Windows for a wider variety of purposes, e.g. I have not yet used desktop Linux with cameras, digital audio devices, etc. What little experience I have with Macs for these purposes (external DVD recorder, digital camera, and printers) tells me that they work a good bit better than Windows.

The second point that troubles me in this quote is the claim that Microsoft's operating system as a whole is more efficient and cost-effective from a hardware-cost perspective. I can offer strong evidence against that regarding Windows XP and Windows Vista. I have hardware, long since paid off, that continues to run every version of Linux I throw at it. On the Windows partition, however, it cannot run anything more recent than Windows 2000, i.e. XP (through experience) and Vista (I am told) will not even install, let alone run as efficiently as the newest Linux distributions.

I will also touch on their notion that Microsoft has given Windows "simplicity of use" at the cost of design complexity. "That's funny ha-ha!" is the only reaction that comes to me. By what measure is Windows easier to use than MacOSX or Ubuntu Linux?

Update: (via James Robertson, "Load the Stupidity Module") an analysis of the cost of Vista's "content protection" mechanisms. If the analysis is right, then Vista will raise everyone's hardware costs, not just Vista users'. Sheesh. Thanks.

As a user, there is simply no escape. Whether you use Windows Vista, Windows XP, Windows 95, Linux, FreeBSD, OS X, Solaris (on x86), or almost any other OS, Windows content protection will make your hardware more expensive, less reliable, more difficult to program for, more difficult to support, more vulnerable to hostile code, and with more compatibility problems.
Miguel de Icaza's take on this ("Content, Restriction, Annulment and Protection (CRAP)") is classic...
Microsoft: Shooting itself in the foot. One toe at a time.

Inversion of Containment

Update: Guy Nirpaz addresses the issue further. I'm in the middle of the USC/Michigan game and can't concentrate yet.

End

Guy Nirpaz talks about all the containers available for Java applications. Some are heavier than others. Some provide more lifecycle control than others. Co-workers have, for several reasons, looked at OSGi and at JBoss' use of JMX/MBeans.

My Jini goggles are on right now for related reasons. While Jini per se is not a "container", there are mechanisms built into and on top of Jini that provide some lifecycle support. In-the-large: e.g. Rio. In-the-small: leases. And the Jini mobile object mechanism is a kind of runtime "injection" mechanism with security.

What is a "container" and are there pieces of Jini that provide an "inversion of containment" to meet similar objectives? I'm just thinking. Do you need something like Spring to abstract Jini, when Jini itself is an abstraction of lookup, injection, etc.?
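The lease idea in particular is easy to sketch. Below is a toy, self-contained illustration of the "in-the-small" lifecycle control leases give you. This is not the real net.jini.core.lease API, and the class and method names are my own; it is just the shape of the idea: a grant expires unless the holder renews it, so a dead or partitioned client's resources get reclaimed automatically.

```java
// Toy lease arithmetic: a grant is good for a fixed duration, must be
// renewed before it lapses, and renewal extends from "now", not from
// the old expiration. (Illustrative only -- not net.jini.core.lease.)
public class LeaseDemo {

    /** Absolute expiration time for a lease granted at grantMillis. */
    public static long expiration(long grantMillis, long durationMillis) {
        return grantMillis + durationMillis;
    }

    /** Once "now" reaches the expiration, the grantor may reclaim the
        resource without any explicit release from the holder. */
    public static boolean expired(long expirationMillis, long nowMillis) {
        return nowMillis >= expirationMillis;
    }

    /** Renewing extends the lease from the moment of renewal. */
    public static long renew(long nowMillis, long durationMillis) {
        return nowMillis + durationMillis;
    }

    public static void main(String[] args) {
        long exp = expiration(0L, 1000L);          // granted at t=0, good to t=1000
        System.out.println(expired(exp, 500L));    // false: still held
        exp = renew(900L, 1000L);                  // renewed at t=900, good to t=1900
        System.out.println(expired(exp, 2000L));   // true: the holder went away
    }
}
```

The point of the constraint is that by forcing the holder to keep asserting interest, the grantor never needs a distributed garbage collector or an explicit shutdown handshake from every client.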

The Movement You Need

Interesting movements in the Jini/Javaspaces world. Looks like Apache will likely accept Jini as a full-fledged project, and there is a newly published book from Apress.

A prediction, or at least a suggestion, for 2007... the various Apache and other open source ESBs should look at Jini/Javaspaces, to (1) consider it for under-the-hood capabilities and/or (2) expose its capabilities alongside the ESB's own.

Gigaspaces has taken a step in this direction with Mule. The open source combination of ESB and J/JS would have many of the same advantages, in a completely Apache/OSS fashion.

Kudos

Although I rag on MSFT 99% of the time (and think they deserve it, but more on that later :-)...

I was just thinking they do deserve one big nod for seeking out people to help get them into the new world, like Ozzie, Cunningham, and Udell. Hopefully the organization will pay attention.

Wednesday, December 20, 2006

Update In Place vs. Coordination Space

Data caches are good. Even distributed, shared, clustered data caches are good. There are many applications where this is the right thing to use.

Comparing them to Javaspaces is kind of nagging at me. In some cases one product can serve both purposes. E.g. in Gigaspaces, the developer can choose the Javaspace API or the Map/Cache API. They can also do funky update-in-place things within a space, too, but that is explicitly not in the Javaspaces API per se.

In any case I think it is imperative that a developer choose what kind of mechanism they need for specific situations. And I think there are critical differences between the Javaspace mechanism and a data cache mechanism.

In particular, certain data caches run *in* the address space of the client JVMs, while a Javaspace is *never* in the address space of the client JVMs. (There are caches that run outside the address space, e.g. memcached, and that's another angle on the topic.) When these caches use the java.util.Map API there are potentially funky goblins at play. A regular Java Map comes with certain expectations.

Consider an object V1 with a direct object reference to object O1. Now consider another object K1. Put V1 in a shared, clustered cache using K1 as the key. Update object O1 in that JVM. In another JVM in the cluster (JVM'), do cache.get(K1') to get a copy of V1' referencing a copy of O1'.

In this second JVM', update that O1'. Then from there do cache.put(K1', V1'). Note: a cache is *not* a transparent transactional memory OODB like Gemstone/S or Gemstone/J. Not that you'd want one of those anymore, but they do go to pains to maintain referential integrity, which is the cliff this example is heading off of. So the put in JVM' will soon update the first JVM so that the key K1' leads to the value V1' with a reference to the object O1'.

Meanwhile, in the first JVM there are still, outside the cache, the objects K1, V1, and O1, with V1 referencing O1. When this JVM does cache.get(K1) it gets back V1' with a reference to O1'.

Question: In the first JVM what are the identities of the objects K1, K1', V1, V1', O1, and O1'?

Answer: I believe the answer depends on the cache implementation. I am not sure what the JCache spec says. Either there is leeway or not all caches do the same thing, and none of these things may be what the developer expects based on experience using the out-of-the-box java.util.Map classes in single or concurrent threads.
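The scenario above can be reproduced in a single JVM. Below is a toy "cache" that deep-copies values through serialization, the way a clustered cache marshals values between members. This is not any particular product's implementation, and the class names (O, V) are just the ones from the example; it only shows why identity is the first casualty:

```java
import java.io.*;
import java.util.*;

// Toy sketch: values cross in and out of the "cache" via serialization,
// so what comes back is always a copy -- never the object you put in.
public class CacheIdentityDemo {
    static class O implements Serializable { int n; }
    static class V implements Serializable { O ref; }

    // Serialize-then-deserialize: what a value looks like after
    // crossing between JVMs in a cluster.
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T copy(T obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            new ObjectOutputStream(bytes).writeObject(obj);
            return (T) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Map<String, V> cache = new HashMap<>();

        V v1 = new V();
        v1.ref = new O();
        cache.put("K1", copy(v1));           // V1 crosses into the cache...

        V v1prime = copy(cache.get("K1"));   // ...and back out, in "JVM prime"

        // Identity is gone: the cache handed back copies, so the live
        // objects outside the cache are no longer the same objects.
        System.out.println(v1 == v1prime);          // false
        System.out.println(v1.ref == v1prime.ref);  // false

        // And updates to O1 outside the cache are invisible to the copy.
        v1.ref.n = 42;
        System.out.println(v1prime.ref.n);          // 0
    }
}
```

A plain java.util.Map would print true, true, 42 here; the serializing cache prints false, false, 0. That gap between Map expectations and cache behavior is exactly the goblin.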

In JBoss Cache the developer has the choice of a "tree cache" or a "pojo cache" (maybe others). The behavior will be different based on this choice. A pojo cache will patch up object references *within* the stream used to cluster the distributed caches. The tree cache does not. And I think this behavior can vary based on the use of their AOP mechanism.

Even the pojo cache, though, as far as I can tell, never patches up references between cached objects and their former referents in a JVM outside the cache per se. I.e. the deserializer does not do a sweep of the entire JVM address space to fix identity problems.

This is not necessarily a bad thing when the cache is used as a read-mostly, shared data cache backing some external data with a well-controlled update convention.

A Javaspace does not have this problem because of a simplifying specification -- Javaspaces do not deal with object identity in JVMs. You always get a new object. If you want to deal with identity then code it yourself in the Entry objects. But really don't do that very much!

That is different, and maybe not what you'd want at first. But that may indicate you're not using the best mechanism for your problem, or you're not yet thinking about the mechanism the best way. This is one of those "architectural constraints" that seem to get in your way but actually can simplify the solution for certain classes of problems. E.g. when the problem is "coordination" of processing rather than "clustering" of read-mostly data.

Yes?

Tuesday, December 19, 2006

Deliberately Misguiding Windows Me

From Information Week. I have a computer at home that my wife uses. She needs Windows; the PC is fine, but it won't even run XP. So it runs Windows 2000. (And every version of Linux up to the most recent Ubuntu and Suse! But she needs to boot into Windows.) We all know it's "VersionNT" under the hood. Not that I am looking to run MSFT Defender, but this is just silly. $10 billion USD at a minimum spent on Vista! Unlikely I'll ever have it in the house. At some point I'll move her Windows 2000 and some other XP systems my kids run to virtual images on Linux and/or MacOSX, where they will remain in their formaldehyde virtually forever.

As other new products emerge from Microsoft in 2007 and beyond, more and more of them are likely to leave Windows 2000 out of the party. Which of these installation restrictions are caused by a real lack of capabilities in Windows 2000, however? Are any of them merely a "squeeze play" by Microsoft to convince buyers that it's necessary to immediately upgrade all PCs to Vista and all servers to Server 2003 or the forthcoming Longhorn Server?

One example of this conundrum is Microsoft's Windows Defender program. This antispyware program can be downloaded for free, but it will only install on Windows XP, Server 2003, and higher. The application won't install on Windows 2000, according to Microsoft's own product documentation.

Users have reported, however, that this is simply an artificial rule built into the Installshield package that copies Defender files to disk.

The installer contains a condition defined as VersionNT > 500. (Windows 2000 is technically considered version 5.0 of Windows NT.) Admins who've removed this condition using Orca, an Installshield editor, say Defender then installs and runs fine on Windows 2000.
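For readers unfamiliar with Windows Installer internals, the gate described above is just a row in the package's launch-condition table, roughly like this (illustrative; not Defender's actual row, and the description text is my own guess):

```
Condition          Description
VersionNT > 500    This application requires Windows XP or later.
```

Deleting that row with an MSI table editor such as Orca removes the version check entirely; nothing in the copied files themselves enforces it, which is the reporters' point.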


About Me

Portland, Oregon, United States
I'm usually writing from my favorite location on the planet, the pacific northwest of the u.s. I write for myself only and unless otherwise specified my posts here should not be taken as representing an official position of my employer. Contact me at my gee mail account, username patrickdlogan.