I attended a lecture by Maurice Herlihy on the "Transaction Manifesto". The paper on his site called "Software Transactional Memory for Dynamic-sized Data Structures" reflects the same ideas. It was an interesting talk, and the rest of this post explains why. Moreover, I've now added Herlihy to my list of interesting CS thinkers, based on a number of papers and software systems available at his site.
The talk was more of a plea to implement a transaction mechanism in hardware. The argument is that "compare and swap" and similar read-modify-write instructions are too low level.
"Too low level" means there is a lot of software complexity between a CAS instruction and the mechanisms programmers actually want. The suggested instructions would operate on more than two memory locations, and those locations would not necessarily be close to each other.
The ideas are derived from database concurrency mechanisms, i.e. simple transactions with a begin, some instructions, and a commit or rollback. Essentially this treats RAM like a simple database.
A couple of thoughts came to mind:
- Combine these kinds of instructions with battery-backed RAM and you've significantly simplified the implementation and improved the performance of database mechanisms.
- Next, going back to the discussions around virtual machines this past week, this level of concurrency control should at least show up at the virtual machine level. Why emulate a 1970s instruction set?
Much effort is spent ignoring or recreating lessons learned in the database community, apparently in hardware as well as in software. Alan Kay has wondered why Moore's Law, although wildly accurate and beneficial, has not translated more proportionately into software improvements. Here is an example.