On the topic of representing persistent formats for RDF, I came across another favorite paper from some years ago. Reading through this paper again in 2005 (Adaptive Framework for the REA Model), there appears to be some correspondence between RDF triples and REA, which also has a kind of "triple" model: Resources, Events, and Agents. I can't say exactly what that correspondence is, given my superficial understanding of both.
The paper's approach to incremental computation yields a kind of graph model that corresponds naturally to a graph of RDF triples representing the same information. That is, I think RDF could be used in a straightforward manner to represent the fine-grained elements of REA models. Both models have scenarios where connected graphs would be useful, and this paper points out one of them: incremental computation, especially for aggregating information and computing results not represented directly in the model itself.
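To make the correspondence concrete, here is a minimal sketch of REA facts encoded as RDF-style (subject, predicate, object) triples, with a small aggregation computed by walking the graph. All of the URIs, predicate names (`rea:stockflow`, `rea:provider`, etc.), and data are invented for illustration; they are not taken from the paper or from any published REA ontology.

```python
# Hypothetical sketch: REA facts as RDF-style (subject, predicate, object)
# triples in plain Python. All names below are illustrative assumptions.
EX = "http://example.org/"
REA = "http://example.org/rea#"

triples = [
    # An economic Event (a sale) ...
    (EX + "sale1", REA + "type", REA + "EconomicEvent"),
    # ... linked to a Resource (the cash received) ...
    (EX + "sale1", REA + "stockflow", EX + "cash100"),
    (EX + "cash100", REA + "amount", 100),
    # ... and to the Agents party to the event.
    (EX + "sale1", REA + "provider", EX + "acme"),
    (EX + "sale1", REA + "recipient", EX + "alice"),

    (EX + "sale2", REA + "type", REA + "EconomicEvent"),
    (EX + "sale2", REA + "stockflow", EX + "cash250"),
    (EX + "cash250", REA + "amount", 250),
    (EX + "sale2", REA + "provider", EX + "acme"),
    (EX + "sale2", REA + "recipient", EX + "bob"),
]

def total_provided_by(graph, agent):
    """Aggregate a result not stored directly in the model:
    events provided by `agent` -> their resources -> sum of amounts."""
    events = {s for (s, p, o) in graph
              if p == REA + "provider" and o == agent}
    resources = {o for (s, p, o) in graph
                 if s in events and p == REA + "stockflow"}
    return sum(o for (s, p, o) in graph
               if s in resources and p == REA + "amount")

print(total_provided_by(triples, EX + "acme"))  # 350
```

The total (350) appears nowhere in the triples themselves; it is derived by traversing the connected graph, which is exactly the kind of aggregation the incremental-computation approach is meant to keep cheap as new events arrive.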
This is an interesting approach, and I wonder whether the authors or others have pursued it since 1998. (Aside: this creative approach is yet another to come out of the Smalltalk community, in particular Ralph Johnson and his U. Illinois squadron.) If you are not familiar with the history of ideas that have streamed out of the Lisp and Smalltalk communities, start your search now, because the list is long.
Also -- this would seem to be an interesting solution for throwing hardware at a problem rather than the typical low-level engineering of the computation. (Disclaimer -- I work for a hardware vendor, so you might conclude I have a personal interest in this approach. Actually I am generally interested in simple software approaches that can be accelerated by the evolution of hardware. I have seen this several times over the last 20+ years, working for various employers on various problems. Through it all, reasonable solutions in Lisp, Smalltalk, etc. became faster simply by migrating the hardware. Meanwhile the industry at large migrated its languages to take on more features of these earliest dynamic languages, born in the laboratories of the 1960s.)
On another aspect of hardware: unfortunately for solutions like the one in this paper, I was (it now seems unrealistically) thrilled by the advances in Magnetic RAM (MRAM). This hardware seemed to be on the cusp of a price and density breakthrough that would have virtually eliminated the need for secondary disk-based storage, even for very large persistent data sets. That does not seem to be panning out on the rosy schedule presented a few years ago. There appear to be some real limitations that will be difficult to overcome. The run-time / persistence mapping problem will be with us for a while -- all the more reason to strive for simple run-time models as well as simple persistence models.