Stu says stuff about reuse...
From this viewpoint, "build it and maybe people will use it later" is a bad thing. SOA proponents really dislike this approach, where one exposes thousands of services in hopes of serendipity -- because it never actually happens.

The early "mashups" were certainly serendipitous. I would expect most service providers on the web today are planning for machine-to-machine use to a much greater extent than even a couple of years ago.

Yet, on the Web, we do this all the time. The Web architecture is all about serendipity, about letting a thousand information flowers bloom, regardless of whether it serves some greater, overarching, aligned need. We expose resources based on use, but the constraints of the architecture enable reuse without planning. Serendipity seems to result from good linking habits, stable URIs, a clear indication of the meaning of a particular resource, and good search algorithms to harvest and rank this meaning.
This difference is one major hurdle to overcome if we are to unify these two important schools of thought and build better information systems from the result.
Is it really "serendipitous" when an organization follows those good web practices and enters formal-enough agreements with participants, say in an intra-enterprise-services situation? No, that seems very much "planned". The biggest hurdle may be the added, stronger demand to negotiate terms for support and upgrades.
This is necessary for any kind of service, networked or otherwise, that poses significant risks to stakeholders. If you buy SAP systems and bring them in-house, or if you network out to Salesforce, you negotiate terms of service with in-house IT, with the vendors, with the ISPs, and so on.
1 comment:
I was referring to "reuse", not so much "use" and SLAs. Though I think your entry highlights an important observation: Enterprise IT cannot live by network effects alone.