The Real Economics of SOA

The title of the InfoQ article “Economics of Service Orientation” caught my interest, but unfortunately I was left disappointed. The article missed the mark somewhat and, in its eagerness to concentrate on the benefits, overlooked some of the important costs of Service Orientation.

In my view, the real economics of SOA is about how many times you pay for the same function.

In the world of packaged applications you have only very coarse control over the number and bundling of the functions available to you. If you buy a packaged application, you typically buy a bundle of functions or capabilities – some of these, hopefully, will be functions that you need; others will be functions that you don’t.

Suppose, for example, that you buy a billing application with built-in functionality for handling customer contact details. You may not need customer contacts in your billing application because that capability already exists in your CRM. However, as delivered, the billing application requires its own proprietary customer contact module because it is incapable of working with a third party’s CRM data. So there are a number of costs associated with purchasing a packaged application:

  1. The cost of the functionality that you require.
  2. The cost of the functionality that you don’t require (the vendor charges licence and support fees for the whole package, not just the bits you really want to use).
  3. The infrastructure cost of hosting all this functionality – required and not required.
  4. The cost of integrating the functionality you require into your business processes.
  5. The cost of integrating and potentially mitigating the functionality that you don’t require.

Consider cost 5 in the context of my example. Even though you have a perfectly good CRM, the billing system requires contact details to be managed within its own application database, so you have to integrate your CRM with the billing application. This integration may have side-effects which need to be mitigated: you may need to invent whole new business processes to manage the duplication of data between two or more systems and the inevitable synchronization problems that arise.

In an SOA utopia, you have finer-grained control over the functions that you acquire from a vendor. In particular, you can buy only the functions that you need and then have them plug into the other services in your information infrastructure. This saves a number of costs:

  1. You only pay for the functions that you require.
  2. You don’t pay for the functions that you don’t require.
  3. You only pay to host the functionality you have bought – not a bunch of tag-alongs. And if your function is remotely hosted, the cost of hosting the new functionality may be very low.
  4. You pay only the cost of integrating the required functionality into your business processes.
  5. You don’t have to pay to integrate and mitigate functionality you don’t require.

So the economic value of SOA is in the finer granularity with which you can manage the cost of your IT functionality. The job of the enterprise architect moves from managing a portfolio of applications – complete with duplicates and overlaps – to one of managing services or functions, with hopefully minimal duplicates and overlaps.

But this picture wouldn’t be complete without considering the additional overheads required for a well-functioning SOA. Architectural standards and governance are required to ensure that you buy the right services at the right time, and that those services can be integrated properly into your business processes. You should then add in the following costs:

  • The cost of running an enterprise architecture function – managing standards and governance.
  • The higher cost of acquiring a specific function, due to the additional planning and research required to get the “right” function for your business processes and to fit it into your enterprise services framework.
  • The higher cost of integration: exposing a function as a service generally costs more than building a point-to-point integration, because of the greater degree of planning involved in providing a service in a reliable and re-usable manner.

Hopefully, though, the additional cost of architectural governance and standards will be offset by the savings from managing services rather than packaged applications.

As a final note, it’s interesting that the economic model is much the same for vendors as it is for their customers. This is why many packaged application vendors are hopping on the SOA bandwagon: they have acquired applications with overlapping functionality and need a more fine-grained approach to managing and selling that functionality.

The Power of Later

Steve Vinoski nominates RPC as an historically bad idea, yet the synchronous request-reply message pattern is undeniably the most common pattern out there in our client-server world. Steve puts this down to “convenience”, but I think it goes deeper than that, because RPC is actually not convenient – it causes an awful lot of problems.

In addition to the usual problems cited by Steve, RPC places heavy demands on service capacity. Consider the SLA requirements for an RPC service provider: an RPC request must be answered within a reasonable time interval – typically a few tens of seconds at most. But what happens when the service is under heavy load? What strategies can we use to ensure a reasonable level of service availability?

  1. Keep adding capacity to the service so as to maintain the required responsiveness…damn the torpedoes and the budget!
  2. The client times-out, leaving the request in an indeterminate state. In the worst case, the service may continue working only to return to find the client has given up. Under continued assault, the availability of both the client and the server continues to degrade.
  3. The service stops accepting requests beyond a given threshold. Clients which have submitted a request are responded to within the SLA. Later clients are out of luck until the traffic drops back to manageable levels. They will need to resubmit later (if it is still relevant).
  4. The client submits the request to a proxy (such as a message queue) and then carries on with other work. The service responds when it can, and hopefully the response is still relevant at that point. (Options 2 and 4 are contrasted in the sketch below.)
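
To make the contrast concrete, here is a minimal, self-contained Java sketch of Options 2 and 4. An executor and an in-memory queue stand in for a real RPC stub and a real message broker; the service, names and timings are all invented for illustration:

    import java.util.concurrent.*;

    public class CopingStrategies {
        // Stand-in for a remote service under heavy load (hypothetical; a real
        // client would be calling an RPC stub over the network).
        static String slowService(String request) throws InterruptedException {
            Thread.sleep(5_000); // the overloaded service takes 5 seconds
            return "response to " + request;
        }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(2);

            // Option 2: synchronous request with a client-side timeout.
            Future<String> rpc = pool.submit(() -> slowService("order-42"));
            try {
                System.out.println(rpc.get(2, TimeUnit.SECONDS));
            } catch (TimeoutException e) {
                // The client has given up, but the service is still working:
                // the request is now in an indeterminate state.
                System.out.println("timed out; outcome unknown");
            }

            // Option 4: hand the request to a proxy (here an in-memory queue,
            // standing in for a message broker) and carry on with other work.
            BlockingQueue<String> proxy = new LinkedBlockingQueue<>();
            proxy.put("order-43");
            System.out.println("request queued; client carries on");

            // The service drains the queue when it has the capacity to do so.
            pool.submit(() -> {
                System.out.println(slowService(proxy.take()));
                return null;
            });

            pool.shutdown();
            pool.awaitTermination(15, TimeUnit.SECONDS);
        }
    }

Note that the catch block in the Option 2 branch is exactly the indeterminate state described above: the client has moved on, yet “order-42” may still complete on the server.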

Of all these coping strategies, Option 2 seems to be the most common, even though in many cases one of the others would be more efficient or cost-effective. Option 1 might be the preferred option given unlimited funds (and ignoring the fact that it is often technically infeasible), but in the real world Option 2 usually wins by default.

The best choice depends on what the service consumer represents and on the cost of the side-effects when the service fails to meet its SLA.

When the client is a human – say ordering something at our web site:

  • Option 2 means that we get a pissed-off user. That may represent a high, medium or low cost to the organization, depending on the value of that user. In addition there is the cost of indeterminate requests: what if a request was executed after the client timed out? There may be a cost in cleaning up or reversing those requests.
  • Option 3 means that we also get a pissed off user – with the associated costs. We may lose a lot of potential customers who visit us during the “outage”. On the positive side, we minimise the risk/cost of indeterminate outcomes.
  • Option 4 is often acceptable to users – they know we have received their request and are happy to wait for a notification in the future. But there are some situations where immediate gratification is paramount.

On the other hand, if the client is a system participating in a long-running BPM flow, then we have a different cost/benefit equation.

  • For Option 2, we don’t have a “pissed off” user, but the transaction times out into an “error bucket” and is left in an indeterminate state. We must spend time and effort (usually costly human effort) to determine where that request got to and to remediate that particular process. This can be very expensive.
  • Option 3 once again has no user impact, and we minimise the risk of indeterminate requests. But what happens to the halted processes? Either they error out and must be restarted – which is expensive – or they must be queued up in some way, in which case Option 3 becomes equivalent to Option 4.
  • In the BPM scenario, Option 4 represents the smoothest path. Requests are queued up and acted upon when the service can get to them. All we need is patience, and the process will eventually complete without the need for unusual process rollbacks or error handling. If the queue is persistent then we can even ride out a complete outage and restoration of the service, as the sketch below shows.
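
To sketch what Option 4 might look like in code, here is a minimal JMS producer. The JNDI names and the queue are hypothetical and broker-specific; the point is the PERSISTENT delivery mode, which has the broker write the message to disk so that the request survives a complete outage of the service:

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class QueuedSubmission {
        public static void main(String[] args) throws Exception {
            // JNDI lookups; these names are hypothetical and depend on the broker.
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue requests = (Queue) ctx.lookup("jms/ProvisioningRequests");

            Connection conn = factory.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(requests);
                // PERSISTENT delivery: the broker stores the message on disk, so
                // the request survives a broker restart or a full service outage.
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                producer.send(session.createTextMessage("provision order 42"));
                // The BPM process carries on; the service replies when it can.
            } finally {
                conn.close();
            }
        }
    }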

So if I were a service designer planning to handle service capacity constraints, for human clients I would consider Options 3 and 4 first (in that order), weighing them against the costs of Option 2. For BPM processes, where the clients are “machines”, I would prefer Option 4 every time. Why make work for myself handling timeouts?

One problem I see so often is that solution designers go for Option 2 by default – the worst of all the options available to them.

Ian Robinson on Coupling

In my opinion, coupling is the most fundamental attribute of a system architecture, and tight coupling is probably the most common architectural problem I see in distributed systems. The manner in which system components interact can be a chief determinant of the scalability and reliability of the final system.

So I really like Ian Robinson’s post on Temporal and Behavioural Coupling where he uses two coupling dimensions and the inevitable magic quadrant to classify systems based on their degree of temporal and behavioural coupling.

See Ian’s post for the slick professional graphics, but to summarise: event-oriented systems with low coupling occupy the “virtuous” third quadrant of the matrix, while the brittle “3-tier” applications that many of us struggle with occupy the “evil” first quadrant, where coupling in both dimensions is high.

However, I’m a little miffed to see no mention of my favourite “document-oriented message” in Ian’s diagram. As Bill Poole writes, document messages have lower behavioural coupling than command messages, but more than event messages. So would you put document-oriented messages near the middle top of the matrix, between command-oriented and event-oriented messages? Unfortunately that would break the symmetry. But it also highlights another problem.

Any type of message – document, command or event-oriented – could be temporally tightly or loosely coupled. Temporal coupling is more a property of the message transport than of the message type. So I suggest that the two coupling dimensions are characterised as follows:

  • Temporal coupling – characterised by the message transport, from RPC (tight) through to MOM (loose).
  • Behavioural coupling – characterised by the message type, from command-oriented (tight) through document-oriented to event-oriented (loose). (The sketch below makes this dimension concrete.)
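
Here is a sketch using hypothetical interfaces to illustrate the behavioural dimension; the more precisely the sender dictates the receiver’s behaviour, the tighter the coupling:

    // Hypothetical interfaces illustrating the behavioural coupling spectrum.

    // Command-oriented (tight): the sender dictates exactly what the receiver must do.
    interface BillingCommands {
        void applyDiscount(String accountId, double percent);
    }

    // Document-oriented (middling): the sender hands over a business document;
    // the receiver decides for itself how to process it.
    interface OrderReceiver {
        void receive(OrderDocument order);
    }

    // Event-oriented (loose): the sender merely announces that something happened;
    // any number of subscribers may react, or none at all.
    interface OrderListener {
        void onOrderPlaced(OrderPlacedEvent event);
    }

    // Minimal placeholder types so the sketch compiles.
    class OrderDocument { /* order lines, customer reference, ... */ }
    class OrderPlacedEvent { /* order id, timestamp, ... */ }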

It so happens that distributed 3-tier systems generally employ both command-oriented messages and RPC transports – hence making them inherently “evil”. Events, on the other hand, are naturally virtuous: being asynchronous, they are typically carried over MOM transports (it’s difficult to request an event notification).

Between heaven and hell, it is in the murky mortal realms of SOA where we need to be constantly mindful of the interactions between message type and transport – lest our system end up in limbo.

Martin Fowler on Humane Registry

I have long held the belief that governance is more about effective communication than anything else. To quoth my own horn:

…what is needed is a shared understanding of the architecture principles, frameworks, standards and best practices used within your organisation for providing and consuming Services.

How do you ensure that this shared understanding is
a) communicated effectively,
b) understood, and
c) implemented by all?

A corollary of this view is that any governance tool ought to support this communication as its primary function. I’ve often thought that a wiki would make a pretty good governance tool, so it’s good to see that Martin Fowler agrees. He describes an interesting, people-centric “humane registry” based on a wiki, with just enough automation to remove the drudgery (a toy sketch of such automation appears below). Martin describes the principles of the registry as:

  • People develop and use services, so orient it around people
  • Don’t expect people to enter stuff to keep it up to date; people are busy enough as it is.
  • Make it easy for people to read and contribute.

(sorry UDDI, thank you for playing).
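
As a toy illustration of “just enough automation”, here is a sketch that generates a wiki index page from a directory of service descriptors. Everything here is invented for the example – the directory layout, the wiki markup and the file names – but it shows the flavour: the machine does the tedious cataloguing, and people add the narrative:

    import java.io.IOException;
    import java.nio.file.*;

    public class RegistryPageGenerator {
        public static void main(String[] args) throws IOException {
            Path descriptors = Paths.get("services"); // hypothetical: one WSDL per service
            StringBuilder wiki = new StringBuilder("= Service Registry =\n\n");

            // One wiki line per discovered service; people fill in the rest by hand.
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(descriptors, "*.wsdl")) {
                for (Path wsdl : stream) {
                    String name = wsdl.getFileName().toString().replace(".wsdl", "");
                    wiki.append("* [[").append(name).append("]] - descriptor: ")
                        .append(wsdl).append("\n");
                }
            }
            Files.write(Paths.get("ServiceRegistry.wiki"), wiki.toString().getBytes());
        }
    }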

Gartner 2008 SOA User Survey

The Gartner 2008 SOA User Survey is a good read with some surprising insights into SOA adoption.

One interesting development is that the rate of SOA adoption slowed in 2008. About half of the respondents who last year were planning SOA adoption now have no plans to adopt it. The two main reasons given for not pursuing SOA are a lack of SOA expertise and the perceived lack of a business case. These reasons may well be correlated, in the sense that a lack of SOA expertise makes it difficult to build an SOA business case. But the fundamental conclusion is that SOA doesn’t make sense for everybody.

SOA adoption shows considerable geographic disparity. Europe leads (70% currently using), followed by North America (55% currently using) and then Asia (25% currently using). The majority of organizations in Asia have no plans to adopt SOA at all. The report doesn’t really analyse why Asian adoption is so low. My guess would be a combination of factors: a lack of SOA expertise in the region, Asian companies tending to be late technology adopters, and the preponderance of manufacturing in the region, a sector which the survey shows has low SOA adoption compared with others.

Organisation size correlates strongly with both SOA adoption and the range of SOA deployment. There is a sweet spot for mid-size companies: current SOA adoption is highest in companies with between 1,000 and 10,000 employees. Large companies presumably struggle with the governance processes required to adopt SOA enterprise-wide.

A big surprise for me was the correlation between SOA adoption and primary development language. Forty percent of current SOA adopters use Microsoft .NET, and there is a clear trend over the last three years away from Java toward Microsoft .NET and “other” languages such as dynamic languages. Correlation doesn’t imply causality, so there is a lot of wiggle room in how you interpret this, but clearly there is a move away from Java for SOA development. Harking back to the COM/CORBA wars of the ’90s, one of the key factors then was that the Microsoft development environment made COM so easy to develop with, versus the complexities and diversity of CORBA, that COM eventually came to dominate the component world. Is history repeating itself?

Web Services are the dominant SOA model, but a significant minority uses POX and REST approaches. About one third of existing SOA adopters already use, or are planning to adopt, EDA. The report also claims significant plans for WOA adoption, but I’m not convinced by the data. An eyeball comparison between Figure 14 (current WOA adoption) and Figure 15 (planned WOA adoption) doesn’t show a great deal of difference to me, except that Figure 15 looks a little more “peaky” around the 50% mark. So WOA adoption will increase, but I’m not convinced the data shows the increase to be “dramatic”, as stated in the key findings of the report.