Characterising Architectural Styles II – State

My last post explored some distinguishing characteristics of three common architectural styles in an attempt to understand better how they differ and therefore how they may apply in different contexts. A fourth distinguishing characteristic of these architectural styles is the way in which state is managed during the execution of a business process.

There are three aspects of state that I want to consider:

  • Management – how state changes are initiated and coordinated through a business process.
  • Monitoring – how state is monitored, accessed or derived during the execution of a business process.
  • Consistency – how state is made consistent across different systems involved in a business process.

A summary of the state characteristics of the different architectural styles is given in the following table, along with the other characteristics discussed in my last post.

[Table: architectural characteristics of EAI, SOA and EDA – state management, state monitoring and state consistency, together with the characteristics from the previous post]

In EAI, state is managed within one application and then synchronised to other applications, usually after the business process has completed. This means that in some cases state may be inconsistent across the organisation or its systems. That may or may not be a problem (see ‘Eventually Consistent‘) but it is a common side effect when a business process is executed within one system rather than across systems in an independent process layer. During the execution of an EAI process, there is often no monitoring of state at all; the process may simply replicate data changes to other systems without explicitly tracking state. A limited form of state monitoring may exist in the sense that the local application or associated middleware checks the status of data synchronisation and errors out in the event of an exception. I refer to this as ‘local’ state monitoring. So under the EAI architectural style, state is managed locally, monitored locally (if at all) and is eventually consistent.
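To make this concrete, here is a minimal sketch of EAI-style replication – the systems, the record shape and the `apply` call are all hypothetical, not any particular product's API. The source application owns the state, middleware pushes completed changes outward, and the only monitoring is local error handling:

```python
# Hypothetical sketch of EAI-style replication: state lives in the source
# application; middleware pushes completed changes outward after the fact.

import logging

logger = logging.getLogger("eai.replicator")

def replicate_change(record: dict, targets: list) -> None:
    """Push a completed change to each downstream system.

    There is no central process state: each target is updated
    independently, so the organisation as a whole is only
    eventually consistent.
    """
    for target in targets:
        try:
            target.apply(record)  # hypothetical per-system write call
        except Exception:
            # 'Local' monitoring: the middleware only knows that this one
            # synchronisation failed; no global process state is derived.
            logger.exception("sync of %s failed", record.get("id"))
            raise
```

Note that nothing here knows where the overall business process is up to – failure handling is per-synchronisation, which is exactly the ‘local’ monitoring described above.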

Under SOA, the driver of a business process is BPM orchestration in an independent process layer. In this case, we can say that state is managed centrally (in the BPM layer) and monitored centrally (also in the BPM layer). State in end-systems is updated progressively through the business process (via service calls) and so we could say that state is ‘progressively’ consistent as opposed to ‘eventually’ consistent.
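By contrast, a sketch of central orchestration (the service names `crm`, `billing` and `fulfilment` are invented for illustration): the process layer holds its own state variable and updates each end-system in turn via service calls, so the systems become consistent progressively as the process advances.

```python
# Hypothetical sketch of SOA-style orchestration: the process layer owns
# and monitors state; end-systems are updated step by step via services.

class OrderProcess:
    def __init__(self, crm, billing, fulfilment):
        self.crm, self.billing, self.fulfilment = crm, billing, fulfilment
        self.state = "started"             # central, queryable process state

    def run(self, order: dict) -> None:
        self.crm.record_order(order)       # end-system 1 updated
        self.state = "order_recorded"
        self.billing.raise_invoice(order)  # end-system 2: consistency grows
        self.state = "invoiced"
        self.fulfilment.ship(order)        # end-system 3
        self.state = "completed"           # all systems now agree
```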

Under EDA, there is no central or even local management of state. Instead, events signify distributed actions which together may be used to infer the state of a system. To the extent that any ‘management’ occurs, we could say that state is managed in a distributed fashion – one or more agents each acting on their own. Perhaps it is more accurate to say that state is manifested globally. Conversely, state is monitored centrally within a Complex Event Processing (CEP) layer which correlates events to infer system state. Under EDA, state is progressively consistent because the system is progressively reacting to events which are a by-product of a hidden or implicit business process.
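As an illustration of central monitoring without central management (the event names are hypothetical), a minimal correlator might infer an order's state purely from the events it has observed, even though no component ever sets that state directly:

```python
# Hypothetical sketch of EDA-style monitoring: nothing manages process
# state; a CEP-like correlator infers it from the events it has seen.

from collections import defaultdict

# Milestones of the implicit business process, in order.
MILESTONES = ["OrderPlaced", "PaymentReceived", "Shipped", "Delivered"]

seen = defaultdict(set)  # order id -> set of observed event types

def on_event(order_id: str, event_type: str) -> None:
    seen[order_id].add(event_type)

def inferred_state(order_id: str) -> str:
    """State is the latest milestone whose event has been observed."""
    state = "unknown"
    for milestone in MILESTONES:
        if milestone in seen[order_id]:
            state = milestone
    return state
```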

Progressive Data Constraints

As a follow-up to my last post, Richard Veryard described the concept of Post Before Processing, whereby rules are applied only after the initial information has been safely captured. This is a good way of managing unstructured or semi-structured data, and it reminded me of other cases where data is progressively constrained and/or enriched during processing.
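A minimal sketch of the idea, assuming a two-stage pipeline with invented field names: the submission is captured verbatim first, and validation rules are only applied when the record is later promoted.

```python
# Hypothetical sketch of 'post before processing': capture the input
# verbatim first, apply constraining rules later.

raw_store, validated_store = [], []

def post(submission: dict) -> None:
    """Stage 1: persist whatever arrived; no rules are applied yet."""
    raw_store.append(submission)

def process() -> None:
    """Stage 2: apply rules to captured records; failures stay raw."""
    for record in list(raw_store):          # iterate over a copy
        if "customer_id" in record and "amount" in record:  # example rules
            raw_store.remove(record)
            validated_store.append(record)
```

The point is that a submission can never be lost to a validation failure at the door – it simply waits in the raw store until it can satisfy the rules of the next stage.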

Content management systems use this technique at the unstructured end of the spectrum: source and draft content is pulled into the system from the “jungle” and then progressively edited, enriched and reviewed until it is published. Programmers will be familiar with the same pattern in the way that source code is progressively authored, compiled, unit tested and integrated via source code control (plus a bunch of QA rules and processes).

Many computer-aided design applications also provide the ability to impose different rules through the lifecycle of a process. Many years ago I worked on a CAD application for outside plant management for Telcos which had a very interesting and powerful long transaction facility. Normal mode, representing the current state of the network, enforced an array of data integrity and business rules – such as which cables could be connected to each other and via what type of openable joints.

In design mode, different rules were in place so that model items could be edited and temporarily placed into invalid states. This was necessary because of the connected nature of the model (a Telco network) and the fact that the design environment reflected the future state of the network, which might not correctly “interface” to the current network state. The general mode of operation was multiple design projects, created by different network designers, managing the planning and evolution of the network from its current state to some future state – and multiple future states could exist within the system at any point in time. Design projects followed a lifecycle from draft through proposed and into “as built”, a progression accompanied by various rules governing data visibility, completeness and consistency.

This is a useful model for managing data which is complex, distributed, or collaboratively obtained and maintained: effectively, a process is built around a long transaction which manages the transition of data between states of completion or consistency.
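A sketch of that pattern, with hypothetical stages and rules: the lifecycle stage of a long-running design record determines which validation rules are enforced, and the rules tighten as the record approaches “as built”.

```python
# Hypothetical sketch of progressive data constraints: the rules
# enforced on a design record tighten through its lifecycle.

def no_dangling_cables(design: dict) -> bool:
    # every cable must terminate on a joint
    return all(c.get("joint") for c in design.get("cables", []))

def fully_surveyed(design: dict) -> bool:
    return design.get("surveyed", False)

# Each stage carries the rules of the stages before it.
RULES = {
    "draft":    [],                                    # anything goes
    "proposed": [no_dangling_cables],                  # structurally valid
    "as_built": [no_dangling_cables, fully_surveyed],  # matches the field
}

def promote(design: dict, to_stage: str) -> None:
    """Advance the design only if every rule of the target stage passes."""
    for rule in RULES[to_stage]:
        if not rule(design):
            raise ValueError(f"{rule.__name__} failed; cannot reach {to_stage}")
    design["stage"] = to_stage
```

A draft can sit in an invalid state indefinitely; it is the attempt to promote it that forces the outstanding rules to be satisfied – which is the long-transaction behaviour described above.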

Master Data Management (MDM) is another topical example of this type of pattern. In this scenario, data is distributed across systems or organisations, and local changes trigger processes which disseminate those changes to bring the various copies of the data back into consistency. During this process, different business rules may be applied to manage the data in transition.
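In the same spirit, a rough sketch (the status values and the `apply` call are assumptions for illustration): a local change puts the master record into a transitional status, under which relaxed rules apply, until every system has taken the update.

```python
# Hypothetical MDM-flavoured sketch: transitional rules apply while a
# local change is being disseminated to the other systems.

def on_local_change(master: dict, change: dict, systems: list) -> None:
    master.update(change)
    master["status"] = "in_transition"    # relaxed rules apply here
    failed = [s for s in systems if not s.apply(change)]  # assumed API
    master["status"] = "in_transition" if failed else "consistent"
```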

I think these concepts are more generally applicable to SOA design-time governance – for example, in the collaborative design of enterprise schemas or service contracts.