November 30th, 2010 — cep
A simple use-case:
1. Unhappy customer publishes Tweet containing negative sentiment.
2. Vendor correlates Tweeter with frequent flyer membership.
3. Eight hours later, Tweeter receives feedback offer from vendor.
Is this an automated marketing reaction, or is it just a coincidence?
If the former, it’s a simple example of Event Processing – a single event detected, classified (sentiment), correlated with standing data and a response dispatched.
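Just to make the shape of that concrete, here’s a minimal sketch of the non-CEP version. Every name in it (classify_sentiment, frequent_flyers, send_feedback_offer) is a hypothetical stand-in, and the “classifier” is deliberately crude:

```python
# Illustrative only: a naive, batch-friendly version of the pipeline above.
NEGATIVE_WORDS = {"terrible", "awful", "worst", "delayed", "lost"}

def classify_sentiment(text):
    """Crude keyword-based sentiment classification of a single tweet."""
    return "negative" if set(text.lower().split()) & NEGATIVE_WORDS else "neutral"

def handle_tweet(tweet, frequent_flyers, send_feedback_offer):
    # 1. Detect and classify the event
    if classify_sentiment(tweet["text"]) != "negative":
        return
    # 2. Correlate the Tweeter with standing data (frequent flyer membership)
    member = frequent_flyers.get(tweet["user"])
    if member is None:
        return
    # 3. Dispatch a response (here: queue a feedback offer to the member)
    send_feedback_offer(member)

handle_tweet(
    {"user": "@unhappy_flyer", "text": "Worst flight ever and my bags are lost"},
    frequent_flyers={"@unhappy_flyer": "FF-123456"},
    send_feedback_offer=lambda member: print("feedback offer queued for", member),
)
```

Nothing in there needs an engine, a stream or an event language; the point is only that the steps (detect, classify, correlate, respond) are the same ones a CEP product would perform.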
Did they need to operate in realtime? Probably not…same day is good enough (but no longer than that!)
Did they need to use a CEP engine for this? Probably not (perhaps they did it in PHP).
Did they get the right result?
November 30th, 2010 — cep
An interesting post by Colin Clark laments the inability of CEP to live up to earlier growth expectations. The article is definitely worth reading in full, but if I can pull out a few cogent points, I believe Colin ascribes the lack of CEP growth to:
- CEP is mostly a marketing phrase.
- Vendors have focussed tightly on High Frequency Trading and neglected other areas of opportunity.
- Vendors have dissed developers by forcing them to learn new and arcane language(s).
- Vendors have neglected business users by ignoring visualization requirements.
In broad terms I agree – although I’m not sure languages are an impediment, given the mainstream explosion of interest in new languages. I think the fundamental problems are two-fold:
CEP products haven’t yet crossed the chasm from toolbox to platform. They are still very technical and incomplete. Most CEP products concentrate on the “engine” and neglect two really important areas – visualization (as pointed out by Colin) and context. Event processing relies on understanding events within the broader business context, which requires removing the barriers between the event stream and other systems of record or operational data stores. This is an extremely challenging technical problem – how to marry real-time data streams with large volumes of data “at rest”.
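To illustrate the shape of that problem (not its solution), here’s a toy sketch of enriching a live event stream with standing data held at rest. The schema and field names are hypothetical; at real volumes the lookup itself is what becomes hard:

```python
import sqlite3

# Hypothetical "data at rest": an operational store of customer records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, segment TEXT)")
conn.execute("INSERT INTO customers VALUES ('C1', 'gold')")

def enrich(event_stream, conn):
    """Join each streaming event with its standing customer record."""
    for event in event_stream:
        row = conn.execute(
            "SELECT segment FROM customers WHERE id = ?",
            (event["customer_id"],),
        ).fetchone()
        yield {**event, "segment": row[0] if row else None}

stream = [{"customer_id": "C1", "type": "complaint"}]
for enriched in enrich(stream, conn):
    print(enriched)  # {'customer_id': 'C1', 'type': 'complaint', 'segment': 'gold'}
```

A per-event round-trip to the database is the kind of thing that becomes the bottleneck at stream rates, which is part of why the context problem is harder than it looks.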
The business value for CEP is not always obvious. Unless you’re involved in a high-stakes, low-latency arms race moving at ever more relativistic velocities, batch will usually work for you. Most organizations don’t yet operate in real-time. Those outside of HFT that do or plan to operate in real-time are doing some work with CEP (e.g. Telcos, Utilities and Logistics), but there the challenges are around my first point – integrating existing/legacy network optimization applications with the event stream. In such situations, it’s the optimization technology that drives the implementation, not the event processing technology.
So whither CEP?
Ultimately CEP has three prerequisites: the business need to operate in real-time, the IT infrastructure to support this, and the ability to analyse events within the context of all relevant data assets. The CEP “product” comes at the end of a long line of dependencies.
January 11th, 2010 — architecture
Happy New Year! Asynchronicity is busting out all over the web and my prediction is that 2010 will be the year of “events”:
- Of course Twitter has brought the concept of publish/subscribe messaging to the masses, and we enjoyed their journey of discovery to the heights of scalability in 2009.
- XMPP has been embraced by the real-time web crowd, most publicly in Google Wave but also in other “back-web” contexts such as Gnip.
- Web Sockets is an experimental feature of HTML 5 which enables messages to be pushed directly to web pages.
- New frameworks for event-driven programming are emerging, such as EventMachine, Twisted and Node.js (see the sketch after this list).
- In 2009 every major software vendor had a CEP product.
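For anyone who hasn’t touched these frameworks, here’s roughly what event-driven programming looks like in Twisted: a minimal echo server where the reactor owns the event loop and your code is just callbacks that fire when events (bytes arriving, connections opening) occur.

```python
# A minimal event-driven TCP echo server using Twisted (mentioned above).
from twisted.internet import protocol, reactor

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        # Called by the reactor whenever bytes arrive on the connection.
        self.transport.write(data)

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        # One protocol instance per incoming connection.
        return Echo()

reactor.listenTCP(8000, EchoFactory())
reactor.run()  # single-threaded event loop dispatching callbacks
```

The same inversion of control (don’t call us, we’ll call you) is at the heart of EventMachine and Node.js too.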
In the meantime, SOA has become so damn synchronous. But it doesn’t have to be!
One of the fundamental tenets of SOA is that reducing coupling between systems makes them more scalable, reliable and agile (easier to change). SOA goes a long way toward reducing coupling by providing a contract-based, platform-independent mechanism for service providers and consumers to cooperate. However, I still think we can improve on current SOA practice by further reducing coupling.
Coupling still intrudes into many aspects of how SOA is practiced today:
- HTTP transports tie us to a regimen of synchronous request-reply with timeouts, which creates tight coupling between provider and consumer. Even though one-way MEPs were an original feature of SOAP, message-oriented transports remain the forgotten orphan of web-services standards.
- Many SOA services are conceived, implemented and maintained as point-to-point entities…providers and consumers are forced into lock-step due to inadequate versioning and lifecycle management.
- Process orchestration layers often form a bridge between service providers and consumers, which on the face of it provides some level of indirection. But in many cases orchestration provides limited value and may actually serve to increase the overall system coupling.
In many cases we can achieve the benefits of service orientation to much greater effect by exercising a little scepticism toward some of these shibboleths of the web services world and embracing a more asynchronous, event-oriented way of building processes. So this year, embrace your asynchronous side and do something to reduce your system coupling: build some pub/sub services, learn about Event Processing or Event-Driven Architecture, try one of the technologies I pointed to above.
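If you want a feel for what the decoupling buys you, here’s a toy in-process publish/subscribe broker, standing in for a real message broker or event bus. Publishers and subscribers share only a topic name, never a reference to each other:

```python
from collections import defaultdict

class Broker:
    """Toy pub/sub broker: topic name -> list of subscriber callbacks."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The publisher has no idea who (if anyone) is listening.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
broker.subscribe("order.created", lambda msg: print("billing saw", msg))
broker.subscribe("order.created", lambda msg: print("shipping saw", msg))
broker.publish("order.created", {"order_id": 42})
```

Adding a third consumer requires no change to the publisher, which is exactly the kind of loose coupling worth chasing in SOA as well.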
Just as developers should embrace multiple languages to broaden their skills, so should architects embrace and be fluent in multiple architectural styles.
November 11th, 2009 — architecture
In a recent comment on my Architectural Characteristics posts, Andy astutely observes that I may be “shoe-horning”. By this I assume he means that I’m taking a large and rather lumpy concept and trying to squeeze it into a smaller and more uniformly shaped container while risking some distortion in the process. I’ll admit that in this respect I’m probably guilty as charged.
But I should clarify my purpose in doing this. I’m trying to cut the various architectural styles under consideration back to a simpler form, where the essential characteristics can be discerned without being confused with other, non-essential characteristics. So rather than shoehorning, I’m trying to set up a strawman model which can be used as a starting point for discussion. Or maybe, like a physicist, I’m trying to model a very complex phenomenon using linear approximations which explain its broad outlines at the risk of falling short on some of the details.
To extend this latter metaphor, I don’t think it is too much of a stretch to say that the architectural styles I’m considering could be likened to “fundamental” architectural styles and that real-world architectures could be viewed as “superpositions” of those fundamental architectures.
If we consider the simplified forms of EAI and SOA that I describe, each style falls short of representing a real-world architecture, but the upside is that the EAI and SOA styles as I describe them are distinct and easily differentiated. So we have a model which provides a way of distinguishing between different styles (via the characteristics I’ve discussed) but falls short of exactly matching a “real-world” architecture.
If we look at any real-world architecture in recent years, I think we can see a superposition of EAI and SOA concepts. This probably reflects an evolutionary path between the two styles. EAI as practiced in the early noughties had already developed the idea of a normalised data model and technology-independent interfaces. These were not standardized, but some of the characteristics of SOA were apparent in what was then called EAI.
Similarly, EAI was not always about data integration. There was (and is) a distinction between data integration and process integration. EAI techniques could be used to orchestrate processes across multiple systems. This is even closer to the concept of SOA, which has at its core the notion of an independent process layer separate from the service layer.
Even if we don’t superpose EAI and SOA into one solution, there are still legitimate ways in which EAI, SOA and EDA coexist within any particular architecture. We can easily imagine a solution in which a business process is orchestrated via SOA services, reference data is synchronised using EAI and overall process state is monitored using EDA techniques such as Event Processing.
So real-world solution architectures exhibit some overlap between the different architectural styles – EAI, SOA and EDA. Some of this is due to evolutionary legacies, or due to plain-old confusion between the different styles (e.g. JABOWS as really being EAI). Some of it is also due to legitimate mixing of different styles for different aspects of a solution.
I think that real-world architectures can benefit from separating out the “essence” of each architectural style and being explicit about how those styles are being applied. Reducing architectural styles to simplified forms clarifies the structure of a real-world architecture. Not very different from Design Patterns, really.
October 26th, 2009 — architecture
My last post explored some distinguishing characteristics of three common architectural styles in an attempt to understand better how they differ and therefore how they may apply in different contexts. A fourth distinguishing characteristic of these architectural styles is the way in which state is managed during the execution of a business process.
There are three aspects of state that I want to consider:
- Management – how state change is initiated or managed through a business process.
- Monitoring – how state is monitored, accessed or derived during the execution of a business process.
- Consistency – how state is made consistent across different systems involved in a business process.
A summary of the state characteristics of the different architectural styles is listed in the following table (the other characteristics were discussed in my last post).

| Style | State management | State monitoring | Consistency |
|-------|------------------|------------------|-------------|
| EAI | Local (within one application) | Local, if at all | Eventually consistent |
| SOA | Central (BPM orchestration layer) | Central (BPM layer) | Progressively consistent |
| EDA | Distributed (manifested globally) | Central (CEP layer) | Progressively consistent |
In EAI, state is managed within one application and then synchronised to other applications, usually after the business process has completed. This means that in some cases state may be inconsistent across the organisation or its systems. That may or may not be a problem (see ‘Eventually Consistent’) but it is a common side effect when a business process is executed within one system rather than across systems in an independent process layer. During the execution of an EAI process, there is often no monitoring of state. That is, the process may simply replicate data changes to other systems without explicitly tracking state. A limited form of state monitoring may exist in the sense that the local application or associated middleware may check the status of data synchronization and error out in the event of an exception. I refer to this as ‘local’ state monitoring. So under the EAI architectural style, state is managed locally, monitored locally (if at all) and is eventually consistent.
Under SOA, the driver of a business process is BPM orchestration in an independent process layer. In this case, we can say that state is managed centrally (in the BPM layer) and monitored centrally (also in the BPM layer). State in end-systems is updated progressively through the business process (via service calls) and so we could say that state is ‘progressively’ consistent as opposed to ‘eventually’ consistent.
Under EDA, there is no central or even local management of state. Instead, events signify distributed actions which together may be used to infer the state of a system. To the extent that any ‘management’ occurs, we could say that state is managed in a distributed fashion – one or more agents each acting on their own. Perhaps it is more accurate to say that state is manifested globally. Conversely, state is monitored centrally within an Event Processing (CEP) layer which correlates events to infer system state. Under EDA, state is progressively consistent because the system is progressively reacting to events which are a by-product of a hidden or implicit business process.
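As a rough sketch of what “correlating events to infer state” means (the event names and transitions here are entirely hypothetical), a monitoring layer can fold the event stream into a derived view of where each business process instance currently sits:

```python
# No system holds the order's state directly; it is inferred from events.
TRANSITIONS = {
    "order.created": "created",
    "payment.received": "paid",
    "goods.shipped": "shipped",
    "goods.delivered": "delivered",
}

def infer_order_states(events):
    """Derive the latest known state of each order from a stream of events."""
    states = {}
    for event in events:
        new_state = TRANSITIONS.get(event["type"])
        if new_state is not None:
            states[event["order_id"]] = new_state
    return states

events = [
    {"type": "order.created", "order_id": 1},
    {"type": "payment.received", "order_id": 1},
    {"type": "order.created", "order_id": 2},
]
print(infer_order_states(events))  # {1: 'paid', 2: 'created'}
```

A real CEP layer would of course add windows, timeouts and pattern matching on top, but the underlying idea (state as a by-product of the event stream) is the same.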