January 15th, 2011 —
Steve Jones reminds us that the business doesn’t care about technology – so stop harping about it and using it as an excuse for underperformance.
I totally agree that this is a key reason behind the endemic business/IT culture divide that is the root of many problems.
However, this poses an obvious question: who does care about the technology? The trick is not to over-engineer, but to engineer to just the right level to deliver business value now and into the future.
Somebody has to care about the technology (products, tools, methodologies), because otherwise you lose control and foster a legacy of technical debt which ultimately erodes business value.
I guess this is an axiom of Enterprise Architecture – that lack of governance leads to chaos and inefficiencies. Some would argue with this assertion, but I have never seen a counter-example. And of course the inverse statement is not necessarily true either.
So if the business doesn’t care about technology then who does? And if that is “nobody” then what happens?
November 30th, 2010 —
A simple use-case:
1. Unhappy customer publishes Tweet containing negative sentiment.
2. Correlate Tweeter with frequent flyer membership.
3. Eight hours later, Tweeter receives feedback offer from vendor.
Is this automated marketing reaction, or is it just a coincidence?
If the former, it’s a simple example of Event Processing – a single event detected, classified (sentiment), correlated with standing data and a response dispatched.
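The detect, classify, correlate, respond flow could be sketched in a few lines. This is purely illustrative: the keyword-based sentiment classifier and the in-memory frequent flyer lookup are hypothetical stand-ins, not anything a real vendor would ship.

```python
# Minimal sketch of single-event processing: detect, classify (sentiment),
# correlate with standing data, dispatch a response.
# All names and data here are hypothetical illustrations.

NEGATIVE_WORDS = {"terrible", "awful", "delayed", "lost", "worst"}

# Hypothetical standing data: Twitter handle -> frequent flyer ID
FREQUENT_FLYERS = {"@unhappy_flyer": "FF-1234"}

def classify_sentiment(text):
    words = set(text.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "neutral"

def handle_tweet(handle, text):
    """Process one event: classify, correlate, respond (or ignore)."""
    if classify_sentiment(text) != "negative":
        return None                       # detect & classify
    member = FREQUENT_FLYERS.get(handle)  # correlate with standing data
    if member is None:
        return None
    # dispatch a response (here: just return the offer text)
    return f"Feedback offer queued for member {member}"

print(handle_tweet("@unhappy_flyer", "Worst flight ever, bags lost"))
```

In a production system the classifier and the membership lookup would be real services, but the shape of the pipeline is the same.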
Did they need to operate in realtime? Probably not…same day is good enough (but no slower than that!)
Did they need to use a CEP engine for this? Probably not (perhaps they did it in PHP).
Did they get the right result?
November 30th, 2010 —
An interesting post by Colin Clark lamenting the inability of CEP to live up to earlier growth expectations. The article is definitely worth reading in full, but if I can pull out a few cogent points, I believe Colin ascribes lack of CEP growth to:
- CEP is mostly a marketing phrase.
- Vendors have focussed tightly on High Frequency Trading and neglected other areas of opportunity.
- Vendors have dissed developers by forcing them to learn new and arcane languages.
- Vendors have neglected business users by neglecting visualization requirements.
In broad terms I agree – although I’m not sure languages are an impediment, given the explosion of interest in new languages in the mainstream. I think the fundamental problems are two-fold:
CEP products haven’t yet crossed the chasm from toolbox to platform. They are still very technical and incomplete. Most CEP products concentrate on the “engine” and neglect two really important areas – visualization (as pointed out by Colin) and context. Event processing relies on understanding events within the broader business context which requires no barriers between the event stream and other systems of record or operational data stores. This is an extremely challenging technical problem – how to marry real-time data streams with large volumes of data “at rest”.
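The stream-meets-data-at-rest challenge can be sketched as an enrichment step: each live event is joined with business context from a system of record before any rule can act on it. The in-memory dict and cache below are hypothetical stand-ins for a real operational data store; the hard part in practice is doing this at volume without the lookup throttling the stream.

```python
# Sketch of enriching a real-time event stream with context from
# "data at rest". CUSTOMER_DB is a hypothetical stand-in for a large
# operational data store or system of record.
from functools import lru_cache

CUSTOMER_DB = {
    "cust-42": {"tier": "gold", "region": "EMEA"},
}

@lru_cache(maxsize=10_000)        # cache hot keys so the stream is not
def lookup_context(customer_id):  # throttled by round-trips to the store
    return CUSTOMER_DB.get(customer_id)

def enrich(event_stream):
    """Yield each event merged with its business context, if any."""
    for event in event_stream:
        ctx = lookup_context(event["customer_id"]) or {}
        yield {**event, **ctx}

events = [{"customer_id": "cust-42", "type": "order"}]
print(list(enrich(events)))
```

Only once events carry this context can downstream rules ask business-level questions ("is this a gold-tier customer?") rather than purely stream-level ones.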
The business value for CEP is not always obvious. Unless you’re involved in a high stakes, low latency arms race moving at ever more relativistic velocities, batch will usually work for you. Most organizations don’t yet operate in realtime. Those outside of HFT that do or plan to operate in real-time are doing some work with CEP (e.g. Telcos, Utilities and Logistics) but there the challenges are around my first point – integrating existing/legacy network optimization applications with the event stream. In such situations, it’s the optimization technology that drives the implementation, not the event processing technology.
So whither CEP?
Ultimately CEP has three prerequisites: the business need to operate in real-time, the IT infrastructure to support this, and the ability to analyse events within the context of all relevant data assets. The CEP “product” comes at the end of a long line of dependencies.
November 20th, 2010 —
In systems architecture, there are rarely any right answers – mostly just trade-offs between one solution and another. In such cases it helps to bear in mind some fundamental principles as a guideline. One principle I often use is cost vs. benefit. Another useful principle is to minimize coupling between systems. Coupling is pervasive and leads to a kind of inertia in enterprise systems. As Newton taught us, inertia resists change, and if there is one thing that enterprises struggle with most, it’s change.
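The coupling trade-off is easy to see in miniature. In the sketch below (all names hypothetical), the tightly coupled version hard-wires a dependency on a concrete system, while the loosely coupled version depends only on a callable, so either side can change independently.

```python
# Tight vs. loose coupling in miniature. BillingSystem and the order
# functions are hypothetical illustrations.

class BillingSystem:
    def charge(self, order):
        return f"charged {order}"

def place_order_tight(order):
    # Hard-wired dependency: any change to BillingSystem ripples here.
    return BillingSystem().charge(order)

def place_order_loose(order, charge):
    # Dependency injected: this function only knows the interface.
    return charge(order)

print(place_order_loose("order-1", BillingSystem().charge))
```

The loose version costs a little indirection up front; the payoff is that replacing the billing system later does not force a change to the ordering code, which is exactly the inertia the principle is trying to avoid.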
October 23rd, 2010 —
In my last post I showed how to send a Simple Notification Service (SNS) message to an email endpoint. Now I show how to easily add a WebHook endpoint. WebHooks are a design pattern using an HTTP POST to send a notification to a URL which the “subscriber” has registered with the service. WebHooks are being used in an increasing number of web APIs and there is an interesting interview with Jeff Lindsay on this topic at IT Conversations.
A useful test platform for WebHooks is PostBin.org. Simply click on the “Make a PostBin” button and you will be presented with a new URL for your notification messages – something like "http://www.postbin.org/1hf0jlo". This is the URL you register with SNS.
Turning to the SNS dashboard, add a new subscription to a topic that you’ve already configured in SNS. Specify protocol “HTTP” and enter the PostBin URL as the endpoint. SNS will post a confirmation message to this URL before you can send through messages.
Go back to your PostBin URL and you should see the confirmation message.
Buried in the message is the SubscribeURL, which you need to hit in order to confirm the subscription. I pasted it into Notepad, “cleaned up” the URL, and then pasted it into a browser. This confirms the subscription with SNS.
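The manual Notepad step could be automated: SNS delivers the confirmation as a JSON document with `"Type": "SubscriptionConfirmation"` and a `SubscribeURL` field, and a GET on that URL confirms the subscription. A minimal sketch (the sample payload is abbreviated, and the URL in it is a placeholder):

```python
# Parse the JSON body that SNS POSTs to a WebHook endpoint and, if it is
# a SubscriptionConfirmation, return the SubscribeURL to be fetched.
import json
from urllib.request import urlopen

def extract_subscribe_url(body):
    """Return SubscribeURL from an SNS confirmation message, else None."""
    message = json.loads(body)
    if message.get("Type") == "SubscriptionConfirmation":
        return message["SubscribeURL"]
    return None

def confirm(body):
    url = extract_subscribe_url(body)
    if url:
        urlopen(url)  # a GET on SubscribeURL confirms the subscription

# Abbreviated sample payload with a placeholder SubscribeURL.
sample = json.dumps({
    "Type": "SubscriptionConfirmation",
    "SubscribeURL": "https://sns.us-east-1.amazonaws.com/?Action=ConfirmSubscription",
})
print(extract_subscribe_url(sample))
```

A real endpoint would run this inside its HTTP POST handler; ordinary notifications arrive with `"Type": "Notification"` and are simply passed through.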
Now back in the SNS Dashboard you can send a new message. In my case, since I still have my email endpoint, the same message is sent to both the email and the WebHook endpoints…thus: