On business-rules

Reading James Taylor’s recent piece “Business rules are king”, pretty much every one of my enterprise-architecture alarm-bells went off.

Yes, it’s a good article – recommended reading. And I would strongly agree with its implication that there’s a real and urgent need for discipline around business-rules. But the reason for the alarm-bells is that it’s promoting business-rules as ‘the answer’ – and for the most part IT-based ‘business-rules engines’ at that.

Which places us straight back in Taylorist territory, along with all those other classic IT-driven business failures such as business-process re-engineering. Not a good idea…

The reasons why it’s not a good idea are three-fold:

  • placing all the business-rules into an automated system will lead to a ‘fit and forget’ attitude unless there is a very strong emphasis on rule-maintenance – one of many ‘human factors’ that were forgotten about in BPR’s rush to ‘IT-ise’ all business processes
  • identification and codification of business-rules assumes that the rules that can be derived from the people who run the existing processes are sufficient, invariant, accurate and complete – which, as early-generation knowledge-management also discovered, they rarely are…
  • the viability of using automation for decision-making is dependent on the context – a fact of which frighteningly few IT-system designers seem to be aware

There seems to be a view that everything can and must be reduced to simple rules, following a cart-before-horse thinking that everything should be done by IT, and simple rules are what IT handles best. In other words, dangerously back-to-front. It’s bad enough trying to get anything useful out of IT for decision-support; but using IT for all decision-making – which is the ‘nirvana’ that the article would evidently prefer – is likely to be lethal. And I don’t quite know what we as enterprise-architects can do to prevent this headlong rush into repeating the exact same mistakes as in BPR and the rest – all that’s different this time is that it’s more explicitly coming from the ‘rules’ part of the process, rather than process-implementation overall.

This is clear if we look at it from the perspective of context-space mapping:

Time, interpretation and abstraction

The point is that there’s a spectrum of abstraction of rules: principles sit at the low-abstraction end of this spectrum, rules sit at the high-abstraction end – in fact a conventional ‘rule’ is actually an extreme abstraction of a principle that applies to a specific context. If we try to use the wrong level of abstraction, especially in the wrong context or wrong type of context, we are all but guaranteed to hit serious trouble. And I see little to no awareness of that fact in most of the current literature on business-rules: instead, there seems to be an assumption that just about everything can be reduced to simple binary rules that can be implemented by simple IT, because that’s what we want to happen. In other words, the entire approach seems driven by little more than wishful thinking – which again is not a good idea…

IT-systems and simple business-rules work well together: both operate on a binary true/false logic, and both will enable high-speed binary-logic decision-trees – in other words, over on the lower right-hand side of the usual Cynefin-derived context-space base-map.
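As a crude illustration – a minimal sketch only, in Python, with invented rule-names, fields and thresholds, not taken from any real rules-engine – this is the kind of binary true/false rule-chain that such engines execute so well:

```python
# Minimal sketch of binary-logic business-rule evaluation -- the kind of
# decision-tree a rules-engine runs at high speed. The rule-names, fields
# and thresholds here are invented for illustration only.

def approve_refund(order: dict) -> bool:
    """Each rule is a strict true/false test; the chain short-circuits."""
    if order["amount"] > 500:               # rule 1: high-value, so no auto-approval
        return False
    if order["days_since_purchase"] > 30:   # rule 2: outside the refund window
        return False
    if not order["item_returned"]:          # rule 3: goods must be back in stock
        return False
    return True                             # every binary test passed

print(approve_refund(
    {"amount": 120, "days_since_purchase": 10, "item_returned": True}))  # True
```

Every test is strictly binary and the chain short-circuits, which is exactly why it’s fast – and exactly why it only works where the context really is that simple.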

Most IT-based analytics – over on the upper-right of the base-map – work on the same binary logic as the simple systems, but introduce the ability to handle more and more layers of complication. The catch is that each layer of analysis takes a finite amount of time – which takes it further away from the ‘Now!’ demanded by real-time decision-making. And the only real result of increased computing-power has been to increase the levels of complication in the analytics, sometimes beyond anyone’s ability to understand them – as was the case with the software systems used in many of the risk-calculation models that drove the current financial crash.

IT-systems are still not good at handling non-binary modal-logics – “the logic of probability, possibility and necessity”, as expressed in the MoSCoW set of requirements-priorities: must, should, could and can wait. Humans are very good at modal-logic; IT isn’t.

James Taylor’s article refers to pattern-based decision-making, which places it somewhat on the upper left of the base-map – but note again that each pattern-match must always take a finite amount of time, and that pattern-matching does not fit well with the underlying binary-logic of current IT-systems. Using IT as decision-support for human decision-making is generally okay, but the more that IT is involved, the higher the risk of what Dave Snowden describes as ‘pattern-entrainment’ – in other words, premature selection of a pattern, trying to force-fit a pattern to the context rather than ‘listening’ to the context itself. Current IT is getting much better at near-real-time pattern-matching, such as the face-recognition or smile-recognition on most present-day digital cameras. Yet as anyone who’s used such systems will know, they’re nowhere near accurate enough to decide when a picture is actually any good – and sometimes we don’t want a smile in the picture. Much the same applies in business: automated pattern-matching is great for decision-support, but extremely dangerous for decision-making.
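To make that distinction concrete, here’s another minimal sketch – again in Python, with invented criteria, not from any real rules-engine – of what happens when MoSCoW-style modal priorities are flattened into code. Only the ‘must’ collapses cleanly into binary veto-logic; everything else can only be surfaced for human judgement – decision-support, not decision-making:

```python
from enum import Enum

class Modality(Enum):
    MUST = "must"          # necessity: failure is a hard veto
    SHOULD = "should"      # strong preference, not absolute
    COULD = "could"        # possibility: nice-to-have
    CAN_WAIT = "can wait"  # deferrable

# Each criterion carries a modality, not just a true/false outcome.
# The criteria themselves are invented for illustration.
criteria = [
    (Modality.MUST,   "passes safety check",  True),
    (Modality.SHOULD, "within budget",        False),
    (Modality.COULD,  "uses preferred vendor", True),
]

def assess(criteria):
    """Decision-*support*: report, don't decide. Only a failed MUST is a
    hard (binary) veto; everything else is handed back to a human."""
    for modality, name, satisfied in criteria:
        if modality is Modality.MUST and not satisfied:
            return f"veto: '{name}' is a MUST and is not met"
    unmet = [name for _, name, ok in criteria if not ok]
    return f"no veto; flag for human review: {unmet or 'nothing'}"

print(assess(criteria))  # no veto; flag for human review: ['within budget']
```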

And no IT-system is likely to be much good at dealing with real-time chaos, ‘the new’, where no possible pattern exists because it is new – but again, real people can handle decision-making in such contexts via skills and principles. In those contexts, there are no rules – and yet business-rule proponents seem to promote the delusion that their ‘business-rule engines’ can handle everything.

So I’m wary: very wary. Before letting any such systems loose on any real-world context, I would want to make very sure that their designers have done the appropriate context-space mapping, and matched the decision-making methods to the respective contexts. But I don’t see much evidence of that: what I see instead is way too much wishful-thinking, and an almost desperate desire on both the business-side and the IT-side to try to force the world to fit their respective delusory dreams of ‘order’ and ‘control’. Oh well… Guess we have to wait and let them fail yet again, even more expensively, and then set out to tidy up the mess? – though I do worry that we’re getting close to the point where we’re no longer able to afford such expensive mistakes, in any sense of the word…

3 Comments on “On business-rules”

  1. Business rules are procedural…doesn’t account for the emotional integration between the corporation and its customers, business partners, environment, culture and all those that affect the reaction of a business.

    Business rules are an important part of the analysis and an important piece of the structure of EA…it’s not the heart or soul of the EA.

  2. I don’t think it’s so complex… think about car manufacturing. Many business processes are similar in complexity to building a car, and could be automated likewise. It works great for BMW, Honda etc, so why not try to take our simple-to-intermediate processes and automate those things? Humans will always be driving the process ultimately, just like car manufacturing. We’re just using IT to replace some of the parts that humans used to do since IT can do the same job faster and cheaper.

    Realistically I don’t think any organization will try and automate everything in one shot and create rules to replace all human decision making. It’s a pipe dream to imagine. But constant improvements in the processes over time are realistic, starting with small pieces of automation and proving / growing incrementally.

  3. Hi Tom

    Prompted by the current interest in RPA, I’ve been looking over my old blogposts on Business Rules, one of which contained a link to yours.

    I’m not sure I agree that humans are good at modal logic – there are whole schools of psychotherapy devoted to reframing “always” and “never” into “sometimes”. But individuals may be better than machines (whether computer machines or bureaucratic machines).

    James Taylor’s post quotes Jim Sinur recommending a focus on high volatility rules – which would seem to tell against the “fit and forget” attitude you warn us about. But jumping forward ten years, we find RPA vendors recommending a focus on using bots for the low volatility rules. Meanwhile “mutant algorithms” try to tackle probabilistic decision-making. The issues raised in your post are clearly relevant to this new technology landscape.
