Dependency and resilience in enterprise-architecture models
This one’s back on the metamodel theme, and is a follow-up to a query by Peter Bakker in his post ‘Thinking about Graeme Burnett’s questions’, in reply to my previous post ‘EA metamodel: two questions’.
I think that the most important question of all is still missing, namely:
— What do you rely on?
(The ‘you’ here is the entity-in-focus, as with Graeme’s original two questions of “tell me about yourself?” and “tell me what you’re associated with?”.)
Peter’s right, of course: it is one of the most important questions we need to ask. It’s one reason why I generally extend Graeme’s second question to “tell me what you’re associated with, and why?”. Yet there’s a practical concern here about just where the intelligence needed to answer that question should reside – and I don’t think it belongs in the metamodel.
To explain why, we probably need to do a brief detour into modelling in general within enterprise-architecture, and in particular our modelling of dependency and resilience. As I see it, there are two key aspects to Peter’s “What do you rely on?” question:
- dependency – about what this entity relies on, within the shared-enterprise
- resilience – about what this entity needs to do, or can do, to get back on track if what it relies on isn’t there
In a systems-theory sense, the first answer to Peter’s reliance-question must be “Everything!” 🙂 Any shared-enterprise represents a system, and by definition everything within that system depends on the presence of everything else within that system (otherwise it wouldn’t be part of that system). So in practice, from a modelling perspective, there are two additional aspects to the dependency-issue:
- distance from ‘self’ (where ‘self’ is the entity that’s our current focus of attention)
- criticality to self – the risk or opportunity represented by the presence or absence within the system of another entity
To illustrate distance-from-self, consider a typical biological-ecosystem: for example, the world of a dung-beetle:
- distance-1: the dung-beetle relies on the presence of dung
- distance-2: the dung-beetle relies on dung-producing-animals
- distance-3: the dung-beetle relies on fodder for dung-producing animals
- distance-4: the dung-beetle relies on conditions that support appropriate fodder for the dung-producing animals in the ecosystem
- (and so on…)
So if we have an invasive species of vegetation that out-competes the existing vegetation and is either toxic or non-palatable to the local cattle, our dung-beetle could be in trouble… Yet on its own it may have no way to know this, because the connections to the critical issues are indirect, several steps distant-from-self. And the greater the distance-from-self, the more whole-of-system knowledge any given entity will need, in order to understand its contextual dependencies, opportunities and risks.
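As a toy sketch of that distance-from-self idea (the graph contents and function name here are my own invention, not part of any formal metamodel), distance is just a breadth-first walk outward over a ‘relies-on’ graph:

```python
from collections import deque

def distances_from_self(relies_on, self_node):
    """Breadth-first walk outward from the entity-in-focus, returning
    each entity's distance (number of reliance steps) from 'self'."""
    dist = {self_node: 0}
    queue = deque([self_node])
    while queue:
        node = queue.popleft()
        for neighbour in relies_on.get(node, []):
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

# The dung-beetle chain from above, as a toy 'relies-on' graph:
relies_on = {
    "dung-beetle": ["dung"],
    "dung": ["cattle"],
    "cattle": ["fodder"],
    "fodder": ["soil-and-climate"],
}
print(distances_from_self(relies_on, "dung-beetle"))
# {'dung-beetle': 0, 'dung': 1, 'cattle': 2, 'fodder': 3, 'soil-and-climate': 4}
```

The invasive-vegetation problem above sits at distance-4 from the beetle: visible in this whole-graph walk, but invisible to the beetle’s own distance-1 view.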
(By the way, this is one of the main reasons why I’ve been pushing so hard for a ‘notation-agnostic’ metamodel for enterprise-architecture and the like. For example, a UML diagram typically shows only the direct point-to-point connections; so we’d typically need to be able to swap out of that diagram and into, say, a Senge-style systems-dependency model or fishbone root-cause model, using the same entities, in order to understand the systemic interdependencies of each of those items. Currently we can sort-of do this in some of the EA toolsets, though typically only within a single notation-set such as UML: my point is that we need to be able to do this, cleanly and consistently, across all and any notations in the EA space. The base for all modelling should be the entities and the relations and flows between them – not the notations with which we model them.)
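A minimal illustration of that ‘entities and relations first, notations second’ principle – the class names and rendering functions here are invented for the example, and are deliberately far simpler than any real toolset would be. The point is only that one underlying set of entities and relations can be rendered into more than one notation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    name: str

@dataclass(frozen=True)
class Relation:
    source: Entity
    target: Entity
    reason: str = ""   # the 'why' of the association

# One underlying model...
beetle, dung = Entity("dung-beetle"), Entity("dung")
model = [Relation(beetle, dung, reason="food and larval habitat")]

# ...rendered into two different 'notations' from the same entities/relations:
def as_uml_dependency(r):
    # PlantUML-style dependency arrow
    return f"{r.source.name} ..> {r.target.name}"

def as_causal_link(r):
    # Senge-style causal-loop arc, read 'more dung -> more dung-beetles'
    return f"{r.target.name} --(+)--> {r.source.name}"

print(as_uml_dependency(model[0]))   # dung-beetle ..> dung
print(as_causal_link(model[0]))      # dung --(+)--> dung-beetle
```

Swapping notations here means swapping only the rendering function – the entities and relations themselves never change.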
Criticality is closely related to specialism; and in turn, resilience is closely related to criticality. The catch is that criticality can occur anywhere in the system. For example, our dung-beetle may need larger piles of dung for its larvae to live in. As long as it doesn’t need specific nutrients that only come from one species, it’d probably be happy with any of the larger herbivores: cows, buffalo, camels, elephants, whatever. Even forest-clearance mightn’t much affect it, because to our dung-beetle a cow is much the same as a giraffe: criticality for that kind of change is low, hence resilience might seem to be high. Yet if there’s a change from cattle or camels to ‘pellet-dung’ producers such as sheep or goats, our dung-beetle could well be in trouble, because that kind of dung is too small to work with. And without the dung-beetle, the nutrient-recycling processes within the ecosystem may be too slow for self-renewal. Hence not just the dung-beetle but its entire ecosystem may be at risk from arbitrary decisions made by humans in ‘markets’ that may be tens or hundreds or thousands of miles away.
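One crude way to sketch that criticality-as-specialism point is to treat criticality as inverse substitutability: how many interchangeable providers exist for each thing the entity relies on. This is an invented toy measure for illustration only, not an established metric:

```python
def criticality(needs, providers):
    """Toy measure: for each need of the entity-in-focus, criticality is
    the inverse of the number of substitutable providers in the system.
    A need with no listed providers defaults to criticality 1.0."""
    return {need: 1.0 / len(providers.get(need, [need])) for need in needs}

needs = ["large-dung"]

# Many interchangeable large herbivores: low criticality, resilience seems high
providers = {"large-dung": ["cattle", "buffalo", "camels", "elephants"]}
print(criticality(needs, providers))   # {'large-dung': 0.25}

# Shift to pellet-dung producers leaves a single provider: criticality spikes
providers = {"large-dung": ["cattle"]}
print(criticality(needs, providers))   # {'large-dung': 1.0}
```

The design point mirrors the ecosystem example: the beetle’s resilience depends not on any one relation, but on how substitutable its providers are across the whole system.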
These whole-of-system dependencies and resilience-issues are classic simulation / systems-theory territory, illustrated well by the old military adage of a century or so ago:
For want of a nail, the shoe was lost;
for want of the shoe, the horse was lost;
for want of the horse, the rider was lost;
for want of the rider, the battle was lost.
Whole-of-system resilience is also a common concern in studies on complex adaptive systems, where resilience in one area can sometimes even enhance resilience in other more apparently-fragile areas. And there can also be dependency/resilience issues between nominally-interdependent systems that are in ‘system-of-systems’ relationships with each other: these are common in military contexts, though Nick Gall gives a useful non-military example in the section ‘Models of interdependent networks’ in his seminal ‘Panarchitecture’ paper for Gartner.
Identifying those kinds of dependencies usually requires a lot of ‘whole-system intelligence’ – in other words, cross-mapping of dependencies and criticalities that may be many steps distant from each other. There are two classic approaches to this:
- ‘inside-out’ – each entity knows everything it needs to know about its own system-context
- ‘outside-in’ – an external observer assesses the interactions across the entire system-context
In practice, neither of these approaches works well, especially for any large real-world system. The ‘inside-out’ approach assumes high intelligence and information-gathering capability in each entity, which certainly doesn’t apply down at the level of bacilli and bacteria in living ecosystems; the ‘outside-in’ approach requires phenomenal computing-power, and probably can’t cope with any kind of non-linear relationships anyway; and both demand vast cross-context information-flows that can rarely if ever be seen in real-world systems.
Instead, what actually works is observation of patterns of emergence: the ‘intelligence’ – so to speak – arises from the network of relationships, rather than from any specific entity either within or (nominally) outside of the system.
Hence, to bring it all back to the EA-metamodel, I suspect that the question “What do you rely on?” may actually be a bit misleading here: it’s certainly a question that we need to ask, and answer, but it’s not a question we can meaningfully answer at the metamodel level. Some kind of intelligence would definitely be needed in order to assess contextual concerns such as whole-system interdependence and resilience: yet the metamodel itself doesn’t have any intelligence as such – it’s just a means to structure information.
Hence for any given entity, it would be meaningful to ask each of Graeme’s questions directly:
- “tell me about yourself” typically identifies the immediate content of the item [and in this metamodel, also the content of any items within the scope of its related-items list]
- “tell me what you’re associated with” identifies the immediate (‘distance-1’) relations in which the item is referenced
- “…and why” identifies the reasons given within those ‘distance-1’ relations
Answering those questions doesn’t require much intelligence: we could easily embed that within an object-method for entities, for example. But as soon as we step beyond that immediate ‘distance-1’ level, the relationships and dependencies become very complex, very quickly – and I don’t think we would be able to embed that within the entities as such. (Okay, we probably could do it, but I doubt that doing so would be effective enough to be worthwhile in a practical sense.)
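To make that ‘object-method’ point concrete – purely as a hypothetical sketch, not a proposal for an actual metamodel implementation – the three distance-1 questions could be embedded like this:

```python
class ModelEntity:
    """Toy entity answering Graeme's two questions (plus 'why'),
    at the immediate, distance-1 level only."""

    def __init__(self, name, content=""):
        self.name = name
        self.content = content
        self.relations = []   # list of (other_entity, reason) pairs

    def associate(self, other, reason=""):
        self.relations.append((other, reason))

    def about(self):
        """'Tell me about yourself' - the immediate content of the item."""
        return self.content

    def associated_with(self):
        """'Tell me what you're associated with' - distance-1 relations."""
        return [other.name for other, _ in self.relations]

    def why(self):
        """'...and why' - the reasons given within those relations."""
        return {other.name: reason for other, reason in self.relations}

beetle = ModelEntity("dung-beetle", content="nutrient-recycler")
dung = ModelEntity("dung", content="larval habitat")
beetle.associate(dung, reason="food and breeding site")

print(beetle.about())             # nutrient-recycler
print(beetle.associated_with())   # ['dung']
print(beetle.why())               # {'dung': 'food and breeding site'}
```

Note what’s missing, deliberately: there is no method here for “what do you rely on, two or more steps away?” – that whole-of-system question is exactly the part that doesn’t fit inside the entity.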
Hence I would suggest that at the metamodel level we should stick to Plan A, and keep everything as simple as possible: Graeme’s two questions (in that slightly-amended form) should be enough to cover most if not all of what we need to support an emergent view of enterprise context.
Your comments on this, of course?
I think your article is very clear. And I like the distance concept very much because it fits nicely with my personal ‘explaining everything from here to there’ philosophy 🙂
You draw parallels with an ecosystem and with systems thinking. In both, feedback/sensory information is very important. Therefore, as a modeler interested in plasticity, I must know what senses/sensory information an entity relies on, and what simple rules the entity uses to handle that sensory information in order to survive as long as possible.
There is also the issue of feed forward (forced attention). Why do you see only the things you ‘want’ to see or are occupied with? Why do some things become a hype? Feed forward is very useful if you can use it right. But we’ll need simulation to be able to understand and use feedback and feed forward properly. That is the direction I’m going now (in very small steps) with the Enterprise Backbone idea.
So my question was not aimed at the metamodel idea, but your answer is very relevant 🙂
@Peter Bakker – Thanks, Peter.
That’s a very good point about ‘feedforward’, which I tend to refer to as Gooch’s Paradox: “things have not only to be seen to be believed, but also have to be believed to be seen”. At a practical level, there are some very nasty recursions there, because the paradox also applies to anything that we might do to tackle the paradox itself… – for example, we have to do some very interesting conceptual juggling-acts if we are to use analysis to tackle the limitations of analysis.
Apologies if you hadn’t intended to connect the discussion back to the EA metamodel: I’ll admit that I did kind of assume that, but it did seem fair enough to do so, given that you’d referenced Graeme’s questions (which I’d previously used as a core test-criterion for EA metamodel-design) and provided a link to my initial EA-metamodel post… 🙂
Again, a useful challenge for me: many thanks indeed.
Tom, apologies are of course not necessary because I wasn’t clear about the context of my post in the first place. And I’m thinking a lot about how the ideas about the EA metamodel and the Enterprise Backbone relate to each other because I believe in both ideas.
The first time I learned about feed forward in the context of modeling (done by the brain) was when I read On Intelligence by Jeff Hawkins: http://en.wikipedia.org/wiki/On_Intelligence
I’m curious if you have read this book too?