Enterprise-architecture as hologram
What’s the best metaphor we could use for the information collected in enterprise-architecture and related disciplines? The answer, I’d suggest, is a hologram.
This is in part a follow-on to the post ‘What do we need from our EA tools?’. What I’d like to do here is – for a moment at least – ignore all of the existing EA-toolsets, and go right the way back to first-principles:
- What is the actual aim and value for a toolset in EA and the like?
- What is the actual range of information we need to work with?
- How would we collect that information, find our way around that information, connect across that information, share what we find within that information?
The first two questions are relatively straightforward, as somewhat described in that previous post. Yet what we most need right now is something that will help us deal with the messiness of most real-world enterprise-architecture development; and perhaps the whole point is that we don’t know what the information will be – all we know is that it could be just about anything.
And the other questions above? – well, that’s where this gets to be interesting…
Think first of the usual approach. We start with a model-type – BPMN, UML, Archimate, whatever – and collect information for that. We then build a diagram. With standard Visio or OmniGraffle or the like, that’s all you’re going to get: a diagram, unconnected to any other diagram. With an EA toolset, you should also get some kind of underlying repository, which allows you to shift perspective from diagram-level to entity-level, so that you can re-use entities in other diagrams in the same set, and then apply that re-use to connect diagrams together as well – preferably even diagrams of different types.
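(To make that distinction concrete, here’s a minimal sketch – all names are hypothetical, not the API of any real toolset – of diagram-level versus entity-level working: entities live once in a shared repository, and diagrams merely reference them.)

```python
# A minimal sketch (hypothetical names, not any real toolset's API) of the
# difference between a plain drawing tool and a repository-backed toolset:
# entities live once in a shared repository; diagrams merely reference them.

class Repository:
    """Entity-level store, shared across all diagrams in the set."""
    def __init__(self):
        self.entities = {}                  # entity-id -> attributes

    def add_entity(self, eid, **attrs):
        self.entities[eid] = attrs

class Diagram:
    """Diagram-level view: references repository entities, never owns them."""
    def __init__(self, name, repo):
        self.name, self.repo = name, repo
        self.shown = set()                  # entity-ids placed on this diagram

    def place(self, eid):
        assert eid in self.repo.entities    # must exist at entity-level first
        self.shown.add(eid)

repo = Repository()
repo.add_entity("customer-db", kind="ApplicationComponent")

process_view = Diagram("order-process", repo)    # e.g. a BPMN-ish view
landscape_view = Diagram("app-landscape", repo)  # e.g. an Archimate-ish view
process_view.place("customer-db")
landscape_view.place("customer-db")   # same entity re-used: the two diagrams
                                      # are now connected via the repository
```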
Yet that’s so much the usual approach – in fact often the only one built into most of the toolsets – that it’s often difficult to see the hidden assumptions that underlie it. In particular:
- everything we need is describable in predefined views from predefined viewpoints
- every entity and relationship we might need is describable in predefined metamodels (true even if the metamodel is editable, because it requires definitions for entities and relationships to be in the metamodel before the respective items can be created and described)
- in most cases, entities and relationships can be created only in terms of a predefined syntax, the relationships almost always with a strict true/false modality (i.e. a relationship either exists or doesn’t exist, with no allowance for ‘requisite-inefficiency’ or requisite-fuzziness)
- in most cases, the model-type allows only one type of visual representation (visual-syntax) for each type of entity or relationship permitted within that model-type
In short, probably fine for final-diagrams to be passed on to solution-architecture, system-design and system-implementation, where they do need things to be as stable as possible. But for the kind of work we do at the level of enterprise-architecture – where the whole point is that everything starts out blurry, messy and uncertain – those are really serious constraints: so serious, in fact, that they make it all but impossible to use those tools to do that part of our work. Which happens to be most of our work. Ouch…
So let’s step right back, and turn each one of those assumptions on its head:
- we have no idea, at the start, what views and viewpoints we will need – all we know is that, eventually, we may or will need to identify, create and use almost any kind of view or viewpoint
- we have no idea, at the start, of what entities or relationships we’ll need, or the attributes for either – though we know that we’re likely to want to attach some of them to some kind of predefined metamodel at a later stage in development
- we have no idea, at the start, of what kind of syntax, structures or business-rules might be needed for entities and relationships – though we know we’re likely to need almost any variety of them at any stage, in accordance with almost any kind of logic-modality
- we have no idea, at the start, of how best to describe or display any given entity or relationship – but we do know that, in many cases, we’re going to need to be able to show it in different ways for different contexts and different stakeholders
Which leads to a rather difficult discovery: in an EA-toolset that actually works the way that we do, we need to be able to handle just about any type of information-about-things, in just about any format, related together in just about any way, changing in just about any way, and displayed in just about any way – yet still work with it all, and link it all together, in a disciplined way.
Tricky…
So tricky, in fact, that most people seem to write it off as an impossibility, and slump back to struggling with – and complaining about – the existing EA-toolsets.
Despite that, I’m still certain that there is a better way. I’m also all too aware that it’ll likely take us a lot of work to get it together: but I’m still certain that it’ll be worth it – honest!
Where I started with all of this was what I might term the Burnett Query, after a former colleague of mine, Graeme Burnett, from when we were working together a dozen years or so ago on an information-system for test and inspection in aircraft-research. He said that, for any entity, we needed to be able to ask two key questions:
- tell me about yourself
- tell me what you’re associated with
This led us to develop a kind of ‘any-to-any’ database-structure, which worked well for that specific requirement. These days, though, I’d extend the Burnett Query somewhat further, so as to ask, for any entity:
- tell me about yourself, and why you’re the way that you are
- tell me what you’re associated with, and why
- tell me how you change, and why
(Given that relationships are themselves also entities, the last question also in effect handles modality – the nature of the ‘probability, possibility and necessity’ that underlies each relationship.)
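As a sketch of how that extended query might look in practice – a minimal, illustrative structure, with names that are my assumptions rather than any existing schema – every entity should be able to answer all three questions:

```python
# A minimal sketch of the extended Burnett Query as an interface: every
# entity - relationships included - can answer all three questions.
# All names here are illustrative assumptions, not an existing schema.

from dataclasses import dataclass, field

@dataclass
class Entity:
    ident: str
    attributes: dict = field(default_factory=dict)    # what I am
    rationale: str = ""                               # why I'm the way I am
    associations: list = field(default_factory=list)  # (other-ident, why)
    changes: list = field(default_factory=list)       # (what changed, why)

    def tell_me_about_yourself(self):
        return self.attributes, self.rationale

    def tell_me_what_youre_associated_with(self):
        return self.associations

    def tell_me_how_you_change(self):
        return self.changes

order = Entity("order-process",
               attributes={"kind": "business-process"},
               rationale="exists to fulfil customer orders end-to-end")
order.associations.append(("customer-db", "reads customer records"))
order.changes.append(("split into two sub-processes", "order volume doubled"))

print(order.tell_me_about_yourself())
```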
In short, a structure that supports something a lot less like a predefined set of flat diagram-like views, and rather more like a hologram that can be viewed in different ways, from any angle:

[photograph: a hologram-like image, giving multiple simultaneous views into the same scene]

(Okay, that photograph above is a pretty limited example, and arguably not a true hologram, but it does at least give the right kind of idea: multiple and even simultaneous views into the same overall interrelated, interlinked space.)
We perhaps shouldn’t take the hologram metaphor too far, but there’s another key attribute that we want here:
- if we cut up a photograph into small pieces, each piece describes a tiny part of the whole, in full detail, but with no self-evident connection to any other part of the whole, or the whole itself
- if we cut up a true hologram into small pieces, each piece still describes the whole, though in less detail – and still with the connections across the whole essentially intact
What we actually need is somewhere between those two extremes:
- we need each view into the space to be like a photograph – in full detail for all the elements that we need, hiding everything else that we don’t need in that view – yet in the background still retaining all the dynamic interconnections to every other element
- each piece of information-capture work enables that new information to be interconnected with anything else, as appropriate – allowing even the smallest piece of work to add to the available detail throughout the whole (see the sketch after this list)
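By way of illustration, here’s a minimal sketch of that in-between position – the structures and names are assumptions for the purpose of the example: a view renders full detail for its chosen elements only, while every interconnection survives untouched in the underlying space:

```python
# Sketch of the 'in-between' requirement: a view shows full detail for the
# elements it needs and hides the rest, yet the hidden interconnections stay
# intact underneath. Structures and names are assumptions for illustration.

space = {   # the whole interlinked space: element -> elements it links to
    "strategy":     {"capability", "value-stream"},
    "capability":   {"process", "application"},
    "value-stream": {"process"},
    "process":      {"application"},
    "application":  {"server"},
    "server":       set(),
}

def view(space, wanted):
    """Photograph-like projection: full detail for 'wanted' elements only."""
    return {e: links & wanted for e, links in space.items() if e in wanted}

business_view = view(space, {"strategy", "capability",
                             "value-stream", "process"})
print(business_view)     # no applications or servers shown in this view...
print(space["process"])  # ...yet 'application' is still linked underneath,
                         # so even a small capture enriches the whole
```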
Once we get that balance right, though, it gives us incredible power – almost literally a self-extending hologram, containing and/or referencing all information about that overall context. A large multi-screen system would have access to all of that power, and all of that information; and we’d pull out and re-merge extracts to go down onto progressively more-constrained systems, right down to an SMS-message on a dumbphone. Completely consistent, all the way across the entire toolset-ecosystem; yet also with any number of different toolsets and tool-types, each working in their different ways and with different capabilities and user-experiences, on the same underlying repository. That’s the aim here.
To reiterate: we don’t expect that all of it can be done in one toolset, let alone any one model-type – that’s way too much to ask, way too unwieldy, way too unrealistic. But it can be done via any suite of toolsets all connecting to the same underlying data-structure – as long as that data-structure provides the right kind of continuity at a low-enough level. The common-factor is not the toolset – it’s the underlying repository. And the repository and its related interchange-format are not just for what, say, OMG or Open Group might describe as ‘architecture-information’, but for any information – any type of information at all. Hence it needs to connect at a low-enough level: but once we do have that kind of ‘inter-anything’ connectivity, we can do almost anything with it. That’s the key; that’s the difference.
So how would this work in real-world practice? About three years back, in the post ‘EA metamodel – the big-picture (and the small-picture too)’, I described a possible data-structure for this, based on a root-level entity that I called a ‘mote’:
Think of it as a bit like a bacillus: a very small head – in essence consisting of an identifier, and just one parameter – and an any-length tail of references to other motes. So simple that it becomes incredibly versatile, via simple accretion of motes and cross-references between motes.
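As a rough sketch – field-names here are my own assumptions, not a fixed design – a mote might look like this:

```python
# A rough sketch of a mote - field-names are my assumptions, not a spec:
# a tiny 'head' (identifier plus exactly one parameter) and an any-length
# 'tail' of references to other motes.

from dataclasses import dataclass, field

@dataclass
class Mote:
    ident: str                                  # the head: identifier...
    parameter: str                              # ...and just one parameter
    tail: list = field(default_factory=list)    # references to other mote-ids

store = {}                                      # mote-id -> Mote

def add(ident, parameter, *tail):
    """Accretion: everything is built by adding and cross-referencing motes."""
    store[ident] = Mote(ident, parameter, list(tail))
    return ident

# Even a simple 'entity' is just a small cluster of cross-referenced motes:
add("m1", "name:Customer-database")
add("m2", "owner:IT-operations")
add("m3", "entity:application", "m1", "m2")
```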
The first catch, though, is that there’d be an awful lot of motes: even quite a simple high-level entity might actually be made up of thousands of motes, by the time we include all of its change-history and so on. To get the performance up, we’d probably need a database optimised right at the root for this kind of entity: but in essence it could be a variant of today’s existing graph-databases – nothing actually all that new or difficult to do. (It’s implementable as-is as a two-table structure in SQL, but the reality is that it would need so many recursive searches that it would be far too slow for anything other than a proof-of-concept.)
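For illustration, here’s roughly what that two-table proof-of-concept might look like in SQLite – table and column names are my assumptions, and the recursive query at the end hints at why this shape would be too slow at scale:

```python
# A proof-of-concept sketch of the two-table SQL form, in SQLite.
# Table and column names are my assumptions; the recursive query at the
# end shows why this shape gets too slow beyond a proof-of-concept.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mote (id TEXT PRIMARY KEY, parameter TEXT);
    CREATE TABLE mote_ref (               -- the ordered 'tail'
        from_id TEXT, position INTEGER, to_id TEXT,
        PRIMARY KEY (from_id, position));
""")
conn.executemany("INSERT INTO mote VALUES (?, ?)",
                 [("m1", "name:Customer-database"),
                  ("m2", "owner:IT-operations"),
                  ("m3", "entity:application")])
conn.executemany("INSERT INTO mote_ref VALUES (?, ?, ?)",
                 [("m3", 0, "m1"), ("m3", 1, "m2")])

# Resolving even one entity means walking the reference-graph recursively -
# workable here, far too slow once there are millions of motes.
closure = conn.execute("""
    WITH RECURSIVE reach(id) AS (
        SELECT 'm3'
        UNION
        SELECT r.to_id FROM mote_ref r JOIN reach ON r.from_id = reach.id)
    SELECT m.id, m.parameter FROM mote m JOIN reach USING (id)
""").fetchall()
print(closure)   # m3 plus everything its tail transitively references
```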
The second catch is that the motes themselves are very low-level: almost literally atomic. Any real entity or relationship would be more like a much larger molecule that’s made up of such atoms. A usable toolset would need to work at a much higher level – but it needs always to remember that what it’s actually working with is right down at that root-level.
Given that mote, now imagine that the parameter carried by the mote could be a reference to any MIME-type or media-type, or any URL or URI: in short, a reference to anything – including photos, diagrams, videos or whatever. The mote-structure then allows us to cross-reference any of these to anything else. Other cross-references are to tags, to categories, to stakeholder-IDs, to dates – just as we would in, say, a wiki-type structure, a semantic-web structure, or an XML/JSON structure. The result is an any-to-any database, yet one that also supports structured queries, parameterised search and iterative crawl – everything that we’d need for simulation, for what-if analysis, for automated update, all those tasks that we’d expect from a higher-end toolset. Model-types, syntaxes, connection-rules? – they’re also just another type of mote-cluster, used in a programmatic way. That’s what’s possible when we take the metamodel right down to the root-level, and use that as our core for information-capture and information-interchange.
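A minimal sketch of that kind of parameterised search and iterative crawl, over a toy in-memory mote-store – all identifiers and parameter-formats here are illustrative assumptions:

```python
# Sketch: because each mote's single parameter can reference anything (a
# media-type, a URI, a tag, a date), one generic traversal underpins every
# query type. The mini-store and its formats are illustrative assumptions.

motes = {   # mote-id -> (parameter, tail of referenced mote-ids)
    "m1": ("image/png:whiteboard-photo", []),
    "m2": ("tag:customer-journey", []),
    "m3": ("https://example.org/spec.pdf", []),
    "m4": ("entity:process", ["m1", "m2"]),
    "m5": ("entity:service", ["m2", "m3"]),
}

def tagged(tag):
    """Parameterised search: motes whose tail references the given tag."""
    tag_ids = {mid for mid, (p, _) in motes.items() if p == f"tag:{tag}"}
    return [mid for mid, (_, tail) in motes.items() if tag_ids & set(tail)]

def crawl(start, depth):
    """Iterative crawl outward from one mote - the basis for simulation,
    what-if analysis and automated update alike."""
    seen, frontier = {start}, [start]
    for _ in range(depth):
        nxt = [t for mid in frontier for t in motes[mid][1] if t not in seen]
        seen.update(nxt)
        frontier = nxt
    return seen

print(tagged("customer-journey"))   # -> ['m4', 'm5']
print(sorted(crawl("m4", 2)))       # -> ['m1', 'm2', 'm4']
```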
That’s the core idea, anyway: our EA-information as a hologram, itself made up of tiny motes that can be viewed and interconnected in any appropriate way. For what it’s worth, I’ve done a lot more work on this, about how user-interfaces might work, about how to handle the ‘messiness’, and so on. If anyone’s interested in helping bring this to real-world fruition, perhaps get in touch?
Over to you, anyway.
I agree with the premise and its further elaboration that “we don’t know what the information will be – all we know is that it could be just about anything”.
As a metaphor: where you used the analogy of a photograph cut into pieces, I have compared EA to the big picture of an enterprise jigsaw-puzzle, which enables us to fit the pieces properly into the whole.
To understand the modelling involved and to build an EA framework, the analogy to the anatomy and physiology of the human body is particularly relevant, because it confirms that an organism as complex as ours could be – and long since has been – illustrated in terms of both structure and operation.
Any view or aspect of interest can then be shown, like a CT-scan section, as a slice through the entire body of this same enterprise representation.
The heart of this EA representation and navigation is the framework and its metamodel – published some time ago – eventually incorporated in an EA-tool repository.
See more at http://it.toolbox.com/blogs/ea-matters/enterprise-architecture-as-the-anatomy-of-three-dimensional-enterprise-body-63120