If enterprise-architecture is a kind of craft – part art, part science – then how do we get it to scale? Is it even possible to get it to scale? – because if not, the whole enterprise of enterprise-architecture itself is doomed to failure…
What triggered this one was a Tweet yesterday by Catherine Walker:
- transageo: Crafts don’t need theory (@danieldennett). Scaling needs theory (@snowded). Scaling moves away from craftsmanship, towards standardisation.
If we compare this to what’s happened over the years in the disciplines of enterprise-architecture, there’s been an increasing awareness that whilst we definitely do need standardisation, it can also cause as many problems as it solves. As I understand it, the challenge for enterprise-architecture is not just about creating ‘sameness’ across a large scale, but about an appropriate balance between ‘same and different’ – the right balance between control, complexity and ‘the chaotic’ – across the respective scope and scale.
But if the logic of Walker’s tweet is correct, then the only thing that we will be able to do at scale is standardisation – the kind of ‘complexity-reduction’ promoted by Roger Sessions, for example. ‘Sameness’ scales, ‘difference’ doesn’t – that’s what this seems to say. And that’s pretty much an existential-challenge for the validity of enterprise-architecture: it suggests that EA might well work at small scale, but as we expand the scope and scale, the art of enterprise-architecture – and perhaps even the science too – would necessarily fade into nothingness, replaced solely by standardisation.
Which for me is a worry, because to me the art – the craft – of EA is what makes it all worthwhile.
So yeah, I needed to think a bit about this one…
And if we do think about it, it’s clear that that tweet is framed as a formal-logic syllogism:
- Premise 1: Crafts don’t need theory.
- Premise 2: Scaling needs theory.
- Inference: Scaling moves away from craftsmanship, towards standardisation.
Given that, if there are any questions about the validity of either of the premises, or about the derivation of the inference from them, then it would imply that other interpretations are possible – and if so, those other interpretations might still leave a space for the ‘craft’-aspects of enterprise-architecture at scale.
[An aside: for this purpose, I’ll assess those premise- and inference-statements in their literal form above; but to be fair to Catherine, it’s really important to acknowledge that she’ll have almost certainly had to over-compress the intended idea to squeeze it into the 140-character limit. What she’s shown up in that syllogism is an important question in its own right, but I’m not going to say “Catherine is wrong!”, because she probably isn’t. 🙂 ]
So let’s do this step by step:
— “Crafts don’t need theory”
I don’t know the context for Daniel Dennett that Catherine Walker referred to for this, but to me there are two clear avenues here that we could explore:
- “crafts don’t need theory” is not the same as “crafts don’t have theory”
- a lot depends on what’s meant by ‘theory’…
The first of those two points has obvious implications for the implied assertion that craft can’t be scaled. If Premise 2 is correct – “scaling needs theory” – then crafts that don’t have theory can’t be scaled; but crafts that do have theory probably can be scaled.
Which brings us to the next point: if craft can have theory, what does that look like? Would it always take the form of standardisation? And what is ‘theory’, anyway?
So let’s take those questions in reverse-order.
Theory is… well, it’s theory, innit? In other words, yep, it’s another of these blurry words that we keep tripping over in enterprise-architecture. There are lots and lots of different definitions, but the one I’ll pick here is this: theory is a means to guide how we make sense of something, and then decide what to do. We can have all manner of theory: sometimes the theory is good, and sometimes it might not be very good at all, in the sense of lining-up with what actually happens in the real-world – the quality of theory, or appropriateness of theory, often depends as much on the context it’s applied to as on the structure of the theory itself.
And if we take a sensemaking-framework such as Cynefin or SCAN, then it should be clear that ‘theory’ doesn’t just belong in one place, one domain: it’s everywhere. The nature of theory, and the appearance of theory, is different in each of the domains in those sensemaking-frameworks: for example, Cynefin draws very important and explicit distinctions between the theory of ‘control’ versus the theory of complexity – yet in both of those domains it’s still theory, in the sense of ‘a means to guide sensemaking and decision-making’.
So let’s interpret this in terms of the SCAN frame:
As I’ve said elsewhere, the left-side of the frame is about certainty and repeatability, whilst over on the right is variance and uniqueness; the vertical axis is about how the underlying mechanisms for decision-making will (and, usually, must) change as we move downward towards real-time. If we map the various forms of ‘means to guide sensemaking and decision-making’ onto this frame, we can summarise the result roughly as follows:
- Simple (real-time, predictable): rules, supported by definitions
- Complicated (‘considered’, predictable): algorithms, supported by formulae and standards
- Ambiguous (‘considered’, uncertain): patterns or guidelines, supported by abstract meta-theory to contextualise those descriptive patterns
- Not-known (real-time, uncertain): principles, supported by embodied meta-theory to contextualise those normative principles
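The summary above can be sketched as a tiny lookup, purely for illustration – the SCAN domain-names and the mapping itself come straight from the list above, but the function and field names here are my own invention, not part of any published SCAN tooling:

```python
# Illustrative sketch only: maps the two SCAN axes (predictable vs uncertain,
# 'considered' vs real-time) onto the form of 'theory' suggested above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Guidance:
    primary: str   # the main form of 'theory' for the domain
    support: str   # what underpins that form of theory

# (predictable?, real_time?) -> (SCAN domain, typical guidance)
SCAN_GUIDANCE = {
    (True,  True):  ("Simple",      Guidance("rules", "definitions")),
    (True,  False): ("Complicated", Guidance("algorithms", "formulae and standards")),
    (False, False): ("Ambiguous",   Guidance("patterns / guidelines", "abstract meta-theory")),
    (False, True):  ("Not-known",   Guidance("principles", "embodied meta-theory")),
}

def guidance_for(predictable: bool, real_time: bool):
    """Pick the form of theory suggested by the SCAN summary above."""
    return SCAN_GUIDANCE[(predictable, real_time)]

domain, g = guidance_for(predictable=True, real_time=False)
print(domain, "->", g.primary)  # Complicated -> algorithms
```

The point of the sketch is only that all four cells return *some* form of guidance – every domain has its theory, just in a different form.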
One useful SCAN cross-map is in terms of skill-levels:
The type of theory that a trainee would rely on is likely to be very different from the theory that a master would use: yet in both cases it’s still in the same overall domain or continuum of theory – and still theory, too.
Likewise if we do a cross-map to CLA, or Causal Layered Analysis:
The nature of the story will change within each layer of the analysis, but again it’s still all in the same domain-continuum, and all of it is ‘theory’ in the sense of ‘guide to sensemaking and decision-making’.
So to come back to the original syllogism, what this shows us is that standards and suchlike are just one particular class of theory, that would mainly apply in one particular aspect of the overall sensemaking / decision-making space – over towards the ‘control’ side or left-side of the frame, if we’re using SCAN as the descriptor here. Hence to describe standards as ‘theory’ – or, worse, as the only valid form of ‘theory’ – could be more than a bit misleading, especially if the context sits more ‘in’ a different sensemaking-domain.
To give a practical example, consider knitting. Yes, it’s a craft – yet it does still have distinct theory, with its own distinct mix of rules (‘knit one, purl one’), algorithms (which, confusingly, are usually called ‘patterns’), guidelines (how to adapt a pattern for different body-shapes and sizes) and principles (how to turn a ‘mistake’ into a joyous feature of the garment). There are standards – needle-size, thread-size, body-size, and much more – that help in decision-making in the craft, but they don’t determine the craft: that distinction is probably quite important.
We’ll come back to that later, anyway; for now, let’s move on to the second premise.
— “Scaling needs theory”
For once, I’m going to agree strongly with Dave Snowden here: I can’t see how it’s possible to scale without theory.
Whether we’re dealing with control, complexity or ‘chaos’ – uniqueness – we’ll need appropriate theory to match. And whether we’re going to embed the scaling into machines or IT-systems or the like, or engage large numbers of people in an idea or an intent, whatever we do is going to involve and rely on the scaling that’s made available by theory. We need theory in order to be able to scale: the only question is which type of theory applies in each case.
And the answer to that question is much as summarised above: the theory we’d use, and the type of theory we’d use, depends on what we’re trying to do. To use Cynefin terms, theory for the Complicated domain – standards and suchlike – is often significantly different from the complexity-theory we’d use in Complex contexts. For example, ‘truth’ is definable in the former, but more a matter of probabilities in the latter; standards and certainties and ‘provable’ linear-algorithms aren’t much help in complex wicked-problems where the context can change just by the very act of looking at it.
Just as the theory is different in each case, so too is the nature and means of scaling. For the Complicated domain, standardisation is one of the means available to us for scaling: for example, in enterprise-architecture we’d often try to standardise some aspects of language or terminology, or specify particular protocols for use in specific circumstances. Yet as we move more towards uniqueness and uncertainty – the Complex domain in Cynefin, the Ambiguous or Not-known domains in SCAN – then standardisation will often actually prevent us from scaling, because it can’t cope with the types of difference that always occur across larger scales, especially in any human-oriented context. In those kinds of domains, we need meta-theory, meta-frameworks, what we could almost call meta-standardisation – the sort-of-standards of thesauri and suchlike, tactics for how to work where standards alone don’t make sense.
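A thesaurus is perhaps the simplest concrete example of that kind of sort-of-standard: rather than forcing everyone onto one standard term, it maps each group’s local variants onto a shared preferred-term, so that difference is preserved yet still navigable. A minimal sketch, with all the terms and names invented here purely for illustration:

```python
# Hypothetical sketch of 'meta-standardisation' via a thesaurus:
# local variants are kept, but linked to a shared preferred-term.
THESAURUS = {
    "customer": {"customer", "client", "patron", "account-holder"},
}

def preferred_term(local_term: str) -> str:
    """Return the shared preferred-term for a local variant, if one is known."""
    for preferred, variants in THESAURUS.items():
        if local_term.lower() in variants:
            return preferred
    # No mapping: keep the local term as-is - the 'bounded absence of standards'
    return local_term

print(preferred_term("Client"))  # -> customer
print(preferred_term("tenant"))  # -> tenant (no standard imposed)
```

Note that the fallback branch matters as much as the mapping: the scheme standardises where it can, and deliberately leaves difference alone where it can’t.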
To use another SCAN cross-map, this time in terms of skills:
Standardisation largely assumes that methods are derived only from the mechanics of the context – the ‘objective / collective’ elements of the action. But once we acknowledge the contextual and ‘subjective / personal’ nature of skill, and its ability to adapt to difference in order to achieve the desired result, then we also need to acknowledge a contextual balance between methods, mechanics and approaches – that the appropriate methods for each context depend on an appropriate mix of mechanics (‘objective’) and approaches (‘subjective’). Which means that we need appropriate theory both for the mechanics and for the approaches that underpin the respective skill.
To go back to that knitting example, we’d often use standards when working on our own, but also our own judgement as well – which is what makes it a craft. But if we want to scale it up – for a knitting-bee, perhaps, or even for some broad collective such as Teddies For Tragedies – then we’d use a mix of predefined standards (‘objective’) and patterns and variance for individual skill (‘subjective’). In that type of context, the theory includes variance: as a sort-of-standard, it includes both standards and the bounded absence of standards. And it wouldn’t work – wouldn’t scale – unless it did support that variance.
Scaling needs theory: but a lot depends on what we mean by ‘theory’.
Which brings us to that initial inference from the syllogism:
— “Scaling moves away from craftsmanship, towards standardisation”
The simplest way to put this is that there’s a vast amount of ‘It depends…’ here, not least on:
- what we mean by ‘scaling’
- what we mean by ‘craft’ or ‘craftsmanship’
- what we mean by ‘standardisation’
- what we mean by ‘away from’
- what we mean by ‘towards’
If our aim is to create an industrial product, and we need identicality in that product, then yes, we’ll need standards and standardisation. To take a military example, it was almost impossible to scale the supply of armaments when every cannon had a different bore, and rifles were so hand-made that there was almost no interchangeability or substitutability of spare-parts: standardisation was the crucial manufacturing-revolution that gave the Union side the upper hand in the US Civil War, and, later, the Allies in the Second World War.
Yet that use of standardisation only applies for scaling of things that need to be identical – or can be identical. When some things are inherently different – as they are in healthcare, for example, or even in knitting – then misuse or misapplication of standardisation will prevent scaling, because it tries to force things to fit assumptions that don’t actually apply.
I gather that, in talking about ‘Scaling needs theory’, Snowden referred to craft and standardisation in the context of the old Guild structures, and the way that those guilds tended to rely more on tradition and ‘received knowledge’ rather than systematic theory – and hence, yes, tended to block scaling, both for that reason, and also as a means to protect their own power and prestige in the respective industrial context. Those are important considerations, yes, but I tend to view those concerns more in terms of power-problems or related themes such as purported ‘rights’ in a possession-economy – in other words, more about the nature of politics than the nature of theory. From a theory-perspective, what I would look at more is how a ‘guild’-type culture tends to develop naturally whenever there’s a high degree of inherent-variance or ‘unorder’ in the context – because the ‘guild’ is a key social means via which theory can be shared, and hence scaling can be enabled, wherever there is variance of skill, competence or context.
In practice, we see a ‘guild’-type structure develop in the business-context wherever there’s a need for knowledge-sharing and skills-sharing in order to support scaling of some kind of ‘craft’ – or, in more business-oriented terminology, a ‘profession’. (In fact it’s the lack of possibility for overall standardisation that distinguishes a ‘profession’ from a Taylorist-style predefined ‘job’.) We’ll see each profession develop its own ‘body of knowledge’ (BABOK, PMBOK, EABOK and all those other ‘..BOK’ frameworks we’ll see around the business-domains), its own communities-of-interest and communities-of-practice – which represent the key means by which that knowledge and skillset can scale.
Yes, to echo Snowden’s concerns above, we’ll also see the downside of ‘guild’-type structures in some of those cases: blatant power-games and exclusion-tactics, for example, attempts at legally-enforced monopolies, or certification-schemes that might be better described as certification-scams… Yet beyond those problems, yes, standardisation is valid for the ‘certainty’-side of the context – shared core-terminology, for example – but is not valid for the ‘uncertainty’ side: such as the absurdity of using a predefined multiple-choice exam as a means to test for competence at adapting a framework to contextually-dependent needs. In many cases there’s even a real danger of a near-inversion of that initial inference: standardisation moves away from scaling of craftsmanship.
To bring this back to enterprise-architecture, and the need to scale enterprise-architecture such that it can and does become the responsibility of everyone in the enterprise, we need to be very clear about applying the appropriate type of theory for each context. Sometimes we need standardisation; sometimes we do need rules, predefined algorithms, predefined formulae. Elsewhere, though, we’ll need theory in the form of patterns, guidelines, principles – and the meta-theory of how to apply those in different ways as the context itself undergoes change.
Enterprise-architecture is a kind of craft, a dynamic blend of art and science – and needs the right kinds of theory from each of them, in the right mix, in order to scale and to work well. But which theory, you might ask? – which is ‘the right theory’, for each circumstance? Well, that’s where we come back to that much-dreaded phrase ‘It depends…’; and, in turn, why enterprise-architecture is necessarily a ‘craft’ – a ‘craft’ that can indeed expand to any needed scale.