Is Cynefin a cult?
(Following up on the furore from my previous post – somewhat tongue-in-cheek, of course, but with a serious point.)
After Dave Snowden started accusing everyone – especially me – of ‘pseudoscience’ and ‘psychobabble’, I began to worry. What if he’s right? What if everything I do is just pseudoscience, caught up in a cult?
(Oops – another long one: better split it here with a ‘Read more…’ link)
I re-read Beyerstein’s list of characteristics of pseudoscience in Patrick Lambe’s post on “Is KM a pseudoscience?”, and started to worry even more. Here’s that list:
- #1: Isolation – failure to connect with prior and parallel disciplines
- #2: Non-falsifiability – no means to invalidate hypotheses
- #3: Misuse of data – leveraging data out of context or beyond validity
- #4: No self-correction or evolution of thought – often centred round a single ‘thought-leader’
- #5: Special-pleading – the claim that this is a special case that can’t be measured in any other terms
- #6: Unfounded optimism – unrealistic expectations
- #7: Impenetrability – an over-dependence on complicated ideology and obfuscation, or bluster in place of debate
- #8: Magical-thinking – such as “the belief that good things will result from willpower alone”
- #9: Ulterior motives – particularly ulterior motives of a commercial kind
- #10: Lack of formal training – including certification schemes that link back to #4
- #11: Bunker mentality – such as complaints about being ‘misunderstood’ by others, and often linked to #5 and #7
- #12: Lack of replicability of results – especially replicability by others under controlled conditions
For example, I often work at the places where the IT-industry and consulting-industry converge, so I would need to test both of those against that list, putting a check-mark against each criterion that they fail:
- #1? Check – not always, but way too often for comfort.
- #2? Check – ditto.
- #3? Check – often. Usually from myopia and questionable competence (“I guess we failed to take enough account of the human factors”: BPR), though occasionally from a rather more deliberate ‘sexing-up’ of the statistics to prop up the purported position (‘In Search of Excellence’ etc).
- #4? Check – often. (All those management fads…)
- #5? Check – again, too often for comfort. (Real business-case for IT-only KM or Enterprise 2.0, anyone?)
- #6? Check. (In fact rarely anything other than ‘unfounded optimism’ – at the start of a project, anyway.)
- #7? Check – lots.
- #8? Check – ditto.
- #9? Check. (Rarely anything else, perhaps? – BPR, anyone? ERP? the dot-com bubble?)
- #10? Check. (Look at most enterprise-architecture training, for example.)
- #11? Check. (It’s usually called ‘the IT/business divide’ – or worse, of course.)
- #12? Check. (Often we don’t want to replicate the results that we actually get…)
And so on, and so on. According to that review, it looks like almost the entire industry is based on little more than pseudoscience. Oops.
And we’ve already seen from Patrick Lambe that knowledge-management is perilously close to a pseudoscience too. Also ‘Oops’, I guess.
But what about Cynefin? Surely that can’t be a cult – especially given Dave’s position on pseudoscience and the like. Better go through that checklist again, just to make sure:
- #1: Isolation? Plenty of reference to cognitive-science and suchlike – but I don’t see any evidence of cognitive-science etc connecting back to Cynefin. Looks suspiciously like spurious science to me, then. Oops.
- #2: Non-falsifiability? References to ‘retrospective causality’ in the Complex domain look a bit questionable in this regard; likewise much of the definitions of the Complex and Chaotic domains, and the interactions therein. Oops.
- #3: Misuse of data? Ditto, it would seem. Oops.
- #4: No self-correction? There is a genuine community-of-practice here, but it seems often to be silenced by a single figurehead who claims to hold ‘the only real truth’ about the discipline. Oops.
- #5: Special-pleading? Tends to be very good about challenging ‘pattern-entrainment’ in others, but not so good at applying the same analysis to itself. Claims to be a ‘sense-making’ framework, but the only way to test the ‘sense’ that’s derived is in terms of the framework itself. Kinda circular, really. Oops.
- #6: Unfounded optimism? Probably. Best let that one pass as only a minor ‘oops’.
- #7: Impenetrability? Lots. There’s the ‘ganglionic cross’ with its cryptic markings, and the insistent demand that all devotees acknowledge that there are five domains, not four; also near-religious wars as to what each of the domains ‘really means’. Oops.
- #8: Magical-thinking? All we really know is that Dave is almost obsessively against it – which by the usual psychological games probably means there’ll be lots. Complicated pathways between domains that somehow magically change things might be a good example. A bit uncertain, perhaps, but very likely to be ‘oops’.
- #9: Ulterior motives? Lots. Celebrity-status, serious-money consultancy-fees, training-fees (see #10), sales of software that can only be used by registered practitioners (see #10), and consumable-supplies that can only be purchased from the central organisation: sounds a bit like Scientology, doesn’t it? We’d have to be fair and remind ourselves that that applies to much of the IT-trade and consultancy-trade too, but even so that’s a really big ‘oops’.
- #10: Lack of formal training? Would-be practitioners generally need some serious consultancy-time under their belt, but the Cynefin training itself is defined, run, certified and validated only by the central organisation. In other words, worryingly circular and self-referential. Kinda sounds like NLP, doesn’t it? Oops.
- #11: Bunker-mentality? Probably not in most cases, but it’s notable that the figurehead has an unfortunate habit of fulminating about anything else that can’t be forced to fit within the preferred assumptions – such as denigrating Six Sigma as ‘Sick Stigma’, and so on, regardless of where or how it’s used. So most practitioners probably okay, but the figurehead probably not. Oops.
- #12: Lack of replicability? Lots. By definition, pretty much anything in the nominal Complex or Chaotic domains is going to have limited replicability. (There’s good replicability in the Simple and Complicated domains, of course, but also no real need for Cynefin-style ‘sensemaking’ in those two domains, so we can’t really claim that one as a plus.) Just about any consulting-assignment will be in part unique, too, so again little to no replicability there, again by definition. Also, as Dave puts it, “every diagnostic is an intervention”, so the very act of enquiry changes the conditions of the experiment, impacting on any possible replicability. And if Cynefin experiments are only repeatable by Cynefin practitioners, and everything has to be assessed in Cynefin terms, it somewhat blocks the possibility of proper third-party ‘outside’ review – kinda like the worst of ‘armchair Freudianism’, for example. Another big ‘oops’.
So is Cynefin a cult? Apparently the answer is ‘Yes’, because according to Beyerstein’s criteria, it seems to fail the ‘pseudoscience’ test on just about every count. Almost the only place where it doesn’t fail, in fact, is in the two logic-based domains, Simple and Complicated, where Cynefin isn’t much use anyway. Either way, definitely ‘oops’.
[Brief note to Dave: yup, I’m well aware that that assessment above ain’t exactly rigorous and peer-reviewed and the rest, but it’s a darn sight more rigorous and honest than the cheap hatchet-job you tried to do on me over the past couple of days… yes, I am indeed still angry over that…]
But that result is kind of odd, because most of us find that Cynefin is a very useful tool in consulting practice – especially in dealing with what Cynefin describes as the Complex and Chaotic domains. Hmm. Seems like something doesn’t quite match up here, does it? And we’re left with two probable reasons for that mismatch:
- either Beyerstein’s criteria do test well for pseudoscience in areas where simple Newtonian-style logic applies, but tend to break down as soon as we hit anything closer to real-world chaos – so we’ll need something other than Beyerstein and the like to validate quality in those areas
- or the whole idea of ‘pseudoscience’ is a red-herring that can be used by superannuated academics to bully others and prop up some vain and misguided ‘mediaeval delusions’ of their own ‘superiority’, in areas where their putative expertise in formal ‘proof’ can no longer apply, because by definition the ‘normal’ rules of replicability and the like are no longer reliable once we move into the Disorder, Complex or Chaotic domains
Both of these could be true, of course. But let’s be polite, and assume that it’s only the first of these: Beyerstein is probably useful in the Simple and Complicated domains, but we’ll need something else outside of that simplistic rule-based world.
But how can we tell when we’re outside of the rule-based world? And what can we use in place of Beyerstein and its ilk?
For the former, the key criterion is, once again, repeatability and replicability. In both the Simple and Complicated domains, there’s always an identifiable ‘right answer’, and if we do an experiment in the same way, we’ll always end up at the same results. (A few special-cases such as symmetries in complex-math give two or more ‘right answers’, but the set of answers in that case is still identifiable, so the basic principle remains sound.) In short, it’s repeatable, which means it’s also replicable. There are definable, straightforward (more or less!) and linear sequences of cause and effect, so if we don’t get the same right-answer under the same conditions, something’s wrong – hence falsifiability. Either true, or false: hence it makes sense to describe Cynefin’s ‘Simple’ and ‘Complicated’ as the two ‘truth’ domains.
In the other Cynefin domains, things get kinda messy. The Disorder domain is where we start, before we do any sensemaking, but it’s probably best to leave it out of this discussion for now. Yet in the other two domains – Complex and Chaotic – doing the same thing in the same way does not guarantee the same results. In the Complex domain, any causality will at best become apparent only after the event (a context which Dave Snowden describes as ‘retrospective causality’); in the Chaotic domain, where everything is inherently unique in some way, even the concept of causality itself makes no sense, by definition. In effect, one way that we know we’re not in the ‘truth’ domains is that it’s not repeatable. Clearly Beyerstein isn’t going to be much use to us here.
But there’s a nasty corollary that follows from this. If one test of the Complex or Chaotic domains is that it’s not repeatable, how can we tell the difference between that and plain ordinary bad-science in the ‘truth’ domains? – because that’s not repeatable either. Following the logic of this, we discover quite quickly that there’s no simple ‘truth’-based test that could distinguish between the two, because in both cases doing things the same nominal way may lead to different answers. Beyerstein in this instance would not only be unhelpful, but could be actively misleading, always labelling the workings of the Complex domain as ‘wrong’ and therefore ‘pseudoscience’. Which it might be, or might not be, but there’s no way to tell: even the concept of ‘true’ versus ‘false’ doesn’t make much sense in that kind of context. Which is a problem.
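To make that corollary concrete, here’s a minimal sketch in Python – my own illustration, not anything from Beyerstein or from Cynefin itself – of a naive ‘same procedure, same result?’ check. It happily certifies a deterministic Complicated-domain process, but it cannot tell a genuinely Complex process apart from a sloppy experiment, because from the outside both just look non-repeatable:

```python
import random

def replicable(experiment, runs=5):
    """Naive Beyerstein-style check: same procedure, same result every time?"""
    results = [experiment() for _ in range(runs)]
    return all(r == results[0] for r in results)

# A Complicated-domain process: deterministic, so the check certifies it.
def lookup_table():
    return 42  # same cause, same effect, every time

# From the outside, a genuinely Complex process and a sloppy 'truth-domain'
# experiment look exactly the same: same nominal inputs, varying outputs.
def emergent_or_just_sloppy():
    return random.choice([41, 42, 43])

print(replicable(lookup_table))             # True: repeatable, hence falsifiable
print(replicable(emergent_or_just_sloppy))  # almost always False: Complex, or just bad science?
```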
But instead of trying to cling on to a notion of ‘true’ versus ‘false’ in a context where it won’t and can’t work, what does make sense is to use some concept of value. In other words, the test-criterion we need in the two ‘value’-domains Complex and Chaotic is usefulness, not ‘truth’.
Next question: what determines ‘usefulness’? By definition this is always going to be somewhat subjective and context-dependent – but that doesn’t mean that it’s a random free-for-all. Feyerabend’s anarchic dictum “anything goes” does indeed need to hold sway here, but it’s a disciplined ‘anything goes’ – a considered, functional form of anarchy, if you like (or even if you don’t). In turn, this brings us into the well-understood (if, by its nature, not necessarily well-defined) realm of quality-management. Which brings us to all those tools that Dave has so happily despised, such as Six Sigma and the like.
The problem – and again for some impenetrable reason Dave doesn’t seem to like this fact – is that each of these tools is context-dependent. Six Sigma, for example, is all about managing quality in terms of defects per million opportunities: so it only makes sense to use Six Sigma if we have millions of near-identical events, which in practice places us in Cynefin’s Simple domain. If we’re not in the Simple domain, don’t use Six Sigma: simple, really. No need to make a song-and-dance about it and denigrate it as ‘Sick Stigma’, because it’s perfectly fine where it does work. Same with every other tool and technique: we switch between them according to context.
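For the record, here’s a minimal worked sketch of the arithmetic that Six Sigma rests on: defects per million opportunities (DPMO), plus the conventional sigma-level conversion with its 1.5-sigma shift. The numbers only mean anything when the event-counts are huge and the events near-identical – which is exactly the Simple-domain condition described above:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit=1):
    """Defects per million opportunities - the core Six Sigma metric."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Short-term sigma level, using the conventional 1.5-sigma shift."""
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

# 34 defects across ten million near-identical events: the canonical 'six sigma'.
d = dpmo(defects=34, units=10_000_000)
print(round(d, 1), round(sigma_level(d), 1))  # 3.4 DPMO, ~6.0 sigma
```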
Within any given context, I’ve also found it useful to compare against a relatively simple yardstick of effectiveness. In my own practice, for the past decade or so, I’ve used a frame which describes effectiveness in terms of five distinct dimensions:
- efficient: makes the best use of available resources – typically the least wasteful use
- reliable: can be relied upon to deliver the required results, optimised over the required timescale
- elegant: aligns best with simplicity, clarity, ergonomics and other ‘human factors’
- appropriate: ensures that the delivered results are ‘on purpose’
- integrated: assists in bringing everything to work together as a unified whole
All of these need valid metrics: and in general, any appropriate metric will do – even if sometimes it’s just a 1-5 subjective scale, as I use, for example, in my SEMPER organisational-capability diagnostic. Once again, in effect ‘anything goes’ here: the selection-criteria for metrics revolve around effectiveness, not ‘truth’.
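As an illustration only – the five dimension-names come from the list above, but the scoring and the ‘weakest link’ reading are assumptions of my own for this sketch, not the SEMPER diagnostic itself – a 1-5 scorecard might look something like this:

```python
from dataclasses import dataclass

@dataclass
class EffectivenessScore:
    """One subjective 1-5 rating per dimension, as described above."""
    efficient: int    # best use of available resources
    reliable: int     # delivers required results over the required timescale
    elegant: int      # simplicity, clarity, ergonomics, 'human factors'
    appropriate: int  # delivered results are 'on purpose'
    integrated: int   # everything works together as a unified whole

    def weakest(self) -> str:
        """Assumption: overall effectiveness is limited by the weakest dimension."""
        ratings = vars(self)
        return min(ratings, key=ratings.get)

score = EffectivenessScore(efficient=4, reliable=3, elegant=2, appropriate=4, integrated=3)
print(score.weakest())  # 'elegant': where to focus improvement effort first
```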
And ‘truth’ approaches – such as Dave has so aggressively promoted in the comments to the previous post and elsewhere – really aren’t much help in deciding metrics and models here, because ‘truth’ only applies to part of the context, as we’ve seen above. True/false logic can’t lift itself by its own bootstraps: it can work within a set of assumptions and postulates, but it can’t be used to define or validate them. (Attempting to do so is known as ‘induction’, otherwise known as ‘cheating’ – or ‘pseudoscience’, of course.) So to make it work we have to jump up a step to a kind of ‘meta-level’, which, as I said in the previous post, might be called ‘nonrational’ or ‘arational’ or ‘metarational’, but I prefer to use the good old classic term ‘magical’. Which Dave doesn’t like, but that’s too bad, bluntly. (He also doesn’t like the alternate term ‘technology’, so he’ll just have to lump it, really.)
To help in deciding metrics and models and the like, we need to run the whole thing reflexively and recursively. (I’ve described in some depth how to do this in whole-of-enterprise architecture in my book Real Enterprise Architecture, if you’re interested.) The Cynefin frame is useful for this: we run it backwards, so to speak, to help us identify what needs to be handled in an appropriate manner for Simple, Complicated, Complex or Chaotic.
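As a hypothetical sketch of what ‘running Cynefin backwards’ might look like, here’s one way to route each concern to its domain. The routing questions are my own illustrative assumptions; the response-tactics (sense-categorise-respond and the rest) are the standard Cynefin ones:

```python
def cynefin_route(cause_effect_known_in_advance: bool,
                  needs_expert_analysis: bool,
                  pattern_visible_in_retrospect: bool) -> str:
    """Route a concern to the response-tactic of its apparent Cynefin domain."""
    if cause_effect_known_in_advance:
        if needs_expert_analysis:
            return "Complicated: sense, analyse, respond (good practice)"
        return "Simple: sense, categorise, respond (best practice)"
    if pattern_visible_in_retrospect:
        return "Complex: probe, sense, respond (emergent practice)"
    return "Chaotic: act, sense, respond (novel practice)"

# e.g. a concern whose causality only ever makes sense after the event:
print(cynefin_route(False, False, True))  # -> Complex: probe, sense, respond ...
```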
Even more useful than Cynefin for this, as mentioned in the previous post, is the frame that we developed for Disciplines of Dowsing. And as you’ll see from the two-page reference-sheet, the reason why it’s even more useful is that it not only describes characteristics to help us identify which mode or domain we’re in, but also how to recognise when we’re losing discipline within that domain, and reasons and tactics to move from one domain to another. For example, if we’re in the ‘Scientist’ domain (i.e. Cynefin ‘Complicated’ domain), and we start getting emotional and defensive or aggressive about it, that warns us straight away that we’ve allowed ourselves to drift towards the ‘Priest’ domain (Cynefin ‘Simple’ domain), and we either need to get the emotion out of it to return to the Scientist, or else intentionally switch to the Priest, or one of the other domains, as appropriate. The result is that we maintain discipline throughout the whole space – not solely in the ‘truth’ domains, as with Beyerstein and the like.
I’m well aware that dowsing and suchlike may feel a bit uncomfortable for some folks here, but unfortunately it’s the only example I have available right now. (There’s another variant in the Berg Time & Mind article, showing how to balance subjective disciplines in archaeological research with the more conventional ‘objective’ disciplines, but it’s essentially the same as in the reference-sheet.) Likewise the examples in the useful set of ‘seven sins of dubious discipline’ in the Disciplines book mostly relate to dowsing and archaeography. So if you’re working in knowledge-management or enterprise-architecture, for example, you’ll probably need to do some significant translation to make it work in your own work-context. But I assure you that it is worth the effort: the result makes it a heck of a lot easier to work out what’s going on in a context – especially the kind of dysfunctional, chaotic, blame-filled business-contexts that we so often have to deal with these days – because it helps to ensure that discipline of an appropriate kind is kept in play at all times.
So, to come back to the original question, is Cynefin a pseudoscience, a cult? Short answer, as we’ve seen above, is “probably not” – but you’ll probably need a little bit of magic to help you prove it! 🙂
Constructive comments and suggestions welcomed, of course – and many thanks for sticking with me this far on this.
Tom,
I came across your blogs on Magical-thinking and knowledge-management and Is Cynefin a cult? this morning and greatly enjoyed reading them. Very thought provoking.
I think that EA processes, EA frameworks and EA tools are needed to make sense of the Complex domain, whereas most organisations would like to think that they can get away with using the methods and tools they use for the Simple domain.
I also liked the quote: ‘Most self-styled ‘scientists’ treat that word in the same way as IT-centric ‘enterprise’-architects treat business-architecture and beyond: namely a randomised, undifferentiated grab-bag of all the bits of reality (or business-reality, in IT-EAs’ case) that they don’t understand. And then complain that it’s a mess, and doesn’t make sense in their own chosen terms, and therefore doesn’t exist.’
I suppose the next question is ‘is EA a pseudoscience or cult too?’ and to reach for the checklist again … 🙂
Many thanks for this, Adrian.
Agree entirely with your point about the tendency/desire of many organisations to use Simple-domain tactics (rules etc) inappropriately within EA. A good example is reference-architectures. If we treat a reference-architecture as a Simple-domain rule, all we get is a mass of fights, especially during implementation, from which no-one wins and everyone loses, in every sense. But if we view it in a Complex-domain sense, as a ‘conversation-starter’ that outlines a preferred direction – and we then use Cynefin recursively to identify which parts of the reference-architecture are negotiable and non-negotiable, and why – then we have a much better chance of a successful outcome all round.
As for “Is EA a pseudoscience or cult?”, short answer is “of course it is!” 🙂 The real concern, though, is about how to make that ‘cult’ *useful*, rather than worrying about whether it is or isn’t a ‘cult’.
I looked up “pseudoscience” in Wikipedia. “Pseudoscience is a methodology, belief, or practice that is claimed to be scientific, or that is made to appear to be scientific, but which does not adhere to an appropriate scientific methodology…” And then a few sentences later: “Professor Paul DeHart Hurd argued that a large part of gaining scientific literacy is “being able to distinguish science from pseudo-science such as astrology, quackery, the occult, and superstition”.”
Clearly not all of these (particularly “superstition”) are claimed to be scientific, so this is inconsistent. “Pseudoscience” would seem to be a term that’s not well defined and therefore easy to use in a straw man argument. In fact, the idea of “pseudoscience” might perhaps itself be considered pseudoscience by Beyerstein’s list. The list, as far as I can tell, is a set of characteristics that are quite common in human thinking, easy to ascribe to others and to ignore in oneself. Probably any sufficiently radical new theory is likely to fail the test if these criteria are applied.
Agreed – very good points all the way through (though we should perhaps be unsurprised at inconsistencies in a Wikipedia article? 🙂 )
Two points come up for me on reading this. One is that rider about “but which does not adhere to an appropriate scientific methodology”. I agree strongly with what the Wikipedia-writer is trying to say, but I’m wary about one point: who determines what is an ‘appropriate’ methodology, and for what reasons? There’s an immediate risk of circularity here, because the methods used to assess the appropriacy of the methodology may themselves not be appropriate. There’s also the ‘bootstrap’ problem I mentioned above: we can’t use formal-logic *on its own*, because by definition it has no valid logical means to test its own assumptions.
What follows from this is that the term ‘pseudoscience’ can all too easily be used (or misused, rather) in much the same way as terms like ‘un-American’ or ‘heretical’. If there’s no truly objective means to determine appropriacy, it may be that all we’re left with is subjective beliefs masquerading as ‘objective’, which can lead to some very serious problems indeed – witness the excesses of McCarthyism or the Inquisition. This risk is exacerbated when a) there is a high degree of lockstep conformism in the respective culture (note that ‘heresy’ literally means ‘to think differently’ – it’s about purported ‘wrongness of thought’ rather than ‘wrongness’ of action), and b) the lockstep conformity is believed to contribute to and be ‘necessary’ for socially-desirable results (McCarthyism: the perfect ‘free’ society? Inquisition: bringing humanity closer to the Kingdom of God? ‘applied science’: freedom of opportunity and freedom from want?).
Because of the over-inflated social value placed on science, and the generally poor awareness of how science is actually practised (i.e. that although results must always be linked in logic, most *process* is inherently non-logical) this also means that (mis)use of ‘pseudoscience’ as a pejorative provides an easy gambit for followers of scientism to attack others – including those who genuinely *are* working within a true scientific frame. (Scientism is a form of religion based on an irrational belief in an over-idealised version of ‘science’, characterised by refusal to test one’s own assumptions – the assumptions are deemed to be ‘true’ *because* they claim to be ‘science’. For example, self-styled ‘Skeptics’ are frequently followers of scientism; those who are, are usually very poor scientists, or not scientists at all, such as the illusionist James Randi.) The attack uses both direct denigration and a process called ‘third-party abuse’ (see my redesign of the Duluth domestic-violence model at http://www.tomgraves.org/duluth ) – in this case the societal belief in the primacy of science is used to leverage attacks against the ‘unbeliever’. (Technically this would not be abuse but violence, as the aim is to ‘prop Self up by putting Other down’.)
None of those ‘game-plays’ resolve the problem that we do still need a test for appropriacy of methodology. For example, many concepts that might be labelled in the West as ‘superstition’ – such as conversations with rocks or plants – are regarded as norms in many other societies; given the apparent usage *within those cultural norms* in identifying now-proven pharmaceutical properties of plants etc, what means could we use to verify and enhance the ‘appropriacy’ of the respective ‘method for conversing with plants’, in that example? (Ref: Wade Davis talk on TED at http://www.ted.com/talks/wade_davis_on_endangered_cultures.html ) Trying to assess it in science-versus-pseudoscience terms makes no sense, because it isn’t science (or rather, science in the Western sense) in the first place; and although conventional ‘scientific’ techniques are not able as yet to distinguish between the respective plants, the results are replicable and pharmaceutically valid – so there’s clearly some kind of methodology involved, which means that that methodology needs to be validated somehow.
Developing methods to tackle that real meta-methodology problem is what I’ve been aiming to address in these past couple of posts. The ‘pseudoscience’ game-plays didn’t help – in fact they blocked any possible progress for a couple of days whilst I dealt with the resultant fallout – whereas you and Adrian and others have really contributed to this work, and I hope you too have gained value from it. Many thanks indeed, anyway.
@Tom G
My favourite example of a reference architecture is FEAF (http://www.whitehouse.gov/omb/e-gov/fea/).
It does provide a good starting point for a conversation about what an organisation should be trying to achieve with an EA reference architecture.
I agree that making EA useful and effective is the main challenge – it hasn’t yet reached the tipping point where EA’s usefulness is as self-evident as, for example, UML’s is now.
And for the usefulness of an Enterprise Architect to be as clearly valued as that of a Solution Architect is currently. We’re all journeymen on a long journey.
@Tom G
Well, thank you as well. Actually this is stuff that I’ve been thinking about for years and have never gotten around to writing about. You’re right, “appropriate” is a revealing term, because, as you point out, “the methods used to assess the appropriacy of the methodology may themselves not be appropriate”. Worse yet, the methods are typically not even explicit and therefore never subject to critical awareness. Even the idea that there are or might be such methods seems not to have occurred to many who like to talk about “pseudoscience”. In the quote from Paul DeHart Hurd, even the appropriateness issue itself is implicit. If “quackery” and “superstition” are not (always) claimed to be scientific, the pseudoscience accusation must rest on an implied value judgment that the subjects in question *should be* ruled exclusively by science.
Philosophers of science have discussed this issue for much of the last century. My impression is that they’ve largely abandoned it as unproductive. See the wikipedia article for a brief summary at http://en.wikipedia.org/wiki/Demarcation_problem. If you want a rigorous discussion, try Laudan’s 1983 paper (reference 9 in the wikipedia article).
It appears that the definition of science is more Complex than Complicated…
It’s more important now than ever to understand the difference between pseudo-science and well-grounded methods. Good critical thinking and reflective behaviour are the order of the day. By reflective I mean the ability to take into account multiple points of view, and to relate them to the most independent and reliable evidence possible.
Cynefin seems to be grounded in biology. Compare that with neuro-linguistic programming, which most people would recognize for being about as pseudo-scientific as it sounds.
Not only does it have nothing to do with neuroscience or linguistics, it follows the Scientology con of claiming to program brains. The submodality concept of NLP is a similar piece of pseudo-science to Scientology’s engrams. The rituals involved are near-identical.
The only reason NLP and Scientology exist is to feather the nests of pseudo-scientific hucksters.
http://knol.google.com/k/neurolinguistic-programming#
http://www.dailymotion.com/video/xdwl8h_characteristics-of-pseudoscience_tech
http://www.youtube.com/watch?v=POpldW7BELQ
http://www.youtube.com/watch?v=B3AZQPNyGyQ
The above links are all used at university level (by me for one) for teaching undergrads the difference between science and pseudo-scientific quackery.
Live and learn
Very good article!
Either Snowden admits Cynefin is pseudo-science after all (not going to happen), or he “rightly” states that it is proper science …
BECAUSE of using frameworks to achieve results (what the above author calls ‘value’ or ‘effectiveness’, with 5 wonderfully simple metrics). Therefore NLP is in the same category as Cynefin: both are
1. frameworks
2. that seek to achieve results (NLP achieves the most incredible results, and is by far the biggest consultancy practice in the world – with Tony Robbins, for example, getting 12,000 persons per visit to London, and similar crowds worldwide; others like Paul McKenna, Bandler etc. also gain such huge crowds).
It works, in that people are transformed. It’s distinguished from a religious cult because people do not pray to Tony/Bandler but admire them simply as coaches. Some (or many) try to out-compete them!
Summary:
Cynefin is just like NLP. It’s a framework. It’s very useful.
Cynefin “rationalises” itself very well [fill in the purported science here; fill in the purported justification here] – just like NLP [fill in the purported science here]
Dave and Bandler should do talks together. What say you Dave?