And more ‘Cynefin-like’ cross-maps (‘Beyond-Cynefin’ series)

(This is part of an ongoing series that explores alternate uses of a generic conceptual categorisation originally described in the well-known Cynefin diagram. This discussion is not about the formal Cynefin Framework; for details on the definition and use of the Cynefin framework proper, please contact Dave Snowden at Cognitive Edge. The term ‘beyond-Cynefin’ is here used solely as a placeholder to indicate this separation of interests.)

Here’s another collection of ‘Cynefin-like’ cross-maps that I’ve found useful for sensemaking in enterprise-architecture and related work:

  • ISO-9000 quality-model
  • Skill-levels
  • Automated versus manual processes
  • Asset-types
  • Data, information, knowledge, wisdom

ISO-9000 quality-model

A fairly straightforward cross-map to something that’s usually presented as a vertical stack but actually makes more sense in a Cynefin-style layout.

ISO-9000 quality-system

A work-instruction defines Simple rules that apply to a specific context. In Zachman-framework terms, it provides the row-4 detail-level What, How, and Who that apply at a specific When-event, with the Where usually defined in more generic terms (e.g. any location that uses a specific machine). The underlying Why is usually not specified.

When anything significant is changed – for example, a new version of software, or a new machine – we move ‘upward’ to the procedure to define a new work-instruction for the changed context. This accepts that the world is more Complicated than can be described in simple rules, yet is still assumed to be predictable. The procedure specifies the Who in terms of responsibilities, and also far more of the underlying Why – the row-3 ‘logical’ layer, in Zachman terms.

When the procedure’s guiding reasons and responsibilities need to change, we move upward again to policy. This provides guidance in a more Complex world of modal-logic: in requirements-modelling terms, a more fluid ‘should’ or ‘could’ rather than the imperative ‘shall’. The policy describes the Why for all its dependent procedures – the row-2 ‘conceptual’ layer, in Zachman terms (though ‘relational’ might be a more accurate term here, as we’ll see from other cross-maps).

When the ‘world’ of the context changes such that the fundamental assumptions of current policy can no longer apply, we turn to vision. This is a core set of statements about principles and values that in effect define what the enterprise is. And because this vision should never change, it provides a stable anchor in any Chaotic context – the row-1 ‘contextual’ layer, in Zachman terms (though again ‘aspirational’ might be a more useful term here).

(Because a vision for an organisation or business is literally fundamental to the entire enterprise, it is essential to understand that it is not trivial, and it is definitely not a mere marketing-ploy. For more on this, see my presentations ‘What is an enterprise?’ and ‘Vision, role, mission, goal’ on Slideshare.)

Note that in some ways this cross-map is the exact opposite of the ‘Repeatability and “truth”’ cross-map in the previous post: there, the purported ‘universality’ of a given ‘truth’ increases as we move from Chaotic to Simple, whereas here the values become more general and broader in scope as we move from Simple to Chaotic.
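
(By way of illustration only: if we wanted to encode this work-instruction / procedure / policy / vision stack for some sensemaking tool, it might look something like the sketch below. The names, fields and structure here are my own illustrative assumptions, not anything defined in ISO-9000 or the Zachman framework themselves.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QualityLevel:
    """One level in the ISO-9000-style quality-document stack."""
    name: str          # quality-document type
    domain: str        # Cynefin-like domain it cross-maps to
    zachman_row: int   # Zachman row (1 = contextual ... 4 = detail)
    specifies: str     # what this level mainly defines

# Ordered from most concrete (row 4) up to most fundamental (row 1)
QUALITY_STACK = [
    QualityLevel("work-instruction", "Simple",      4, "what/how/who at a specific when"),
    QualityLevel("procedure",        "Complicated", 3, "responsibilities, more of the why"),
    QualityLevel("policy",           "Complex",     2, "why, as modal 'should'/'could' guidance"),
    QualityLevel("vision",           "Chaotic",     1, "principles and values: what the enterprise is"),
]

def escalate(level_name: str) -> QualityLevel:
    """When the context changes beyond what the current level can handle,
    move 'upward' one level; vision is the stable anchor at the top."""
    names = [level.name for level in QUALITY_STACK]
    index = names.index(level_name)
    return QUALITY_STACK[min(index + 1, len(QUALITY_STACK) - 1)]

# e.g. escalate("procedure") returns the 'policy' level
```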

Skill-levels

This cross-map links to a well-known and very useful set of guidelines on the amount of time that it takes to develop specific levels of skill.

Skill-levels

The ‘trust in capability’ spectrum shown here is actually an inverse of the amount of supervision needed both to compensate for lack of skill and to shield the person from the consequences of real-world complexity and chaos.

A trainee can be ‘let loose’ on Simple tasks after about 10 hours of practice (a 1-2 day training-course).

An apprentice will begin to be able to tackle more Complicated tasks after about 100 hours of practice (2-4 weeks); most of those tasks, however, will still need to be insulated from real-world complexity.

A journeyman will begin to be able to tackle more Complex tasks that include inherent uncertainties after some 1000 hours of practice (6 months full-time experience). Typical uncertainties include variability of materials, slippage of schedules, and, above all, people. Traditionally there is an intermediate point within the 1000-10000 hour range at which the person is expected to go out on their own with only minimal mentoring: in education this is the completion of the bachelor’s degree, whilst in traditional technical training this is the point at which the apprentice becomes qualified as a literal ‘journeyman’ or ‘day-paid worker’.

A journeyman should reach master level after about 10,000 hours (5 years) of practice – the traditional point at which a journeyman was expected to produce a ‘master-piece’ to demonstrate their literal ‘mastery’ in handling the Chaotic nature of the real world. This is also still the typical duration of a university education from freshman to completion of a master’s degree.

Skill should continue to develop thereafter, supported by the peer-group. Building-architects, for example, often come into their prime only in their 50s or later: it really does take that long to assimilate and embody the vast range of information and experiences needed to do the work well. Hence there is yet another heuristic level of 100,000 hours or so (more than 50 years) – which is probably the amount of experience needed to cope with true Disorder.

For more details, see the article ‘10, 100, 1000, 10000‘, on my Sidewise weblog.

This additional cross-map for skills (using the old Cynefin labels of ‘Known’ and ‘Knowable’ for ‘Simple’ and ‘Complicated’ respectively) shows why this isn’t as straightforward as a simple linear stack. In the early stages of skills-development we in effect pretend that each context is predictable, controllable, reducible to some kind of ordered system; but at some point in the apprenticeship there’s a crucial stage at which we demonstrate that the world is inherently uncertain, inherently ‘unordered’. In the real world we can learn to direct what happens, but it can never actually be controlled – a distinction that is sometimes subtle but extremely important, and actually marks the transition to true skill.

As indicated in the cross-map above, there are fundamental differences in worldview on either side of that transition. For more details on this, and on the overall skills-learning process, see the article ‘Surviving the skills-learning labyrinth‘ on my Sidewise weblog.
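
(Again purely as an illustration: treating those hours-of-practice figures as order-of-magnitude heuristics only, the cross-map could be sketched in code something like this – the function and the ‘elder’ label for the 100,000-hour level are my own assumptions.)

```python
# Heuristic hours-of-practice thresholds - order-of-magnitude markers only,
# not hard boundaries
SKILL_LEVELS = [
    (10,      "trainee",    "Simple"),
    (100,     "apprentice", "Complicated"),
    (1_000,   "journeyman", "Complex"),
    (10_000,  "master",     "Chaotic"),
    (100_000, "elder",      "Disorder"),  # 'elder' is my own placeholder label
]

def skill_level(hours_of_practice: float) -> tuple[str, str]:
    """Return the heuristic (level, domain) that someone with this much
    practice can begin to tackle; below 10 hours, full supervision applies."""
    result = ("novice", "none as yet")
    for threshold, level, domain in SKILL_LEVELS:
        if hours_of_practice >= threshold:
            result = (level, domain)
    return result

# e.g. skill_level(1_200) -> ("journeyman", "Complex")
```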

Automated versus manual processes

This cross-map is a logical corollary from the skills-maps above, though it also has cross-links with the ‘Asset-types’ map below. It’s reasonably straightforward, but has extremely important implications for systems-design.

Options for automation

Physical machines follow Simple rules – the ‘laws of physics’ and the like. The Victorians in particular did brilliant work exploring what can be done with mechanical ingenuity – such as Babbage’s ‘difference engine’, or, earlier, Harrison’s chronometer – but in the end there are real limits to what can be done with unassisted machines.

Once we introduce real-time information-processing, algorithmic automation becomes possible, capable of handling a much more Complicated world. Yet here too there are real limits – most of which become all too evident when system-designers make the mistake of thinking that ‘complexity’ is solely a synonym for ‘very complicated’.

As with skills-development, there is a crucial crossover-point at which we have to accept that the world is not entirely repeatable, and that it does include inherent uncertainties. The most important breakthrough in IT-based systems here has been the shift to heuristic pattern-recognition – though there are real dangers, especially in military robotics, that system-designers will delude themselves into thinking that this is as predictable as in the Complicated contexts. Instead, to work with the interweaving relational interdependencies of this Complex domain – especially the real complexities of relations between real people – the best use of automation here is to provide decision-support for human decision-making.

Within a true Chaotic context, by definition there is little or nothing that a rule-based system can work with, since – again by definition – there are no perceivable cause-effect relationships, and hence no perceivable rules. The only viable option here is a true expert skills-based system, embodied in a real person rather than an IT-based ‘system’, using principles and aspirations to guide real-time decision-making. One essential point here is that there is no way to determine beforehand what any decision will be, nor how it will be made. Although there are indeed a very small number of IT-based systems that operate in this kind of ‘world’ – such as those based on ‘genetic-programming‘ concepts – we have no real certainty at the detail-level as to how they actually work!

Note that most – perhaps all – real-world contexts include a mix of all of these domains. This is why it’s essential that any real-world system provides procedures for escalation and de-escalation: moving ‘up’ from Simple to Complex to handle inherent-uncertainty via human skills, and ‘down’ from Complex to Simple to make best use of the reliability and predictability of machines.
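
(A minimal sketch of that escalation/de-escalation pattern, under my own illustrative assumptions – the handler names and the dict-based ‘task’ are placeholders, not any specific framework:)

```python
from enum import Enum

class Domain(Enum):
    SIMPLE = 1       # physical machines following fixed rules
    COMPLICATED = 2  # algorithmic, IT-based automation
    COMPLEX = 3      # heuristic pattern-recognition plus human decision-support
    CHAOTIC = 4      # expert human skill, guided by principles and aspirations

# Each handler returns a result, or None if this level cannot cope
HANDLERS = {
    Domain.SIMPLE:      lambda task: task.get("rule_result"),
    Domain.COMPLICATED: lambda task: task.get("algorithm_result"),
    Domain.COMPLEX:     lambda task: task.get("human_decision"),
    Domain.CHAOTIC:     lambda task: task.get("expert_judgement"),
}

def handle(task: dict, domain: Domain = Domain.SIMPLE):
    """Escalate 'up' until some level can handle the task; the answer then
    de-escalates back down, so routine work stays with the reliable machinery."""
    result = HANDLERS[domain](task)
    if result is None and domain is not Domain.CHAOTIC:
        return handle(task, Domain(domain.value + 1))
    return result

# e.g. handle({"human_decision": "reroute"}) escalates Simple -> Complicated
# -> Complex and returns "reroute"
```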

Asset-types

A rather different cross-map, using a tetradian layout (four dimensions in a tetrahedral relationship). I’ve used this for several decades now as a background metaphor or model, but it’s made even more useful via cross-links to the Cynefin-type domains.

Asset-types in tetradian layout

There are four fundamentally different types of assets in an enterprise-architecture context:

  • physical – has existence independent of a person, and is alienable (if I give it to you, I no longer have it)
  • conceptual – may have existence independent of a person, yet is non-alienable (if I give it to you, I still have it)
  • relational – exists between two people, often in a physical-like sense (person as person)
  • aspirational – exists from a person to a conceptualised ‘something/someone else’ (person, brand etc as idea)

From what we’ve seen in other Cynefin-like cross-maps – such as the automation example above – it’s clear that for sensemaking purposes it’s useful to cross-map the Cynefin-style domains to the asset-dimensions as follows:

  • physical assets to Simple domain – physical objects follow Simple rules
  • conceptual assets to Complicated domain – information supports Complicated yet still predictable algorithms
  • relational assets to Complex domain – no-one would doubt that dealing with real people is Complex…
  • aspirational assets to Chaotic domain – vision, values, principles and suchlike are keys to survival when the world turns Chaotic

(Crucially, the key ‘asset’ in the human context is the relation with that person – not the person as such, as is implied in the well-meant yet lethally-dangerous phrase “our people are our greatest asset!”. For more on this, see the Sidewise article ‘The relationship is the asset‘.)

Note that many, perhaps most, real-world assets will include combinations of these types. For example, a paper form is both physical (it’s a piece of paper) and conceptual (it carries information); a CRM record is information about a business-relationship; a physical product will often be associated with an aspirational brand; and so on. Any real-world entity is, in effect, situated not at a single point in a single dimension, but within a region of the tetradian’s metaphoric asset-space.
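
(To make that ‘region rather than point’ idea a little more concrete, we could model an asset as a weighting across all four dimensions – a sketch only, with purely illustrative weights:)

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """An asset as a region in tetradian asset-space: a weighting across
    all four dimensions, rather than a point on a single axis."""
    name: str
    physical: float = 0.0      # independent of a person, alienable
    conceptual: float = 0.0    # may be independent of a person, non-alienable
    relational: float = 0.0    # exists between two people
    aspirational: float = 0.0  # person, brand etc. as idea

    def dominant(self) -> str:
        """The dimension that most characterises this asset."""
        dims = {"physical": self.physical, "conceptual": self.conceptual,
                "relational": self.relational, "aspirational": self.aspirational}
        return max(dims, key=dims.get)

# Illustrative weights only:
paper_form = Asset("paper form", physical=0.5, conceptual=0.5)
crm_record = Asset("CRM record", conceptual=0.6, relational=0.4)
branded_product = Asset("branded product", physical=0.6, aspirational=0.4)
```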

A key point is that we can typically only hold up to three of the four dimensions in mind at any given time. For example, production folks will emphasise physical, conceptual and relational (machines, IT and people), but in the pressure of ‘Now!‘ may tend to forget about the aspirational dimension – in other words, the purpose of what they’re doing. IT folks tend to focus on information and machines, and often the purpose too (‘business-rules’ and the like), but are notorious for forgetting about real-people… Often it’s only by intentionally rotating the tetradian, to give different ‘views’ into the same enclosed space, that we can avoid falling into the trap of thinking that the ‘hidden’ dimension is somehow ‘not relevant’.

Another key point is that we can turn the tetradian in any direction, such that we can argue for a vertical hierarchy with any combination – hence all and none of any proposed hierarchies are ‘true’. The same applies if we ‘flatten’ the tetradian to a two-dimensional surface – hence, for example, the ‘figure-of-eight’ pattern of the path in the Plan/Do/Check/Act cross-map in the previous post in this series. Also if we place any two dimensions in opposition, we automatically place the other pair in opposition to each other: hence the semantic significance of an apparent opposition may not be inherent as such, but actually an artefact of our own arbitrary choices.

Data, information, knowledge, wisdom

This cross-map provides a fresh and potentially very useful view on a hoary old hierarchy.

Data, information, knowledge, wisdom

The conventional view of the relations between data, information, knowledge and wisdom is that they form a strict hierarchy, with data at the bottom, and wisdom at the top.

Yet it actually makes more sense if we map each of these as regions within a Cynefin-like space, and also as dimensions in a tetradian space (as per the ‘Asset-types’ cross-map above). Hence proverbs, for example, represent pre-packaged ‘wisdom‘ that may be ‘true’ in its own right, but only becomes useful when it is anchored into the real-world by concrete data and contextual metadata, and connected into the personalised knowledge built up through personal experience.
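
(My reading of that double mapping, mirroring the ‘Asset-types’ cross-map above – the specific pairings here are my own inference, not something spelled out in either framework:)

```python
# Assumed pairings of DIKW terms to tetradian dimensions and Cynefin-like
# domains - an inference on my part, mirroring the Asset-types cross-map
DIKW_CROSS_MAP = {
    "data":        ("physical",     "Simple"),       # concrete anchors in the real world
    "information": ("conceptual",   "Complicated"),  # structured, algorithm-friendly content
    "knowledge":   ("relational",   "Complex"),      # built up through personal experience
    "wisdom":      ("aspirational", "Chaotic"),      # principles that hold when all else shifts
}
```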

Note also that this is a kind of recursion on the Cynefin-like space: as we can see from the ‘Asset-types’ cross-map above, all of this relates to the conceptual dimension, and hence is ‘situated’ within the Complicated region of the root-level of the tetradian or Cynefin-like map, even though this cross-map also extends throughout the entire Cynefin-like space. This kind of meta-layering becomes extremely important in understanding the recursive relationships between ‘problem-space’ and ‘solution-space’ in meta-methodology – though more on that in a later post.

That’s all in this series of cross-maps for now, but I hope you’ll find them useful.

As usual, any constructive comments, ideas and suggestions would be most welcome 🙂 – over to you on that?

2 Comments on “And more ‘Cynefin-like’ cross-maps (‘Beyond-Cynefin’ series)”

  1. There is a (Dutch) expression that goes “Asking the question is giving the answer”, which comes to mind when viewing these ‘other’ models superimposed on the Cynefin map. In other words: the perception of the nature of the question is a guiding principle as to what ‘third dimension’ may apply to be superimposed on the ‘basic map’. I am still struggling with how this can eliminate the preconceptions (preferences) of the ‘interventionist’, e.g. the one that is the ‘advocate’ brought in from outside to #1 identify the ‘situation at hand’ and #2 help create the solution. I am inclined to view both of the ‘cross-maps’ posts as a prerequisite view of some of the identifiers in the ‘problem space’ that need not be chosen, but rather each needs to be either accepted or eliminated based on firm arguments. If there are no counter-indications that a ‘model’ is relevant, it is. Is that workable? For sure it is the only way (by elimination) to ensure that ‘we do not only see what we are looking for’. Again, this is pretty much how a general practitioner (medical doctor) would diagnose (the problem space): by elimination. Quite curious how you view this, Tom c.s.

  2. Paul – Another English variant of that Dutch expression is Richard Stallman’s assertion that ‘characterisation of the problem’ is what’s hard: once we’ve identified the key characteristics in ‘problem space’, an appropriate response tends to present itself within ‘solution-space’ almost automatically.

    We can’t “eliminate the preconceptions (preferences) of the ‘interventionist’” – that, as I understand it, is the core of Dave Snowden’s important warning about the dangers of premature ‘pattern entrainment’ (otherwise known as ‘jumping to conclusions’). The aim in having a whole stack of different cross-maps is that it provides multiple views into the same nominal space, and hence somewhat delays over-automatic ‘pattern entrainment’; the occasional contradictions between the cross-maps also again encourage us to slow down a bit. Both of these push us slightly towards the Complex domain from the Chaotic (using these ‘Cynefin-like’ terms of the cross-maps rather than necessarily ‘correct’ Cynefin usage) – which is the exact ‘value-side’ counterpart of the ‘truth-side’ pull-back to analysis (Complicated) from the Simple domain.

    As you know, I’m still struggling with how best to describe this idea of ‘problem-space’ and ‘solution-space’ – it’s not there yet, certainly, though it still feels like it’d be useful. As I see it, these cross-maps are mostly about characterising the ‘problem-space’, by moving around with multiple views within ‘solution-space’ (what Richard Veryard would call ‘lenscraft’, I think). Often we don’t so much eliminate a model as show other cross-references and cross-links that need to be taken into account – such as the frequently-disastrous attempts at IT-driven ‘business process re-engineering’, which often fail to take any account of the Complex or Chaotic domains at all. This then leads to elimination, as in your medical-diagnosis example, yet often not as a simple exclusion-filter, but more like a modal-logic with intersecting sets.

    Again, I apologise that this is still so much a work-in-progress, with some of the ideas not very well expressed at all as yet – thanks again for your help, and your patience!
