On reflexive methodology

Apologies: this is going to be another long one, and probably more technical than most people want to see (especially at Christmas? 🙂 ). But I do promise that it’ll be useful to you if you’re interested in methodology of any kind; and I also promise that despite the problems that arose from the last couple of posts here, it won’t be an angry rant. 🙁

The point I’m trying to address here is this: what methodologies do we need to use to assess the validity of methodologies? As with the previous posts, this is still very much a work-in-progress: there’ll necessarily be a certain amount of ‘feeling my way’, and almost certainly a few mis-steps along the way. So please do allow me some room and leeway as you read this; and also, to get the best out of this for yourself and your own work-context, please do expect to have to do some in-depth thinking and cross-correlation of your own.

What I’m trying to tackle here are some of the most complex and paradoxical problems in the methodology of methodology itself: none of this is ‘kiddies’-level’ stuff, and you’ll need a solid background in theory and practice of methodology before you can make much sense of it. So please don’t assume automatically that I’m ‘wrong’, or that I’m some kind of religious nut, because you’ll miss the whole point of this if you do. This does also need to be a collective development, so as before, constructive comments and criticism would be most welcome!

Read on, anyway.

At first glance, assessing the validity of a methodology might seem simple and straightforward: it’s either scientific or it’s not, for example. But in practice that task is nothing like as simple as it seems, and it’s all too easy to misread what’s actually being described and said, especially when emotions come into the picture – which they always do whenever something new or unfamiliar is being assessed. For example, at one point during our somewhat unhappy interaction over the past couple of posts, Dave Snowden commented that:

Your language in this later post is now the language of cults by the way. You are not understood, you can’t really explain the concept to an unbeliever, you are in a different place. … People who do not agree with you are not listening so you will have to withdraw from the argument. You are the possessor of disciplines that prevent you falling into error, lesser mortals who do not appreciate this are dogmatic, they disappoint you.

And he’s right: if you choose to read my posts in that way, you can find all of that in there. So if we use only simple language-analysis based on those rules above (i.e. Simple domain, in Cynefin terms), “he’s speaking like a religious fanatic” or suchlike is an automatic conclusion we could reach. (This would especially apply if we’ve thrown in a bit of unconscious pre-filtering – or ‘pattern-entrainment’, in Cynefin terms – to interpret others’ views as ‘cult-like’. This kind of pre-filtering is particularly likely to occur in a Simple-domain context because of the need for fast response rather than considered reflection, as per the Complicated and Complex domains.)

But the problem is that this kind of language isn’t unique to cults. We’ll see exactly the same phrases being used during the exploratory-phase of any new development. Almost by definition, new ideas are hard to describe to others – “you are not understood, you can’t really explain the concept to [others]” – partly because of pattern-entrainment in the critics (in everyone, actually), and partly also because the speaker’s framing and conceptualisation may well be a rickety mess, especially in the early stages of a development-phase. In a metaphoric or even literal sense the speaker may indeed be in a “different place”: that’s the whole point about multiple-viewpoints in enterprise-architecture, for example. That’s also the point of the old story of the blind men and the elephant: each one of them experiences something different, and interprets it in a different way, because each has an incomplete view. In other words, there’s an irresolvable logic-clash: to quote Edward de Bono, “everyone is always right, but no-one’s ever right” – each point of view is ‘true’ within itself, but fails to describe the whole picture.

That’s the rational side of the problem. The remainder of Dave’s summary of “the language of cults” accurately describes common emotional responses to the rejection that follows the logic-clash. The critics are “not listening” because they’re equally certain that they’re ‘right’ – which they probably are, from their point of view. Rejected, it’s quite likely that the speaker will “withdraw from the argument” – the passive side of the old ‘fight or flight’ response. Conversely, the active side of ‘fight or flight’ often leads to ‘propping self up by putting others down’: hence “lesser mortals” who are “dogmatic” and “disappoint you” and so on. We’ll see this kind of interaction all the time in any development environment: it’s one of the most common causes of conflict in business, for example.

But is it a dysfunctional ‘cult’-mentality, or is it the normal logic-clash that we expect to get during a functional, healthy development-process? The problem here is that the simple language-analysis can’t tell the difference between them. And whilst it’s technically correct to say “everyone is always right, but no-one’s ever right”, it doesn’t mean that everyone’s point of view is always valid – and as Dave correctly indicates (or implies, rather) in another comment, a random, rampant ‘anything-goes relativism’ will invariably end up in a seriously dysfunctional mess. To resolve this, we’re going to need better methods – which is where methodology comes into the picture.

The catch is that methodology, and particularly the more abstruse areas such as meta-methodology or ‘methodology of methodology’, has never had a very good press – especially in the Western academic tradition. For example, the clash with Dave reminded me of this incident in Robert Pirsig’s Zen and the Art of Motorcycle Maintenance:

When the Chairman did appear an interview took place which consisted essentially of one question and no answer.

The Chairman said, “What is your substantive field?”

Phaedrus said, “English composition.”

The Chairman bellowed, “That is a methodological field!” And for all practical purposes that was the end of the interview. After some inconsequential conversation Phaedrus stumbled, hesitated and excused himself, then went back to the mountains.

(Quote is from p.145 on the Scribd version of the book: would recommend also to read at least the rest of that page. I’ll admit that I’ve been strongly influenced by Pirsig’s work on quality and ‘gumption traps’, though I dislike his subsequent work on value.)

‘Retreating to the mountains’ risks withdrawing into cult-like behaviour, or even to psychiatric illness or worse – as Pirsig’s all-too-literal alter-ego Phaedrus discovered to his cost. Yet it’s also unlikely that anything useful will arise from a fight – especially with someone as bull-headed and over-certain as Phaedrus’ Chairman. So whatever approach we take here, it needs to steer well clear of those two extremes.

First, we need to acknowledge that, as Pirsig explains, methodology is itself a substantive field. And we need methodology in turn to validate the procedures and techniques used for each substantive field: so in this case we need a methodology for deriving methodologies, a ‘meta-methodology’. (Enterprise-architects would recognise this kind of recursion in that the first of our architecture-principles needs to assert the primacy of principles; quality-management folks also know that the first procedure we need to write is the procedure on how to write procedures.) Substance and method are fundamentally different in their natures, yet each also includes the other within itself, much as in the classical Chinese ‘yin-yang’ symbol – hence the Zen allusion in the whimsical title of Pirsig’s book. Getting the right balance between them is critical here – otherwise we end up with the kind of situation above, where the wrong tools are used to assess the required context.

To me, one of the keys to this is systems-theory, because it allows us to create a sense of the whole from whatever small fragments we have – such as the blind men’s different stories of the same elephant. Interestingly, the yin-yang symbol incorporates within the image at least three key-principles from systems-theory: rotation, recursion and reflexion. In my own consultancy-work I add two more – namely reciprocation and resonance – to provide a reasonably complete set of principles for whole-of-enterprise architecture; other people might use others, but these in particular do help to manage the complexities of that need for balance.

Rotation is probably the simplest of these principles: a systematic process of assessing a context from multiple perspectives, and then synthesising the result to approximate a picture of the whole. That’s what the blind men would do about their elephant – once they stop arguing about which of them is ‘right’, that is. All of us use this principle frequently in some form in our professional work: for example, even a simple checklist is a form of rotation in this sense. To give a more complex example, the methodology described in my book Real Enterprise Architecture is a kind of rotation through different views on the role and practice of architecture at the whole-of-enterprise scale.
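
As a rough sketch (in Python, with entirely invented perspectives and findings), rotation amounts to looping over viewpoints and merging the partial results into one picture:

```python
# A minimal sketch of 'rotation': assess the same context from several
# perspectives, then synthesise the partial findings into one picture.
# The perspectives and findings here are illustrative only.

def rotate(context, perspectives):
    """Apply each perspective in turn and merge the partial findings."""
    findings = {}
    for name, assess in perspectives.items():
        findings[name] = assess(context)  # each view sees only part of the whole
    return findings

# The blind men and the elephant, as perspectives on one context:
elephant = {"trunk": "snake-like", "leg": "tree-like", "ear": "fan-like"}
perspectives = {
    "first": lambda c: c["trunk"],
    "second": lambda c: c["leg"],
    "third": lambda c: c["ear"],
}

picture = rotate(elephant, perspectives)
# Each view alone is 'right' but partial; only the synthesis approximates the whole.
```

Each lambda here plays the part of one blind man: individually ‘true’, collectively closer to the elephant.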

Reciprocation draws on the understanding and experience that, with one key exception, systems must always balance out somehow. What makes it difficult to analyse is that this reciprocal balance is not necessarily direct or immediate. (Incidentally, this highlights one of the key differences between the Cynefin ‘Simple’ domain, which often only deals with real-time, versus the Complicated domain, which is still rule-based but does have to deal with complex interactions over time and space and context.) In many cases balance may only be achieved over time at a system-wide level, with ‘energy-transfers’ often occurring between the dimensions – a classic business-example being a ‘slash and burn’ tactic which gives a short-term financial gain, but balances out by destroying the organisation’s ability to do work, soon wiping out all of the supposed gains. Again, this is a relatively straightforward principle, one which most of us will use in one form or another in our everyday practice.
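
The balance-over-time point can be sketched in a toy simulation – all figures invented – where a ‘slash and burn’ cut books cash now, but halves the capability that earns the cash in every later period:

```python
# A toy illustration of 'reciprocation': a 'slash and burn' cost-cut books an
# immediate cash gain, but the system balances out later, because the cut has
# also destroyed capability - the ability to do work. All figures are invented.

def run(periods, slash_at=None):
    cash, capability = 0.0, 10.0
    for t in range(periods):
        if t == slash_at:
            cash += 30.0       # one-off 'saving' from the cut...
            capability *= 0.5  # ...paid for out of the ability to do work
        cash += capability     # each period's earnings depend on capability
    return cash

print(run(3, slash_at=2) > run(3))    # True: ahead in the short term
print(run(10, slash_at=2) < run(10))  # True: the 'gain' is wiped out over time
```

A methodology that only looks at the first three periods would score the tactic a success; the reciprocal balance only shows up at the system-wide, whole-of-time view.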

Resonance – the feedback-loops which can be found in all complex real-world systems – provides the exception to that reciprocal balance. In systems-theory this can occur through ‘positive feedback’ or feedforward – both of which increase the ‘snowball effect’ towards self-propagation – or as ‘negative feedback’, or damping, which reduces the effect. This principle is especially important in assessing methodologies for use in social-systems: whilst most physical-systems operate on ‘win/lose’ dynamics (if variously Simple or Complicated, in Cynefin terms), most social-systems operate on genuinely Complex dynamics in which simple reciprocal-balance is relatively rare, and the real choices spread across a very broad spectrum from ‘win/win’ to ‘lose/lose’ – with ‘win/lose’ being an interestingly illusory form of ‘lose/lose’.
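
The positive-versus-negative-feedback distinction can be sketched in a few lines – the gain values here are arbitrary, chosen only to show the two behaviours:

```python
# A minimal feedback-loop sketch: the same loop with gain above 1 snowballs
# towards self-propagation (positive feedback), while gain below 1 damps the
# signal back down (negative feedback). The gain values are arbitrary.

def feedback(signal, gain, steps):
    history = [signal]
    for _ in range(steps):
        signal *= gain  # each cycle feeds the output back into the input
        history.append(signal)
    return history

amplified = feedback(1.0, 1.5, 10)  # 'snowball effect': grows without limit
damped = feedback(1.0, 0.5, 10)     # damping: decays back towards zero
```

The structure of the loop is identical in both cases; only the sign of the net feedback differs – which is why resonance is so easy to miss when assessing a methodology on paper.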

Recursion is a ‘nesting’ of a pattern within the same kind of pattern – of which one of the most common forms is the hierarchical structure of the everyday ‘org-chart’. These are patterns of relationship or interaction which repeat or are ‘self-similar’ at different scales, and again are common in the Complex space: identifying such recursion can make it possible to reduce complex-seeming processes into a much simpler – though rarely Simple – set of patterns. Methodologies that are recursive (and, in most cases, also iterative) are highly desirable for many different reasons: training is simpler, for example, because the same pattern is used on many different scales, and the overall pattern is much the same at every level of skill. In the IT industry, common examples of methodologies that are either overtly or implicitly recursive include TOGAF 9 (enterprise-architecture), ITIL (IT service-management), RUP and EUP (Rational/Enterprise Unified Process for IT-systems development) and the various Agile development-methods.
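
That ‘same pattern at every scale’ idea can be sketched directly – the org-structure and the check below are invented purely for illustration:

```python
# A sketch of recursion in a methodology: one assessment pattern applied at
# every scale of a nested structure (here, a hypothetical org-chart).
# The structure and the check are invented for illustration.

def assess(unit, depth=0):
    """Apply one pattern at every level: report the unit, then recurse."""
    report = [("  " * depth) + unit["name"]]
    for child in unit.get("children", []):
        report.extend(assess(child, depth + 1))  # same pattern, smaller scale
    return report

org = {
    "name": "Enterprise",
    "children": [
        {"name": "Division", "children": [{"name": "Team"}]},
        {"name": "Shared services"},
    ],
}
```

One function, learned once, works unchanged at enterprise, division and team level – which is exactly the training advantage described above.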

Reflexion is perhaps the strangest aspect of systems-theory, although it’s a direct corollary of recursion. It suggests that, as indicated in the yin-yang symbol, the whole, or aspects of the whole, can be identified or inferred from within the attributes and transactions of any part at any scale. Everything is connected to everything else, is part of everything else; every point is contained by and contains every other point. A useful analogy here is a hologram: unlike an ordinary photograph, even the tiniest fragment of a true hologram will contain a complete picture of the whole, albeit with less detail. (Fragmentation of a photograph reduces the accessible scope of each fragment, whilst still retaining the same level of detail; fragmentation of a hologram reduces the level of accessible detail within any given fragment, but not the scope.) The same is true of business systems, social systems and so on: and once we develop an eye for reflexion, and see how it works in practice, we can create change-methodologies that can start anywhere, in any appropriate part of the system, and leverage the results out into the whole.

Use of any of those system-principles provides support for a good balance of flexibility and rigour, especially for methodologies that need to operate in the Complex or Chaotic space. And from almost forty years’ worth of experience developing methods and methodologies of many different kinds in many different industries and contexts, I would argue that any method or methodology that is used to assess other methods and methodologies – a meta-methodology, in other words – should always aim to incorporate all of these principles within its structure and design.

If we don’t do so, the methodology is almost guaranteed to give us ambiguous or seriously-misleading answers to key questions – as can be seen in the problems caused by over-simplistic use (or misuse) of Beyerstein’s checklist for ‘pseudoscience’. And use of those principles not only makes the methodology more reliable, but usually also easier to use. Rotation, for example, helps us to balance simplicity with breadth and/or depth of scope. Reciprocation and resonance help us cope with potential problems of balance and perspective. And the combination of recursion and reflexion not only helps to keep things simple without becoming simplistic, but also allows a properly-designed methodology or meta-methodology to be used to assess itself in exactly the same terms as it assesses any other methodology.
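
That self-assessment point can be sketched by holding the checklist as plain data, so that the same check-function can be turned back on the checklist itself. The criteria are the five principles above; everything else – the names, the record format – is invented for illustration:

```python
# A toy self-assessment: a checklist (the 'meta-methodology') represented as
# data, so the same check function can be turned back on the checklist itself.
# The record format and example methodologies are purely illustrative.

PRINCIPLES = {"rotation", "reciprocation", "resonance", "recursion", "reflexion"}

def assess(methodology):
    """Report which of the five system-principles a description claims to cover."""
    covered = PRINCIPLES & set(methodology["principles"])
    return {"name": methodology["name"], "missing": PRINCIPLES - covered}

checklist = {"name": "five-principle checklist", "principles": list(PRINCIPLES)}
some_method = {"name": "narrow method", "principles": ["rotation"]}

assert assess(checklist)["missing"] == set()  # the checklist passes its own test
assert "reflexion" in assess(some_method)["missing"]
```

Because the checklist is just another record, nothing stops it being the argument to its own `assess` call – a trivial example, but it shows the shape of the reflexive discipline being argued for here.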

That last point is especially important, because it’s one of the few ways in which we can resolve the ‘logic-bootstrap’ problem described earlier. We can only resolve that problem by going into the Complex or Chaotic domains, where we can test the validity of assumptions, but where by definition logic is unreliable. The recursion/reflexion combination provides us with an alternative approach, with a similar level of precision and discipline as in formal-logic, that reflects back on itself in every possible way – top-down, bottom-up, sideways-in, spiral-out. The resultant reflections shine the light of enquiry into every dark corner of the methodology – a process which may well unearth fundamental flaws that must be fixed before using that methodology in significant real-world practice.

The point here is that if this reflection isn’t done, the methodology may be left with gaping holes that can’t be seen from within the methodology itself, because they’re beyond the scope of the chosen logic. Perhaps the most common faults are circular-reasoning and invalid assumptions about supposed ‘universals’ – often from a complete failure to recognise even that the ‘logic-bootstrap’ problem exists. (Self-styled Skeptics’ frequent misuse of ‘scientific’ notions provides us with many examples of this, as we saw with Beyerstein’s ‘pseudoscience’ checklist.) Other serious problems arise from misuse of Complex-domain techniques such as post-structural linguistics: this is particularly common in political and social analyses which ‘deconstruct’ everyone else’s text to find structural flaws, but fail to apply the same analysis to their own reasoning. (When we do apply the same analysis reflexively to itself, the flaws that become evident can sometimes be startling. Some of the methodological errors in many of the models currently used in the domestic-violence ‘industry’, for example, are so fundamental, so blatant, and so horrendous in their consequences, that in a political/military context the promoters of equivalents of those models would be classed as war-criminals or worse: the methodologies really are that bad… Which is worrying, to say the least.)

But there are also plenty of examples where the reflection has been applied properly, resulting in methodologies that are simple (yet not simplistic), versatile, flexible, self-correcting and often self-adapting. Some industrial examples that come to mind immediately include variants of kaizen, kanban, Deming’s 14 Principles and much of the work of the Agile movement. In the futures context (professional futurists, not ‘futures’ in the finance-industry sense) one of the most powerful tools is Sohail Inayatullah’s Causal Layered Analysis, which applies recursion and reflexion to post-structural linguistics (hence its tagline “poststructuralism as method”) to assess a context in many different views, from everyday ‘litany of complaint’ to deep-myth, in a manner which in some ways resembles Cynefin. Stafford Beer’s Viable System Model provides a similarly versatile means to model and manage information-flows at every layer within an overall enterprise, and can also be used in conjunction with his dictum POSIWID (“the [effective] purpose of a system is [expressed in] what it does”) to provide a reflexive means to review and contrast the nominal and actual drivers for an organisation:

“According to the cybernetician the purpose of a system is what it does. This is a basic dictum. It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment or sheer ignorance of circumstances.” [Stafford Beer, University of Valladolid, October 2001]

(My book The Service-Oriented Enterprise explores how to extend Stafford Beer’s system-principles to all other aspects of the enterprise, to create a recursive and reflexive ‘Viable Services Model’. See also the Slideshare presentation ‘Enterprise architecture and the service-oriented enterprise‘ for descriptions of how to link those principles to existing enterprise-architecture tools such as TOGAF and Zachman.)

Finally, one other reflexive theme is an essential for every meta-methodology, and preferably every methodology too: the old ‘Golden Rule’, “do as you would be done by”. Developing a methodology is invariably messy, with many mis-steps: it’s rare that we “get it right” all at once, and even rarer to get it right on the first attempt. This is true for anyone who takes the risk of developing something new, something different. On the other hand, it is very easy to sit on the sidelines from a position of certainty – “that which is already proved” – and tell others that they are ‘wrong’, even though the logic being used to judge ‘right’ from ‘wrong’ may not apply in the respective context. It’s also much easier to demolish a temporary lash-up of a ‘work-in-progress’ than a rigid structure of cross-links and cross-braces – even though in reality it may be the latter that is actually ‘wrong’. So we do need to respect that fact, and respect the aim and intent of any ‘temporary lash-up’, rather than immediately reach out to tear it down.

Or tear down the person, for that matter. There are very good reasons why Deming included the phrase “Drive out fear!” as one of his ’14 Points’; for much the same reasons, one of the few rules for an After Action Review is “pin your stripes at the door”. Similarly, one of the core principles of 12-Step programmes is the explicit rejection of blame – whether from others or from self – and instead emphasising the centrality of personal and mutual responsibilities. Power enables change, but power is also the ability to do work, not the ability to avoid it – especially avoid it by dumping all of the work onto others and then punishing them for trying to do their best in doing that work, as is all too characteristic of destructive ‘critique’ of new methodologies and tools. More details on the business implications of that, if you’re interested, in this ‘manifesto’ on functional power-dynamics in the workplace.

Better stop there for now, I guess. Hope it’s been useful, anyway, and, as before, constructive comments most welcome.

5 Comments on “On reflexive methodology”

    • What I’m aiming for, in my usual too long-winded way, is a checklist and process to validate methodologies, especially those that are used to validate other methodologies.

      Short-form version is that any broad-scope method/methodology/whatever needs to include proper formal checks-and-balances logic (to manage the ‘truth’-issues, primarily in the Cynefin Simple and Complicated domains) and incorporate those five system-principles (to manage the non-rational ‘value’-issues, primarily in the Cynefin Complex and Chaotic domains). If it doesn’t do that, it’s probably going to do more harm than good. Examples abound of methods/methodologies/etc that either meet or fail to meet these requirements, and of the consequences of each. This approach not only helps to identify whether a given method/etc will work well or not, but also suggests tactics to remedy any non-compliance, and indicates the probable consequences of complying or failing to comply with those structural requirements.

      To give one really classic example of a non-compliant methodology, many (most?) attempted implementations of BPR (Business Process Re-engineering) failed because:
      a) they restricted it to far too narrow a scope (quote “I guess we failed to take enough account of the human factors”);
      b) in many cases they simply rebuilt the existing easily-repeated-processes into IT, without taking any account of the impact on the rest of the overall business-system;
      c) they failed to provide adequate or any means to handle out-of-scope exceptions into and out of manual processes (i.e. transfers into the Complex and Chaotic domains, which in general can only be handled by skilled real people); and
      d) they often focussed exclusively on efficiency (especially ‘saving money’) rather than whole-of-system effectiveness (of which efficiency is only one dimension);
      – all of which ended up costing far more money rather than less (if it worked at all, which often it didn’t). The kind of meta-methodology I’m describing here would have picked up all of those fundamental flaws before any attempt was made to put the methods (BPR, in this case) into practice.

  1. I guess I don’t follow all of this, but one pattern that seems recognizable is the tendency to try to expand a method into areas in which it is not useful, or to devalue those areas. In the case of science, this is clearest when people try to be scientific in cases where there has not yet been sufficient scientific research to produce any useful results. It also happens that subjects that are difficult to study scientifically are considered unimportant or non-existent. And returning to the case of NLP, Bandler and Grinder tried to re-invent everything. Perhaps the least successful part of that was the attempt to re-invent ethics. It’s interesting and creative, but you have to stop and think before applying it to the real world. The problem is that, unlike techniques, you can’t just “try out” ethical principles and see whether they “work”. It’s another case of a methodology being unable to establish its own boundaries.

  2. If you “don’t follow all of this”, the fault is mine, not yours. 🙂 Aware I’m not explaining myself very well, and (re-reading what I wrote) the direction and end-point are not clear. I’m probably also being overly-defensive, which makes the flow of the argument even more tangled and long-winded. My apologies.

    [At this point I tried to include a comment about your point on trying to expand methods into inappropriate areas, and used scientism and ‘creationism’ as examples where religion intrudes into science. Oddly, the WordPress server refused to accept the paragraph – kept complaining about configuration errors and the like, though it was happy enough with all the paragraphs before and after it. I’ve had to rewrite it from scratch with different phrasing before the server would accept it. Must be magic 🙂 – or machine deliberately plotting against me, perhaps? 🙂 🙂 ]

    On “it also happens that subjects that are difficult to study scientifically are considered unimportant or non-existent”, a parallel to that (especially in business) is the tendency to over-emphasise the value of ‘the numbers’, and ignore qualitative metrics or anything else that cannot be reduced to an easily-identifiable quantitative metric. To paraphrase Einstein(?), not everything that matters can be counted, and not everything that can be counted will matter. In that sense, the meta-methodology needs to ensure that the methodology in focus has the appropriate metrics for its needs – once again, that word ‘appropriate’ is the key here, whereas ‘true/false’ can easily be misleading.

    Agree also with your point on ethics, and trying to use methods and methodologies beyond their scope. The meta-methodology I’m trying to describe here has a fairly narrow scope: its sole purpose is to enable us to ensure that methodologies do fully align with and support their nominal purpose. (It does help if that ‘purpose’ for the method is at least described somewhere, otherwise we end up either having to guess, or use Beer’s ‘POSIWID’ to reverse-identify it.)
