What I do and how I do it
What do I do, and how do I do it? What’s the nature of my work, and the methods that I use? And for that matter, why?
That’s perhaps the shortest summary of a request by Anthony Draffin, in a comment on my previous post ‘Not quite bus-pass day’:
On a selfish note… It’s apparent that the common thread to dowsing, printing and enterprise architecture is your ability to look at a field holistically and apply logical thought to extract inconsistencies and errors, as well as looking at new ways of doing something more efficiently to meet the original aims. That’s a rare skill. Have you given thought to documenting how you go about doing this? While I imagine it’s the application of a number of taught skills, the way you put these together must be far from ubiquitous. Have you considered teaching this? Personally, as a 27 year old, I want to soak up as much of your approach and thought process as you’re willing to offer.
(Warning, this is going to be another (very) long one, mainly because there’ll be several case-studies.)
Amused that Anthony says he’s 27, because that’s about the age that I really got going on this. (A little earlier, actually: the first dowsing book came out when I was still 24. I used to have to apologise for not being the age people expected me to be, namely at least 75! 🙂 )
I wouldn’t say that any of what I do is a ‘rare skill’, although it’s true that it’s not often acknowledged or respected – perhaps because, by its nature, it necessarily tends to be disruptive to any comfortable status-quo. I’ve been doing it since a very early age – for as long as I can remember, anyway, certainly way back in primary school – but it’s actually the standard approach used in most forms of design-thinking and the like, as taught in art-college or architecture-school or good engineering courses or even in the US military. It’s also what really happens in scientific research – see, for example, WIB Beveridge’s classic The Art of Scientific Investigation.
My own particular twist on it arose because I’m not much good at doing things, or making things (I tend to describe myself as ‘ambi-sinistral’ – the opposite of ‘ambidextrous’… 🙁 ). Hence I tend to focus instead on the thinking behind the doing or making or whatever, always searching for the simplest way to do things, the most effective way, and so on. Kind of recursive, if you like, but it works well. Except for that little problem that it tends to be so darn disruptive…
Methods, mechanics, approaches
One place to start would be around skill itself, and the key themes of my Masters thesis, way back in 1976. Back there, I described a skill – any skill – as being made up of three components:
- the methods used in the skill
- the mechanics and other real-world constraints of the ‘objective’ context of the skill – that which is common to everyone
- the approaches, assumptions, mindset, paradigms, physical dexterity and other ‘subjective’ context for the individual (the ‘operator’) – that which is specific to the individual
What I found, very quickly, was that most people seem to focus on the methods used in any skill. But that actually misses the point: the methods used by any skilled operator arise from their own personal resolution of the mechanics and the approaches – the ‘objective’ and ‘subjective’ components of the skill. This is why using someone else’s methods doesn’t always work, and why ‘best practice’ can be dangerously misleading: the mechanics of the issue remain the same, by definition, but the context is different, and hence may well need different methods.
Focussing on method also makes it much more difficult to tease apart the separate threads of mechanics and approaches. It should be obvious that blurring the objective and the subjective is not likely to be a good idea, and yet that’s exactly what happens whenever we focus only on method.
In all skills-work – in fact in just about every human context – we also come face to face with Gooch‘s Paradox: “things have not only to be seen to be believed, but also have to be believed to be seen”. In an all too literal sense, in skills-work, reality is what we say it is: we actually create it, from nothing, or rather from a combination of imagination and hard work. (In this kind of context, it doesn’t really make sense to ask the question “Is it real or imaginary?”, because the only possible answer is ‘Yes’ – both, therefore neither.) To resolve Gooch’s Paradox, we treat the approaches – our assumptions and beliefs – as if they are part of the mechanics of the context. The danger is that we may forget that point about ‘as if’, and – if we think about those assumptions at all – think that they are part of the fundamental mechanics of the context, rather than an arbitrary choice to achieve some particular purpose.
Once assumptions creep in – in other words, whenever the subjective is blurred into the objective without conscious intent to do so – what we have is a context to which arbitrary constraints have been applied. Which places arbitrary limits on possibility. Which is kinda pointless, really. But the only way that we’ll be able to see that the constraints are arbitrary is to step back a bit, and re-separate the subjective from the objective. Hence a kind of recursive methods-to-look-at-methods, analysis-to-unpack-analysis, and so on. Which is what I do.
As I mentioned in my reply-comment, much of the ‘how I do what I do’ is already documented in various ways throughout the books, such as in Everyday Enterprise Architecture (which focusses on method in a business context) and The Disciplines of Dowsing (which looks more at ‘thinking about thinking’). The core of the latter book is the ‘four disciplines’ section (see the summary on the separate two-page reference-sheet) and the ‘seven sins of dubious discipline’ (currently listed only in the book): it wouldn’t take much work to translate those into almost any other context.
What I’ll use here is the Five Element / effectiveness framework that I use in a lot of my client-work these days (though often in somewhat covert form). It’s nothing special, in fact it’s little more than a recursive use of a pair of matched checklists. The first of these, as summarised in the ‘Five Elements’ chapter in SEMPER & SCORE, is a set of perspectives on the overall context:
- Purpose – what are we aiming to do here? and why? (see also the slidedeck ‘Vision, Role, Mission, Goal‘)
- People – who would be needed for this purpose? what skills and relations do they need? what are their mutual responsibilities?
- Preparation – what planning and logistics would be needed for this purpose? what assumptions and mindsets apply here? what are the key events that trigger action?
- Process – what needs to be done to achieve the purpose? when, how and with what would this be done? when is each process complete?
- Performance – what constitutes ‘success’, and for whom? what information and metrics are needed to keep everything on track? what would be needed to support continuous improvement?
The other checklist is a set of keywords on effectiveness, which are sort-of orthogonal yet also sort-of linked to the Five Element set. Listing these in the same order as above:
- Appropriate – is this on track towards the purpose?
- Elegant – does this support the human-factors in the context? (e.g. simplicity, ergonomics etc)
- Efficient – does this make the best (e.g. least-wasteful) use of the available resources?
- Reliable – can this be relied upon to deliver the required results?
- Integrated – does this help to link everything to everything else in a consistent way?
To assess a context, we can start from anywhere at all. The point is that we use these checklists not as linear lists, but as a reminder to keep looking round, bouncing back and forth between each of the interconnected themes in the two lists, looking at the context from every possible angle, and at every level from really-big-picture to finest-detail, building up a kind of hologram of the overall context, using one form of sensemaking to bounce off others, and so on. The book Real Enterprise Architecture provides a complete worked-example of this kind of recursive process as applied to whole-enterprise architectures.
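For what it’s worth, here’s a minimal sketch in Python of how those paired checklists might be held and replayed. It’s purely illustrative – nothing of the kind appears in the books, and the names and wording are invented for this post – but it does mimic the core habit: never working through either list in order, and never stopping after a single pass.

```python
# Purely illustrative sketch of the paired checklists.
# The framework itself is a set of perspectives for human sensemaking,
# not an algorithm; this only mimics the 'keep looking round' habit.

import random

# The Five Elements: perspectives on the overall context
ELEMENTS = {
    "Purpose": "what are we aiming to do here, and why?",
    "People": "who is needed, with what skills, relations and responsibilities?",
    "Preparation": "what planning, logistics, assumptions and trigger-events apply?",
    "Process": "what needs to be done, when, how, and with what?",
    "Performance": "what is 'success', for whom, and what keeps it on track?",
}

# Matched effectiveness keywords, in the same order as the Elements
EFFECTIVENESS = {
    "Appropriate": "is this on track towards the purpose?",
    "Elegant": "does this support the human factors (simplicity, ergonomics)?",
    "Efficient": "does this make least-wasteful use of available resources?",
    "Reliable": "can this be relied on to deliver the required results?",
    "Integrated": "does this link consistently with everything else?",
}

def next_prompt() -> str:
    """Bounce at random between the two checklists: pick one perspective and
    one effectiveness theme, and combine them into a single question to ask
    of the context at whatever level of detail we happen to be at."""
    element, element_q = random.choice(list(ELEMENTS.items()))
    quality, quality_q = random.choice(list(EFFECTIVENESS.items()))
    return f"[{element} / {quality}] {element_q} ... and {quality_q}"

if __name__ == "__main__":
    for _ in range(5):
        print(next_prompt())
```

The output is nothing more than prompts, of course: the actual sensemaking – the ‘hologram’ of the context – still has to happen in human heads.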
Questioning everything
Looking back at the various areas I’ve worked in or with, there’s a fairly consistent pattern about what I’ve done and the sequence in which I’ve done it.
The first stage is just getting involved at all: taking the ideas and practices at face-value, and putting them into practice as if they are entirely ‘true’. That usually works for a while (not least because that’s what everyone else is doing).
I then allow myself to start to notice the niggles, the things that don’t quite seem to work, where ‘what it says on the tin’ doesn’t actually deliver what it says on the tin. The problem, of course, is that we can’t assess the validity of a logic from within the logic itself. Yet we also can’t actually work on the context without being inside the logic (or some form of the logic). This is where we hit Gooch’s Paradox head-on: we have to see it to believe it, yet also have to believe it to see it. The only way out of that dilemma is to start to use beliefs as tools – which can be kinda challenging…
In my experience, there are two parts to this:
- identify the big-picture theme for the overall context (the ‘vision’ or, as architects would put it, the unifying ‘parti‘)
- apply design-thinking tactics to question everything, switching beliefs in order to experience the context in different ways, and test the apparent results
The tactics to identify the key-theme(s) are usually straightforward. A classic example is the ‘Five Whys’: just keep asking “why?” until eventually we hit a ‘Because.’ – or rather, a real ‘Because.’ that makes some degree of sense, rather than one that’s just used to get people to stop asking awkward questions! These days I tend to look for a brief overview-statement – usually only about three to five words – that has a distinct three-part structure: it identifies the ‘things’ or concerns that matter to everyone in the context, what’s being done with or to those items, and why it’s deemed to be important. This gives us a stable anchor to which we know we can return, and against which we can test anything in the context.
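As a purely illustrative aside – the ‘Five Whys’ is a conversation, not a script, and this little loop is my own invention for this post – the shape of the tactic is simple enough to sketch in a few lines:

```python
def five_whys(statement: str, max_depth: int = 5) -> list[str]:
    """Keep asking 'why?' until we reach a 'Because.' that actually holds up,
    or until we run out of patience. Illustrative only: in practice this is
    a conversation with real people, not a script."""
    chain = [statement]
    for _ in range(max_depth):
        answer = input(f"Why: '{chain[-1]}'? > ").strip()
        chain.append(answer)
        # A 'real' Because is one that still makes sense when tested against
        # the context - here we simply let the questioner decide.
        if input("Is that a real 'Because.'? (y/n) > ").lower().startswith("y"):
            break
    return chain

if __name__ == "__main__":
    print(five_whys("Only 'special people' can dowse"))
```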
Then, following standard ‘design-thinking’ tactics, we use a suite of ‘disruptive’ questions about the context – for example (a small sketch of such a suite follows this list):
- what’s another version of this?
- what does this look like at a smaller scale, or a larger scale?
- what happens if we substitute something else for this?
- what happens if we invert some or all of the rules?
- is there a ‘term-hijack’ here? – does a small subset purport to be the whole, blocking the view to any other aspect of the context?
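And here’s that small sketch of the question-suite – again purely illustrative, with my own wording – replayed against whatever anchor-statement came out of the previous step, so that no question gets conveniently ‘forgotten’:

```python
# Illustrative only: a 'disruptive question' suite applied to an anchor-statement.
DISRUPTIVE_QUESTIONS = [
    "What's another version of this?",
    "What does this look like at a smaller scale, or a larger scale?",
    "What happens if we substitute something else for this?",
    "What happens if we invert some or all of the rules?",
    "Is there a 'term-hijack' here - a subset purporting to be the whole?",
]

def challenge(anchor: str) -> None:
    """Replay the whole suite against the anchor-statement for the context.
    The answers are human work; the code only ensures no question is skipped."""
    print(f"Anchor: {anchor}")
    for question in DISRUPTIVE_QUESTIONS:
        print(f"  - {question}")

challenge("getting ideas and information out into the public space")
```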
This is where things often get to be, uh, fun… – because it’s very common to find aspects of the context that a) don’t and can’t make any sense, b) clearly don’t work ‘as advertised’, in fact usually work against the nominal aims of the overall enterprise, yet c) are defended by key players with a lot of vested interest in ensuring that the status quo remains unquestioned and unchallenged. Don’t be surprised at this: it happens every time.
This is where a certain amount of dogged determination becomes essential… Also essential is a very clear, insistent emphasis on the big-picture, on holding to the overall vision for the shared-enterprise, because that’s often the only thing that will persuade people that there’s no ‘personal attack’ here, that instead the only purpose of the challenge and the enquiry is to make things work better, for everyone. (We have to be real about that, too: we need belief in ourselves in order to keep going, it’s true, but we need to keep questioning ourselves as well. It’s one reason why serious self-doubt is a chronic yet necessary occupational-hazard here.)
We need to keep hammering at this until we do start to get a clear separation between the mechanics of the context – which usually turn out to be surprisingly simple – and the approaches to the context – which are, by definition, individual and subjective. Then we can start to work towards new methods that work with the context under the current conditions.
The same seems to apply to just about any type of context: an individual’s personal challenges in developing their own skill, a business, a social context, a single conceptual tool, or an entire discipline.
Scattered throughout this weblog and the sister-weblog Sidewise, you’ll find examples of those techniques in use. Sometimes it’s reasonably straightforward, sometimes rather more controversial, but you’ll see in each case that it’s essentially the same principles, the same tactics.
I’ll also summarise here those same techniques in use in four different large-scale domains that I’ve been involved with over the decades: dowsing, desktop-publishing, domestic-violence resolution, and enterprise-architecture.
Example: Dowsing (1970s)
Big-picture theme: finding things, particularly where conventional (mechanical/physical) techniques either won’t work or are unavailable.
History: as a discipline, it has been around ‘forever’, and has often been highly controversial – attacked first by priests who regarded it as ‘the work of the devil’ etc, then later by would-be scientists who wanted to ‘explain’ it and couldn’t. When I first got involved, in the late 1960s, the field was pretty much moribund, with a random mixture of wild claims, erratic discipline, no formal methodology or theory-base as such, a long history of inconclusive scientific experiments, and the first flush of hype-laden New Age ‘thinking’ (if that’s the right term…). Most of the people involved were well into their sixties, seventies or more (which I, uh, wasn’t…). The key players consisted of a kind of closed ‘military club’ (water-finding being very important to an army on the move), a few variously-erratic practitioners (often with wild-eyed ideas about health and the like), a swathe of armchair-theorist camp-followers who talked a lot but did nothing, and a few people who really did know what they were doing and wisely kept themselves well away from the mess.
Conceptual mismatch: The most common assertion was that it was a special ‘innate’ skill that only certain ‘special people’ could do. Methods that often clashed or even flatly contradicted each other could lead to the same result; the same method used by different people would lead to wildly different results. Most of the theory in use – such as notions of ‘waves’ or ‘vibrations’ or ‘radiations’ – was either meaningless or just plain wrong in terms of conventional physics. (Much of it did sort-of make sense as metaphor, but there seemed to be little understanding of the difference between active-metaphor and concrete fact.) Muddle-headed ‘New Age’ ideas merely added to the overall mess.
Vested interests: On the one side was the moribund ‘military club’, who liked the idea of being ‘special and different’, and/or the ‘right’ to tell the ‘lower ranks’ what to do, whether it made any sense or not. On the other side were the upcoming ‘New-Agers’, who were not going to let anything block their path to potential fame and fortune. (I’m being cynical, I know, but that’s exactly what happened.)
Assessment and action: Assess the purported theory, and scrap most of it: it’s meaningless. The only parts of the theory that do make sense and do have solid experimental backing revolve around perceptual psychology and physiology – particularly around weighted-sum merging of multiple channels (which is why there’s no single ‘the method’) and around edge-triggered reflex-response (which is why some experienced water-finders can’t find static water even when they’re standing on top of it). If some kind of tool is used, almost all of the tools act as some form of mechanical amplifier – if I move my hand a little, the tool moves a lot. (I’ve only ever found one case where that principle didn’t apply at all.) Materials, structures, theories and so on seemed to matter only because people believed that they did: in most cases, a simpler alternative would work just as well, if not better. Keep stripping it back to the bare essentials.
It is a true skill – but it’s not one that’s restricted to only ‘special people’. Instead, it’s a learnable skill: anyone can do it – though whether they may do so, and whether they will, are entirely separate questions! (There was quite a lot of pushback from the ‘military club’ against the idea that ‘anyone can dowse’.) It’s also a skill that requires a lot of practice and a lot of discipline to get right. (Unsurprisingly, there was a lot of pushback from the ‘New-Agers’ on that point, and there still is – see the book Disciplines of Dowsing.) It’s also a skill which often requires a wide range of psychological ‘tricks’ to help people slide past Batcheldor’s ‘witness-inhibition’ and ‘ownership-resistance’ – in other words, “this isn’t happening, and if it is, it isn’t me”.
End-result: After a few months’ experimentation and subsequent practice over several years with a wide range of students, I’d stripped it down to the point where I could get most people started on the basics within less than two minutes, using two bits of fencing-wire from the garden as simple instruments. The notion that ‘anyone can dowse’ is now firmly established in the canon, and the teaching-methods that I developed (based on self-responsibility, self-critique and continual-improvement) are still some of the most common currently in use.
Example: Desktop-publishing (1970s-80s)
Big-picture theme: getting ideas and information out into the public space.
History: I trained as a graphic-designer/typographer, and became professionally involved in typesetting in the late 1970s, with the early developments in smaller phototypesetting machines. (‘Smaller’ being a relative term here: the first system we bought required a room of its own and a separate darkroom, and cost more than my house.) The big bottleneck was keyboard input: the typesetting unit was capable of running much faster than a single operator. Although the internal technology was extremely complex, the input was not: some machines still relied on a very simple 6- or 7-channel punch-tape reader, using control-codes to extend the effective size of the character-set.
At the same time, simple but usable microcomputers were just starting to come onto the market. (My first microcomputer had only an 8-character LED display, hexadecimal keypad and 256 bytes of memory; the more usable Ohio Scientific systems that we first used for real had a proper keyboard but still only 8kbytes of memory, and the only storage was on audio-cassettes.) Almost all of these machines used a 7- or 8-channel character-set (ASCII or extended-ASCII); most also provided some form of direct data input/output for interfacing to other systems.
It seemed to me that there should at least be some way to use a basic micro as a much cheaper input-terminal, using simple code-translation and a standard hardware-interface. It also seemed probable that other people would want to do the same – taking control of their own publishing, driving a typesetter direct, or both. In the longer term, that could well be quite a large market.
Conceptual mismatch: This is best summarised by the phrase (exact quote, in fact) that “there is no interest in typesetting from microcomputers, and there never will be”. There were all manner of arbitrary demarcation-lines across the whole context, both on the pre-press side – such as between authors, publishers, unions and printers – and on the technical side – particularly between typesetter-manufacturers, computer-manufacturers and various hobbyists and hackers – most of which arose more from historical ‘turf-wars’, ‘positioning’, and mutual misunderstanding than from any concrete distinctions. On the union side especially, there were many arbitrary assumptions, based on the belief that technology could not and would not change, or if it did, it could not and would not be allowed to make any difference to existing processes or roles.
Vested interests: The entire context was riddled with vested interests, almost all of which were in conflict. A stream of intermediaries – agent, publisher, pre-press, press, retail – stood between author and audience. Typesetting-systems were expensive pieces of equipment, yet with not all that much to justify their cost: there was a lot of money to be made there, both from machinery-sales and from fonts and other consumables, and hence a lot of ‘need’ to protect those sources of income. Until IBM eventually stepped in, most of the microcomputer manufacturers were trying to establish themselves as ‘the manufacturer’, resulting in a plethora of mostly-proprietary, mostly-incompatible hardware and software non-‘standards’ – at one point we had to buy two machines whose sole function was to read the two hundred or more different disk-formats used on the four distinct disk form-factors then in common use: 8″, 5.25″, 3.5″ and 3″. Weaving a path between all the different vested-interests and proprietary structures was, frankly, a time-wasting nightmare.
Assessment and action: On our first machine, we’d been told emphatically that it was physically impossible to connect a microcomputer; a weekend spent poring over technical specs and waving a soldering-iron around a bit on a prototype-board soon proved that ‘fact’ wrong, whilst the only software we needed at first was a straightforward lookup-table to translate between character-sets. It really was that simple. (We avoided warranty risks by using opto-isolators, so there was no electrical connection between the two machines.) For our later, larger systems – which were capable of typesetting a reasonable-sized book in less than an hour – the hardware-interfaces were already built in. This gave us ‘direct typesetting’ capability, but it still required operators to know – and use – the distinct formatting-codes for each type of machine.
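For what it’s worth, here’s a rough present-day sketch in Python of that lookup-table idea. It’s purely illustrative: the real thing ran in a few hundred bytes on an 8-bit micro, and the typesetter codes shown here are invented for the example rather than taken from any actual machine.

```python
# Illustrative sketch only: translate ASCII keyboard input into the
# control-codes expected by a phototypesetter's input channel.
# The typesetter codes below are invented for the example, not the real ones.

TRANSLATION_TABLE = {
    "A": b"\x41",          # most printable characters map one-to-one
    "a": b"\x61",
    " ": b"\x20",
    "-": b"\x2d",
    "\n": b"\x0d\x1f",     # end-of-line needs an extra (invented) control code
    # ... one entry per character in the working character-set
}

def translate(text: str) -> bytes:
    """Convert word-processor text into the typesetter's code stream,
    one character at a time, via a straightforward lookup-table."""
    out = bytearray()
    for ch in text:
        out += TRANSLATION_TABLE.get(ch, b"?")   # unknown characters flagged
    return bytes(out)

print(translate("A-a\n"))
```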
The next step was to hide the complexity, using the format-code in common word-processors such as WordStar to trigger font-changes and the like. (I believe we were the first people to use style-codes, such that a single hideable code – *F1, for example – would change the entire style, including paragraphs, indents, font-family and so on.) At that point, people could use ordinary word-processors to typeset text: the first true precursor to desktop-publishing.
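A sketch of the style-code idea might look something like the following – again purely illustrative, with invented command-strings rather than anything from the original system:

```python
# Illustrative sketch: expand a single hideable style-code (e.g. '*F1')
# into the full set of typesetting commands for that style.
# The command strings are invented for the example.

STYLES = {
    "*F1": "[font=Times][size=10pt][leading=12pt][indent=1em][justify]",
    "*F2": "[font=Helvetica-Bold][size=14pt][leading=16pt][indent=0][ragged-right]",
}

def expand_styles(text: str) -> str:
    """Replace each style-code in the text with its full command sequence,
    so that authors only ever see the short, hideable code."""
    for code, commands in STYLES.items():
        text = text.replace(code, commands)
    return text

print(expand_styles("*F1Body text here.\n*F2A heading line"))
```

The design point is simply that authors only ever see the short hideable code; everything behind it can change without touching the text.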
It worked, but there were still limitations. (Our main competitor, meanwhile, was using a mangled form of SGML which still required people to embed hard-codes in the text; in our system, all of the formatting could be invisible.) The main problem was that people couldn’t see beforehand exactly how much space any text would take up – a very important concern to two of our customers, who were producing page-spread books and partworks, Dorling-Kindersley style. Hence some serious code-hacking (all assembly-language, with multiple overlays to squeeze into no more than 40kb of memory) to create a post-processor that would copyfit line-by-line for the correct fonts and sizes, and output a symbolic result to a dot-matrix printer. This was probably the first viable attempt at a true desktop-publishing system – several years before Macintosh and, later, PageMaker.
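Stripped of the assembly-language and the overlays, the core of the copyfitting idea reduces to summing per-character set-widths for the chosen font and size against the column measure, line by line. A toy sketch, with an invented width-table (real font metrics were far more detailed):

```python
# Toy sketch of line-by-line copyfitting: how many lines will this text
# occupy at a given font size and column measure? Width values are invented.

CHAR_WIDTHS_EM = {" ": 0.25, "i": 0.28, "l": 0.28, "m": 0.89, "w": 0.92}
DEFAULT_WIDTH_EM = 0.5   # rough average for characters not in the table

def copyfit(text: str, point_size: float, measure_points: float) -> int:
    """Count the lines needed to set 'text' at 'point_size' across a column
    'measure_points' wide, breaking (crudely) at the last space that fits."""
    lines, line_width = 1, 0.0
    width_of_space = CHAR_WIDTHS_EM[" "] * point_size
    for word in text.split():
        word_width = sum(CHAR_WIDTHS_EM.get(c, DEFAULT_WIDTH_EM) for c in word) * point_size
        if line_width and line_width + width_of_space + word_width > measure_points:
            lines += 1
            line_width = word_width
        else:
            line_width += (width_of_space if line_width else 0) + word_width
    return lines

# e.g. a paragraph set in 10pt type across a 20-pica (240pt) measure
print(copyfit("some reasonably long paragraph of body text " * 10, 10, 240))
```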
End-result: I’m good at creating ideas and markets, and all the preliminary work that gets things going, but I’m not good at running businesses – that’s a different mindset entirely. Eventually we sold out to another pre-press company and (in an all too literal sense) I ran away, first to the US, and then onward to Australia. I believe it’s still running, and certainly made millions for the new owners. (I didn’t, of course.)
Example: Domestic-violence resolution (1980s-90s)
Big-picture theme: reducing and repairing the damage from social harm, particularly between individuals.
History: Fights and power-games between individuals in a domestic context have been part of the human story since forever, but had usually been largely covert and ignored as ‘a private matter’ for most of that time. It was brought into public notice in the 1970s by women’s activists, most notably Erin Pizzey, founder of Chiswick Women’s Aid. Unlike Pizzey herself (who has always insisted that domestic-violence (DV) is a human problem, not a gendered one), most activists purport that DV is something that happens almost exclusively to women, and is caused almost exclusively by men – so much so that some have called for the term ‘domestic-violence’ to be replaced always by the term ‘violence against women’. Most current law (e.g. US ‘Violence Against Women Act’), support-structures (domestic-violence help-lines) and formal theory (e.g. Duluth) reflect this assertion. I became involved in the field during the 1980s as a member of a pro-feminist men’s group who were taking up the feminist challenge that all violence was caused by men alone, and that it was therefore men’s responsibility alone to resolve the (purportedly) ever-rising tide of men’s violence against women. The issues became more personal later when two of my lesbian friends asked me for advice after they had ended their relationship with a knife fight (without injuring each other, fortunately) but had been explicitly shut out from any help because no man could be blamed for the violence.
Conceptual mismatch: The theory was straightforward: men are the problem, women are the solution, and the only useful thing that men can do is blame themselves for everything that goes wrong in the world. Everything in my background supported that assertion, hence it seemed to make sense: self-blame had been a very deeply ingrained habit for me, going right back to earliest childhood. Yet the whole field seemed riddled with gendered special-cases: behaviours that were definitely violence if done by a man were, if done by a woman, either deemed ‘not violence’ or ‘indirectly caused by men, therefore men’s fault’. In the Duluth model, blame itself was classed as a form of violence only if done by a man, and only if the person being blamed was an adult woman: blaming of men (or in essence almost any other form of abuse of men) was explicitly not classed as violence. And the real catch was that, in terms of outcomes, it clearly wasn’t working: no matter how much we blamed ourselves, and blamed other men, the overall level of violence in the culture around us still seemed to continue to rise.
Vested interests: Looking around, it was very clear that there were a large number of players – mostly but not all women – whose identity and self-worth depended on putting men down, regardless of whether or not this actually helped women in general, or anyone in general. There were also very large sums of money, and large numbers of jobs, that depended on maintaining the assertions around women’s purported exclusive victimhood in this context.
Assessment and action: The first warning-signs appeared in one of our standard text-books, Paul Kivel’s Men’s Work: How To Stop The Violence That Tears Our Lives Apart, which is designed around a series of workshops for senior-school students. The book includes many oddly-unrealistic role-play scenarios in which an adolescent boy or young man is suddenly violent or abusive to a woman; yet the only real example of violence described in the whole book is an actual incident in which two girls had a full claws-out fight when one insulted the other in the classroom – and in which no boys were involved at all, other than to separate the warring parties.
After my lesbian friends had their knife-fight, we discovered that no violence-resolution material was available that acknowledged even the possibility that a woman could be a perpetrator of violence. The standard Duluth model defines violence as inherently ‘male’; on the Duluth Wheel, female pronouns are used exclusively throughout to indicate victim, and male pronouns exclusively for perpetrator, and mutuality (where both parties are both ‘perpetrator’ and ‘victim’ of each other and of themselves) – which clearly applied in my friends’ case – is explicitly denied. I decided to try a very simple thought-experiment: swap the gender-pronouns throughout, and see if it still makes sense in terms of real-world evidence and experience. It did: in fact for most of the Duluth categories of abuse it made more sense than the ‘official’ way round. Also – importantly – two key categories of abuse were absent from the original model: sexual abuse, and third-party-abuse. It became immediately clear that the Duluth model itself was structured as third-party abuse, primarily leveraged through other-blame – in other words, far from reducing violence and abuse, it was actually designed to increase it. (Whether that mis-design was intentional, or merely arose from incompetence and excess zeal, is a separate issue that I will not discuss here… 😐 – but the fact of its unfitness for purpose cannot be in any doubt.) A simple ‘de-gendered’ redesign resolved almost all of the structural problems, sufficient at least to satisfy my friends’ immediate needs.
That exposure of the extreme inadequacies of the original Duluth model forced our group to reassess all of our previous assumptions about gender and violence, and thence to look again at the research on whose purported facts we’d based those beliefs. I did two analyses of a much-published study on which Australian public policy was based – the first analysis on the public version of the paper and political assertions from it, and the second analysis on the original academic study, which took quite a bit of work to obtain, since it was not publicly available. Another colleague, as his MA thesis, undertook a meta-analysis of domestic-violence studies in Australia. The results were shocking. None of the original studies were based on defensible methodologies – in fact many were so riddled with basic methodological errors such as circular-reasoning that they were essentially meaningless. And in all cases, all of the methodological errors either inflated the female injury-rate or risk, diminished or denied the male injury-rate or risk, or both: there were no exceptions. In short, almost none of what we’d previously taken as ‘fact’ was fact at all. The only genuine facts we could establish were that domestic-violence was a systemic issue with some gendered overtones, and that although it affected both sexes in different ways, overall it seemed to do so almost equally – though there were strong indications from hospital data and the like that the majority of victims were male, not female.
We then looked at public policy, and the provision of domestic-violence support-services. These too were based on the same fundamentally-flawed assumptions and the same unquestioned circular reasoning: women are the only victims, hence support-services are only available to women; and since only women use these services, this proves that women are the only victims. In some of our interviews we discovered that men who’d been abused – knifed, in one case – were referred to police for charges, simply because the models in use automatically deemed men to be the sole perpetrators, regardless of the actual context or evidence. In short, the entire domestic-violence resolution ‘industry’ was, and still is, an unworkable and fundamentally dysfunctional mess whose structures and methods are all but guaranteed to cause far more harm than good: an archetypal example of the Shirky Principle that any institution will attempt to preserve the problem to which it purports to be the ‘solution’.
End-result: The domestic-violence ‘industry’ is the outcome of a classic example of a ‘term-hijack’, in which a small subset of a systemic issue is misframed as the whole, and strenuous efforts are made to deny or conceal any other aspect of that issue. In effect, the term-hijack converts a resolvable systemic context into a non-resolvable ‘wicked-problem’, in which every attempt to resolve a problem is constrained by the structural myopia, inevitably making things worse with each iteration. Unfortunately, there are huge vested-interests in maintaining the term-hijack. Anyone who challenges it – as I and many others have learnt to our cost – is likely to come face to face with extreme violence from women who somehow purport that no woman is ever violent. 🙁 It seems clear that resolving these structural problems would require a high level of honesty and humility from those players – an honesty that in most cases at present seems conspicuous only by its absence…
Some of the material I wrote is out there and in daily front-line use by others – with real success, according to the occasional emails I still receive on the subject. But to be blunt, after a decade of relentless ongoing abuse from almost all sides, I just gave up and literally threw away most of the work that I’d done… the structural dishonesties in this mess are so entrenched and so ‘political’ that I found it just too painful to be involved at all, and it still seems that resolving the mess would require fundamental shifts in societal attitudes and beliefs that would be unlikely to occur within my own lifetime. Oh well.
The issues are generic, though, and can be resolved at a more generic level. You’ll see how some of these exact same issues are addressed in the business-context in my book Power and Response-ability: the human side of systems and its accompanying ‘manifesto‘.
Example: Enterprise-architecture (2000s-to-present)
Big-picture theme: helping organisations and overall shared-enterprises become more efficient and effective (‘doing the right things right, on purpose’).
History: The main focus of enterprise-architecture is around the relationships between structure, purpose and business-execution. As a discipline, it’s been around for at least a century in various forms, such as Taylorism (‘scientific management’), operations-research and organisational cybernetics. I often describe it as based on a single, very simple idea: that things work better when they work together. Although my work often touched on it over the decades, I first became actively involved perhaps fifteen years ago, when trying to tackle issues around long-term knowledge-management in aircraft research. Over the past decade, most of my work has revolved around various aspects of enterprise-architectures.
Conceptual mismatch: The term ‘enterprise-architecture’ implies a very broad whole-enterprise scope. In recent decades, though, the term ‘enterprise-architecture’ has often been (mis)used to denote a very small subset of the real scope, relating to IT-infrastructure or IT-systems in general. This (mis)usage probably arose from a simple contraction of the term ‘enterprise- or organisation-wide IT-architecture’. The result, however, is a very serious term-hijack: the tiny subset of the overall enterprise represented by IT purports to be the whole, with all other aspects of the enterprise – including people, purpose, physical facilities and non-IT machines of any kind – either concealed or denied. In effect, it becomes all but impossible to discuss any aspect of enterprise-architecture without being forced to describe everything in terms of IT – even in contexts where IT-systems are either not relevant or not available.
Vested interests: There are huge vested interests in maintaining the story that ‘enterprise-architecture’ relates only to IT. Many, many billions of dollars are invested each year on IT-systems that purport to resolve inherently-complex enterprise-scale concerns such as customer-relationships, market-relationships, regulatory-compliance and the like. However, by definition, many if not most of these systems are incapable of resolving all aspects of the respective concerns, in effect converting them into non-resolvable wicked-problems; maintaining the ‘enterprise-architecture’ term-hijack makes it possible to conceal or deny the inherent dysfunctionality of the systems, instead maintaining the faith or fiction that the problems created can only be solved by yet another IT-centric system at yet further cost. There are also large vested-interests in training, certification and the like for IT-centric ‘enterprise’-architectures.
Assessment and action: The starting-point for assessment was a simple review of the term itself, deriving the natural-meaning via term-inversion. The ‘natural-meaning’ of a term is the meaning implied by the individual words of the term. The term-inversion here is ‘the architecture of the enterprise’: hence the natural-meaning is ‘anything to do with the structure and purpose [architecture] that underpin the emotional drivers and actions (the ‘animal spirits’ of the entrepreneur) in the shared context [enterprise]’. The purported exclusive-association of enterprise-architecture with IT does not occur in the natural-meaning: in fact the role of IT in the enterprise-architecture is implied only peripherally, as a minor aspect of support for ‘the animal spirits of the entrepreneur’. In other words, what we’re dealing with here is definitely a term-hijack – and an extremely unhelpful one at that, because the constraint on the scope (i.e. ‘enterprise’-architecture constrained solely to IT aspects of the enterprise) has such a limited connection with the actual scope (which would naturally focus more around people than machines).
Most of my work in the past decade, and particularly the past five years, has been focussed on finding ways to highlight the term-hijack, to resolve the resultant problems and dysfunctionalities, and to create models, methods and frameworks to guide a true enterprise-scope architecture, in some cases all the way out to a global scale. The public outcomes of this work so far include several books, a couple of dozen conference-presentations and other slidedecks, and many, many weblog posts.
End-result: We are getting somewhere with this one. Most ‘enterprise’-architecture conferences these days do explicitly include some discussion of the enterprise-scope beyond IT, usually under a banner of ‘business-architecture’, and there’s much stronger linkage to true business-architecture models and techniques such as Business Model Canvas. The real danger now is that there’s a tendency towards ‘business-centrism’ rather than ‘IT-centrism’ – in other words, where the architecture sub-domain of ‘the business of the business’ rather than the sub-domain of ‘the IT-systems’ is used as the base for yet another term-hijack. The crucial understanding that we’re still somewhat struggling to get across to most of the players in the field is that in a true enterprise-architecture, everywhere and nowhere is ‘the centre’.
But yes, we are getting somewhere with this one. Slowly… 🙂
Summary
So that’s what I do, and how I do it:
- explore a context that is of interest to me
- identify the conceptual mismatches that occur within that context, and that make it difficult to achieve effective results within that context
- identify the vested-interests that drive and maintain the current dysfunctionalities in the context, and, where possible, devise strategies and tactics to disarm and disengage those vested-interests
- assess the details of the dysfunctionalities in the context, and identify or design workarounds for those problems, and methods to work on the context when the dysfunctionalities are disengaged
- document the end-results in various forms, as appropriate
It’s a lot of work, and sometimes very painful work, but someone has to do it? 😐
A gentle warning on occupational-hazards
To anyone who might want to do this kind of work, I really ought to add some important caveats.
The work itself is actually not that hard. All it requires is a willingness to let go of assumptions, and tackle each of the issues with a rigorous attention to discipline, following the ever-changing rules of the different disciplines that apply at each moment whilst working in that context. Using beliefs as tools can be kind of challenging at times, but again it’s just another skill, and one that’s not that hard to build up over time.
It’s the social aspects of the work that are hard: sometimes very hard…
For starters, it’s often lonely. Very lonely. Part of that is because there aren’t many people who do this kind of work: at a guess, from what I’ve seen around the net and elsewhere, there may be as few as five or ten thousand people in the entire world who work in this space. Social-media does help to ease the loneliness a bit – the people I work most closely with are scattered literally across the entire globe – but it’s not the same as working in close proximity with close colleagues every working day.
Another part of the loneliness is that the feeling of loneliness – and likewise the insistent sense of self-doubt – is actually inherent in the work. It’s almost an indicator of success: as Whitney Johnson put it in her HBR article ‘Disrupt Yourself’, “If it feels scary and lonely, you’re probably on the right track”. To put it the other way round, the times when we feel most certain are probably the times when we’ve missed the point. It’s hard, and it usually hurts, every single day: so if you can’t cope with a relentless, all-pervading feeling of failure whilst somehow still creating the required results, you really shouldn’t do this work. There are plenty of other much easier ways to make a living, after all. (This isn’t a macho thing, “I’m tough” and that kind of garbage: in my own case, to be honest, I’m probably not suited to do most other kinds of work anyway. 😐 For me, though, there’s a real sense of ‘a calling’, an inner drive to do this work, whether I want to or not: and often that’s the only thing that keeps me going… 🙁 )
Another crucial point is that whilst there’s a great need for this kind of work, there’s also a huge ‘anti-want’ for it. Every aspect of this work implies some kind of mythquake; and anyone who has a vested interest in the status-quo – which in effect includes most of our would-be employers, amongst many, many others – will not want that mythquake to occur. It’s disruptive: it is, in a very literal sense, often anarchic. So for much if not most of the time, we’ll need to do the work ‘by stealth’, embedding it in other more conventional analysis-work or the like. Doing it ‘by stealth’ is often the only option if you’re an employee, and even then it can be risky: as one of my ProFuturist colleagues put it, “if you’re employed as a professional futurist, and you’re not being fired at least once every year or so, you’re probably not doing your job properly!”
In my own case, I’ve never been an employee: only ever a self-employed contractor, an independent consultant or running my own business. I’ve survived somehow, though often I don’t quite know how – it’s certainly not an easy way to run one’s professional-life. But I’m well aware that’s not a viable option for many people, especially those with young families. If you are an employee, and you want or need to do this kind of work, you definitely need a Plan B – and to work hard on building and maintaining your professional reputation, such that you can recover from being fired after that ‘one disruption too many’.
Another subtle problem that affects many of us arises from the fact that this work requires us to be very good generalists. The good part of being a generalist is that we’re able to learn fast and be interested in anything, at any level of the enterprise. The disadvantage is that, when people compare us to specialists, we almost always come off second-best – and the fact that we specialise in being generalists doesn’t seem to count, especially where the over-simplistic assessments of recruiters and the like so often come into play. In almost all of my contract- or consultancy-work in the past couple of decades, I’ve ended up doing a different (and much broader-scope) role than the one I was nominally employed for: the problem was that I somehow needed to be employed for something in the first place, and that can be a real hurdle. So the catch for us is that we need to be at least as skilled as the typical specialist, whilst also being very skilled as a generalist. It’s not easy, and is one reason why the really good enterprise-architects tend to be older, often into their fifties or more – simply because it takes that long to build up the generalist portfolio and experience whilst embedded in what is (to be honest) often a complete waste of time and effort in a ‘required’ but irrelevant specialist role.
Overall, though, it’s probably the loneliness that hurts the most. But if you can cope with that, and with all of the other challenges of ‘the trade’, then yes, we definitely need you… come and join the club, perhaps? 🙂
you are NOT alone!
@Pat Ferdinandi – Yes, I do know that. 🙂 But the point is that even if in reality we are not alone, it feels that we are – and as in that quote from Whitney Johnson above, if we don’t feel that sense of aloneness and isolation at times, we’re probably not doing our job properly.
As it happens, right now I don’t feel that loneliness: but the fact is that often I do. And so does just about everyone else in this ‘trade’: I know you have at times, because you’ve said so, and so does everyone else I know. By the nature of the work, a persistent feeling of loneliness is an everyday occupational hazard in this trade; so if someone finds that kind of loneliness too hard to bear, they probably shouldn’t take up this line of work. It’s not about me personally here, it’s about the nature of the work itself: that’s all I’m saying. Honest! 🙂
Tom, I really appreciate you taking so much time to indulge me. These examples are great!! This post highlights your methodical thinking. The methods, mechanics and approaches section is illuminating.
I’m already beginning to experience some of the loneliness. I’m a purist at heart. I’m learning that I have to compromise. Still there are a lot of people who don’t get what I do and I’m only at the beginning of the journey.
Tom, your story made me think about that other Frank Gehry again and one of his quotes:
“Architecture is a small piece of this human equation, but for those of us who practice it, we believe in its potential to make a difference, to enlighten and to enrich the human experience, to penetrate the barriers of misunderstanding and provide a beautiful context for life’s drama.”
– Frank Gehry
Tom, the paragraph about being a generalist and more importantly about performing a broader role than the specific thing you were taken on for, certainly strikes a chord. To be honest, no one’s ever asked me to come and do enterprise architecture (not 100% true but the one exception turned out not to be what I would call EA anyway!). My (our) customers have their own jobs to do and live in the real world that is created by doing that job. So they tend to have a specific need, which typically requires some (business) strategy work and which can’t be done (by me at least) without understanding at least the relevant part of the business architecture (in the sense in which you use the term) and the various elements of the possible solutions (strategy options) to their requirement. And hey presto, here we are doing EA.
But the nice thing (getting back to being a generalist) is that often one’s ability to be a good generalist is actually appreciated by the business folks, who are constantly confronted by specialists with tunnel vision. So it’s not really so lonely and, when it is, well we have networks like the ones that lead us to this site, don’t we?