Quality-systems and enterprise-architecture

Continuing on from ‘Framework versus body-of-knowledge’, the same colleague asked me for some notes on how we could apply quality-systems concepts to enterprise-architecture itself.

Background to this is that perhaps a dozen years back, I was working at an engineering research-lab, doing databases for test-management and that kind of stuff. At the time, the lab was trying to find out how to apply quality-systems to its own work – in part to comply with government regulations, but even more as a means to reassure its customers that its work was good (which it was). We were asked to join in with the quality-systems effort.

The catch was that the quality-system standards just didn’t seem to fit. There was a national standard (whose name I’ve now forgotten) for laboratory-work, which is what they ended up using, even though it only covered one aspect of the work. The main international standard, the ISO-9000 series, seemed only to cover manufacturing and repeatable services (ones that simply substituted ‘service’ for ‘product’ in what was otherwise a very manufacturing-like context) – and the whole point of the place was that it dealt with things that were new, different, difficult and at first generally not repeatable. There was an attempt to apply ISO-9000 to the tiny amount of manufacturing that took place there, and hence gain a somewhat spurious ISO-9000 certification through that, but it was all a bit sad and pointless, really.

The breakthrough came when one of our team asked a simple question: what actually was the product of the place? Facts were an important part of it, sure: but our clients didn’t really use those facts as such. A few patents, which were often important money-spinners: but that wasn’t the real aim of the place. (Officially, anyway…) What the clients were really paying for was advice, recommendations, process-designs, checklists, revised ways of working – that kind of thing. In other words, the real product of the place was opinions.

And much the same applies to enterprise-architecture. We don’t have much that’s visible in terms of product or direct-deliverables – one of our real challenges, in fact. At the topmost levels of enterprise-architecture we don’t even deliver designs: that’s the solution-architects’ territory, not ours – and they can get quite upset if they think we’re treading too much on their turf. Most of what we do is about connecting-the-dots, wandering around, collecting and sharing ideas, talking with people, or getting other people to talk with each other – which is often a lot harder than it sounds… But when we stop and think about it, it’s actually much the same as at the research-lab: our real product is opinions.

And if an opinion is a product, then all of the ISO-9000-type quality-system stuff should therefore apply.

Yet how?

The simplest way is probably to split it into the usual three focus-areas:

  • input
  • process
  • output

Simple as that, really. So let’s make a start on this, in terms of each of those focus-areas.

Quality of inputs

What are the inputs to enterprise-architecture? What do we need to do to ensure and verify the quality of the inputs to our work? Some suggestions:

Our work will incorporate both fact and non-fact: both are valid here, but we need to be clear as to which is which. For example:

  • Physical things are facts. Hard-data are facts. Opinions and interpretations – including the fundamental ‘end-products’ of the work itself – are not facts.
  • Emotions and feelings are facts. Opinions and interpretations and ‘shoulds’ about those emotions and feelings are not facts.
  • Hard-data are facts. Information is personal-interpretation of nominal fact, and should usually be regarded as non-fact.
  • The calculations in an algorithm may be demonstrable as fact. The assertions and assumptions that underpin the algorithm’s calculations are likely to be non-fact.
  • The past is a fact. The future is a non-fact.

We will need to establish trail-of-provenance for facts and non-facts. This is because apparent-fact may be founded in part on non-fact, and non-facts such as interpretations and opinions would – or should – usually have at least some foundation in fact. Again, we need to be clear as to which is which. Example tactics include:

  • For research-opinions, follow the trail of citations.
  • For data-as-facts, identify the context from the metadata.
  • For ‘shoulds’, identify the scope, context and (where appropriate) power-relations.
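
As a very rough illustration of how these distinctions and provenance-trails might be captured in practice, here’s a minimal sketch in Python. The names and structure (Claim, Source, Status and so on) are my own assumptions for illustration only, not part of any existing tool, framework or standard:

```python
# A minimal sketch (assumptions mine) of recording fact versus non-fact,
# plus a trail-of-provenance, for architecture inputs.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Status(Enum):
    FACT = "fact"          # e.g. hard-data, physical things, the past
    NON_FACT = "non-fact"  # e.g. opinions, interpretations, 'shoulds', the future

@dataclass
class Source:
    description: str       # e.g. "ops monitoring export", "stakeholder interview"
    context: str           # scope, metadata or power-relations, as appropriate

@dataclass
class Claim:
    statement: str
    status: Status
    sources: List[Source] = field(default_factory=list)        # trail-of-provenance
    derived_from: List["Claim"] = field(default_factory=list)  # apparent-fact may rest on non-fact

    def rests_on_non_fact(self) -> bool:
        """True if any claim anywhere in this claim's derivation-trail is a non-fact."""
        return any(c.status is Status.NON_FACT or c.rests_on_non_fact()
                   for c in self.derived_from)

# Usage: an architectural recommendation (opinion) grounded in measured data (fact).
measurement = Claim("Average batch-job runtime is 4.2 hours",
                    Status.FACT,
                    sources=[Source("ops monitoring export", "production, Q3")])
advice = Claim("Batch-jobs should move to an event-driven design",
               Status.NON_FACT,
               derived_from=[measurement])
```

The point is not the code itself, but that a claim which rests, somewhere down the line, on opinion can be identified and flagged as such.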

We will need to compensate for Gooch’s Paradox – “things not only have to be seen to be believed, but also have to be believed to be seen”. Beliefs (or paradigms, in the Kuhn sense) act as filters on what can be perceived, or what is considered relevant or not-relevant. Examples include:

  • Frameworks constrain what can be seen, what can be modelled, and hence what is implicitly included in or excluded from scope.
  • The assumptions, worldviews, taxonomies and ontologies of a discipline can greatly distort both perceived-fact and visibility of anything beyond the natural-scope of that discipline – hence, for example, the impact of the inside-outward IT-centrism that plagues our profession at present.
  • Clashes will tend to arise wherever two or more groups or individuals assert that their own distinct worldviews or perspectives are ‘the truth’ – in other words, non-fact (opinion) masquerading as fact.

We will need work-instructions, procedures, policies and vision (see ‘Layering and recursion’ below) to provide guidance for all of these concerns.

Quality of process

What aspects of our processes affect the quality of the enterprise-architecture? Some suggestions:

We will need clear specifications for process, governance, trails-of-provenance, required skill-sets etc – especially where such items are repeatable. This is a key role for frameworks and methodologies. When evaluating such frameworks and methodologies, for example:

  • Ensure completeness of process-description – particularly for ‘non-process’ concerns such as governance, required skill-sets etc.
  • Ensure completeness of process-tracking – such as recommended or mandatory standards for documenting process-instances and process inputs and outcomes.
  • Ensure that process- or framework-scope is appropriately specified – particularly where the respective domains may be affected by the arbitrary mis-constraints of a ‘term-hijack’, such as ‘Enterprise 2.0’, or IT-centric ‘enterprise-architecture’, or even ‘economics’.
  • If the framework includes taxonomies or metamodels, ensure that these cover the nominal scope as specified, without arbitrary ‘term-hijack’ gaps or exclusions.
  • If the framework includes taxonomies or metamodels, ensure that their structures are internally-consistent – and preferably extensible.
  • Ensure that, wherever practicable, process-records and entity-instance records are re-usable for future processes and other uses, and exchangeable between content-repositories and modelling-tools.

We will need to ensure that appropriate skills and knowledge are applied within each of the respective work-processes. For example:

  • Ensure that, where appropriate, those carrying out the work have attained the related formal certification or other qualifications.
  • Ensure that those whose opinions and advice are sought during the work do have the appropriate skills and experience to provide contextually-valid information.
  • Ensure that those whose opinions and advice are sought during the work do have the competence and experience sufficient to distinguish between fact and non-fact within the respective context, and that such distinctions are appropriately documented.

We will need to maintain trail-of-provenance for processes and actions – who, what, where, when, how and why. Typically, this would be tracked via metadata – for example:

  • Ensure that the reasons for doing each item of work, the sponsors for that work, and other business-administration details are fully documented in traceable form for each respective item of architecture work.
  • Ensure that appropriate metadata is available and used for all content-entities.
  • Ensure that processes to map derivation-trails from content-entity metadata are available and used.
  • Ensure that appropriate governance for capture, maintenance and use is available for all content-entity metadata, particularly for trail-of-provenance and trail-of-derivation uses.
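
Again purely as illustration, here’s a minimal sketch of what a who/what/where/when/how/why record and a simple derivation-trail lookup might look like. The ActionRecord structure and its fields are assumptions of mine, not any standard metadata-schema:

```python
# A rough sketch (assumptions mine) of trail-of-provenance metadata for a
# single process-action, plus a backwards walk along the derivation-trail.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ActionRecord:
    who: str                  # person or role performing the action
    what: str                 # the action itself, e.g. "updated capability-model"
    where: str                # business-unit, repository, location
    when: datetime
    how: str                  # method, tool or framework used
    why: str                  # sponsor, reason or work-item reference
    inputs: List[str] = field(default_factory=list)   # ids of content-entities used
    outputs: List[str] = field(default_factory=list)  # ids of content-entities produced

def derivation_trail(entity_id: str, records: List[ActionRecord]) -> List[ActionRecord]:
    """Walk backwards from an output entity to every recorded action that contributed to it."""
    trail: List[ActionRecord] = []
    seen: set = set()
    frontier = {entity_id}
    while frontier:
        produced_here = [r for r in records if frontier & set(r.outputs)]
        trail.extend(r for r in produced_here if r not in trail)
        seen |= frontier
        frontier = {i for r in produced_here for i in r.inputs} - seen
    return trail
```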

We will need explicit guards against plagiarism and related knowledge-derivation issues. This also relates to the crucial distinctions between fact versus non-fact. For example:

  • Ensure secure trails-of-provenance and trails-of-derivation for all process-inputs and their usage within processes.
  • Especially, maintain full records of derivation and/or intersection wherever patents, academic- or analyst-publications and other so-called ‘intellectual-property’ concerns may arise.

We will need clarity on terminology, taxonomies, ontologies, abstractions, model-types and related themes.

  • Identify and document all taxonomies, terminologies etc currently in use in all relevant domains of the respective context.
  • Develop and maintain glossaries and thesauri appropriate to the context – including cross-references to identify conflicting usages of terminology etc in different domains or sub-domains of the context.
  • Identify the scope, constraints and assumptions of all abstractions and model-types in use and for use within the respective context – including any conflicts between abstractions and model-types.
  • Identify, and develop workarounds for, any contexts where abstractions or models apply inadvertent ‘term-hijacks’ to that context – such as the ‘Business / Information / Technology’ pseudo-layering commonly in use in IT-centric ‘enterprise’-architecture models and frameworks.
  • Ensure that all staff are familiar with and make full use of agreed standard terminologies, and of shared glossaries and thesauri – including training and, if appropriate, formal certification in usage of such terminologies.
  • Ensure that all special uses of terminology, and potential clashes and/or misinterpretations of special terminology, are fully captured and documented in all stakeholder-interviews and engagements.
  • Ensure that all terminologies, taxonomies etc used in models and model-types are fully documented, with full traceability back to the respective definitions current at the time the model was developed.
  • Develop, and ensure usage of, methods and documentation to track changes in use of terminology over time within the respective context.
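
To show one possible shape for this, here’s a small sketch of a context-glossary that records conflicting usages of the same term across domains. Everything here (Glossary, TermUsage, the example ‘service’ clash) is illustrative assumption, not a recommendation of any specific tool:

```python
# An illustrative sketch (assumptions mine) of a context-glossary that records
# conflicting usages of the same term across domains.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TermUsage:
    domain: str              # e.g. "finance", "IT-operations"
    definition: str
    since: str = ""          # when this usage was adopted, if known
    notes: str = ""          # e.g. source interview, known misinterpretations

@dataclass
class Glossary:
    terms: Dict[str, List[TermUsage]] = field(default_factory=dict)

    def add_usage(self, term: str, usage: TermUsage) -> None:
        self.terms.setdefault(term.lower(), []).append(usage)

    def conflicts(self) -> Dict[str, List[TermUsage]]:
        """Terms defined differently in different domains: candidates for clarification."""
        return {t: usages for t, usages in self.terms.items()
                if len({u.definition for u in usages}) > 1}

# Usage: the classic 'service' clash between business and IT vocabularies.
g = Glossary()
g.add_usage("service", TermUsage("business", "value delivered to a customer"))
g.add_usage("service", TermUsage("IT-operations", "a deployable software component"))
assert "service" in g.conflicts()
```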

We will need clarity on what is repeatable versus what is not-repeatable – and ensure that appropriate processes and methods are applied to each.

  • Use methods or models such as SCAN or context-space mapping or the ‘Inverse-Einstein Test’ to identify the repeatability or non-repeatability (uniqueness) of a context.
  • Ensure that the repeatability, non-repeatability or ‘variety-weather’ of each context is appropriately documented.
  • Ensure that processes and models that assume or depend on repeatability – such as Six Sigma, or most usages of UML and similar model-types – are not used in non-repeatable contexts and/or contexts with significant ‘variety-weather’, or at least are used only with full awareness of the limitations of such models in those contexts.
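
As a deliberately over-simplified sketch of the underlying idea, and emphatically not a rendering of SCAN or context-space mapping themselves, something like the following tagging could make the repeatability-assessment explicit enough to check method-choices against:

```python
# A deliberately simple sketch (not SCAN itself) of tagging each context with its
# assessed repeatability, so that method-selection can be checked against it.
from dataclasses import dataclass
from enum import Enum

class Repeatability(Enum):
    REPEATABLE = "repeatable"            # rule-based, suited to Six Sigma-style control
    PARTIALLY_REPEATABLE = "partial"     # guidelines and checklists apply
    UNIQUE = "unique"                    # principle-based, improvised, high 'variety-weather'

@dataclass
class ContextAssessment:
    context: str
    repeatability: Repeatability
    rationale: str     # e.g. outcome of a SCAN or context-space mapping session

def method_is_appropriate(method_assumes_repeatable: bool,
                          assessment: ContextAssessment) -> bool:
    """Flag repeatability-dependent methods applied to non-repeatable contexts."""
    if not method_assumes_repeatable:
        return True
    return assessment.repeatability is Repeatability.REPEATABLE

# Usage:
a = ContextAssessment("merger-integration roadmap", Repeatability.UNIQUE,
                      "assessed as unique in context-space mapping workshop")
assert not method_is_appropriate(True, a)   # e.g. don't rely on Six Sigma here
```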

We will need appropriate management of inherent-uncertainty in the work. For example:

  • Develop and ensure use of appropriate guidelines and checklists, to reduce cognitive-load and aid in sensemaking and decision-making in partially-repeatable contexts.
  • Develop and ensure use of appropriate principles, to aid in sensemaking and decision-making in fully-novel contexts.
  • Develop and ensure use of governance-processes that accept and work with inherent-uncertainty.
  • Develop and ensure use of reporting-metrics that accept and work with inherent-uncertainty.
  • Ensure training in self-management for all people who must work with inherent-uncertainty – covering, for example, emotional realities such as feelings of failure or of panic, and practical techniques such as Agile-style development methods and real-time improvisation.

We will need appropriate methods to manage the exploratory nature of ideation and prototyping stages in the work – including iterative processes and ‘rework’. For example:

  • Ensure appropriate training in techniques such as design-thinking, Agile-style iterative-development, and methods from strategy- or futures-development.
  • Ensure that success-metrics are aligned with the type and level of uncertainty.

The main quality-system challenges for processes in enterprise-architecture relate to inherent uniqueness and non-repeatability: however, these can be addressed – and should be addressed – via means such as those summarised above.

Quality of outputs

In what forms and via what means is our output delivered? What do we need to do to ensure and verify the quality of the outputs of our work? Some suggestions:

All outputs must be presented in formats appropriate for the respective audience. This is primarily a concern about relevance and usability: if an output cannot be used by its intended audience, its practical quality will in effect be low, regardless of the quality of the work that led to that output. Hence, for example:

  • Different diagram-types, text-structures and other content-formats are likely to be needed by different audiences – for example, abstract diagrams may be appropriate for technical audiences, but not appropriate for non-technical audiences, who may need more concrete illustration such as full visualisations and ‘fly-throughs’.
  • Model-content repositories will usually need to be able to present the same nominal content in multiple formats.
  • Processes and selection-criteria for matching presentation-formats with audience-types should be explicit and generally available.
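
One way to make those selection-criteria explicit would be something like the following sketch; the audience-types and format-catalogue here are invented for illustration only:

```python
# An illustrative sketch (names and mappings are assumptions, not recommendations)
# of making the audience-to-format selection criteria explicit rather than ad-hoc.
from typing import Dict, List

# Which presentation-formats are considered appropriate for which audience-type.
FORMAT_CATALOGUE: Dict[str, List[str]] = {
    "executive":      ["one-page summary", "benefits map", "fly-through visualisation"],
    "business-unit":  ["process storyboard", "capability heat-map"],
    "solution-team":  ["UML / ArchiMate diagrams", "interface catalogue"],
    "operations":     ["runbook extract", "deployment diagram"],
}

def formats_for(audience: str) -> List[str]:
    """Return the agreed formats for an audience, or fail loudly if none are defined."""
    try:
        return FORMAT_CATALOGUE[audience]
    except KeyError:
        raise ValueError(f"No presentation-formats agreed for audience '{audience}': "
                         "add them to the catalogue before publishing outputs.")

# Usage:
print(formats_for("executive"))
```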

All outputs must provide clear distinctions between fact and non-fact. This is a direct corollary to the equivalent concerns regarding inputs, hence the same distinctions should apply:

  • Facts include physical things, hard-data, emotions and feelings, algorithms and other ‘hard-structures’, and the non-changeable past.
  • Non-facts include opinions, interpretations, assertions, assumptions and future-intentions.
  • All non-facts used and presented within outputs must be identified and identifiable as non-fact.

All outputs must enable clear trails-of-derivation to the respective inputs via the respective processes. This is a direct corollary of the equivalent concerns for inputs, hence much the same practices would apply:

  • For facts, identify the trail of derivations.
  • For opinions and other non-facts, identify the trail of sources.
  • For mixed fact and non-fact – probably the most common case in enterprise-architectures – identify the intersections between the trails of derivations (fact) and sources (non-fact).
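
As a minimal illustration of the kind of pre-publication check this implies, where the claim-structure shown is an assumption of mine rather than any established schema:

```python
# A minimal sketch (assumptions mine) of a pre-publication check on an output:
# every non-fact must be flagged as such, and everything must carry its trail.
from typing import Dict, List

def output_problems(claims: List[Dict]) -> List[str]:
    """Each claim is a dict with 'statement', 'is_fact', 'sources' and 'derived_from'."""
    problems = []
    for c in claims:
        if not c.get("is_fact") and not c.get("flagged_as_opinion"):
            problems.append(f"Unflagged non-fact: {c['statement']}")
        if c.get("is_fact") and not c.get("derived_from"):
            problems.append(f"Fact without derivation-trail: {c['statement']}")
        if not c.get("is_fact") and not c.get("sources"):
            problems.append(f"Opinion without sources: {c['statement']}")
    return problems

# Usage:
claims = [
    {"statement": "Batch runtime averages 4.2h", "is_fact": True, "derived_from": ["ops-log-17"]},
    {"statement": "We should move to event-driven design", "is_fact": False,
     "flagged_as_opinion": True, "sources": ["ops-log-17", "architect judgement"]},
]
assert output_problems(claims) == []
```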

Wherever practicable, all outputs of processes should be structured to enable re-use and re-purpose of those outputs in future processes and future trails-of-derivation. For example:

  • Use consistent document-naming and entity-naming schemas.
  • Use standard document-formats – preferably both human-readable and machine-readable.
  • Wherever practicable, enable multiple-use and multiple-presentation of entities in content-repositories.
  • Wherever practicable, content-repositories should enable export and import of content in sharable and re-usable formats.
  • Ensure that naming-schemas and file-formats are appropriate for the full life-cycle of potential use and re-use of the information.
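
For example, a consistent naming-schema and a machine-readable metadata-header might look something like the sketch below; the schema and field-names are assumptions for illustration, not a proposed standard:

```python
# An illustrative sketch (schema and names are assumptions, not a standard) of a
# consistent naming-schema plus a machine-readable metadata header for each output.
import json
from datetime import date

def _slug(s: str) -> str:
    """Reduce a name-part to lower-case alphanumerics, for sortable tool-agnostic names."""
    return "".join(ch for ch in s.lower() if ch.isalnum())

def artefact_name(domain: str, artefact_type: str, subject: str, version: int) -> str:
    """e.g. 'ea-capabilitymodel-payments-v3'"""
    return f"{_slug(domain)}-{_slug(artefact_type)}-{_slug(subject)}-v{version}"

def metadata_header(name: str, author: str, inputs: list, lifecycle_years: int) -> str:
    """Human- and machine-readable metadata to travel with the artefact."""
    return json.dumps({
        "name": name,
        "author": author,
        "created": date.today().isoformat(),
        "derived_from": inputs,                       # trail-of-derivation hooks
        "expected_lifecycle_years": lifecycle_years,  # often far longer than the project
    }, indent=2)

# Usage:
name = artefact_name("EA", "capability-model", "payments", 3)
print(metadata_header(name, "j.bloggs", ["interview-042", "ops-log-17"], 25))
```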

Note that last item about life-cycles: often these are far longer than the direct use of any artefacts from the architecture work. For example, many organisations still use fifty-year-old mainframe-technologies in parts of their information-architectures; health or finance information may well apply literally to someone’s whole lifetime, or longer. Civil-engineering projects may well impact civil-architectures for centuries: for example, some parts of London’s waste-water management still use channels and culverts from the Roman-period city – almost two thousand years of continuous use, yet now in very different contexts from when they were first built. Information-lifetime, and entity-lifetime in general, is rarely as simple as it may seem at first sight…

Layering and recursion

If we take the view of the ISO-9000:2000 standard, there are four distinct layers to a quality-system:

  • at the day-to-day level, we have work-instructions that specify the how and what and where and when of what to do in a defined, specific context
  • if any part of that context changes, we must move ‘upward’ to the procedure, which defines the applicable roles and responsibilities for that context – the who and why – including the roles and responsibilities for defining a new work-instruction
  • if the roles and responsibilities become unclear, we must move ‘upward’ again to the policy, which outlines the current choices and guidelines within the broader context
  • if the reasoning behind those choices and guidelines becomes unclear, we move ‘upward’ one final step, to the nominally-unchangeable vision for the overall context

This is the same kind of layering that we’ll often see in enterprise-architectures, using ‘why?’ to move ‘upward’ towards abstraction and generalisation, versus ‘how?’ or ‘with-what?’ to move ‘downward’ towards concrete realisation – the story made real.

These relationships are largely recursive and reiterative: for example, a change in policy is likely to require new procedures and work-instructions to implement the policy. Also, different hierarchy-‘layers’ within the organisational structure will have their own policies, procedures and work-instructions, and any change in those ‘layers’, or in different business-units, may impact on, and force changes in, the policies, procedures and work-instructions of other business-units that interact with them.
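
As a last illustration, the escalation-rule implied by that layering could be sketched roughly as follows; this is my own interpretation of the layering described above, not anything defined in ISO-9000 itself:

```python
# A sketch (my own interpretation of the ISO-9000:2000-style layering described above)
# of the 'move upward when the context no longer fits' escalation rule.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Layer:
    name: str                     # work-instruction, procedure, policy, vision
    answers: str                  # which questions this layer resolves
    parent: Optional["Layer"] = None

vision = Layer("vision", "nominally-unchangeable purpose of the whole context")
policy = Layer("policy", "current choices and guidelines", parent=vision)
procedure = Layer("procedure", "roles and responsibilities: the who and why", parent=policy)
work_instruction = Layer("work-instruction",
                         "the how, what, where and when of a specific context",
                         parent=procedure)

def escalate(current: Layer, still_unclear: bool) -> Layer:
    """If the current layer no longer answers the question, move one layer 'upward'."""
    if still_unclear and current.parent is not None:
        return current.parent
    return current

# Usage: a changed context pushes a work-instruction question up to the procedure level.
assert escalate(work_instruction, still_unclear=True).name == "procedure"
```

The recursion noted above would then apply in the other direction too: once the question is resolved at a higher layer, new procedures and work-instructions cascade back downward.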

The overall principles of quality-systems are well-documented and (for the most part) well-understood in most business-contexts. And there’s no inherent reason why enterprise-architecture should be any different. The challenge here, then, is to apply the same quality-system principles to our own work as enterprise-architects: what can we do to ensure and improve the quality of our work?
