Metrics for qualitative requirements
Just how should we handle qualitative requirements in system-design and enterprise-architecture? Should we, for example, reframe them into quantitative terms, as metrics – because it’s a lot easier to keep track of ‘measurable things’?
Over the past couple of days I’ve been having a great Twitter back-and-forth on this with Catherine Walker, with a bit of brief assistance from Dave Snowden and Sally Bean. The start-point was a Tweet by Catherine, quoting software-guru Tom Gilb at a conference on Lean Systems Development:
- transageo: “Keep on drilling down and decomposing until measurability becomes obvious” @imtomgilb
Hmm, yes, very ‘Tom Gilb’, that… – in fact looking back at some old notes, I discovered that I’d had a fairly big argument with him at the Unicom-EA conference in September last year about exactly this point. His theme there was that we should describe every requirement in a quantitative or measurable form; my response then, and now, is that, yes, in principle we can do that, but whether we should do that is a very different question – and in many cases the answer would be a most definite ‘No’.
Part of the reason I say this is that whilst many system-designers would focus on the functional requirements, much of architecture is ‘non-functional’, in that it often depends on so-called ‘non-functional’ or qualitative elements – and those qualitative requirements need to be kept intact, without being fragmented by overly-insistent analysis. Catherine Walker unintentionally highlighted what is perhaps the deeper problem here in another quote from Tom Gilb:
- transageo: “Selling an engineering culture to people steeped in a craftsmanship culture” (paraphrasing @imtomgilb)
Sorry, but this is dear old John Zachman’s beloved metaphor of ‘engineering the enterprise’ all over again: valid enough when used exclusively within a software-engineering context, but completely the wrong metaphor in any human context, and hence in many (most?) qualitative-contexts too…
Okay, yes, I take the point about ‘a craftsmanship culture’, by which I presume he means a technical-culture with little or no understanding of, or discipline with, formal-rigour. But in essence we covered that in the previous post here: and in real-world practice the problem is often not too little engineering-style formal-rigour, but too much of it – or rather, applied too much to the exclusion of everything else. The problem, in fact, is usually the almost exact opposite of Tom Gilb’s phrasing above: the need to build awareness of craftsmanship and respect for the uncertainties of ‘It depends…‘ amongst people so steeped in engineering-culture that they can’t or won’t acknowledge that no amount of analysis can ‘solve’ inherently-irreducible complexity.
There’s also the point that Sally Bean posted in a Tweet shortly before all of this, in a quote from Dave Snowden:
- Cybersal: “The problem with user requirement capture is that someone assumes there’s a requirement.” @snowded #cynhki
Who determines that something is ‘a requirement’, and that something else is not? That’s an extremely important question that gets too-easily skipped-over in the rush to reduce everything to quantitative measures – and it’s not a trivial question, either…
Anyway, I reTweeted Catherine’s quote of Tom Gilb’s “Keep on drilling down…” assertion, with an addendum that said something like “huh??? in a systems-context???”. (I’ve managed to lose my actual Tweet somehow…). Later that evening she came back on Twitter with the following clarification:
- transageo: A requirement for a s/w project could be, eg ‘High Availability’. Drill down: availability = (eg) reliability + maintainability // Then “Maintainability” can be expressed as (eg) av. fault fix time. “Reliability” as (eg) av time between system crashes. // A fuzzy requirement becomes quantifiable. More: http://www.gilb.com/dl437 [PDF] (though slides a poor substitute for the speaker)
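To make that drill-down concrete, here’s a minimal sketch of the ‘High Availability’ example in code – using the standard MTBF/MTTR formulation, which is my own illustration of the idea rather than anything from Gilb’s slides:

```python
# Hypothetical illustration of the 'High Availability' drill-down:
#   MTBF = mean time between failures  (a proxy for "reliability")
#   MTTR = mean time to repair         (a proxy for "maintainability")
# Steady-state availability is then the fraction of time the system is up.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: uptime / (uptime + downtime)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Example: a crash every 720 hours on average, fixed in 2 hours on average.
print(f"{availability(720.0, 2.0):.4%}")  # roughly 99.72% uptime
```

The fuzzy requirement has indeed become a number – but note that the number says nothing about *which* failures matter, or to whom: exactly the whole-system context that the rest of this conversation is about.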
However, I was still very unhappy with the notion of “a fuzzy requirement becomes quantifiable”, for reasons as above, and also illustrated in a great quote the other day by Harold Jarche, posted by Mark Foden:
- markwfoden: “One should never bring a knife to a gun-fight, nor a cookie-cutter to a complex adaptive system” @hjarche
The context of that quote was about the ‘cookie-cutter’ methods used by too many large-consultancies, but it’s the same core-problem as in this case: an over-reliance on analysis to the exclusion of everything else. Hence I followed up to Catherine’s comments with another Twitterstream-reply:
- tetradian: agreed re how drill-down to metrics works, but is inherently fragile relative to whole-system level, the original ‘why’ // danger is that fragmentary metrics in whole-systems lead to misuse of eg. Six Sigma – ask eg. @snowded for advice/comment on this // key point is that fuzzy-requirement _can_ be ‘quantified’ but also _always_ remains fuzzy: Complex, not Complicated. // Simple drill-down analysis tends toward presenting Complicated-only world, eventually pretending Complex/unorder doesn’t exist // (I suggest to talk w @snowded b/c you know his work, and it fits well with this case re fuzzy-requirements in whole-systems)
As per those Tweets, I suggested that she contact Dave Snowden, in part because of that comment quoted by Sally Bean above, but even more because this is exactly the kind of context he works in. (We disagree in some areas of our respective work, but not in this one.) He was kind enough to send in a couple of comments overnight:
- snowded: New Scientist: “extrinsic rewards destroy intrinsic motivation” 6S OK stable systems but damages innovation etc. // reductionist approach (drill down) may end up with you measuring wrong thing – hope that helps
It did help; and in the morning, Catherine and I returned in earnest to our back-and-forth:
- transageo: Thanks. Understood re. risk of small system components achieving undue weight/important connections missed. // I risk finding complexity so appealing that I deny/forget that quantification ever enhances shared understanding
- tetradian: I concur w @snowded on both points (extrinsic blocking intrinsic; risk of ‘measuring wrong thing’) – also break of link w whole // (classic example of fragmentation-by-misplaced-metric is ever-expanding breakdown of UK NHS – “death by targets” etc)
- transageo: so @imtomgilb ideas good discipline for me. I could def benefit from #CognitiveEdge education alongside, though 😉
- tetradian: @transageo: “so @imtomgilb ideas good discipline” – yes, is good discipline: yet must also balance w discipline to maintain systems-as-whole
At this point, Catherine came up with a very important question:
- transageo: Doesn’t that conflate metrics w/ targets? Measures should not have to become targets (tho agree they often, inappropriately, do)
- tetradian: in the hands of most managers, _every_ metric becomes a ‘target’ – especially if behaviour/bonus etc are linked to it… // _every_ metric/quantification needs a huge “It Depends…” label attached to it! 🙂 #entarch #bizarch
- transageo: Yes. @imtomgilb now talking about defining & quantifying “circs in which …(ie depends) “. Something will be missed.
- tetradian: “Something will be missed.” – (or “_may_ be missed”?) – yes. Hence need for systematic balance of drill-down and whole-system.
The trap here is that most managers still believe too strongly in the old dictum that “if you can’t measure it, you can’t manage it” – and therefore that whatever we can measure becomes a ‘target’ that must be trimmed back and trimmed back relentlessly in the name of ‘efficiency’, whilst whatever we can’t measure is deemed not to exist at all. As too many organisations have found to their cost, that mistake usually has disastrous consequences – for which the only ‘solution’ offered is, yes, yet more mangled ‘management’… Or, as Catherine commented:
- transageo: Or the system may meet reqs, missing greater needs, but material is in place to cover arses if needed
- tetradian: yep: CYA-centric view of ‘requirements’ / metrics is _exactly_ the danger here… 🙁 – “operation successful but patient died”..
That last quote was all too popular amongst Victorian surgeons – a somewhat extreme case of missing the point…
Anyway, Catherine did a wrap-up at this point, and the conversation came to a close:
- transageo: Yes, danger is clear 😉 I do think some degree of quantification can have value in underpinning shared understanding, though // Fully accept your point & @snowded’s that drill-down does not dissolve complexity, it must be embraced & absorbed
- tetradian: yes, quantification definitely has value (if only as a focus-discipline) – it just needs to be kept in balance & context, is all
That’s probably the best summary here: “drill-down does not dissolve complexity, it must be embraced and absorbed”. It’s often very useful to apply a drill-down to qualitative-factors, as a way of creating a better understanding about them, and better agreement amongst stakeholders as to how they themselves interpret those factors: yet it’s essential always to remember to also keep the qualitative-factors as themselves, whole and complete in their own right.
To put it at perhaps its simplest, there’s a qualitative difference between quantitative-requirements and qualitative ones: the latter cannot and must not be reduced solely to some form of quantitative metric, else the quality that makes them ‘qualitative’ will itself be lost.
Over to you, anyway: your comments, perhaps?