What happens when kurtosis-risk eventuates
Quite a few times on this blog I’ve talked about kurtosis-risk (‘fat-tail’ risk), and why it’s a crucially important issue for enterprise-architecture. But what exactly is it? What does it look like in real-world practice? Why is it such a serious risk? And what can we do to mitigate that risk?
Here’s a real-world example that hit the news this week…
Part 1: A pair of newcomer DJs on a Sydney radio-station place a prank-call to the hospital where Kate Middleton (as Duchess of Cambridge, wife of an heir to the British throne) is undergoing treatment for acute morning-sickness. The DJs pretend to be the Queen and Prince Charles. Their call is put through, and they are given confidential information about the Duchess’ condition:
Michael Christian had basked in the attention that he and his colleague Mel Greig, a former reality TV star, had obtained for their hospital hoax.
“The only bad thing about our Royal Prank… is knowing that I will NEVER EVER top this,” he posted on Facebook. “Less than a week in the job & I’ve already peaked.”
Part 2a: There are strong objections by the hospital about invasion of privacy, though the incident is largely dismissed as ‘one of those things’ by the family themselves. The hoax is borderline-legal, but no direct action is taken against the radio-station. However, the nurses involved are deeply upset, and receive counselling from the hospital and general messages of support.
Part 2b: Media-attitudes in Australia seem to be that the hoax was ‘fair game’ – especially in mocking ‘the royals’ – and that the reaction illustrates the inability of Britons to take a joke.
- London Daily Telegraph: ‘Only Britons hated hoax call, Australian media says’
Part 3: The Indian nurse who took the call at the hospital is found dead, having apparently committed suicide as a result of shame and remorse over her mishandling of the hoax call.
- “The BBC understands [the nurse] had not been suspended or disciplined by the hospital. The BBC’s Nicholas Witchell said it had been suggested to him that she had felt ‘very lonely and confused’ as a result of what had happened.” (BBC: ‘Duchess of Cambridge hoax call nurse found dead’)
Part 4: The hoax instantly ceases to be viewed as a joke, even in Australia:
- “Accused of having blood on their hands by furious listeners, the Australian DJs at the centre of the UK royal hospital hoax tragedy have been taken off air as the public backlash intensifies.” (BBC: ‘Australian DJs face backlash over hoax death’)
Part 5: The hoax has serious commercial and personal impact for the radio-station and its operators and investors:
- “As the pressure mounts, the bosses of troubled Sydney radio station 2Day FM have suspended until Monday all commercials, after some of Australia’s best-known companies, including telecommunications giant Telstra and supermarket heavyweight Coles, withdrew their advertising.” (BBC: ‘Australian DJs face backlash over hoax death’)
- “Southern Cross Austereo chairman Max Moore-Wilton promised in a letter to the chairman of King Edward VII’s Hospital that the company would co-operate with any investigation.” (BBC: ‘Duchess hoax: Australian radio station to review practices’)
I won’t comment much on the incident itself: what I want to focus on is that this is a classic example of a kurtosis-risk – a ‘low-probability’ risk in which the (usually short-term) gains achieved by ignoring the risk are then massively outweighed by the losses incurred if and when the risk eventuates.
As a quick summary, from the organisation’s perspective, the short-term gains from the first two days of the incident:
- huge media-exposure in local (Sydney) and national (Australia) media and social-media, most of it positive
- high audience-ratings, boosting customer (i.e. advertiser) exposure and satisfaction
- instant career-establishing ‘daredevil’ status for the two newcomer DJs
And, after the apparent suicide, the immediate and longer-term losses, again from the perspective of the organisation and its employees:
- huge media-exposure, local, national and global, almost all of it intensely negative
- reputational damage not just to the station, but also to its advertisers and to the industry as a whole
- immediate loss of advertising income (and possible permanent loss of advertising clients)
- instant career-destruction and ‘pariah’ status for the two DJs
- probable lifetime-scale emotional damage, certainly for the two DJs, and probably for many others in the organisation
- social pressure likely to drive political, legal, financial and other impacts across the overall media-industry in Australia
This disparity of gains versus losses is fairly typical for a kurtosis-risk: we saw much the same in the News of the World phone-hacking scandal or the BP Deepwater Horizon oil-spill, for example, or in the milder yet still significantly-damaging ‘United Breaks Guitars’ incident.
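To make that disparity concrete, here’s a minimal sketch of the underlying arithmetic – the figures are invented for illustration only, not drawn from any of these cases – showing how a rare but massive loss swamps a stream of small, near-certain gains:

```python
# Illustrative sketch only: invented figures, not data from any of the cases above.
# The point is the shape of the asymmetry, not the specific numbers.

routine_gain = 10_000          # assumed short-term gain per iteration (ratings, advertising)
catastrophic_loss = 50_000_000 # assumed loss if the kurtosis-risk eventuates
p_eventuation = 0.001          # assumed chance of eventuation on any one iteration

expected_value = (1 - p_eventuation) * routine_gain - p_eventuation * catastrophic_loss
print(f"Expected value per iteration: {expected_value:,.0f}")
# (0.999 * 10,000) - (0.001 * 50,000,000) = 9,990 - 50,000 = -40,010
```

Even at a one-in-a-thousand chance per iteration, the expected value is strongly negative – yet day-to-day experience, which almost always lands on the ‘gain’ side, keeps suggesting otherwise.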
In each case the trigger-incident is seemingly ‘unpredictable’ and ‘completely unexpected’:
- “The company has today written to the hospital, King Edward VII, to say the tragic death of its nurse, Jacintha Saldanha, was ‘unforeseeable’” (Telegraph, ‘Duchess hoax call: backlash against Australian radio station “unfair”’)
- “People should be careful in going too hard on these guys. They never would have thought, in a million years, that it would have gone this far.” (The Age, ‘Pranks common, so how did one call go so wrong?’)
Because the incident is viewed as ‘unforeseeable’, the negative reaction is often perceived and/or described by the organisation and its apologists as ‘unfair’:
- “‘It feels as though the British media are on a witch hunt,’ [company spokeswoman] told the Daily Telegraph. ‘It is quite easy to blame us … The Australian industry seems to sit quite fairly behind us. Prank calls have been going on for 50 years in the radio industry. It is not designed to humiliate or embarrass. All protocols were followed to the letter. It was only supposed to be a harmless prank.’” (Telegraph above)
And it’s true that it’s rarely about the people as such – hence the irrelevance and real unfairness of the ‘blame-hunts’ that so often occur after each incident of this type. Yet even a brief assessment of the context in each case would indicate that the respective incident is a direct outcome of a systemic flaw in design, operation and/or governance – frequently expressed as organisational culture. In other words, it is definitely predictable as a systemic risk: the only point that is not predictable is which exact iteration of the system would trigger the eventuation of that risk.
Strictly speaking, the exact failure-trigger is probably close to random, and hence in that sense its probability should remain much the same throughout. However, the longer a flawed system is allowed to run unchecked, the higher the effective probability of eventuation becomes, because the dysfunctional behaviours that the flaw drives become ever more deeply entrenched, coupled with a growing expectation that the people involved will be able to ‘get away with it’ and reap the short-term gains onward into the indefinite future. In the examples listed above:
- radio-station: systemic flaw: active promotion of mockery-culture (abusive/violent); payoff: higher audience-figures leading to higher advertising-revenue
- News International: systemic flaw: active promotion of blame-culture (abusive/violent); payoff: higher readership figures leading to higher advertising-revenue, plus self-portrayal as ‘crusading’ newspaper
- BP: systemic flaw: reprioritisation of short-term profit over safety or reliability (abusive – evasion of responsibility); payoff: significantly higher short-term profit
- United Airlines: systemic flaw: ‘customer-service’ system apparently explicitly designed to frustrate complainants into abandoning claims (abusive/violent); payoff: lower customer-compensation-costs
Note again that the specific trigger-incident in each case is arguably ‘unforeseeable’, but the probability of those systemic-flaws creating risk of a trigger-incident is foreseeable: it’s a direct outcome of that respective type of flaw and the behaviours that it will engender. The individual incident-risk is low, but over time, and if only by sheer weight of numbers, the effective risk is high.
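That ‘sheer weight of numbers’ point can be sketched quantitatively. Assuming – purely for illustration – a fixed, independent probability p that any one iteration triggers the risk, the chance of at least one eventuation over n iterations is 1 - (1 - p)^n, which climbs towards certainty surprisingly quickly:

```python
# Illustrative sketch: even a fixed, small per-iteration trigger probability
# (the 0.2% here is an assumed value) gives a high cumulative probability
# over enough repetitions of the behaviour.

def prob_at_least_one(p: float, n: int) -> float:
    """Probability of at least one eventuation in n independent iterations."""
    return 1 - (1 - p) ** n

p = 0.002  # assumed chance that any single 'prank' (or skipped inspection) goes badly wrong
for n in (10, 100, 500, 1000):
    print(f"after {n:>4} iterations: {prob_at_least_one(p, n):.1%}")
# after   10 iterations: 2.0%
# after  100 iterations: 18.1%
# after  500 iterations: 63.2%
# after 1000 iterations: 86.5%
```

And, as noted above, entrenchment tends to push that per-iteration probability upward over time, which only steepens the curve.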
In assessing the risk, two key factors should be understood:
- the nature of anticlient-risk
- the increased leverage of anticlient-risk afforded by social-media
The anticlients we’re usually concerned with here are ‘betrayal-anticlients’ – those who feel betrayed in some sense by the organisation’s behaviours, especially where those behaviours run counter to the organisation’s own espoused-values. (This ‘betrayal’-response can clearly be seen in relation to each of the examples above.) In most cases, an organisation can ignore or even ride rough-shod over a ‘betrayal’-complaint (as evidenced in the radio-station’s claim that it is itself the ‘victim’, or News International’s attempts to portray complainants as ‘publicity-seekers’); but leverage can be created, sometimes on a huge scale, in an extreme case or when public anger rises beyond a crucial tipping-point (as in the News of the World hacking of the voicemail of murder-victim Milly Dowler). Present-day social-media provide much greater leverage than in the past for such anger to ‘go viral’ all the way up to a global scale.
The systemic-flaws and concomitant kurtosis-risks may take any of a myriad of possible forms. Yet for purposes of assessment, the key factors behind almost all of those forms can be simplified right down to just two lines:
- power-over (aka ‘violence’): any attempt, in any form, to prop Self up by putting Other down
- power-under (aka ‘abuse’): any attempt, in any form, to offload responsibility onto the Other without their engagement and consent
(Those are the ‘win/lose’ forms: there are also somewhat-less-common ‘lose/win’ variants, respectively ‘prop Other up by putting Self down’ and ‘take on responsibility from the Other without engagement or consent’. There’s more detail on this in the ‘manifesto’ on power and responsibility in the workplace from my book ‘Power and Response-ability: the human side of systems’.)
Wherever either of those two factors occurs in organisations or elsewhere, some form of anticlient or systemic-failure kurtosis-risk will be created: it really is as simple as that. To again use those same examples:
- radio-station: prank-call is ‘propping Self up by putting Other down’, coupled with ‘offloading responsibility to the Other without engagement or consent’
- News International: phone-hacking is ‘offloading responsibility to the Other without engagement and consent’, for the explicit purpose of ‘putting Other down’
- BP: downrating of safety-inspection or equipment-inspection and failure to provide adequate supervision and monitoring of contractors is ‘offloading of responsibility to the Other’
- United Airlines: customer-complaints system apparently designed around systematic ‘offloading responsibility’ onto just about any Other, plus ‘propping Self up by putting Other down’ in a financial sense (and usually other senses too)
For enterprise-architecture and the like, there is a very simple system-design criterion: anything at all that promotes or condones either power-over and/or power-under in any form represents a systemic risk. This principle applies at all levels of the enterprise, all the way down to automated interactions between individual web-services and the like; but it’s most evident – and usually most impactful – at the ‘big-picture’ level.
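As a purely hypothetical illustration of how that criterion might be applied in an architecture-review checklist – the structure and the example findings below are invented, not a real assessment method – the test itself is deliberately binary: any power-over or power-under pattern gets flagged.

```python
# Hypothetical sketch only: a minimal review checklist that flags power-over /
# power-under patterns as kurtosis-risk candidates. The findings are invented.

from dataclasses import dataclass

@dataclass
class Finding:
    element: str        # the system, process or behaviour under review
    description: str    # what was observed
    power_over: bool    # 'prop Self up by putting Other down'
    power_under: bool   # 'offload responsibility onto Other without engagement or consent'

    @property
    def kurtosis_risk(self) -> bool:
        # The design criterion from the text: any power-over or power-under
        # pattern, in any form, is treated as a systemic risk.
        return self.power_over or self.power_under

findings = [
    Finding("on-air promotions", "ratings built on mockery of outsiders", True, False),
    Finding("complaints process", "designed to exhaust complainants into giving up", True, True),
    Finding("contractor oversight", "responsibility delegated without monitoring", False, True),
]

for f in findings:
    if f.kurtosis_risk:
        print(f"FLAG: {f.element}: {f.description}")
```

In practice this is a matter for governance and review conversations rather than tooling; the sketch is only to emphasise how simple – and how binary – the underlying test is.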
In present-day business-contexts, most of the more obvious forms of these risks have already been dealt with via legislation and suchlike: for example, we’re not likely to be able to run a business-model for long on direct slave-labour or highly-dangerous work-conditions or even on a one-sided monopoly, and we’ll likely go to jail if we’re caught out in theft or any of the more serious safety-breaches. In short, most of these risks won’t eventuate in any expected form. Yet the systemic flaws that create the risk still operate: so the catch – and the very real danger to the organisation and enterprise – is that the risk is therefore most likely to eventuate in some unexpected form. In other words, as supposedly-‘unforeseeable’ kurtosis-risk.
To make better sense of this, it might be useful to turn to two models I’ve used quite a bit in my work. The first is the Market Model:
Most business-organisations view their context from an ‘inside-out’ perspective, centred first on themselves, then on their links with customers and supply-chain, and perhaps expanding outward to their overall market. Most of the business-risks that apply in the market-space will be known and understood – if perhaps rather too often disregarded, in turn often with unsurprising business-consequences.
Yet there’s another whole outer layer to the extended-enterprise – and since most organisations don’t seem even to know it exists, let alone do anything concrete about it, any risks that reside in that space represent very serious dangers. Here, for example, are our non-clients who’ve withdrawn completely from the market; here too are the families and communities of our employees and others whose lives are impacted by whatever our organisation does or does not do; and here are our anticlients, who really don’t like us or whatever it is that our organisation represents. If systemic flaws in our architecture create risks out in this layer, and we don’t acknowledge that this space even exists, the first we’ll know about it is when the whole thing blows up in our faces, seemingly ‘without warning’.
To understand how this works, we can turn to the other model, the Service Cycle or ‘market-cycle’:
All of this operates ‘outside-in’, not an organisation-centric ‘inside-out’. As Chris Potts put it, “Customers do not appear in our processes, we appear in their experiences” – their stories.
Conventional views of business focus only on the transaction-oriented sections of the cycle: gaining consumers’ attention, and then doing the transaction, with transactions considered complete as soon as payment and profit have been made. The risks in this space – the space of ‘the market’ – are mostly visible, and mostly well-understood. Outside of that transaction-oriented subset of the cycle, however, visibility and awareness are often conspicuous only by their absence: yet it’s in these ‘outer-regions’ that the real underpinning of the market – its foundations in trust – actually resides. From there, the implications should be obvious:
- if trust is lost, so are relations;
- if relations are lost, there will be no conversation;
- without conversation to establish the grounds for transaction, there will be no transaction;
- without transaction, there will be no possibility of profit.
Or, to put it in its simplest form:
- the foundation for all business is trust
- trust begins and ultimately resides outside of ‘the market’, in the ‘extended-enterprise’ space
- systemic power-over and/or power-under, in any form, will undermine, damage or destroy trust
- therefore any form of power-over and/or power-under must inherently represent a fundamental business-risk
The huge danger with systemic power-over and power-under is that it tends to push the apparent risks out to the extended-enterprise space, where they’re then dismissed as Somebody Else’s Problem, ‘out of sight, therefore safe to ignore’. Which they’re not: by the nature of kurtosis-risk, they can suddenly become very visible, and very much Our Problem…
The prank-that-went-sour is instructive here for several other reasons. One is about culture-specific tolerance. Way back in my work on domestic-violence a couple of decades ago, it was clear that different sub-cultures could have very different tolerances for different forms of abuse and violence: for example, middle-class cultures tended to have zero-tolerance for physical-abuse, but far higher acceptance of emotional-abuse, whereas some working-class cultures tended to regard physical-violence as an inevitable fact of life but emotional-abuse as utter anathema. Hence additional risks apply when one culture’s ‘acceptable norms’ about abuse and violence are applied to another.
In the terms above, it should be obvious that prank-calls are emotional-violence: their whole point is that they ‘prop Self up by putting Other down’. From those articles in The Age (and from my own first-hand experience), Australian culture seems to regard that form of violence as ‘fair game’ – yet other cultures don’t and won’t. It’s a particularly high risk, for example, for Asiatic cultures where personal-honour and its inverse, personal-shame, are major themes – and the nurse in this case does seem to have come from that kind of culture.
(It’s also significant here that in this case the victim was a woman, and telling that the female DJ said “I remember my first question was ‘Was she a mother?’.” So deeply has abuse against males become entrenched in current Australian cultural-sexism that if the victim had been a man, the more likely response would have been more like “What a wimp!”, and in some sub-cultures such a death would actually have been regarded as a good outcome for the affair…)
In short, for enterprise-architects, any assumptions that cross cultural-boundaries should be classed as exacerbating elements for any kurtosis-risk.
Another reason why this is instructive is the risk inherent in the ‘everybody’s doing it’ excuse, where the same behaviours infect large swathes of an entire industry. It’s evident in the ‘prank’ example that such abuse-games were (and are) actively promoted in the industry; as the Leveson inquiry made clear, the illegal phone-hacking was ‘on an industrial scale’ at News International, but it was by no means the only newspaper group doing it; it’s clear that BP was by no means the only organisation in the oil-industry taking risky short-cuts in the name of short-term profit; and, as the ‘United Breaks Guitars’ example showed, customer-‘service’ was almost uniformly abysmal throughout the airline-industry.
In this kind of kurtosis-risk, what actually happens is a kind of industry-wide game of ‘pass the grenade’: there’s an arbitrary assumption that it’s just a dud grenade, but the reality is that it is going to go ‘bang’ at some point – yet no-one knows when it will explode, or who it will be that’s holding it when it does. And whoever is holding it at that time gets hit with all of the anger about all of the dysfunctional behaviours of the respective industry – which, perhaps correctly, will be seen as very ‘unfair’ by that one participant in the ‘game’. The ‘United Breaks Guitars’ incident was a good example: United Airlines found itself bearing the brunt of a lot of blame about the behaviour of other airlines. And on a more personal level, the DJs in the ‘prank’ case found themselves on the receiving-end of a huge amount of anger and blame that were not so much about their own misguided actions, but about dysfunctional practices that are endemic not only in that radio-station and through much of the Australian radio-industry, but in reality throughout much of Australian culture – in other words, very unfair indeed.
So for enterprise-architects, any assertion that ‘everyone’s doing it’, especially in relation to any kind of risk-laden behaviour, should actually be treated as an urgent warning-signal – not an excuse. ‘Pass the grenade’ is not a wise game for any business to play, no matter how profitable it might seem in the short-term… And to avoid getting caught in the overall backlash when a ‘grenade-game’ explodes, it’s essential to build a reservoir of broader public trust that can be drawn upon when needed: respect is a survival-issue for business-organisations.
These are fundamental requirements and concerns for any viable enterprise-architecture.