Seems a lot of people in the enterprise-architecture space wonder why I don’t toe the ‘party-line’, and push for IT-centric automation-of-everything, like almost everyone else in ‘the trade’. Well, here’s a nice first-hand example as to why…
I’ve been doing some preparation and further development for my upcoming enterprise-architecture course, and hence recently bought a basic laminator to make laminated-cards of some of my models and materials. That worked out so well that I thought I’d do some experiments at something closer to a playing-card size, extending the ideas I’d developed a couple of years back for the ‘This’ game for service-modelling. I asked at my local office-supplies firm, but they didn’t have the right size laminator-pouches: so I set out to order some from Amazon instead.
The usual Amazon package for the pack of laminator-pouches duly turned up a few days later, right on schedule. But when I opened up the wrapper, it immediately became clear that something was wrong:
The size I’d ordered was nominal A6 – large enough to contain a sheet of paper of the size shown above, underneath the box, and with a small border to spare – and 75micron thickness, to work with my rather cheap-and-cheerful laminator. That’s what the bar-coded sticker says, too: document-laminating materials, manufactured by GBC, “Pouches A6 2x75mic (Pack of 100)”, exactly as per my order. But the box is way too small for that. What…???
So I turn the box over, and this is what I see:
Manufactured by Q-Connect, not GBC; 54x86mm, not the 115x158mm or so that would be needed for A6-sized paper; and 125micron thickness, not 75micron. In other words, the wrong make, the wrong size – around a quarter of the size I’d ordered – and too thick to work in my laminator. The label is correct; the content is not. Oops…
Okay: go through the not-terribly-tedious Amazon process for doing a product-return, explaining that the reason I’m sending it back is that it’s been mislabelled and hence is the wrong size and so on. I get an auto-generated email pointing to an auto-generated web-page for the bar-coded return-label and bar-coded return-advice. A short while later, I also get a separate email from Amazon customer-service, this time from – gosh! – a real person named Zulfi, very polite, very respectful, apologising for the screw-up, and telling me not to bother to send the pack back as it’d cost more than it would be worth. He acknowledges my warning that the pack was mis-labelled, and generates a new replacement-order for the correct-sized pack, to be sent as soon as possible, at Amazon’s expense. In other words, it is good customer-service, and I’m grateful for that.
Shortly after that, I get the auto-generated email that confirms the free-replacement order; and later that day, I get another auto-generated email, confirming despatch. All well and good.
It duly arrives, a day later than promised, but that’s not Amazon’s fault. So I open up the package, and… – well, you can imagine my joy at discovering that I’ve been sent another instance of the exact same mis-labelled pack…
Back we go through the returns-process. This time, not surprisingly, all I get is the same auto-generated ‘print out this auto-generated return-label’ email, plus another seemingly auto-generated email notifying me of a refund (which I hadn’t asked for – what I want is the right product!) and telling me, somewhat sharply, that the product is now out of stock. At least that doesn’t leave me out-of-pocket as such – which is something to be thankful for, I suppose – but it does leave me with nothing that I can use – which was the whole object of the exercise.
Yet it’s all too obvious what’s happened: they trusted the computer, not the in-person message warning them that there was a mismatch between the IT-world (the bar-code sticker) and the real-world (the actual content). They relied on the computer to be right – even though they’d been explicitly told that it wasn’t. And no-one bothered to check.
Whacking the wrong label on a box spinning down a conveyor-belt: well, yeah, that’s the kind of mistake that’s real easy to do – especially if you’re under pressure to keep ‘productivity’ as high as possible, and no-one has the time or attention-span to check that box and label actually do match up. And at the kind of wages that Amazon pays its warehouse-staff, and the pressures that it puts on them, I’m kinda surprised it doesn’t happen a lot more often. But it’s also the kind of mistake that, even in this small example, has cost Amazon a significant amount of money – several times more than any possible profit-margin on a straightforward sale, I’d say.
But that’s exactly the kind of glitch that happens whenever we rely too much on IT-based automation, without any cross-checks or whole-of-system awareness. The IT-automation can make some parts of the process seem very efficient – yet not necessarily very effective overall. And it’s whole-of-system effectiveness – not subsystem efficiency – that really determines whether something is truly fit-for-purpose, and, in turn, fit-for-profitable, too.
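As a purely hypothetical illustration of the kind of cross-check that was missing here, a fulfilment process could reconcile what the label claims against a quick physical measurement of the item before despatch, and route any mismatch to a human. All of the names, dimensions and tolerances below are invented for illustration only:

```python
# Hypothetical sketch of a label-vs-content cross-check.
# All names, sizes and tolerances are invented for illustration.
from dataclasses import dataclass

@dataclass
class LabelClaim:
    width_mm: float
    height_mm: float
    thickness_micron: float

@dataclass
class MeasuredItem:
    width_mm: float
    height_mm: float
    thickness_micron: float

def label_matches_item(claim: LabelClaim, item: MeasuredItem,
                       size_tol_mm: float = 5.0,
                       thickness_tol_micron: float = 10.0) -> list:
    """Return a list of mismatch descriptions; an empty list means the label checks out."""
    problems = []
    if abs(claim.width_mm - item.width_mm) > size_tol_mm:
        problems.append(f"width: label says {claim.width_mm}mm, item is {item.width_mm}mm")
    if abs(claim.height_mm - item.height_mm) > size_tol_mm:
        problems.append(f"height: label says {claim.height_mm}mm, item is {item.height_mm}mm")
    if abs(claim.thickness_micron - item.thickness_micron) > thickness_tol_micron:
        problems.append(f"thickness: label says {claim.thickness_micron}micron, "
                        f"item is {item.thickness_micron}micron")
    return problems

# The mismatch in the story above: a nominal-A6, 75-micron label
# stuck on a 54x86mm, 125-micron pack.
label = LabelClaim(width_mm=111, height_mm=154, thickness_micron=75)
item = MeasuredItem(width_mm=54, height_mm=86, thickness_micron=125)
print(label_matches_item(label, item))  # three mismatches -> flag for a human to check
```

The point of the sketch is not the arithmetic, of course, but where the result goes: a non-empty list should interrupt the automated flow and put a person back into it.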
Which is precisely why I push for an enterprise-architecture that is a lot wider than IT alone: because doing whole-enterprise architecture, not merely an IT-centric architecture, saves money, and a lot more – especially in the longer run.
You Have Been Warned, etc…?
Great articles as always.
Automation (IT-based or otherwise) is not the enemy. Lack of whole-of-system awareness/competence is. I would pretty much guess that the world-wide efficiency and effectiveness of the Amazon business model was achieved precisely because of automation. Failures and sometimes embarrassing failures will happen regardless.
Some failures, as in your case, can be attributed to lack of whole-of-system thinking.
Nature does not tolerate vacuums. An IT Architecture approach will fail precisely because of lack of whole-of-system thinking and not because of automation. Such a system will inevitably be replaced by a holistic Architecture approach.
Attributing failure to the correct root cause.
“An IT Architecture approach will fail precisely because of lack of whole-of-system thinking and not because of automation” – yes, that’s actually the point I’m making, though evidently I haven’t made it well enough. Most of the EAs I’ve been discussing with start with the assumption that “IT is the answer! – now what was the question?”: it’s reductionist, exactly as you say, but its surface appearance is an obsession with IT-automation and a literal ‘ignore-ance’ of just about everything else. Hence – doing a root-cause analysis as you suggest – the surface symptom is over-reliance on IT-automation; the underlying symptom is linear reductionism; the actual root-cause is unawareness and/or denial of need for a whole-of-system view.
The catch, of course, is that root-cause analysis is itself a reductionist technique: by definition, it assumes that there is a ’cause’ and consequent ‘effect’. A full holism would need to see it more in terms of networks of forces (‘drivers’) and suchlike metaphors – otherwise the search for ‘the root-cause’, and isolated action upon that root-cause, would itself become a wicked-problem. In other words, we need to apply whole-of-system thinking to whole-of-system thinking itself – oh the joys of recursion! 🙂
Root cause analysis is, in my world, a cause (singular) or set of causes (a network of forces, at times). It is a convergent step (‘reductionistic’ if you will) in the problem analysis. The divergent step is the very next step, viz. the problem-solving step – when potential solutions have to be considered and evaluated according to their effectiveness, relevance and endurance.
Wicked problems are, and must be, outliers – exceptions, for that matter. Problem-solving approaches suited to them will likely prove expensive when applied to tame problems.
The mark of a true problem solver is their ability to deal with ambiguity, i.e. tame problems found within wicked problems – applying the correct problem-solving technique(s).
I agree, whole-of-system thinking will serve the profession well but is unfortunately rare. Comfort with ambiguity beyond just the wicked problem’s cause-and-effect-ambiguity will also improve this. Context is thus key. Knowing what problem is being solved and using the right tools.
@Sello: “Wicked problems are and must be outliers – exceptions for that matter.”
I don’t quite follow you there? My experience is that it’s the tame-problems that tend to be the outliers – they get a lot of attention primarily because they can be ‘solved’, but the result is that the context for the supposed ‘tame-problem’ is almost invariably ‘wicked’ – in fact will be so almost by definition, wherever it touches the human realm.
@Sello: “The mark of a true problem solver is their ability to deal with ambiguity i.e. tame problems found within wicked problems.”
Strongly agree with you on that – the catch, as I’ve just said, is that ultimately we’re always dealing with some form of a wicked-problem. And if – as with classic Taylorism or IT-automation – the only problem-resolving techniques are suited only for tame-problems, we’re likely to create a great deal more actual ‘wickedness’ in the context each time we supposedly ‘solve’ a tame-problem element.
We can ‘solve’ a tame-problem, in the sense that once we’ve found ‘the answer’, it stays that way. With a wicked-problem, we can only re-solve it: each instance and iteration can be sort-of ‘correct’ for that specific context, but may not be so for any other. And when supposedly ‘tame’ problems are embedded in larger-scope wicked-problems, and may (and often do) contain embedded wicked-problem elements within themselves, things can get decidedly tricky… That’s really the point I’m aiming to highlight here – and why a whole-of-enterprise (‘holistic’) approach is a wiser approach for enterprise-architectures than a domain-centric one, such as in the implicit IT-centrism of (at present) more ‘mainstream’ models such as TOGAF/Archimate, IAF, Zachman and the like.
I had this exact same problem a few months back with a new CD release of the Schubert Winterreise, from a different vendor. The beautifully produced volume arrived with no CD in the pouch, and it was clear there had never been a CD in the pouch. So I went to the vendor website, explained the problem, and was quickly sent a replacement, with the exact same problem. This occurred three times. After the second occurrence, I went to the Amazon website, looked up the CD in question, and discovered it was “temporarily unavailable” due to an unspecified problem with the product. This strongly suggested that the distributor had cases of these CDs and no one in the production chain had noticed that a step had been omitted — the one where the CD gets inserted into the packaging. I finally found a source whose supply either pre- or post-dated the batch that got made with a crucial step overlooked.
The lesson here for me was to be careful about automating people out of a process; there may be things that they do so naturally and intuitively that they do not explicitly consider it a part of the process (e.g., “confirm that there is a CD in the packaging”), and which may thus be overlooked when the process is automated.
In both your case and mine the failure occurred well before the point-of-sale touchpoint, and the often reasonable assumption at the point of sale was that such failures are likely isolated, not systemic, so it was not possible (or at least not likely) that another item that outwardly appeared to be OK would share the same defect. The problem was that there was no mechanism for the point of sale to quickly recognize the recurrence of the same flaw in the product being sourced, even when they were being repeatedly provided with an explanation to that effect; they were probably treating each replacement transaction as an unrelated event.
@Len: “The lesson here for me was to be careful about automating people out of a process”
Yes, exactly – and also making sure that people are included in the process in the areas that people do best, such as dealing with the things that don’t fit process-designers’ expectations (such as your “confirm that there is a CD in the packaging” and suchlike).
One of the practical problems is that people too often tend to be used only in sub-robotic parts of the process, still as mindless ‘cogs in the machine’, but doing the minor-variance bits that are still too expensive to get a robot to do, such as awkward assembly or moving things out of the way or handling non-standard parcels or cleaning up a mess on the floor. And then, given that it is ‘mindless cog’ work, we wonder why people fail to take much interest in doing it well, or at all… There are some really large challenges there, at a whole-enterprise level, that are way-too-easily glossed-over if we sit out only in the safe-and-certain comfort-zone of IT-automation.
@Len: “The problem was that there was no mechanism for the point of sale to quickly recognize the recurrence of the same flaw in the product being sourced, even when they were being repeatedly provided with an explanation to that effect; they were probably treating each replacement transaction as an unrelated event.”
Yep – an inability to join up the dots, caused largely by the process-design ‘dotting the joins’ and fragmenting the view of the whole into isolated ‘process-instances’.
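In principle, that dot-joining could be a very small mechanism: a counter keyed on product and reported defect, escalating to a human stock-check once the same defect is reported more than once, rather than treating each return as an isolated process-instance. A minimal sketch, with all names and the threshold invented for illustration:

```python
# Hypothetical sketch: flag a product for human stock-inspection when the
# same defect is reported repeatedly, instead of treating each return as
# an unrelated event. All names here are invented for illustration.
from collections import Counter

class DefectTracker:
    def __init__(self, escalation_threshold: int = 2):
        self.threshold = escalation_threshold
        self.counts = Counter()  # keyed by (sku, defect-description)

    def report(self, sku: str, defect: str) -> bool:
        """Record a returned-item defect; True means 'escalate: likely systemic'."""
        key = (sku, defect)
        self.counts[key] += 1
        return self.counts[key] >= self.threshold

tracker = DefectTracker()
tracker.report("POUCH-A6-75", "mislabelled: wrong contents")             # first: isolated?
escalate = tracker.report("POUCH-A6-75", "mislabelled: wrong contents")  # second: systemic
print(escalate)  # True -> pull the batch and check it before sending a replacement
```

The hard part, of course, is not the counter itself but the whole-of-system design decision to connect the returns-process to the warehouse at all – which is exactly the connection that was missing in both stories above.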
All of which will be really important for Open Group’s healthcare-information initiative: perhaps chat briefly with you on that at Open Group Amsterdam, if you’re going to be there?