Architecture disaster? – we have an app for that!

One of the comments on the previous post on the unacknowledged risks of ‘cooperative IT’ triggered off an essay-length response that really deserves its own post. So here it is. 🙂

The comment that started it off was from Ric Phillips. (I’ve trimmed it slightly, but you can see the original here.)

The innovations that led to mini-computers led to the increasing importance of information processing based on the technology’s ability to capture and model transactions (atomistic events). It really did change the nature of work and organisations and made a new kind of information available.

It wasn’t really the advent of PCs that changed things. If the information about the world that could be stored in them and used had not changed radically, they would simply have replaced the niche occupied by terminals. But they allowed people to simulate sheets of paper and typewriters. And spreadsheets – which existed prior to software, and were done on very large sheets of paper. Later came sound files, photographs, building designs, industrial machinery, complex electronics (like audio mixing decks) and a thousand other things that are now simulated in software.

In this wave computers became personal productivity tools. The changes to how personal productivity expressed itself in our lives, when assisted by the new ‘virtual’ things PCs could provide, are what changed our jobs, our professions and by extension our lives.

The internet started out as an extension of publication and communications models that already existed. But (in this case much more slowly than in previous transformations) our activity on the internet started to capture large amounts of information that previously wasn’t subject to computation – social information, information about opinions, subjective value, and what we might call (tentatively) knowledge.

There are intersecting trends (consumerisation for example). But mobile computing, ubiquitous data, web 2.0 and so on are all converging to create a new domain of information – information that allows us to model and manipulate in computers new and extremely complex things. Once again this will transform organisations. But this time maybe even whole societies.

I don’t see this as an impending disaster. Our world is changing again. As a strategic profession EAs need to get their heads around this. We are leaving the era of ‘information processing’ and ‘ICT’ and entering the era of social computing and Knowledge Technology.

Reading it again, I now realise that this critique has completely missed the point: all it’s doing is extolling the virtues of each of those transformations in technology, yet seemingly ignoring any possibility that there might also be vices associated with those virtues. Yes, each of those transformations is real and valuable in some context, and that is indeed a key driver for change. Yet the change itself is not the risk, and neither is the technology: it is the dependence on that technology that creates the risk.

So, as I put it in my response, I strongly agree that “mobile computing, ubiquitous data, web 2.0 and so on” are not in themselves an impending disaster. The same applies to their initial impact on organisations and “maybe even whole societies” – in general I see those impacts as desirable, even if certainly not something we can ‘control’.

What does worry me is what happens next. As an EA I’ve spent many months at clients tracking down all those small private-to-a-workgroup spreadsheets and databases and log-files and the like that were a) business-critical and b) unmaintained, undocumented, not backed up, inherently fragile [such as trying to use MS Access as a multi-user database, which it was never designed to do], unregistered, and in many other ways a real business risk. Whenever some key person changed jobs, or a single hard-drive failed, or a sysadmin triggered an automated application-upgrade, or any other of a myriad of seemingly-trivial events, that business-unit would literally lose that part of its mind – and an entire business-process, affecting an entire cross-functional workstream, would grind to a halt until someone could work out what had gone missing and how to set up yet another kludged workaround.

When the business-application is non-critical, kludges usually don’t matter: it’s how people learn, it helps get things done, and it’s exactly what ‘shadow-IT’ is for. The new mobile technologies and the like are brilliant for this – just as spreadsheets and single-user databases were (and still are). Everything’s fine as long as they’re essentially used in the same way as Lego bricks or a Meccano set or the like – a ‘serious toy’ that can be used to knock out a quick prototype to test out an idea, or perhaps even to keep around as a vaguely-useful tool and talking-point. And as long as they’re used for that kind of purpose, it shouldn’t matter much when they do fail – especially if we can use that failure as a way to learn what to do differently next time. In other words, we accept failure as part of the deal – it’s ‘safe-fail’.

But don’t try to use a ‘serious toy’ for anything that’s business-critical. It’s not that such tools are inherently wrong, it’s simply that they’re not ‘fit for purpose’: not robust enough, resilient enough, agile enough, secure enough, and so on – which means that as a system we cannot set them up to ‘safe-fail’ in such a context. Sure, you could use Lego to build a house (it’s been done), or Meccano to build a bridge (that’s been done, too), but the effectiveness of doing so is questionable at best, especially over the longer term.

It’s the ‘-ilities’ that usually matter most in architecture. The functional requirements for a system are usually much the same at any scope or scale, but the qualitative or so-called ‘non-functional’ requirements are what will usually make or break the system in practice. Building an IT system that can handle half a dozen strictly-sequential requests in half an hour or even half a minute is relatively trivial; building one that can handle thousands or even millions of parallel, interleaving, fragmented, potentially-incomplete requests every second is not trivial at all; and yet the functional requirements are essentially the same. That’s the difference between a ‘serious-toy’ prototype, and serious engineering with serious architecture and serious service-management and support behind it.
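
To make that contrast concrete, here’s a minimal sketch in Python – a hypothetical illustration, not any real system – of the same functional requirement implemented twice: once as a ‘serious toy’ that handles a handful of strictly-sequential requests, and once with just a few of the ‘-ilities’ (parallel handling, per-request timeouts, failing safely on bad input) that serious engineering would demand. The functional core is identical in both; everything wrapped around it is what the toy lacks.

```python
import asyncio

def handle(request: str) -> str:
    # The functional requirement - identical in both versions
    return "processed:" + request.upper()

def toy_server(requests):
    # 'Serious toy': half a dozen strictly-sequential requests - trivial
    return [handle(r) for r in requests]

async def hardened_server(requests, timeout_s: float = 0.5):
    # A small fragment of what the *same* function needs at scale:
    # parallel handling, per-request timeouts, and safe-fail on bad
    # input. (Security, auditing, recovery, monitoring: still not shown.)
    async def one(r):
        try:
            return await asyncio.wait_for(asyncio.to_thread(handle, r),
                                          timeout_s)
        except Exception as exc:
            # one bad or slow request must not sink all the others
            return f"failed:{r!r}: {exc}"

    return await asyncio.gather(*(one(r) for r in requests))

print(toy_server(["a", "b"]))                     # fine, until it isn't
print(asyncio.run(hardened_server(["a", None])))  # survives the bad request
```

Even that is only a gesture at the difference: none of the extra scaffolding changes what the system does, only how reliably it keeps doing it under real-world conditions – and that scaffolding is where most of the real engineering effort goes.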

What we have right now in mobile-computing, ubiquitous-information and cloud is a whole bunch of serious-toys desperately pretending to be more than they are, and – more worryingly – being sold and used as if they’re more than they are. Sure, the function is there – but that’s easy. It always is. Getting them beyond that ‘serious toy’ stage is not easy – and because it’s hard work to get there, it hacks into the short-term profits, too, so it’s not exactly popular amongst the money-obsessed.

So we have here all the ingredients for a ‘perfect storm’: more and more of individual people’s lives and livelihoods being placed onto platforms that are inherently unstable and unsustainable, because little or none of the work to make them stable and sustainable is as yet in place or even in progress. If you’re not already seriously worried about what will happen when large chunks of our society literally lose their collective mind and memory through the failures of these kludged-together toys, you’re not thinking hard enough about the architecture of the enterprise… :-|

The lessons of history are plain to see, and it’s also plain to see that the level of unaddressed risk has been raised each time, with even the earliest-period risks still not fully addressed even now. You Have Been Warned?
