As you may have noticed, I’ve been struggling somewhat to fully explain the ‘Why’ behind all of this talk of ‘new toolsets for enterprise-architecture‘ and related disciplines. And then, out of the blue, via a reTweet from Phil Beauvoir of ‘Archi‘ fame, comes this:
Coding is not the fundamental skill;
modelling is the new literacy.
Yes, exactly. Exactly.
Okay, it’s from a guy I don’t know, called Chris Granger, in a current blog-post, ‘Coding is not the new literacy‘, about a somewhat different topic, namely the over-hyping that we see so much at present about getting schoolkids to write computer-code.
But he’s absolutely right: coding itself is not the problem. Coding isn’t hard (relatively-speaking, anyway): anyone who can write and pay attention to instructions can write code without much difficulty. As Chris Granger says, “coding, like writing, is a mechanical act”.
It’s knowing what to code that’s the real problem. Coding isn’t hard; but making sense of a context, and deciding how to describe it in a form that can be coded? – yeah, that is hard…
Which is where the Squiggle comes into the picture – that depiction of the overall development-process, from first fleeting ideas to nominally-‘finished’ product:
Which is where modelling comes into the picture – along the full length of the Squiggle.
Which is where toolsets to help describe the outcome of that modelling would come into the picture.
Which is where toolsets that can cover the whole of the Squiggle – and not merely disjoint fragments of the Squiggle – would come into the picture.
Ultimately, though, the toolset aspects of this are relatively straightforward. We just need to buckle down and do it, that’s all: apply enterprise-architecture to our own work.
But modelling in itself? – ah, there’s a real problem there…
A problem that’s right at the root of the literal ‘architecture of the enterprise’, at just about every possible scale.
Yet a problem so huge that it seems most people dare not look at it at all…
We could summarise the core of the problem in just two lines:
— modelling a context requires us to ask questions about the context – every possible question that might usefully apply in that context
— but asking questions is often considered unacceptable in the extreme – in particular, asking any questions that might cast doubt on the certainties, structures, politics and ‘rights of authority’ in enterprises of any form
Those two requirements don’t mix well at all…
Hence, on the one side, we have the real-world need, such as described in Maria Popova’s post ‘The Psychology of Why Creative Work Hinges on Memory and Connecting the Unrelated‘.
And on the other side, we have the painful reality, as described, for example, by David Edwards in his Wired post ‘American Schools Are Training Kids for a World That Doesn’t Exist‘.
If we’re honest about it, the so-called ‘education’-system of the present day isn’t really much about education at all: it’s about training, about compliance and conformance to the expectations and demands of society in general – or, to be even more blunt about it, to the expectations of a society that largely no longer exists, of supposedly-guaranteed jobs, strict segregation of socioeconomic classes, certainty, control, control, control. Which would perhaps be fine, if that society’s assumptions matched up to the real-world – which they don’t. Not any more.
There’s plenty of politics around this, of course – such as Ivan Illich’s ‘Deschooling Society‘, or Postman and Weingartner’s ‘Teaching As A Subversive Activity‘, to give two now quite-old examples. If possible, though, I’d prefer to sidestep the politics, and look at the context from a SCAN perspective – and perhaps particularly the crucial distinction between order and unorder:
At present, mainstream school-education, from around 8-16yrs, is almost exclusively ‘order’-based, focussed around tests, true/false answers, quantification, exact like-for-like comparisons – repeating rote-fashion what someone else has taught as ‘the truth’ or ‘the facts’, with little to no room for variation or individuality. (As an old phrase puts it, “originality will get you either A or F, with nothing in between”.) In other words, tame-problems:
Tame-problems are (relatively) easy to teach, and to test: the problem has already been solved, and, importantly, stays solved, such that everyone should get the same answer every time. Most relevant for this context here is that no modelling is required – it’s already been done by someone else in the past, and (in theory, at least) won’t change in the future.
But the catch, as per the diagram above, is that tame-problems are only ever a subset of any context – for the most part, a subset that relates only to a somewhat-imaginary order-based view of the world (“lies-for-children“, as Cohen and Stewart put it in The Science of Discworld), and that, even in the more ordered parts of the context, doesn’t work as well in practice as it seems to do in theory. The rest of the context consists of wild-problems – unordered ‘problems-in-the-wild’ – which don’t necessarily conform to expectations, for which there is often no ‘final correct answer’, and which are often too unique to compare with anything else anyway. For those wild-problems, we do need to be able to model them – because it’s the only way we’d have any chance to get the outcomes we need.
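A rough way to picture that distinction in code – purely my own illustrative sketch, with function names and weightings invented for the purpose – is that a tame-problem behaves like a pure function with one right answer that stays right, whereas a wild-problem has no such oracle: the ‘answer’ shifts whenever the underlying model of the context shifts.

```python
# Illustrative sketch only: contrasting a 'tame' problem (already solved,
# repeatable, one correct answer) with a 'wild' problem (no fixed oracle;
# the answer depends on how we choose to model the context).

def tame_area_of_rectangle(width: float, height: float) -> float:
    """Tame: the method is known, and every correct solver
    gets the same answer, every time."""
    return width * height

def wild_fitness(design: dict) -> float:
    """Wild: there is no single 'correct' score. These weights are
    one possible model of the context, not 'the' answer."""
    # One modeller might weight cost most heavily; another, usability.
    weights = {"cost": -1.0, "usability": 2.0}
    return sum(weights.get(k, 0.0) * v for k, v in design.items())

# The tame-problem stays solved, forever:
assert tame_area_of_rectangle(3, 4) == 12

# The wild-problem has no such assertion: change the weights
# (i.e. change the model) and the 'answer' legitimately changes.
print(wild_fitness({"cost": 2.0, "usability": 1.0}))
```

The point of the contrast is that only the first function can be taught and tested rote-fashion; the second demands that someone first do the modelling – decide what matters in the context, and why.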
Classically, a school-type approach – or any training, really – will focus on method. Which, yes, does work well with tame-problems, or anything well over to the left-hand ‘order’-oriented side of the SCAN frame. But as soon as we have to work with wild-problems, anything values-based, anything with any form of unorder-complexity or uniqueness, then methods alone are not enough: we need to go back one key step, to understand from where methods actually arise.
Particularly for anything involving any real skill, each person necessarily arrives at their own distinct methods, derived in part from the ‘objective’ aspects of the context – the mechanics – and the ‘subjective’ aspects – their own individual approaches to the context, derived from individual differences in body, perception, history, gender, socialisation and much, much more. Hence in practice, method arises at the intersection between mechanics and approaches – which we could summarise visually as follows:
By contrast, school-‘education’ all but ignores the subjective side of this story: every student is, in essence, assumed to be the same. The trade-off is that if we do this, we can test and grade them as if they are the same – but in the process, we actively shut down the students’ internal processes for individual skill, individual creativity and (as per this context) the ability to make sense of and respond appropriately to anything different, anything ‘new’. All of which are very much needed as soon as students step out from the cloistered confines of school into the real-world: and especially so in a world that is changing fast, in almost every possible way – as is certainly the case right now.
Which brings us back to where we started, with Chris Granger’s comment that:
Coding is not the fundamental skill;
modelling is the new literacy.
It should be clear why there’d be so much of an ‘education’ focus on “coding is the new literacy”: it’s something that can easily be taught within the existing school-system, without requiring any real changes at all – it’s just another predefined ‘subject’ to add to the existing predefined ‘curriculum’, really. Yet ‘coding is the new literacy’ contributes almost nothing to help with the real problem, and real need – the literacy of modelling, of learning how to make one’s own sense of the world, in order to act appropriately and ‘on purpose’ within it.
Which brings us back to the Squiggle, and the need for tools that help ‘join the dots’ across the whole of the Squiggle – because without that support and whole-of-context continuity, learning how to model and make sense of the world becomes much, much harder than it would otherwise naturally be.
A lot more we’d need to explore on this: but perhaps that phrase “modelling is the new literacy” would be a good place to start.
Bonus link: see the video of Bret Victor’s talk ‘Inventing On Principle‘ – just under an hour, but a real eye-opener all the way through, especially from a toolsets-perspective. (Hat-tip to Chris Granger, Rob Attorri and the Light Table blog for the link.)