What’s all the fuss about autonomous weapons? Isn’t it just a logical progression from the weapons-systems we already have?
Perhaps it is – and that’s the problem…
To illustrate this, first take one of these:
In that form, it’s a relatively-passive weapons-system. It doesn’t do anything other than just sit there, waiting for the right stimulus to set itself off. Very handy for area-defence and all that: landmines like this one are still scattered liberally, and literally by the million, by warring parties all round the world. And just occasionally someone bothers to map out where they’ve put them.
But now give that landmine some legs, or wheels, or wings, so that it can move about on its own. Which means that a simple map of where we first dumped it into the battlespace won’t help when we need to track it down later. And which also means that signs like this one won’t be much help, either:
And now also give that landmine-on-legs some sort-of-intelligence – just enough to make its own way to a target. Which it probably doesn’t find. Or it breaks down, so that it can’t move any more anyway. So it sits there – wherever ‘there’ happens to be – waiting and waiting and waiting for the right target to come along. Okay, the intended target might look like this:
Or maybe like this:
But in practice, ten, twenty, fifty or more years later, when it eventually finds a target for its designed-for ‘services’, that ‘target’ will more likely be a farmer trying to make a meagre living in what had once been someone else’s war-zone:
Or a boy from another generation entirely, who’d merely been following after the family’s errant goats:
During or just after the conflict, a ‘dumb’ munition might look like this – unexploded right now, perhaps, but already very dangerous to disarm and dispose of:
Yet move forward twenty years, long after that war, and fig-trees now grow in the former gardens of that abandoned town. But the bomblets are still as deadly as ever, half-hidden now like children’s toys beneath a litter of fallen leaves and someone’s long-lost false-teeth:
And those are just so-called ‘dumb’ weapons: static, perhaps, yet inherently unsafe, and often ever more deadly over time. Imagine, then, what would happen with so-called ‘autonomous’ or ‘intelligent’ weapon-systems, still mobile but position unknown, still capable of target-acquisition and independent action, yet suffering from bit-rot and long-failed pattern-recognition, still striving towards a purported ‘target’ that ceased to be such some years or decades ago. Not A Good Idea…
Reality is that ‘dumb’ weapons are problematic enough already – a level of problematic that we can barely cope with right now, as can be seen all too easily in those examples above. Semi-autonomous weapons that allow someone in an air-conditioned hut to make near-random guesses about a so-called ‘target’ ten thousand miles away on the basis of a blurry image on a screen – well, that’s even more problematic. Likewise for semi-static but fully-autonomous systems designed for point-defence, such as ship-mounted CIWS. But fully-mobile, fully-autonomous, fully-independent, perhaps using swarming or suchlike for target-acquisition? – that’s a whole new level of problematic, and one that we’re not equipped to cope with at all.
I don’t see any easy way out of this one.
Yes, there is an existing worldwide ban on landmines and cluster-munitions. A supposedly-global ban which, we note, some countries have somehow not yet gotten round to signing. (No surprise that those holdouts include certain countries that not only still manufacture the wretched things, but still somehow make them easily available – at a healthy grey-market profit – in just about every current conflict-zone…)
Yes, there is a supposed global ban on weapons of mass destruction – chemical, biological, nuclear. Which, we note, certain countries still insist on holding for themselves, whilst forbidding them to everyone else.
Yes, there is a supposed global ban on space-based weapons – or, more precisely under the Outer Space Treaty, on weapons of mass destruction in orbit. For which, we note, there is as yet no inspection-regime, no proof at all, just hollow platitudes.
Yes, there is a supposed global ban on use of lasers as weapons – or at least, under CCW Protocol IV, on those designed to blind. Not that that’s stopped certain countries from developing and, in at least one case, openly deploying laser weapons, in ship-based, vehicle-based, aircraft-mounted and, probably, space-based forms.
In short, bans are a great idea in principle, but just don’t work in practice. (We’re dealing with humans here, after all – with all that that implies…)
So the reality is that fully-autonomous weapon-systems are going to happen, whether we like it or not. Supposedly ‘intelligent’, but actually just a new kind of stupid.
(For example, how long before those existing ‘gated-communities for the wealthy’ start to demand the ‘right’ to their own autonomous weapons-systems? After all, having stolen so much from others, they now need to defend that theft… We’ve already seen the start of this around some illegal drug-plantations, in the Americas and elsewhere: how long before it becomes more general? Looks a great value-proposition from the point of view of the defender – attack-dogs need leashes, handlers and feeding, whilst conveniently-‘controllable’ free-roaming robots don’t. But not such a great value-proposition to the emergency-crew responding to a fire, for example, who get treated as a target even though they’re only there to help…)
Seems to me that the only sane response to that kind of mess would be the one in Terry Pratchett’s parody-novel Jingo, in which the police-commander arrests all of the staff-officers of two warring armies for ‘conspiracy to breach the peace’. But since, sadly, we still have no global-level police-force – or none with that kind of jurisdiction, anyway – we’ll have to settle for something less.
One option, perhaps, might be to paraphrase somewhat the NRA’s infamous slogan, as “robots don’t kill people, people kill people”. There’s a fair amount of guffleblub around, about ‘killer robots’ and suchlike, but the practical reality is that until robots become fully self-aware (just beginning to occur right at the limits of present AI), fully-sentient (theoretically feasible in the relatively near- to mid-future) and fully capable of appropriate ‘Asimov-Compliant’ ethical decisions in all contexts (which is probably a very long way off, given that few humans manage to do so, especially under combat-conditions), then the concept of ‘autonomy’ is more technical than ethical. Which means that the ethical responsibilities reside with the respective humans – not with the machines.
To be blunt, the purported ‘autonomy’ seems little more than a new way of pretending that the respective person is not responsible for the mayhem they cause. And we must not allow that pretence to go unchallenged. If we’re to have any way around this literally-lethal mess, we need to establish, right now, that the responsibilities for ‘autonomous’ weapons-systems are exactly the same as for any other type of weapon. Setting a target for an autonomous system is exactly the same as pulling the trigger on a firearm: there are legal responsibilities for that, even in war, leading to the concept of a ‘war-crime’. Setting an autonomous system loose to acquire a target on its own is exactly equivalent to pulling the trigger on a firearm, but without bothering to aim: and that in itself is a war-crime. Killing or injuring a non-combatant is exactly the same as murder or grievous-assault, which is a crime – no matter how much certain people may try to gloss it over as ‘collateral damage’ or some such. And as for landmines and the like, that responsibility does not end with the ending of hostilities: it continues onward forever, until such time as the danger is literally defused.
In that sense, there’s no real need for any new legislation here: all of the requisite issues and needs are already covered under existing international law. To me, the real challenge is not that the law doesn’t exist, but that it isn’t enforced – and all that autonomous weapons-systems add to the picture is that the consequences of not enforcing the existing law are likely soon to become very much worse, for everyone.
So to bring this back to the realm of enterprise-architecture, what can we do as enterprise-architects, in an architectural sense, to play some useful part in this? For example, what are the ethics of involvement in military AI? If we are involved in that work, what could we do to help make autonomous weapons-systems more compliant with international law? If we’re not involved in such work at present, then in what ways, for example, could we adapt our existing mechanisms for large-scale governance-of-governance to help in this?
Right now, and for the foreseeable future, there is no such thing as a truly autonomous weapons-system. Instead, there’s always a human behind it – and we need to make it clear that that human, at all times, bears personal responsibility for whatever that system does. If we fail to hold everyone – including ourselves – to that responsibility, then we only have ourselves to blame…
Comments, anyone, perhaps?
(Many thanks to Australian enterprise-architect Darryl Carr for suggesting the theme for this post.)