Should We Ban Killer Robots? by Deane Baker
The film opens with a Steve Jobs-like figure speaking on stage at the release of a new product. Only, instead of the next generation of iPhone, the product is a weapon – a tiny autonomous quadcopter loaded with three grams of shaped explosive, which combines artificial intelligence (AI) and facial recognition technology to lethal effect. After proudly explaining that ‘its processor can react 100 times faster than a human’, the Steve Jobs of Death demonstrates his creation. We watch as he throws it into the air, and it then buzzes autonomously, like an angry hornet, over to its designated target – in this case a humanoid dummy. After latching parasitically onto the forehead of this simulated enemy soldier, the drone fires its charge, neatly and precisely destroying the simulated brain within, to the applause of the adoring crowd. If that were not demonstration enough, a video then plays on the giant screen, showing a group of men in black fatigues in an underground car park. The mosquito-like buzzing of the quadcopter causes the men to scatter in fear, only to be killed one by one as the tiny drones identify, track and engage them, detonating their charges with firecracker-like pops. ‘Now that is an airstrike of surgical precision’, says Mr Death-Jobs. As if sensing the concern that is building as we watch, he is quick to reassure his audience: ‘Now trust me, these were all bad guys.’ (Of course, we don’t trust him one tiny bit.) Our concern only increases as he tells us that ‘they can evade … pretty much any countermeasure. They cannot be stopped.’ Another video rolls on the big screen, this one depicting a huge cargo aircraft that excretes thousands of these tiny drones, while we are informed that ‘[a] 25 million dollar budget now buys this – enough to kill half a city. The bad half.’ (Just the bad half – yeah, riiiight.) ‘Nuclear is obsolete’, we are told. This new weapon offers the potential to ‘take out your entire enemy, virtually risk-free’.
What could possibly go wrong?
At that point the film cuts across to a fictional news feed that’s designed to help us see the dirty reality behind the advocacy and smooth assurances presented by the Steve Jobs of Death. The weapon has fallen into the wrong hands. An attack on the US Capitol Building has killed eleven senators – all from ‘just one side of the aisle’. TV news reports that ‘the intelligence community has no idea who perpetrated the attack, nor whether it was a state, group, or even a single individual’. We witness the horror of a mother’s Voice over Internet Protocol (VoIP) call to her student-activist son that ends with his clinical killing by one of the micro drones, as swarms of them hunt down and murder thousands of university students at twelve universities across the world. The TV talking heads inform us that investigators are suggesting that the students may have been targeted because they shared a video on social media ostensibly ‘exposing corruption at the highest level’. Then, suddenly, we’re back on stage with Mr Death-Jobs, who tells us: ‘Dumb weapons drop where you point. Smart weapons consume data. When you can find your enemy using data, even by a hashtag, you can target an evil ideology right where it starts.’ He points to his temple as he speaks, so that we are left in no doubt as to just where that starting point is.
It’s all very chilling, and it taps into some of our deepest fears and emotions. Weapons like tiny bugs that attach to your face just before exploding – creepy. Shadowy killers (states? terrorists? hyper-empowered individuals?) striking at will against helpless civilians for reasons we don’t fully understand – frightening. People targeted on the basis of data gathered from social media – terrifying.
Slaughterbots was released to coincide with, and influence, the first of the 2017 Geneva meetings of the delegates working under the auspices of the United Nations’ Convention on Conventional Weapons (CCW) to decide, on behalf of the international community, what (if anything) should be done about the emergence of lethal autonomous weapons systems (LAWS).1 The year 2017 was the first year of formal meetings of the Group of Governmental Experts (GGE) on LAWS, though it followed on the heels of three years of informal meetings of experts tied to this process. At the time of writing, this international process continues. In addition to the state delegates to these meetings, a range of civil society groups are also represented, most notably the coalition of non-governmental organizations (NGOs) known as the Campaign to Stop Killer Robots. Originally launched in April 2013 on the steps of Britain’s Parliament as the Campaign to Ban Killer Robots, it was ‘the Campaign’ (as it is commonly known) that hosted the viewing of Slaughterbots at the 2017 GGE meeting in Geneva.
Slaughterbots certainly provided a significant boost to the Campaign’s efforts to secure a ban on lethal autonomous weapons (or, failing a ban, to otherwise ‘stop’ these weapons). Unfortunately, the emotive reaction generated by the film is in large part the result of factors that are entirely irrelevant to the issue at hand: the question of autonomous weapons.
Remember what Russell identified as the key issue? ‘Allowing machines to choose to kill humans’. If you have time, watch the film again, and ask yourself this question throughout: what difference would it make to the scary scenarios in the film if, instead of the drones selecting and engaging their targets autonomously, a human being seated in front of a computer somewhere was watching through the drone’s cameras and making the final call on who should or should not be killed? I don’t mean just pressing the ‘kill’ button every time a red indicator flashes up on his or her screen – let’s assume he or she takes the time to (say) check a photo and make sure that the person being killed is definitely on the kill list. To use a key term at the centre of the debate (which I will examine in depth in chapter 2), in this mental ‘edit’ of the film, a person is maintaining ‘meaningful human control’.
In this alternative, imagined version, AI would still be vitally important in that it would allow the tiny quadcopters to fly, enable them to navigate through the corridors of Congress or Edinburgh University, and so on. But there are no serious suggestions that we should try to ban the use of AI in military autopilot and navigational systems, or even that we should ban military platforms that employ AI in order to carry out no-human-in-the-loop evasive measures to protect themselves. So that’s not relevant to the key question at hand.
What about the nefarious uses to which these tiny drones are put in the film? It is, without question, deeply morally problematic, abhorrent even, that students should be killed because they shared or ‘liked’ a video online; but the fact that the targeting data were sourced from social media is an issue entirely independent of whether the final decision to kill this student or that was made by an algorithm or by a human being. Also irrelevant is the fact that autonomous weapons could in principle be used to carry out unattributed attacks: the same is true of a slew of both sophisticated and crude military capabilities, from cyberweapons to improvised explosive devices (IEDs), and even to antiquated bolt-action rifles. In short, a ban on autonomous weapons – even if adhered to – would make essentially no material difference to the frightening scenarios depicted in Slaughterbots.
There are real and important questions that need to be asked and answered about LAWS. But in order to make genuine progress we will need to disentangle those questions from the red herrings thrown up by Slaughterbots and, indeed, by many contributors to the debate. This book seeks to take steps in that direction by trying to give a clear answer to the question raised by the Campaign at its formation: should we ban these ‘killer robots’? As campaigners rightly point out, this is a choice we have made before, in the case of other kinds of weapons systems: the international community has successfully negotiated treaties and agreements that have resulted in bans on military capabilities, including bans on chemical and biological weapons, antipersonnel landmines, and even blinding lasers. There’s much that could be said about the process of securing such a ban, and what avenues might be available for doing so and to what effect, but that is not the question in focus here. Rather, this book is about whether or not we should ban LAWS.
To give you the bottom line up front, my answer to this question is in the