Bitskrieg. John Arquilla
This brings us to the matter of how cyberwar applies to countering insurgency and terrorism. As important as an information edge is in conventional warfare, it is crucial in irregular conflicts. For without information, refined into actionable knowledge, it is simply too hard to find, much less to fight, elusive foes. Cyberwar, which emphasizes the informational dimension, offers two remedies: hack the enemy, and turn one’s own forces into a “sensory organization.” They can still be “shooters,” too, but “sensors” first. As Paul Van Riper and F. G. Hoffman put the issue – about what will matter most in 21st-century conflict – “what really counts in war is gaining and maintaining a relative advantage in . . . awareness.”45
Lack of knowledge about enemy dispositions, movements, and intentions was the cause of the American debacle in Vietnam, where the insurgents were able to remain hidden much of the time, and had greater awareness of their opponents’ plans and maneuvers. And attempts to find the guerrillas by “dangling the bait” with small Army and Marine patrols proved costly and frustrating. As Michael Maclear summarized, “On patrol, the GIs were inviting certain ambush.”46 This problem was never adequately solved in Vietnam, and it recurred in Afghanistan and Iraq when insurgencies arose in those countries after the American-led invasions.
In Iraq, though, General David Petraeus understood that failure to gather information about the insurgent networks was the critical deficiency that had so undermined the occupation forces’ efforts during the first three years (2003–6) after the invasion. Given overall command there, Petraeus repeated techniques he had used in the Kurdish north – i.e., embedding with the locals rather than surging out patrols and raids from a handful of huge forward operating bases (FOBs). He knew that, as Victor Davis Hanson has summarized the Petraeus strategy, “He had to get his men outside the compounds, embed them within Iraqi communities, and develop human intelligence.”47 This was as much an information strategy as it was a military strategy. And the insurgents’ information edge was soon blunted by the flow of intelligence about them that came from Iraqi locals who felt exploited by al Qaeda cadres. In less than a year, violence in Iraq dropped sharply. Where nearly 40,000 innocent Iraqis were being killed by terrorists each year before Petraeus and his emphasis on gaining an informational advantage, just a few thousand were lost to al Qaeda annually from 2008 until the American withdrawal at the end of 2011. When US forces returned in 2014, to fight the al Qaeda splinter group ISIS, the Petraeus model dominated. Only small numbers of Americans were sent; they embedded closely with indigenous forces, and decisively defeated ISIS.
But this aspect of cyberwar – controlling or “steering” the course of conflict by gaining and sustaining an information advantage – still has few adherents, and the dominant view of limiting cyberwar just to cyberspace-based operations prevails. It is one reason the Petraeus approach was not repeated in Afghanistan, where the reluctance to distribute small forces throughout the country among the friendly tribes – which worked so well there back in 2001 – allowed the Taliban insurgency to rise and expand. Sadly, even the very narrow, tech-only view of cyberwar has not been properly employed in Afghanistan, nor in broader counter-terrorism operations globally. In Afghanistan, the Taliban’s command-and-control system, and its movement of people, goods, weapons, and finances, all rely to some degree on communication systems – locally and with leaders in Pakistan – that are hackable. That they have not been compromised is proved by the growth of the insurgency. The same is true of worldwide counter-terror efforts; cyberspace is still a “virtual haven” for terror cells. Yes, they often rely on couriers. But the Taliban locally – as well as ISIS, al Qaeda, Hezbollah, and a host of other dark groups that operate more widely – would be crippled if they were to lose faith in the security of their cyber/electronic communications. And if these systems were compromised secretly, all these groups would be destroyed. Even this narrower approach to cyberwar, if employed as the lead element in the counter-terror war, would prove decisive. As yet, this has not been the case. The world is much the worse for it.
Rise of the intelligent machines of war
Another technological aspect of cyberwar – an especially “cool” one – has to do with the rise of robots or, more delicately put, artificial intelligence (AI). These machines, devices, and their software are the ultimate cyber tools, embodying the principles of control-through-feedback that Norbert Wiener envisioned. Back in the 1950s, he thought of the “human use of human beings.” Today, we should be thinking about the “human use of artificial beings.” In cyberspace, there is already much use of automation, by many countries, where bots have the authority to move swiftly on their own to counter attacks on information systems they are tasked with defending. The pace of cyberspace-based attacks is often too fast for humans to detect, track, and disrupt. On the more proactive side, there is widespread use of information-gathering AI “spiders” and other searchers – though the world’s more democratic societies have striven to impose at least some limits on the use of such capabilities. And when it comes to employing bots in physical battle, those same liberal societies have regarded the matter as close to abhorrent, almost always demanding that a “human in the loop” be kept for purposes of control. There is even an effort to ban the development of “killer robots,” which has been championed at the United Nations and by many non-governmental organizations. Secretary-General António Guterres put the matter very starkly at a “web summit” held in late 2018:
Machines that have the power and the discretion to take human lives are politically unacceptable, morally repugnant, and should be banned by international law.48
Guterres’s speech buttressed the position of the 25 nations and the Holy See that had already signed on to the call to ban killer robots – and two more nations joined shortly after he spoke. However, as of this writing (2020), no NATO member states have supported such a prohibition on “Lethal Autonomous Weapons Systems” (LAWS); nor have the Russians. As for China, its position is to call for no first use of such weapons, but still to allow for their development and production. Interestingly, quite a few in the scientific and high-tech commercial sectors have embraced efforts to prevent the rise of military robotics. In 2015, 1,000 experts in AI signed an open letter expressing their opposition. At the same time, luminaries such as Stephen Hawking and Elon Musk took the position that the rise of robots, if allowed, could “spell the end of the human race.”49 This alarmist view has been articulated over the past several decades, its jumping-off point in popular culture probably being the 1984 film The Terminator. The Matrix movies and the rebooted television series Battlestar Galactica that came later both reinforced this trope, completely overshadowing Isaac Asimov’s pacifistic “Laws of Robotics” – which he introduced in 1942, but about which even he wrote with ambivalence.
Around the same time that Arnold Schwarzenegger was first terrorizing humanity, scientist/novelist Michael Crichton was articulating the position that
When the super-intelligent machine comes, we’ll survive . . . The fear that in the coming years we will be replaced by our creations – that we will live with computers as our pets live with us – suggests an extraordinary lack of faith in human beings and their enterprise. . . . Our ancestors were threatened