The pandemic has accelerated many technological trends, from remote work to online education. To me, one of the most dangerous is the arms race between world superpowers to develop lethal AI and automate their militaries. This piece examines that race, and the UN’s attempts to curtail it.
On the otherwise celebratory occasion of the United Nations’ 75th anniversary in January this year, Antonio Guterres — the UN’s Secretary-General — gave a grim address best summarised by his description of the world as “off-track”. On Guterres’ list of existential threats were the climate crisis, geopolitical tensions and the abuse of new technologies — and he named one in particular:
“Lethal autonomous weapons — machines with the power to kill on their own, without human judgment and accountability — are bringing us into unacceptable moral and political territory.”
While states might debate whether lethal autonomous weapon systems (or ‘killer robots’ in the popular imagination) are “unacceptably immoral”, there can be no doubt that Guterres is right on the urgency of the risk: development and use of autonomous weapons are both accelerating, and the stakes — ethical and political — are high.
The world’s military powers have been competing for years to dominate this new class of intelligent weapons. This AI arms race is playing out against a contentious global backdrop in which an advantage in military AI could make a real difference to the balance of power. The geopolitical game theory driving ever more sophisticated war machines has an unwanted blind spot: historically, human rights factor little into strategic calculations.
With Covid-19, automation has accelerated on a variety of fronts. Military operations have had to be completely re-thought — physical distancing on a submarine is much harder than physical distancing in a supermarket. Lethal AI already held mounting advantages over its human equivalents, and can now add ‘immunity from catastrophic viruses’ to the list. For all of these reasons, keeping track of the AI arms race is more vital than ever.
If it’s a race, who’s winning?
Almost every month, another innovation in autonomous weapons makes headlines in military news — the autonomous Chinese Blowfish A3 helicopter drone equipped with machine guns, or the Russian army of unmanned ‘Marker’ ground vehicles armed with mortars and grenade launchers. There is no question that new inventions in the world of military AI abound, but it is far less clear which country boasts the strongest tech.
Key figures in the United States military have been forthright in warning of China’s strength in this area. The US Defense Department’s relatively new Joint Artificial Intelligence Center is building command-and-control AI capability for the first time, explicitly citing the Chinese threat as the reason for the department’s urgency. The Center’s director Lt. Gen. Jack Shanahan has been clear about his desire to automate as much of the American military machine as possible:
“What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not.”
In the last year, officials as senior as the US Defense Secretary have warned that Chinese technology may, in fact, already be more advanced than America’s. Secretary Mark Esper predicted that China might have “leapfrogged” existing American technology. With the military establishment suitably concerned, spending on lethal autonomous weapons in all branches of the American military seems set to go to another level in 2020 after already increasing in 2019.
For China’s part, mounting investment in autonomous weapon development is a key plank in its ongoing effort to usurp American military dominance. Almost all large-scale AI programs in China benefit from massive governmental support and a huge trove of data, and its autonomous weapons program is the jewel in Beijing’s AI crown. China’s huge investment in lethal autonomous weapons predates other militaries, and its military theorists are ahead of the rest of the world in building futuristic “intelligentized” models of human-machine operations.
A further dimension to China’s AI strategy is economic, with Beijing seemingly interested in profiting from its autonomous weapons program as a new export product. Already, China appears to be exporting many of its most high-tech aerial drones to wealthy buyers in the Middle East, explicitly marketing them as capable of advanced autonomous operations like assassinations. Last year, Zeng Yi, a senior executive at Norinco, China’s third-largest defense company, predicted that as early as 2025, “there will be no people fighting in battlegrounds”.
As an arms race, autonomous weapons — and AI more broadly — are much harder to track than something like nuclear weapons, with their countable objects (e.g. stockpiles of warheads and launchers). How should we compare China’s development of autonomous aerial drones and hypersonic ‘smart’ missiles with America’s recent $2.7bn unveiling of huge unmanned ships? What about Russia’s autonomous tanks, deployed for the first time into combat in Syria? The short answer is that, short of more extensive deployment of autonomous weapons, we cannot know for certain.
Elsa B. Kania, an Adjunct Senior Fellow with the Technology and National Security Program at the Centre for a New American Security, agrees that the question of who leads an AI arms race between the great powers is not an easy one:
“There may be no single answer. For instance, potentially, the U.S. military may become more capable in leveraging AI in cyber operations, but the Chinese military could achieve greater advances in hypersonic weapons systems that can operate autonomously, and the Russian military may possess more experience in integrating unmanned systems in urban warfare.”
Given the most advanced autonomous weapons are developed behind closed doors in the world’s most secretive military labs, it is difficult to gauge the objective progress of military AI, let alone which country is closest to deploying sophisticated AI in all aspects of its military. However, there can be no doubt at this stage, with public declarations from American and Chinese officials alike, that this is the finish line: the automation of an entire military.
“Unacceptable” is the new acceptable
For all Guterres’ stirring words about lethal autonomous weapons being an unacceptable global threat, the United Nations has been painfully slow in its response. There are no restrictions on nation-states or companies building, developing and integrating lethal autonomous weapons, and no sign that this will change in 2020.
To be clear, there is an increasing groundswell of public pressure for autonomous weapons (or ‘killer robots’) to be regulated. Almost three in every four Europeans would like an international treaty prohibiting lethal autonomous weapons, according to a poll conducted in November by Human Rights Watch, and 28 countries are advocating for the United Nations to adopt a similar measure.
However, a small number of powerful countries are doing an effective job of stalling (if not derailing) progress at the international level. It will be no surprise that these countries — Russia, China and the United States — have by far the most advanced military AI. TIME has detailed how Russia has “steamrolled” the process by holding up discussion documents and postponing meetings. China and the United States have been more subtle, but no less obstructionist, in their own efforts to delay even the discussion of a substantive ban.
Despite this, the UN’s disarmament chief Izumi Nakamitsu is optimistic that “within two years” the UN Convention on Certain Conventional Weapons (CCW) forum could release a definitive proposal for how lethal autonomous weapon systems could be regulated internationally. The CCW regime has previously been successful in limiting or regulating weapons like landmines or laser weapons that have “potentially indiscriminate” effects.
Given that lethal autonomous weapons could escape their programmed limitations either accidentally (through a software bug) or deliberately (through hacking), it is easy to see how their effects could be indiscriminate, falling within the criteria for CCW restrictions. But a hypothetical fit is only the qualifying mark in a long marathon to regulation. The influential Campaign to Stop Killer Robots is notably guarded about the UN process, warning that “incremental gains achieved to date are not impressive”.
The lack of regulation or applicable international law leads to the grim conclusion that the AI arms race can only end with a global power developing and deploying ‘general’ AI into its military — with consequences we can, at this stage, only speculate about.
If this truly is an “unacceptable” outcome, a dizzying amount of progress will have to be made on drawing up an agreement to ban or limit lethal autonomous weapons. In the meantime, war machines become ever smarter, and all the more alluring in a post-Covid world.