In Conversation: Abishur Prakash on the future of warfare


Artificial Intelligence, Cyber Warfare and Unmanned Aerial Vehicles

IN CONVERSATION WITH ABISHUR PRAKASH

30 April 2020

 

In your view, does the relatively low-risk nature of pitting autonomous weapons against each other mean a greater propensity for violence in areas of long-standing contestation, such as Kashmir or the Korean Peninsula?

Whenever killer robots are deployed, the potential for conflict increases exponentially. These weapons will be making decisions without any human input. They could literally start wars. In already tense regions, like Kashmir or Korea, the risk is even higher. That’s because of two things: intelligence and mistakes. Just like there have been three waves of industrial robots (the world is currently between the second and third wave), there are going to be multiple waves of killer robots. The first wave, which is just starting, will have the least intelligent killer robots. And, this makes them more prone to making mistakes.

Take Kashmir. The flare-up between India and Pakistan in 2019 saw both countries engage in dogfighting, the first time nuclear-armed states have done so in recent history. But, consider what both governments were debating as the conflict unfolded: launching missile strikes against one another. Apply AI to this case. What if AI was in control of India or Pakistan's missiles and decided, on its own, to launch them? This would likely have led to war - possibly even nuclear war. And, it would have been because of an AI system that was not very sophisticated or “aware” of the implications of what it was doing.

You provide examples of just some of the complications states could face when confronted with the prospect of whether to support or disavow the actions of a fully autonomous robot bearing their flag. How well equipped do you see states and the current international system being to manage an event involving significant and/or disproportionate harm at the hands of a fully autonomous robot?

Short answer: nobody is equipped, prepared or ready. The world is walking into this new era of warfare completely blindfolded. Take South Korea. When South Korea announced plans to build a research center for autonomous weapons, it drew blowback from around the world. Researchers boycotted the center until South Korea withdrew support. Some people called this a success. Was it?

The only solution for killer robots right now is to ban them and stop them from becoming reality. That’s what the United Nations is pushing for and that’s what the boycott in South Korea was about. But, this is not realistic. Militaries and defense companies are going to develop killer robots regardless of outcry from the public. If the current international system wants a leg up on killer robots, it must move away from bans and move towards planning for possibilities. There must be frameworks and plans that institutions and governments can use as killer robots make decisions that affect humanity.

Without this kind of global guidance, every nation will be operating independently and this will only add to the chaos and confusion killer robots will create.

You describe a future where China may have the capacity to expand and contract its sea borders at will through the positioning of its underwater drones. If combined with an algorithm designed to protect its near-total claim to the South China Sea, how realistic do you view the possibility of some form of maritime blockade, and do you feel the nature and degree of this kind of challenge is fully understood? In your view, what can be done to mitigate this type of event?

I remember sharing this with a client in Asia. Their eyes opened at the possibility that, through drones, China could have a new “responsive border” in the South China Sea. This is not a capability any other nation has had. And, because of this, there is little awareness that such a capability is even possible. There are two ways that China could use its responsive border: deterrence or blockade.

The first, deterrence, is implicit. It means that Indonesia or Malaysia may think twice about sending vessels into a particular area of the South China Sea because there are hundreds of Chinese armed drones underwater that could swarm together at a moment’s notice. The second, blockade, is more explicit. This means China is actively stopping countries with its responsive border, including adversaries like the US, Japan or India. The latter, blockade, will be used if the geopolitics of Asia shift dramatically. This means everything from Taiwan to North Korea to Covid-19 to geoeconomics changes in a way that makes China far more aggressive. Ironically, the only way to mitigate China’s responsive border may be for other nations to create their own “army” of underwater drones. Except, this may raise the risk of conflict even higher. And, this is assuming China does not offer other countries the ability to partake in its responsive border - creating a new robotic, joint border in the South China Sea.

You state the next cyber war could be between companies, not countries. What do you view as the defining characteristics that place companies on opposing sides and what form might a contest like that actually take?

I believe that in this new era of geopolitics, defined by emerging technologies like AI, private corporations will call the shots. And, the stakes for these firms have never been higher. Google is building undersea Internet cables to Asia to compete with China, Facebook is launching a digital currency, Alibaba is supplying “brains” to cities in Malaysia. Because of their footprint, services and revenue, companies are becoming “mini countries.”

I don’t think there are any specific characteristics that will dictate which firms launch cyber attacks or go to “cyber war” with each other. That’s because firms of all sizes and industries may be using AI for cyber security. What do I mean by this? An investment bank in Tokyo may be using AI for cyber security. But, this AI may also be able to launch cyber attacks. The AI could be both a defensive and an offensive tool. If the Japanese bank is attacked, the AI may respond in kind. But, who will the AI target, given that most cyber attacks are masked? This leads to questions of AI’s “decision making” and data bias. The AI may attack an investment bank in Vietnam, as the AI may believe that investment bank is responsible. This immediately ropes in the governments of both Japan and Vietnam. All of this is to say that AI cyber wars between companies are a long-tail possibility. But, it also means that firms may not necessarily be on opposing sides; they may simply be using opposing artificial intelligence.

Where success can be measured in real terms, i.e. the destruction of a specific number of battlefield assets or denial-of-service attacks [such as those launched against Estonia in 2007], it is becoming clear that attribution may in fact be the bigger weapon. Do you share this view, and how concerned are you that non-state actors now possess the capability, through hacking and other means, to tip acrimonious states into direct confrontation?

This is a huge risk. And, it is one that grows exponentially with killer robots. That’s because killer robots will not just be acquired by states, but also by non-state actors, be it mercenary groups, terrorists or rogue organizations. There’s also the whole “DIY” aspect. Could a consumer self-driving car or drone be turned into an autonomous weapon through a YouTube tutorial? All of this means that those who want to foment instability in a country or region to meet a specific goal (i.e. regime change, secession) now have new “capabilities” at their fingertips.

Take Indonesia, a relatively neutral nation in Asia. What happens if Indonesia’s immigration algorithms are hacked and all Chinese visitors are rejected for entry? Or, if robot taxis in cities like Jakarta or Bali refuse to pick up Chinese customers because of geopolitical bias that has been purposefully injected into the algorithms? These may seem like small, even irrelevant incidents. But, that’s applying the filters of the past. With killer robots, and artificial intelligence (AI) in general, it is the small incidents, all added up, that could tip countries in a certain direction.

You touch on the twin ideas of nations grouping together to form culture-specific ethics and existing trade blocs being reconfigured to revolve around robots - where do you see the seeds of these being laid?

This is about power dynamics and relevance. And, the seeds are already being laid. The Catholic Church is working with Microsoft and IBM to build ethics for AI. This is unprecedented. A religious authority is working with companies to program technology the whole world will use. Does this mean AI from Microsoft or IBM will be loaded with “Christian ethics”? If so, it could create a new clash between countries and companies over ethics. Malaysia, a Muslim country, may want AI with Islamic ethics. But, India may not want AI with Islamic ethics; it may want Hindu ethics. And, Japan and South Korea may want AI with Buddhist ethics. The kind of “culture” that AI is programmed with could force countries to work with one another and it could create new headaches for technology companies selling AI. And, all of this could lead to a re-emergence of religion in the field of technology.

When it comes to trade blocs, they may try to set the “rules” for how killer robots are traded. One way this may happen is through localization of killer robots. The European Union (EU) is already working on creating its own rivals to Google and Facebook. The EU is worried about monopolization and control of data. Tomorrow, the Association of Southeast Asian Nations (ASEAN) may apply the same approach to killer robots. Instead of depending on Israel, the US or China, it could create its own killer robot production firms. Or, political groups like SAARC could become economic blocs by mandating that killer robots be “Made in India” or “Made in Bangladesh.” Through killer robots, global may become local. And, this may converge with rising nationalism around the world, whereby governments want their nation and their businesses to come first.

From emerging robot ‘personality’ to the potential for bias, you outline the enormous problems differing ethics now pose to nation-states. Does this tip the balance of power in favour of those who can decide on a single framework, rather than larger multilateral bodies such as the United Nations, African Union or NATO, who may struggle to utilise similar assets due to ethical, moral or cultural differences between member states?

Absolutely. I think the era of global groups leading and the rest of the world following has come to an end. There is no longer a “one-size-fits-all” solution. The fact that the UN has been trying to ban killer robots while individual members like the US, Russia, Israel and China are moving in the opposite direction is proof that institutions are fast losing their grip on world affairs. The race for ethics is only going to accelerate this. But, there are two points here. First, there is a subset of nations who are truly thinking ahead, for instance, the United Arab Emirates. They may be able to create a blueprint much faster than other nations, like a framework for governing the programming language all killer robots should be created with (so they can all communicate).

Then, there are institutions who may view killer robots as a way to reinvent themselves. For instance, something I touch on in my book is that the United Nations may be able to acquire a robot army that makes the UN less dependent on members for peacekeepers. This could give the UN a new kind of freedom as to the kind of hotspots it wants to operate in and the kind of capabilities it offers.

Looking over the horizon, what should people look for when attempting to discern what future defence companies might look like?

There was a time when defense companies were black and white. They created defense equipment and weapons, like missiles, fighter jets, tanks, protective gear. And, they were located in a handful of countries like the US, UK and Israel. Now, with autonomous weapons, the playing field is not as clear cut. For example, technology companies could become the next defense companies.

Take a look at Boston Dynamics or Alphabet; they are each working on sophisticated military technologies. The emergence of autonomous weapons is going to give rise to a brand new ecosystem of companies who offer a spectrum of services. From “robot apps” to maintenance to augmentation, startups and multinationals alike may enter the defense industry and work alongside traditional players like Lockheed or Raytheon. This creates brand new geopolitical realities. And, because the new defense firms will come from around the world, no country “holds all the cards.” The ecosystems that fuel killer robots could disperse power around the world in ways that challenge the current world order.

DISCLAIMER: All views expressed are those of the writer and do not necessarily represent those of the 9DASHLINE.com platform.

Author biography

Abishur Prakash is the world's top authority on the geopolitics of technology. He is a geopolitical futurist at the Center for Innovating the Future (CIF), where he helps companies succeed in tech-driven geopolitics. He is the author of four books, including Next Geopolitics: Volumes 1 and 2, Go.AI (Geopolitics of Artificial Intelligence) and The Age of Killer Robots, which is available here. His thoughts have been published in leading outlets like the Wall Street Journal, Nikkei Asian Review, Forbes and Scientific American.