The unprecedented pace of technological change brought about by the Fourth Industrial Revolution offers enormous opportunities but also entails risks. This is evident in discussions about artificial intelligence (AI), machine learning (ML), and big data (BD). Many analysts, scholars, and policymakers worry that, besides efficiency gains and new economic opportunities, these technologies may also promote international instability: for instance, by leading to a swift redistribution of wealth around the world, a rapid diffusion of military capabilities, or heightened risks of military escalation and conflict. Such concerns are understandable. Throughout history, technological change has at times exerted similar effects. Moreover, human beings seem to harbor an innate fear that autonomous machines might, at some point, revolt and threaten humanity – as illustrated in popular culture, from the Golem of Hebrew tradition to Mary Shelley’s Frankenstein, and from Karel Čapek’s robots to Isaac Asimov’s I, Robot and the film Terminator.
The World Economic Forum’s (WEF) annual meeting brings together global leaders from governments, companies, science, and international organizations, as well as societal actors. Three members of the CSS attended this year’s WEF in Davos. While Myriam Dunn Cavelty and Matteo Bonfanti joined discussions on different aspects of cybersecurity, Sophie Fischer presented on the role of artificial intelligence (AI) in international politics.
This graphic maps out a selection of Chinese AI companies and provides an overview of their current projects and collaborative efforts. To find out more about China’s ambitions to become a world leader in artificial intelligence, see Sophie-Charlotte Fischer’s recent addition to our CSS Analyses in Security Policy series here. For more graphics on economics, see the CSS’ collection of graphs and charts on the subject here.
What will advances in artificial intelligence (AI) mean for national security? This year in War on the Rocks, technical and non-technical experts with academic, military, and industry perspectives grappled with the promise and peril of AI in the military and defense realms. Articles discussed issues ranging from how international competitors and military services are pursuing AI to the challenges AI applications pose to current systems of decision-making, trust, and military ethics. Contributors added to our understanding of the trajectory of military AI and drew attention to critical open questions. A key takeaway is that technical developments probably represent less than half the battle of effectively integrating AI capabilities into militaries. The real challenge now, both in the United States and abroad, is to move beyond the hype and put the right people, organizations, processes, and safeguards in place.
Disinformation and distrust online are set to take a turn for the worse. Rapid advances in deep-learning algorithms for synthesizing video and audio content have made possible the production of “deep fakes” – highly realistic and difficult-to-detect depictions of real people saying or doing things they never actually said or did. As this technology spreads, the ability to produce bogus yet credible video and audio content will come within the reach of an ever-larger array of governments, nonstate actors, and individuals. As a result, the ability to advance lies using hyperrealistic fake evidence is poised for a great leap forward.