Artificial intelligence (AI) is often portrayed as a single omnipotent force — the computer as God. Often the AI is evil, or at least misguided. According to Hollywood, humans can outwit the computer (“2001: A Space Odyssey”), reason with it (“Wargames”), blow it up (“Star Wars: The Phantom Menace”), or be defeated by it (“Dr. Strangelove”). Sometimes the AI is an automated version of a human, perhaps a human fighter’s faithful companion (the robot R2-D2 in “Star Wars”).
Retiring Chinese general He Lei recently made news by suggesting that China’s greatest military weakness compared to the United States is that it has never fought a real war. He noted that none of Beijing’s increasingly advanced weapons, jets, and ships have been tested in combat. Moreover, the large People’s Liberation Army continues to rely on conscripts rather than the long-serving professionals of the U.S. military. He argued that the Chinese military “will be ridden with doubts until they get into a real fight.”
The best defense is a good offense — or is it? The answer to this question, along with an understanding of the stronger form of warfare, is the single most important consideration in U.S. space strategy and funding major space programs.
Satellites and other spacecraft have always been vulnerable targets for America’s adversaries. Today, attacking U.S. on-orbit capabilities offers the potential to cripple U.S. conventional power projection and to impose significant costs, whether in dollars, lives, or political capital.
Many strategists and policymakers have concluded that, because space-based systems are seen as exposed to attack with little means of defending them, the offense is the stronger form of warfare in space. This conclusion is incorrect and has led to an underdeveloped U.S. space strategy.
Time-tested theory and principles of war underscore that the defense is the stronger form of warfare in space.
Emmanuel Goffi is a specialist in military ethics and security studies. He is currently a research fellow at the Centre for Defence and Security Studies at the University of Manitoba (UofM) in Winnipeg, Canada, and has served as an officer in the French Air Force for 22 years, holding the rank of captain. He is also an instructor in political science in the Department of Political Studies at UofM and at the International College of Manitoba. Emmanuel lectured in international relations, the law of armed conflict, and ethics at the French Air Force Academy for five years before serving for two years as an analyst and research associate at the Center for Aerospace Strategic Studies in Paris.
Emmanuel Goffi holds a PhD in Political Science/International Relations from the Institut d’Etudes Politiques de Paris-Centre de Recherches Internationales (Sciences Po-CERI). He is the author of Les armées françaises face à la morale : une réflexion au cœur des conflits modernes [The French Armed Forces Confronting Morality: A Reflection at the Heart of Modern Conflicts] (Paris: L’Harmattan, 2011). He also co-edited and contributed to a volume of more than 40 contributions on drones: Les drones aériens : passé, présent et avenir. Approche globale [Unmanned Aerial Vehicles: Past, Present, and Future. A Global Approach] (Paris: La Documentation française, coll. Stratégie aérospatiale, 2013). Emmanuel’s current research focuses on the ethical aspects of the dronization and robotization of the battlefield, and on the constructivist approach to security studies.
Where do you see the most exciting research/debates happening in your field?
I would say that the most exciting aspects of international relations and security studies are to be found in the philosophical perspective. Morality and ethics are growing concerns in political science. The evolution of conflicts, the rise of new actors, globalization, and new technologies have slowly rendered international law obsolete. Warfare and the law of armed conflict are the perfect illustration of this. This is why ethics is becoming more and more important: when you cannot rely on formal legal norms, you turn towards informal moral ones.
In this field of moral philosophy, warfare and the new forms of confrontation are endless topics. The use of drones and robots on the battlefield, changes in the way we approach defense issues, and the evolution of the sociology of the military are some of the most thrilling subjects to address. Moreover, moral philosophy applied to political science opens the door to an infinite number of perspectives and offers an undreamt-of playground for free spirits.
Wargaming is enjoying a renaissance within the Department of Defense, thanks to high-level interest in wargaming as a way to foster innovation. However, for this surge of wargaming to have a positive impact, these wargames must be designed well and used appropriately. For decision-makers with limited wargaming experience, this can be a daunting challenge. Wargames can be deceptively simple — many do not even use complicated computer models — so it is all too easy to assume that no specialized skills are needed for success. At the same time, wargames are hugely diverse: interagency decision-making seminars that involve conflict without fighting, crisis simulations adjudicated by subject matter experts, and operational warfare in which outcomes are determined by complex computer models. For sponsors who may have only seen one or two games, it can be hard to understand the full range of wargaming possibilities and the common approaches that underpin them all. How can a sponsor discern whether wargames and the resulting recommendations are actually worthwhile?