How to Train Your AI Soldier Robots (and the Humans Who Command Them)

Image courtesy of Devin Rumbaugh/DVIDS.

This article was originally published by War on the Rocks on 21 February 2020.

Artificial intelligence (AI) is often portrayed as a single omnipotent force — the computer as God. Often the AI is evil, or at least misguided. According to Hollywood, humans can outwit the computer (“2001: A Space Odyssey”), reason with it (“WarGames”), blow it up (“Star Wars: The Phantom Menace”), or be defeated by it (“Dr. Strangelove”). Sometimes the AI is an automated version of a human, perhaps a human fighter’s faithful companion (the robot R2-D2 in “Star Wars”).

Beyond Killer Robots: How Artificial Intelligence Can Improve Resilience in Cyber Space

Image courtesy of orihaus/Flickr. (CC BY 2.0)

This article was originally published by War on the Rocks on 6 September 2018.

Recently, one of us spent a week in China discussing the future of war with a group of American and Chinese academics. Everyone speculated about the role of artificial intelligence (AI), but, surprisingly, many Chinese participants equated AI almost exclusively with armies of killer robots.

Popular imagination and much of current AI scholarship tend to focus, understandably, on the more glamorous aspects of AI: the stuff of science fiction and the Terminator movies. While lethal autonomous weapons have been a hot topic in recent years, they are only one aspect of war that will change as artificial intelligence becomes more sophisticated. As Michael Horowitz wrote in the Texas National Security Review, AI will not manifest simply as a weapon; rather, it is an enabler that can support a broad spectrum of technologies. We agree: AI’s most substantial impacts are likely to fly under the radar in discussions about its potential. A more holistic conversation should therefore acknowledge AI’s potential effects in cyber space: not facilitating cyber attacks, but improving cyber security at scale through increased asset awareness and minimized source-code vulnerabilities.

Artificial Intelligent Agents: Prerequisites for Rights and Dignity

Image courtesy of ITU Pictures/Flickr. (CC BY 2.0)

This article was originally published in Volume 2, Issue 2 of Age of Robots magazine on 6 March 2018.

IBM’s Deep Blue was a chess-playing computer that made history in 1996, when it won a game against the reigning world champion, Garry Kasparov. Kasparov recovered to win that match, but was defeated the following year after Deep Blue received an upgrade (and the unofficial nickname “Deeper Blue”). In the decisive final game of the 1997 rematch, Kasparov resigned after just 19 moves; he had never before lost a game in under 20 moves. This was a landmark moment in artificial intelligence, yet at no point was the genius chess machine deemed worthy of “rights”. Although able to evaluate some 200 million chess positions per second, Deep Blue had limited general abilities and could not take on tasks beyond the one it was programmed for: in this case, playing chess.
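To make that narrowness concrete: the core of Deep Blue’s play was brute-force game-tree search. The sketch below is a minimal, illustrative version of minimax search with alpha-beta pruning in Python. It is not IBM’s implementation (Deep Blue relied on custom hardware and a far richer evaluation function), and the GameState interface it assumes (legal_moves, apply, evaluate, is_terminal) is hypothetical.

```python
# Minimal minimax search with alpha-beta pruning -- an illustrative toy,
# not Deep Blue's actual engine. Assumes a hypothetical GameState with:
#   legal_moves() -> iterable of moves
#   apply(move)   -> new GameState after the move
#   evaluate()    -> static score of the position (higher favors us)
#   is_terminal() -> True if the game is over

def alphabeta(state, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    """Return the best score reachable from `state`, searching `depth` plies."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()          # score the position statically
    if maximizing:
        value = float("-inf")
        for move in state.legal_moves():
            value = max(value, alphabeta(state.apply(move), depth - 1,
                                         alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # opponent would avoid this branch: prune
                break
        return value
    value = float("inf")
    for move in state.legal_moves():
        value = min(value, alphabeta(state.apply(move), depth - 1,
                                     alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:                # we would avoid this branch: prune
            break
    return value
```

Searching millions of positions this way is powerful but entirely task-specific: nothing in the procedure transfers to any problem other than the game it was written for.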

The Moral Code: How To Teach Robots Right and Wrong

“PackBot”, a battlefield robot used by the US military. Image: Sgt. Michael J. MacLeod/Wikimedia


This article was originally published by Foreign Affairs on 12 August 2015.

At the most recent International Joint Conference on Artificial Intelligence, over 1,000 experts and researchers presented an open letter calling for a ban on offensive autonomous weapons. The letter, signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind CEO Demis Hassabis, and Professor Stephen Hawking, among others, warned of a “military artificial intelligence arms race.” Regardless of whether campaigns to ban offensive autonomous weapons succeed, robotic technology will become increasingly widespread in many areas of military and economic life.

Over the years, robots have become smarter and more autonomous, but they still lack an essential feature: the capacity for moral reasoning. This limits their ability to make good decisions in complex situations. A robot currently cannot, for example, distinguish between combatants and noncombatants, or understand that enemies sometimes disguise themselves as civilians.

What Are the Ethical Implications of Emerging Tech?

HAL 9000, the intelligent computer of Stanley Kubrick’s 2001: A Space Odyssey. Image: OpenClips/Pixabay

This article was originally published by Agenda, a blog operated by the World Economic Forum, on 4 March 2015.

In the past four decades, technology has fundamentally altered our lives: the way we work, how we communicate, and how we fight wars. These technologies have not been without controversy; many have sparked intense debates that are often polarized, mired in scientific ambiguity, or distorted by demagoguery.

The debate on stem cells and embryo research, for example, has become a hot-button political issue involving scientists, policy-makers, politicians and religious groups. Similarly, discussions on genetically modified organisms (GMOs) have mobilized civil society, scientists and policy-makers in a wide debate on ethics and safety. Developments in genome-editing technologies are just one example of how biological research and its commercial applications depend strongly on social acceptance and cannot escape public debate over regulation and ethics. Moreover, demands for transparency are increasingly central to these debates, as shown by movements such as Right to Know, which has repeatedly called for the labelling of GMO ingredients on food products.