“PackBot”, a battlefield robot used by the US military. Image: Sgt. Michael J. MacLeod/Wikimedia
This article was originally published by Foreign Affairs on 12 August 2015.
At the most recent International Joint Conference on Artificial Intelligence, over 1,000 experts and researchers presented an open letter calling for a ban on offensive autonomous weapons. The letter, signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind CEO Demis Hassabis, and Professor Stephen Hawking, among others, warned of a “military artificial intelligence arms race.” Yet regardless of whether these campaigns to ban offensive autonomous weapons succeed, robotic technology will become increasingly widespread in many areas of military and economic life.
Over the years, robots have become smarter and more autonomous, but they still lack an essential feature: the capacity for moral reasoning. This limits their ability to make good decisions in complex situations. A robot, for example, cannot currently distinguish between combatants and noncombatants, or understand that enemies sometimes disguise themselves as civilians.
European map with Ukraine in focus. Image: ClkerFreeVectorImages/Pixabay
Anyone remotely familiar with EU foreign policy will be no stranger to invocations of the European values said to underpin, and indeed drive, European external action. From climate change and agriculture to trade, defence and security, the rhetoric produced by various EU bodies typically invokes a “set of common values” that the respective policies promote or embody. A crucial nuance is that ‘values,’ incorporated into the primary law of the European Union through the Lisbon Treaty, are juxtaposed with ‘interests.’ This juxtaposition means that if and when the EU fails to live up to its much-touted values, it is charged with ‘hypocrisy.’ The inconvenient truth, however, is that like all actors, Europe has interests as well as values, and the two are frequently at odds across virtually every policy area. More often than not, interests, far from being ‘inspired’ by values, have proven insular, short-sighted and at times downright mercenary. At the same time, it is naïve to expect Europe’s policymakers to pay more attention to the plight of Syrian refugees than to domestic populations’ desire to keep their own welfare and prosperity undisturbed by the crises engulfing much of the world beyond the Continent. The solution, it would seem, lies in abandoning the gratuitous narrative emanating from Brussels, which continues to raise unjustified expectations by placing values at the rhetorical heart of European foreign policy.
HAL 9000, the intelligent computer from Stanley Kubrick’s 2001: A Space Odyssey. Image: OpenClips/Pixabay
This article was originally published by Agenda, a blog operated by the World Economic Forum, on 4 March 2015.
Over the past four decades, technology has fundamentally altered our lives: the way we work, how we communicate, even how we fight wars. These technologies have not been without controversy, and many have sparked intense debates, often polarized and at times mired in scientific ambiguity or dishonest demagoguery.
The debate over stem cells and embryo research, for example, has become a hot-button political issue involving scientists, policy-makers, politicians and religious groups. Similarly, discussions of genetically modified organisms (GMOs) have mobilized civil society, scientists and policy-makers in a wide-ranging debate on ethics and safety. Developments in genome-editing technologies are just one example of how bio research, and its impact on market goods, depends strongly on social acceptance and cannot escape public debates over regulation and ethics. Moreover, demands for transparency are increasingly central to these debates, as shown by movements like Right to Know, which has repeatedly called for the labelling of GMOs on food products.
Two US Army soldiers during an exercise at Fort McCoy, July 15, 2009. Image: Sgt. 1st Class Mark Bell/Flickr
This article was originally published by War on the Rocks on 10 March 2015.
Leaders lie “in the routine performance of their duties,” and “ethical and moral transgressions [occur] across all levels” of the organization. Leaders have also become “ethically numb,” using “justifications and rationalizations” to overcome any ethical doubts. This “tacit acceptance of dishonesty… [facilitates] hypocrisy” among leaders.
These quotations sound like they are ripped from the headlines about some major corporate scandal. But they’re not describing Enron before its collapse in 2001, or firms like Lehman Brothers and Countrywide before the 2008 financial crisis. Instead, they describe one of the country’s most respected institutions: the U.S. Army.
“Drones: From Technology to Policy, Security to Ethics”. Poster for the conference organized by the ISN and ETH Global. Image: ISN
Rapid technological advances are making drones cheaper, more accessible and highly adaptable. Once the exclusive preserve of the world’s most advanced armed forces, unmanned platforms are now being used by civilian actors for a wide range of applications. Yet, while members of the technical community have tended to emphasize the opportunities that this technology offers, their counterparts in international relations and other fields have increasingly raised questions about the legal, ethical, humanitarian and security implications of unmanned aerial systems (UAS). Against this backdrop, ETH Global and the ISN recently hosted a one-day conference that brought together over 160 experts from the fields of robotics, environmental science, law and ethics, and international relations and security. Since ETH Zurich is considered one of the world’s leading ‘competence centers’ in the field of robotics systems and control, its activities offer a glimpse into emerging UAS technologies and their potential social impact in the future.