Courtesy cea + / Flickr
This article was originally published by War on the Rocks on 19 August 2016.
When the Obama administration earlier this month released a newly declassified memorandum detailing the U.S. government’s policy on drone strikes, there was little new to be found. It mainly repeats the policies released in 2013, including the requirement, which goes far beyond what the law demands, of a “near certainty” that a given strike will cause zero civilian deaths. What is glaringly missing is any formal appraisal of the civilian casualties likely to occur if a strike is not conducted.
Whatever political or even moral imperative there may be for the administration’s extralegal no-civilian-casualty drone policy, it is not the only ethical issue these strikes engage. After all, British philosopher John Stuart Mill observed in his 1859 essay On Liberty that a “person may cause evil to others not only by his actions but by his inaction, and in either case he is justly accountable to them for the injury.”
A failure to formally include any evaluation of the consequences of not striking raises what I would call a “moral hazard.” Traditionally, “moral hazard” is an economics term defined as “the lack of any incentive to guard against a risk when you are protected against it (as by insurance).” However, as applied to drone operations (and other use-of-force situations), I would interpret it as decision-makers having a lack of any incentive to guard against the risk to civilians who might be killed if a targeted terrorist is not struck, because they are protected against the risk of criticism in the absence of a strike.
Courtesy the lost gallery / Flickr
This three-part series was originally published by the Lowy Institute’s The Interpreter between 16 and 18 August 2016.
When US Secretary of State John Kerry visited Moscow in March, looming over his meeting with Russian President Vladimir Putin in the Kremlin was a statue of Russian Emperor Alexander II (1855-81). Known as the ‘Tsar-Liberator’, Alexander freed the serfs, introduced trial by jury, relaxed press censorship and created elected regional assemblies that might, but for his assassination, have laid the foundation for bolder constitutional experiments.
But isn’t Alexander the wrong autocrat? Russia, we are told, is in the grip of Stalin-mania. Over the past 12 months, The New York Times, The New Statesman, The Independent and Foreign Policy have reported on an unspoken Kremlin policy to rehabilitate the Soviet tyrant.
Dubbed ‘re-Stalinisation’, its alleged aim is to return Russia to the fear and suspicion that characterised life until Stalin’s death in 1953, and to secure what are asserted to have always been Putin’s twin goals: the consolidation of absolute personal power and the restoration of the Soviet Union (or something like it) in Eastern Europe.
This piece was originally published by Political Violence @ a Glance on 2 August 2016.
What makes a person choose to support or fight for a non-state armed group (NSAG)? This is a question that social science scholars have been asking for years. Work from political science and international relations has crystallized around two overarching reasons why individuals participate in organized political violence. The first, stemming largely from work by Ted Gurr, deals with grievance. This notion predicts that rebellion is not solely a rational act, but that it also requires feelings of frustration, exclusion, and/or relative deprivation. Scholars building on this idea have usually operationalized it to mean a grievance centered on political, economic, ethnic, and/or religious factors. The second school of thought, based on rational choice and economic models, predicts that individuals will choose to join a rebellion only when there is a perceived personal benefit, such as power, money, or loot.
As noted in my recent article in the Journal of Global Security Studies, prior works that look at “greed” and “grievance” as motivating factors have found that both of these explanations are at least partly right. However, these examinations have usually been undertaken with the assumption that the default recruitment pool is made up of men and boys. Recent work has challenged this assumption, showing that women have contributed to the majority of NSAGs active since 1990, that they have contributed to rebellions in about 60 countries, and that women are more likely to be present in groups that use terrorism to further their aims (here, here, and here). Given this new knowledge, previous research using male-focused economic indicators or surveys that over-sample men seems to only tell part of the story.
Courtesy Peter Roan/Flickr
This article was originally published as “On the Origin of a Hunger-Free Species By Means of Enforceable Natural Law” by the Harvard International Review (HIR) on 11 August 2016.
Had Charles Darwin been blessed with precognition while conjecturing about finch beak differentiation over millions of years, he would have envied us. We in the early twenty-first century — within a single lifetime — can observe Homo sapiens evolving a transformative new trait with unprecedented strength through the international justice system.
Contrary to common perceptions of his work “On the Origin of Species by Means of Natural Selection,” Darwin’s view of evolution was not confined to physiology alone. In his later book, “The Descent of Man,” he entertained a broader view that included the ways in which more fortunate humans treat the less fortunate, contending, “The aid we feel impelled to give to the helpless is mainly an incidental result of the instinct of sympathy, which was originally acquired as part of the social instincts…”
It’s a safe bet, therefore, that Darwin would have taken great interest in the emergence of the International Food Security Treaty (IFST), an initiative of international law that could equip humanity to eradicate hunger, the world’s most widespread and severe form of suffering.
Courtesy Patrick McDonald / Flickr
This article was originally published by War on the Rocks on 12 August 2016.
From America’s first major overseas military intervention, against the Barbary States in 1801, to today’s ongoing military presence in the region, the United States has often relied on a tiny piece of the United Kingdom located in the Mediterranean Sea.
Gibraltar, commonly referred to simply as “the Rock,” is a rocky headland covering just over 2.7 square miles on the southern coast of the Iberian Peninsula. It is strategically located at the western entrance to the Mediterranean Sea, where the strait between Europe and Africa spans a mere 7.7 nautical miles at its narrowest point.
After being captured from the Moors in 1462, Gibraltar was part of Spain until it was captured in 1704 by a joint Anglo-Dutch-Catalan force during the War of the Spanish Succession. The Rock was formally ceded to the United Kingdom in 1713 as part of the Treaty of Utrecht “…forever, without any exception or impediment whatsoever.”
Since losing Gibraltar in 1704, the Spanish have sought to take it back, and examples abound across the last three centuries. They unsuccessfully laid siege to Gibraltar on three separate occasions in the 18th century and have since used a combination of military, diplomatic, economic, and plainly harassing tactics in an attempt to regain the Rock. More recently, after Gibraltarians approved a new constitution in 1969, Spain’s fascist dictator Francisco Franco closed the land border and blocked telecommunications between Spain and Gibraltar; the border was not fully reopened until 1985.