How is artificial intelligence (AI) affecting conflict and its resolution? Peace practitioners and scholars cannot afford to disregard ongoing developments in AI-based technologies, both for ethical and for pragmatic reasons. In this blog, I explore AI as an evolving field of information-management technologies that is changing both the nature of armed conflict and the ways we can respond to it. AI encompasses the use of computer programmes to analyse large amounts of data (such as online communication and transactions) in order to learn from patterns and predict human behaviour on a massive scale. This is potentially useful for managing corporations and shaping markets, but also for gaining political influence, conducting psychological warfare and controlling populations.
I argue that peace practitioners need to engage with AI proactively rather than reactively, so that they can deal strategically with AI-related issues in peace processes. To the extent that practitioners use these methodologies themselves, they also need to develop ethical, constructive and transparent uses of AI while constantly reflecting on the possible negative consequences of these methods. New technologies will never make the human-to-human interaction at the heart of mediation irrelevant, and their use is limited by the mediator’s need to gain the consent and trust of the conflict parties. At the same time, mediators should consider if and when AI-based technologies are being used in the environments where they operate, and how this affects their own role and function in the wider context.
AI is Changing Conflict Dynamics
As AI is a fluid technology with near-unlimited reach, conflict parties’ use of it is extremely difficult to monitor, yet it can have profound consequences for the conflict dynamics that peace and ceasefire agreements must address.
In military and counter-insurgency settings, AI technology can alter the costs of conflict, increase the perceived risk of surprise attacks, enhance warring parties’ access to intelligence, and shift public opinion about involvement in armed conflicts. It also turns new actors, such as technology companies, into stakeholders in conflict or post-conflict monitoring processes, owing to their technical expertise in the methodologies applied. As AI requires access to large and consistent data, authoritarian states with fewer restrictions on data gathering are likely to benefit more from AI in warfare than non-state actors. Moreover, we are likely to witness conflicts over who controls the gathering of and access to online data.
One of the most central implications of AI for conflict dynamics concerns the changing conditions for information flows. AI tools might be used by one conflict party to reveal facts about the other to which it would otherwise not have access. Meanwhile, mediation efforts often aim to address information asymmetries, which arise when conflict actors have imperfect information about the other side’s resources and resolve. It follows that mediators need an accurate picture of the information available to the conflict parties, including whether they are using AI-based tools to access intelligence.
Peace practitioners need to be aware of these developments in order to reflect on the changing political will of conflict parties to engage in peace negotiations, support parties in addressing the relevant issues in peace agreements, and help them design implementable agreements that assuage their security concerns.
AI in Conflict Resolution
In addition to AI affecting conflict dynamics in and of itself, some suggest that mediators may be able to use AI as a tool to increase the efficiency of their work. A recent study on the possible use of AI by mediators clustered its potential applications into three areas: one, deploying AI instruments for knowledge management and background research (e.g. finding patterns in global data on conflict, ceasefires and peace agreements); two, improving practitioners’ ability to understand the specific conflict and the actors involved (e.g. through natural language processing-based “sentiment analysis” of social media data); three, seeking to make the peace process more inclusive by gauging the views and opinions of the wider population.
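To give a concrete sense of what “sentiment analysis” involves, here is a minimal, lexicon-based sketch in Python. It is purely illustrative: the word lists and example posts are invented, and operational systems rely on trained language models rather than hand-written lexicons like this one.

```python
# Minimal, illustrative sketch of lexicon-based sentiment scoring.
# The tiny lexicon and example posts are hypothetical; real NLP
# systems use trained language models, not hand-written word lists.

POSITIVE = {"peace", "agreement", "hope", "ceasefire", "dialogue"}
NEGATIVE = {"attack", "fear", "violence", "betrayal", "shelling"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: share of positive minus negative words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = [
    "Hope the ceasefire holds and dialogue continues",
    "More shelling last night, people live in fear",
]
for p in posts:
    print(f"{sentiment_score(p):+.2f}  {p}")
```

Even this toy version makes the method’s dependence on built-in assumptions visible: the score is only as good as the categories the analyst has decided to count.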
Of the three potential uses of AI by mediators, the first seems the most likely and, under certain parameters, the least ethically problematic. The latter two are severely limited by the definitional core of mediation: seeking the consent of the conflict parties. The parties’ consent is essential both ethically and practically, as it is they who have to live with and implement any peace agreement, not the third party or any regional or international actor. Research indicates that conflict solutions imposed from outside are less likely to hold.
It is unlikely that conflict parties will consent to a third party’s use of AI if they see it as undermining their interests. Systematically analysing the online communication of a conflict-affected population, for example, can be viewed as a form of mass surveillance and as such undermine confidence in mediators. Furthermore, the AI methods being developed can be used for political control and manipulation. From a conflict party’s perspective, the way a third party handles this question is likely to be read as an indicator of whether the third party is working with their consent (i.e. is a mediator) or is engaging in power politics. Mediation requires discretion and trust, both of which risk being undermined as AI-fuelled surveillance technologies become readily available.
Despite these ethical and practical considerations, and although public information on the use of AI technologies in active conflicts and peace processes is scarce, it is beyond doubt that they are being used. In Somalia, the Innovation and Technology Unit of the UN’s Field Technology Section has launched a Big Data Analytics and Digital Media Support project, which aims to use “a variety of analytics tools such as Talkwalker, Meltwater, as well as various social media analysis techniques (sentiment analysis, data mining processes)” to inform UN operations. On an operational level, the UN DPPA’s Middle East Division (MED) has already been working with a machine learning-based system for detecting and analysing public opinion in the Arab world.
While actors such as UN agencies are deploying AI methodologies to inform their operations, it remains to be seen what AI-based tools can substantially contribute to mediators’ understanding of conflicts. Beyond the ethical constraints, it is fair to ask whether deep-seated preferences in conflict situations are in fact constituted in a way that allows them to be detected through “sentiment analysis” of social media data. There is a lack of theory and empirical evidence on what we can realistically learn from sentiment analysis and how this is relevant to a peace process.
Similarly, AI’s promise to reach parts of the population not directly accessible to mediators may not hold if it gives undue weight to active social media users who are not representative of the overall population, as the sketch below illustrates. While it may therefore still be possible to use AI for general knowledge management and research, this seems a more mundane development, hardly worthy of the hype AI is currently enjoying.
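The representativeness problem is easy to demonstrate. In the hypothetical sketch below, one prolific user dominates a naive per-post average; aggregating per user first counts each voice once, although even that cannot correct for everyone who is absent from social media altogether. All data is invented.

```python
# Illustrative sketch of why raw social media averages can mislead:
# heavy posters dominate a naive per-post average. A simple mitigation
# is to aggregate per user first. All data below is invented.

from collections import defaultdict

# (user_id, sentiment_score) pairs: user "a" posts far more than others.
posts = [("a", -0.9)] * 50 + [("b", 0.4), ("c", 0.6), ("d", 0.5)]

# Naive per-post average: dominated by the single prolific user.
per_post = sum(s for _, s in posts) / len(posts)

# Per-user average: each person counts once, however often they post.
by_user = defaultdict(list)
for uid, s in posts:
    by_user[uid].append(s)
per_user = sum(sum(v) / len(v) for v in by_user.values()) / len(by_user)

print(f"per-post mean: {per_post:+.2f}")   # ≈ -0.82, driven by one user
print(f"per-user mean: {per_user:+.2f}")   # ≈ +0.15, each voice counted once
```

The per-user correction is itself only a crude fix: it says nothing about the people who never post at all, which is often the larger share of a conflict-affected population.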
Outlook
Given the controversy surrounding AI-based analysis and the grounds for caution, it would be understandable to discount the relevance of these changes to mediation, which remains a ‘human-centric’ trade from the practitioner’s perspective. However, even if mediators do not themselves have access to or use AI-based technologies, they will need to contend with their use in the environments where they operate. While these developments will likely be more pronounced in some contexts than in others, they may become more common as the technology becomes cheaper.
Today, the field of AI and technologies such as sentiment analysis are still in their infancy, and it is too early to fully assess their implications for armed conflict and conflict resolution. There is no need to be overly pessimistic about consequences that are yet to transpire. At the same time, the scene does seem set for an unhealthy alliance between AI warfare, counter-insurgency tools and conflict resolution approaches if norms for the ethical, constructive and transparent use of AI are not developed at both global and local levels.
Mediation Perspectives is a periodic blog entry provided by the CSS’ Mediation Support Team and occasional guest authors. Each entry is designed to highlight the utility of mediation approaches in dealing with violent political conflicts. To keep up to date with the Mediation Support Team, you can sign up to their newsletter here.
About the Author
Marta Lindström is a master’s student in Peace and Conflict Studies at Uppsala University, Sweden, and a consultant researcher.
For more information on issues and events that shape our world, please visit the CSS website.