This article was originally published by IPI Global Observatory on 13 June 2017.
Central Europe saw a major influx of refugees fleeing Syria in 2015. With the region’s politicians initially overwhelmed and claiming the situation was unforeseeable, civil society had to step into the breach on humanitarian assistance. Eventually, politicians did propose a broad range of solutions to cope with the phenomenon, typically informed by their political persuasions. Naturally, these were widely debated, and none could be categorically proven effective.
But what if there were a way to evaluate the proposed solutions? What if the means existed to analyze the challenges faced and provide support for decision-makers? Existing computer simulation models are, in fact, quite capable of doing just that in a range of fields. Though these capabilities are not yet fully exploited, the situation appears to be changing.
One field—and a big one at that—starting to adopt large-scale computer modeling is healthcare. With many national health insurance programs facing the challenges of demographic shifts (an aging population and fewer contributors to the pool of available funds), the quest for cost efficiency has opened the door to health technology assessment (HTA).
Perhaps the most prominent adopter is the United Kingdom’s National Health Service (NHS). It devotes 1.5% of its annual budget to HTA and incorporates two of the most prominent players in the field: the National Institute for Health Research and the National Institute for Health and Care Excellence. The former primarily distributes research funds, while the latter develops guidelines for the use of health technologies within the NHS and conducts research itself. This research focuses primarily on meta-studies (of clinical studies) to evaluate the effectiveness of therapies and develop recommendations on which services should be included in, and paid for by, the health system. Accordingly, the focus is more on “traditional” methods such as statistics.
Smaller players, such as Austria’s Decision Support for Health Policy and Planning, meanwhile, attempt to evaluate the quality of healthcare services on the basis of data collected by the healthcare system itself. They also use computer simulation to evaluate the effectiveness of measures (e.g. vaccination) and provide their findings to support decision makers. The simulation approaches used are mainly agent-based models, system dynamics, and Markov chains.
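To give a flavor of the Markov-chain approach, the sketch below follows a cohort of patients through three health states under invented transition probabilities; it is a purely illustrative toy, not drawn from any actual assessment.

```python
import numpy as np

# Minimal cohort Markov model. All transition probabilities are invented for
# illustration and are not taken from any real health technology assessment.
states = ["healthy", "sick", "dead"]

# Annual transition probabilities without and with a hypothetical vaccine.
no_vaccine = np.array([
    [0.90, 0.08, 0.02],   # from healthy
    [0.20, 0.70, 0.10],   # from sick
    [0.00, 0.00, 1.00],   # dead is an absorbing state
])
with_vaccine = no_vaccine.copy()
with_vaccine[0] = [0.96, 0.03, 0.01]  # vaccination lowers the risk of falling ill

def sick_years(transitions, years=20):
    """Expected person-years spent sick per member of the starting cohort."""
    cohort = np.array([1.0, 0.0, 0.0])  # everyone starts healthy
    total = 0.0
    for _ in range(years):
        cohort = cohort @ transitions
        total += cohort[1]
    return total

print("sick-years without vaccine:", round(sick_years(no_vaccine), 2))
print("sick-years with vaccine:   ", round(sick_years(with_vaccine), 2))
```

Comparing the two runs yields a rough estimate of the benefit of the hypothetical vaccination program, the kind of figure a cost-effectiveness analysis would then weigh against its price.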
HTA shows what computer algorithms can and should be for policymakers: a decision-support tool, not an answer in itself. It remains up to humans to put results into perspective and context, including ethically. Computers, for example, cannot be left to decide whether or not an expensive therapy is incorporated into a public healthcare program. That is something that should be subject to public debate.
The question then arises of whether and how mathematical modeling could support and benefit decision-making in peace and security issues. Returning to the refugee challenge of 2015, the argument about being caught by surprise was clearly only a politically expedient excuse. After all, the United Nations High Commissioner for Refugees closely monitors the numbers and movements of refugees around the world. By consulting this data, politicians could have roughly determined that people leaving Syria were heading toward Central Europe, and approximately when they would arrive.
Nonetheless, harnessing the power of mathematical simulation in this case could have greatly improved on that largely intuitive information. At the Vienna University of Technology, the Computational Complex Systems group (COCOS) did indeed develop a mathematical model to test the question of what would have happened if politicians had acted differently toward the 2015 refugee situation. I oversaw the implementation and data assessment capabilities of the model, which began to produce its first results in early 2016.
It must first be acknowledged that it is difficult and delicate to develop simulation models such as this, which depict “fuzzy” or imprecise processes such as human behavior. Consequently, the greatest care must be applied, in close collaboration between domain experts (i.e., those people who are experts in the particular field, independent of the software/simulation domain) and simulation experts, in order to produce reliable results. The model developed at COCOS is “only” a proof of concept for the methodological approach, built without active input from domain experts. This matters because models are, by definition, a simplification of the real world. It is the job of simulation experts to build them as simply as possible, so that their “parametrization” (the process of finely adjusting all settings of the model) and obtaining the required input data will not be too costly or even impossible. If, on the other hand, the model is too simple, the results will be useless. Creating a good model therefore generally requires the know-how of domain experts to identify and incorporate all relevant factors while omitting those that are unimportant.
Despite these caveats, the results produced from the refugee modeling were solid—and interesting—enough for the Austrian Broadcasting Corporation (ORF) to highlight them at length before the summer of 2016, when the next wave of flight from the Middle East was expected. The most interesting finding was in response to the scenario, “how will the closure of the ‘Balkan Route’ affect migration patterns during the upcoming summer?” (This simulation can be viewed on the ORF website.)
The results were a stern rebuke to those in favor of closing the route: the Balkans’ closed borders simply diverted the migration routes elsewhere, and refugees instead chose the much more dangerous path across the Mediterranean Sea. History ultimately proved that prediction right: in 2016, the route across the Mediterranean was more heavily frequented than ever before, with “illegal border crossings” between Libya and Italy alone exceeding all other illegal entries into the European Union.
As mentioned, this model’s results must be taken with a pinch of salt. But they still show the potential for such applications, primarily the ability to test scenarios that otherwise cannot be evaluated, whether because of cost, ethical issues, logistics, or other constraints. Such models can also be reused with minimal additional input. In the case of the refugee model, the data was easy to obtain, as it consisted of more or less publicly available figures such as gross domestic products, the Fragile States Index, asylum application numbers, acceptance rates of asylum applications, and numbers of refugees already in the affected countries.
And refugee migration patterns are only one possible application of computer simulation for peace and security. The limits are defined by human imagination, the availability of data, and the formalization of rules to input into the model. For example, while the simulation of the spread of disease epidemics has specific statistical rules to build upon (infection rates, incubation times, effectiveness of vaccination, etc.), social behavior (cooperation in conflict, individual decisions, etc.) is much less predictable, and hence such models are potentially less reliable. Further, the nature of the problem determines the modeling approach(es) that can be applied.
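To make the epidemic case concrete, the minimal sketch below uses a textbook SIR (susceptible-infected-recovered) model in discrete time; the parameter values are invented placeholders for the infection and recovery rates that would normally be estimated from clinical and surveillance data.

```python
# Textbook SIR model in discrete time. Parameter values are invented for
# illustration; in practice they come from clinical and surveillance data.
def simulate_sir(population=1_000_000, infected=10.0, beta=0.3, gamma=0.1,
                 vaccinated_share=0.0, days=180):
    """Return the total number of people ever infected during the outbreak."""
    s = population * (1 - vaccinated_share) - infected  # susceptible
    i = infected                                        # currently infected
    r = population * vaccinated_share                   # vaccinated start immune
    ever_infected = infected
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        ever_infected += new_infections
    return round(ever_infected)

print("no vaccination:", simulate_sir())
print("60% vaccinated:", simulate_sir(vaccinated_share=0.6))
```

There is no comparably compact parameter table for cooperation in conflict or individual migration decisions, which is why the refugee model had to take a different route.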
For the refugee pattern, a heavily enhanced “gravity model” (a concept derived from physics, describing the attraction between bodies) was chosen for two reasons: 1) limited access to data and 2) the unknown behavior of individuals. The latter excluded the use of a “bottom-up” approach, which would involve creating a model defined by the smallest entities that compose it and their interactions. A “top-down” approach was chosen instead.
To describe patterns of migration along a multitude of paths and with many “bodies of attraction” (i.e., countries), the model needed major additions. This was achieved by setting up a graph network in which the model defines the attracting or repelling factors along the edges of the graph, in discrete time. This setup could be parametrized and filled with input data on refugee movements over time from official UNHCR statistics and public sources such as media reporting. The approach worked well in this case; for a different problem, a different approach might be needed.
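To illustrate the general idea, though emphatically not the COCOS model itself, the toy sketch below splits a group of migrants across a few hypothetical destination countries using a gravity-style weight (attraction divided by squared route distance) and shows how closing one border diverts, rather than stops, the flow.

```python
# Toy "gravity on a graph" sketch. Countries, attraction scores, and route
# distances are hypothetical; this is not the COCOS model or its data.
countries = {
    # name: (attraction score, route distance in arbitrary units)
    "A": (0.9, 3.0),
    "B": (0.6, 2.0),
    "C": (0.3, 1.0),
}
border_open = {"A": True, "B": True, "C": True}

def flows(migrants, countries, border_open):
    """Split one time step's migrants across destinations, gravity-style:
    attraction divided by squared distance, zero if the border is closed.
    Assumes at least one route remains open."""
    weights = {
        name: (attraction / distance ** 2 if border_open[name] else 0.0)
        for name, (attraction, distance) in countries.items()
    }
    total = sum(weights.values())
    return {name: migrants * w / total for name, w in weights.items()}

print(flows(10_000, countries, border_open))  # all routes open
border_open["B"] = False                      # close one route
print(flows(10_000, countries, border_open))  # the flow is diverted, not stopped
```

In the actual model, of course, the parametrization rested on the official statistics and public figures described earlier rather than on invented scores.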
Future Challenges
To a hammer, everything looks like a nail. The same holds true for a lot of “simulation experts.” Many are only trained in, and capable of, applying a single simulation approach. This is often agent-based simulation, a “bottom-up” technique in which a system is described by defining its entities (agents), their inherent behavior, and their interactions. An example of this would be modeling epidemic spread within a heterogeneous population. If the problem calls for an agent-based approach, then it can work well; if not, the outcome will be unsatisfactory. This is one of the many reasons for the poor outcomes of simulation projects to date.
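For contrast with the top-down sketches above, a bottom-up, agent-based toy version of epidemic spread might look like the following; the heterogeneous contact rates and the per-contact infection probability are invented for illustration.

```python
import random

# Minimal agent-based sketch of epidemic spread in a heterogeneous population.
# Contact rates and the per-contact infection probability are invented.
random.seed(1)

class Agent:
    def __init__(self, contacts_per_day):
        self.contacts_per_day = contacts_per_day  # heterogeneity between agents
        self.infected = False

population = [Agent(contacts_per_day=random.randint(1, 20)) for _ in range(2_000)]
population[0].infected = True          # one initial case
infection_probability = 0.05           # per contact with an infected agent

for _ in range(60):                    # simulate 60 days
    currently_infected = [a for a in population if a.infected]
    for agent in currently_infected:
        # each infected agent meets a random subset of the population
        for other in random.sample(population, agent.contacts_per_day):
            if not other.infected and random.random() < infection_probability:
                other.infected = True

print("infected after 60 days:", sum(a.infected for a in population))
```

The point is not the specific numbers but the structure: behavior is attached to individual agents, and the system-level outcome emerges from their interactions.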
Another common problem is the availability of data. A simulation model requires not only input data, but also data for defining its parameters. The more complex a model becomes, the more data is necessary. If this data is not available, a different modeling approach might be better suited, or a simpler question might have to be posed. It makes no sense to build a sophisticated model when the outcome will be of no use. Nor is it the most efficient allocation of time, money, and labor to simulate everything that can be simulated; more effective answers may already exist.
Computer simulation should be seen as a tool. As with every tool, it can only be used efficiently by those who know how to use it. This requires a basic understanding of computer simulation (and its limitations) by domain experts, a basic understanding of the problem by the simulation experts, a common “language” between these two groups, and trust in each other’s capabilities. This mutual understanding and trust will be necessary to tackle the challenges of our increasingly interconnected and complex world.
After they were “surprised” by the number of people escaping the horrors of war in 2015, European politicians chose to close the Balkan route for refugee migration. As shown by the COCOS conceptual model and later confirmed by real events, this did not stop refugees from attempting to apply for asylum within the EU. The measure, born of a misreading of the situation, turned out to be not only ineffective but also extremely costly: it has undermined, and continues to undermine, the EU’s Schengen Agreement on free movement, and it continues to cost thousands of human lives.
An area with one of the longest histories of mathematical simulation is weather forecasting. While it took many years before forecasts became reliable and trusted, today we would not want to live without them. It will likewise take some time until computer simulation models become accepted as support for decision-making on peace and security policies. A lack of trust in mathematical models, combined with continued blind trust in existing formulae, will be the biggest obstacle.
About the Author
Štefan Emrich is a data analysis, visualization, and communication specialist and a consultant to the International Peace Institute.