Artificial Humanitarian Intelligence

It’s impossible to work in science and technology without constantly hearing about AI. But how do we create a truly humanitarian artificial intelligence that helps make the world a better place?

Sean McDonald

In April of this year, the UNICEF Innovation Fund announced its first round of start-up investments, half of which focused on applying artificial intelligence to humanitarian challenges. This funding is a sign of the times—artificial intelligence is one of the most exciting (and hyped) technologies in the world, so it’s natural to want to apply it to our most pressing and complicated problems. At the same time, the complexity and vulnerability inherent in humanitarian interventions raise significant questions about the practical impact, ethics, and legality of using artificial intelligence in this way. AI can create efficiencies as well as introduce troubling biases into systems, both of which affect power relationships in unstable environments.

The potential of an artificial humanitarian intelligence understandably sparks a significant amount of enthusiasm—and even more questions about what it means to be a humanitarian. But before either field can begin answering those questions, each will need to work on building something equally challenging: a shared understanding. 

Artificial intelligence and humanitarianism are both changing at a rate that is hard to track, even for experts, let alone the institutions trying to shape each industry. AI began as a field in the 1940s with aspirations to model human cognition in machines, and has since splintered into a wide range of disciplines. The actual term “artificial intelligence” was coined in 1956, but it drew on a range of work from earlier luminaries, like Alan Turing. For our purposes, the aspiration to model human cognition in machines is the definitional characteristic of the field. 

Formal humanitarianism has its roots in the early nineteenth century, and was originally conceived as a drive to create institutions and legal frameworks that limit and respond to the atrocities of war. There has been a significant amount of definitional and mission creep, however, as more organizations have developed the capacity to deploy operations into various situations. Since its origin as an idea, humanitarianism has grown to address many other kinds of disasters and institutional failures. In the contemporary era, both humanitarianism and artificial intelligence are recognizing the tremendous potential benefits of, and the new problems posed by, our increasingly machine-connected world. Both are also finding their processes and principles drawn into many more contexts and interventions than originally imagined.

As the U.S. Office of Science and Technology Policy described it in a 2016 report, artificial intelligence is distinguished by the role that data plays in the way it learns and evolves. At a high level, it’s useful to distinguish between three types of artificial intelligence: narrow, where the artificial intelligence makes decisions in a defined system, like the games chess or Go, and develops proficiency by testing successful outcomes over millions of iterations; machine/deep learning, where an artificial intelligence interprets the rules of a system based on a large dataset, like image recognition, and then applies those rules to new datasets or predictions; and general, where artificial intelligence is able to imitate human-level intelligence across many different tasks. Most experts agree that the first two types are well developed and in some cases already deployed, whereas the third may be decades off. Two practical observations follow from the current state of artificial intelligence: data must be both structured and complete for AI to realize its full potential, and even complete, structured data carries significant risks of replicating historical biases and values.
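To make the second mode concrete, here is a minimal sketch of learning rules from data and applying them to unseen cases. Everything in it is invented for illustration: the toy dataset, the labels, and the nearest-centroid rule stand in for far larger datasets and models, and do not represent any particular deployed system.

```python
# A minimal sketch of the "machine learning" mode described above:
# the rules are not hand-written; they are inferred from labeled data
# and then applied to new examples. All data here is invented.

def train_centroids(samples):
    """Infer a simple 'rule' (a per-label centroid) from labeled 2-D points."""
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def classify(point, centroids):
    """Apply the learned rule to a new point: the nearest centroid wins."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (px - centroids[lbl][0]) ** 2
                             + (py - centroids[lbl][1]) ** 2)

# Toy training data: two clusters with made-up severity labels.
training = [((0, 0), "low"), ((1, 1), "low"),
            ((8, 9), "high"), ((9, 8), "high")]
model = train_centroids(training)
print(classify((0.5, 0.5), model))  # → low
```

The important property for the argument above is visible even at this scale: the model is only as good as its training data, so gaps or skews in that data become gaps or skews in every subsequent prediction.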

As with artificial intelligence, the term “humanitarianism” was initially used to refer to something specific—in this case, the international institutions and legal frameworks built to protect the values of humanity, independence, impartiality, and neutrality during wartime. Over the course of the last 150 years, however, the term has grown in use and application, often to include a broad group of state, nonprofit, and corporate actors, whose defining characteristic is that they seek to alleviate suffering in line with humanitarian principles. As a result, the field of humanitarianism is unbundling, not only in the groups it refers to, but also in the contexts in which it’s used, the problems it may address, the groups of people it benefits, and the tools it has available to achieve humanitarian goals. The characteristics of each aspect of that unbundling are important—not only for the way we understand what modern humanitarianism means, but for how we might train artificial intelligence to help. The most important practical observations to make, based on the current state of humanitarianism, are that the field is increasingly defined by a set of contextually interpreted principles, and that it is decreasingly manageable through single institutions, or even federations of institutions.

So what does that mean for the potential of artificial intelligence in humanitarian response? It points to at least three key issues and related next steps. 

DATA. One of the critical challenges in humanitarian response is having a sufficient amount of structured, high-quality data to inform action. Digitization is increasing the amount of available data, as well as the complexity of data structure, quality, and completeness issues. In order to build an effective artificial humanitarian intelligence, we’ll need to invest in harmonizing data structures, developing legal and technical infrastructure for sharing data, and performing independent verification of contextual variables and completeness. There’s initial progress in this area, represented by efforts like the Humanitarian Data Exchange, but there’s a long way to go to get to the scale necessary to train artificial intelligence.
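A small sketch can show what "harmonizing data structures" means at the record level. The agencies, field names, and values below are all hypothetical: two sources report the same kind of information under different labels, and a shared mapping translates both into one schema before anything is aggregated or modeled.

```python
# Hypothetical sketch: two sources report comparable facts under
# different field names; a shared mapping normalizes both onto one
# schema. All source names and fields here are invented.

FIELD_MAP = {
    "agency_a": {"ppl": "people_affected", "loc": "district"},
    "agency_b": {"persons": "people_affected", "admin2": "district"},
}

def harmonize(record, source):
    """Rename one source record's fields to the shared schema."""
    mapping = FIELD_MAP[source]
    return {mapping.get(key, key): value for key, value in record.items()}

raw_a = {"ppl": 1200, "loc": "North"}
raw_b = {"persons": 450, "admin2": "East"}
merged = [harmonize(raw_a, "agency_a"), harmonize(raw_b, "agency_b")]
# merged records now share the keys "people_affected" and "district"
```

Real harmonization efforts also have to reconcile units, time windows, and definitions (who counts as "affected"?), which is why the verification work mentioned above matters as much as the renaming.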

GOVERNANCE. Both humanitarianism and artificial intelligence operate based on clear definitions of values—both in the absolute sense and in context. The ways that those values are defined in objective functions, and weighed against each other as competing priorities in resource-constrained contexts, are fundamentally human problems that will require human solutions. There are a number of efforts to articulate digital humanitarian values, such as the International Committee of the Red Cross’s Handbook on Data Protection in Humanitarian Action and the Harvard Humanitarian Initiative’s Signal Code. But such efforts will need to go a lot further, and include active interpretation and operationalization by a much greater number of actors, before they’re clear or tested enough to guide an artificial intelligence.

EXPERIMENTATION. Neither artificial intelligence nor humanitarian response has a systematic understanding of how to define or achieve the perfect end goal. Both are trying to solve the difficult, contextual problems that come from trying to structure, interpret, and optimize a response to an incredibly broad range of circumstances. Not being able to clearly articulate the ideal outcome, however, raises significant concerns about how to train a narrow artificial intelligence, or how to assemble a sufficiently complete dataset for machine learning, especially in vulnerable contexts. Instead, both communities will gain from building shared experimentation infrastructure, toward developing a common base of data sources, variable indicators, and ethical values from which to build an artificial humanitarian intelligence. At present, there are a number of public-private partnerships, academic institutions, and ethical artificial intelligence labs tackling pieces of this, but there aren’t any institutions building the foundations of a field that focuses specifically on humanitarian values.

The worth of humanitarianism, like that of artificial intelligence, lies in its ability to set boundaries around, and optimize for, the values that we hold most dear. Until we can confidently articulate what those are in the form of data, the promise of improvements that artificial intelligence could bring to humanitarianism will remain difficult to predict.