
New AI model to transform how asylum cases are judged

Transparency — In a few years' time, an AI model under development at the Faculty of Law may be able to predict how religion, nationality or education influence the outcome of an asylum case. The aim is to prevent bias in case processing.

Lawyers and caseworkers in Denmark will soon be able to use AI when making decisions in asylum cases.

This is the purpose of a five-year project titled Explainable Artificial Intelligence and Credibility in Asylum Decision-making, led by two professors.

One is Professor of Law Thomas Gammeltoft-Hansen at the Nordic Asylum Law and Data Lab and head of the Centre of Excellence for Global Mobility Law at the University of Copenhagen (UCPH). The other is Thomas Moeslund, one of the world’s leading researchers in ‘Explainable AI’, based at Aalborg University. Explainable AI is a set of tools and frameworks that make artificial intelligence systems transparent and comprehensible.

Together, they have secured DKK 12 million from the Villum Foundation to develop an AI that, according to Thomas Gammeltoft-Hansen, could open up entirely new ways of supporting asylum decision-making.

»Our AI is designed to provide transparency on how caseworkers or judges arrive at the most legally sound decisions in asylum cases. We also hope that newly hired lawyers and attorneys can use it as a training tool,« says Thomas Gammeltoft-Hansen.

The artificial intelligence is not intended to replace human decision-making, but to support lawyers and caseworkers — and to minimise bias in assessments.

Bias is a major challenge in AI

From his office on South Campus, Thomas Gammeltoft-Hansen explains that a core issue in asylum law is the verification of the claims of asylum seekers: the outcome often hinges entirely on the applicant’s credibility.

»The practice of processing asylum cases varies greatly from country to country, and there are no shared standards for assessing credibility. This makes it easy for individual, systemic, and institutional bias to creep in,« says Thomas Gammeltoft-Hansen.

AI is trained on past decisions, which introduces the risk of reproducing and reinforcing existing biases and distortions. For this reason, Thomas Gammeltoft-Hansen does not believe that AI can be used to make independent asylum decisions in an ethically responsible manner, at least not at present.

A study from Canada shows that the rate of granted residence permits ranged from 13.8 to 95.1 per cent, depending on the judge assigned to the case.

PROFILE

Thomas Gammeltoft-Hansen has a master’s degree in refugee studies from the University of Oxford (2003), a degree in political science from the University of Copenhagen (2005), and a PhD from Aarhus University (2009).

After completing his PhD, he was hired by the Danish Institute for International Studies (DIIS) in 2009.

From 2013, he served as research director at the Danish Institute for Human Rights and, during the same period, was appointed member of the Danish Refugee Appeals Board.

From 2016, he was research director at the Raoul Wallenberg Institute before joining UCPH in 2018.

According to Thomas Gammeltoft-Hansen, the problem is less pronounced in Denmark. Danish case processing does not show the gender bias seen in other countries, but religion can play a larger role in case outcomes. These are precisely the kinds of underlying, and often invisible, patterns the project aims to investigate and expose.

»Asylum law almost always involves a subjective assessment of how much the caseworker believes and weighs the applicant’s explanation. One way we can work with this issue is to acknowledge that the subjective element is ever-present. When training our AI, we have to recognise that we are working with a dataset in which bias is constantly embedded,« says Thomas Gammeltoft-Hansen.
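The article does not describe how such embedded bias would be measured, but a simple disparity check over past decisions gives a feel for the idea. The sketch below is purely illustrative: the dataset, the column names and the numbers are invented, not the project's Nordic data.

```python
import pandas as pd

# Invented toy data: one row per past decision (not the project's data).
df = pd.DataFrame({
    "decision_maker": ["A", "A", "B", "B", "B", "C", "C", "C"],
    "religion":       ["x", "y", "x", "y", "x", "y", "x", "y"],
    "granted":        [1, 0, 1, 1, 0, 0, 0, 1],
})

# Spread in grant rates across decision-makers: a wide range (such as the
# 13.8-95.1 per cent spread in the Canadian study) points to individual bias.
print(df.groupby("decision_maker")["granted"].mean())

# Grant rates broken down by an applicant attribute: systematic gaps here
# are the kind of hidden pattern the project aims to expose.
print(df.groupby("religion")["granted"].mean())
```

A real audit would of course control for case composition, since one decision-maker may simply receive stronger cases than another.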

AI for asylum must be subject to review

According to Thomas Gammeltoft-Hansen, he and Thomas Moeslund are using different technologies to address the challenges — and have already tested a pilot model.

»Based on the pilot results, I feel confident saying our AI model is world-leading in terms of its ability to accurately predict asylum decisions,« he says.

To achieve this, they had to use the latest developments in artificial intelligence:

»Thomas and I have worked to make AI models more transparent and controllable, but our field is still relatively new in the AI world. Most recent breakthroughs in AI have prioritised predictive power or output quality at the expense of understanding how algorithms arrive at their conclusions,« says Thomas Gammeltoft-Hansen.

»This phenomenon is often referred to as the black box problem in AI. But with the Explainable AI approach we use, we are going in a different direction: here, the algorithms help open the black box and give users insight into how large language models reach their conclusions,« he says.
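The project's actual model is not described in the article, but a toy example can show what »opening the black box« means in principle. The sketch below, a hypothetical illustration and not the project's method, trains a deliberately transparent linear text classifier on invented case summaries and reads off how much each word pushed a single prediction towards »granted« or »rejected«.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented one-line case summaries with invented outcomes (1 = granted).
texts = [
    "applicant gave a consistent and detailed account of persecution",
    "account contained contradictions about dates and route",
    "detailed testimony supported by documents, consistent throughout",
    "vague testimony with contradictions about dates",
]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Explain one prediction: with a linear model, each word's contribution to
# the decision score is simply (its count) x (its learned weight).
case = "consistent account, but contradictions about dates"
counts = vectorizer.transform([case]).toarray()[0]
contributions = counts * model.coef_[0]

# Print the words that moved this case, strongest influence first.
words = vectorizer.get_feature_names_out()
for word, c in sorted(zip(words, contributions), key=lambda t: -abs(t[1])):
    if c:
        print(f"{word:15s} {c:+.3f}")
```

Explaining a large language model is far harder than explaining a linear classifier, which is exactly why Explainable AI is its own research field, but the goal is the same: a per-case account of what drove the conclusion.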

In legal contexts, this approach is absolutely necessary, according to Thomas Gammeltoft-Hansen.

»When using algorithms to process sensitive personal data, or when AI supports administrative or legal decisions, there must be an individual and well-reasoned justification. So, opening the black box is crucial,« he says.

Secured access to unique datasets

Thomas Gammeltoft-Hansen has always been interested in how legal practices evolve over time. That’s also what drew his attention to the growing use of AI systems — especially in the United States.

But to conduct research in this area, he needed access to a large dataset of asylum decisions — something that is notoriously hard to obtain in an international context.

»Together with Nordic colleagues, we gained access to extensive datasets from the Danish Refugee Appeals Board, the Norwegian Directorate of Immigration, and the Swedish Migration Agency. Today, we have access to over 800,000 decisions, and our data is truly unique,« says Thomas Gammeltoft-Hansen.

What makes it unique is that these are so-called deep datasets. They are not merely short summaries of decisions — they include information on what asylum seekers were asked during processing, how they responded, and what arguments the attorneys presented, according to Thomas Gammeltoft-Hansen.
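The article does not publish the actual schema, but a »deep« decision record of this kind might look roughly like the hypothetical structure below; every field name is a guess based on the description above, not the Nordic agencies' format.

```python
from dataclasses import dataclass, field

@dataclass
class InterviewExchange:
    question: str  # what the asylum seeker was asked during processing
    answer: str    # how the asylum seeker responded

@dataclass
class DecisionRecord:
    case_id: str
    exchanges: list[InterviewExchange] = field(default_factory=list)
    attorney_arguments: list[str] = field(default_factory=list)
    outcome: str = ""  # e.g. "granted" or "rejected"
```

It is this interview-level detail, rather than short outcome summaries, that makes the dataset deep enough to study how a decision was reached, not just what it was.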

»It’s only because of our strong collaboration in the Nordic region that this has been possible. It has taken a long time to build trust and demonstrate that we can handle data securely and responsibly. This kind of access would not have been possible in many other parts of the world«.
