Research exposes the dangers of using AI to process asylum claims
The increased use of AI and algorithms to process refugee and asylum claims across the world is fraught with dangers, researchers say.
Britain has said it will use artificial intelligence to speed asylum decisions, arming caseworkers with country-specific advice and automated summaries of key interviews.
The United States is ramping up the use of surveillance and AI tools – from facial recognition to robotic patrol dogs – in its crackdown on illegal immigration.
And Germany is now using dialect recognition to identify asylum seekers’ countries of origin.
In many European countries, authorities are extracting data from asylum seekers’ mobile phones, for instance to automatically generate reports that check whether applicants are telling the truth about the route they took to reach the country.
But researchers at the Algorithmic Fairness for Asylum Seekers and Refugees project, based at the UK’s University of Warwick, examined the different ways immigration authorities across Europe are using, or have started piloting, such algorithms.
They say there are alarming signs.
Researcher Dr Derya Ozkul said it was good that governments were trying to find solutions to speed up the asylum process and give people decisions faster.
“But it’s a temporary fix. They are not actually solving the problem. It can actually cause lots of problems, like application rejections for people whose claims were legitimate,” Dr Ozkul said.
“With technologies like automated summaries of statements, it’s quite dangerous, and it may not be as effective as they say because if the summary doesn’t really reflect the nuances of what the person has said in the interview, then it will lead to more delays in the appeal process.
“Britain also made it compulsory for all migrants in the country to digitalise their status, and a lot of people faced technical problems on the system. So basically, anything that goes wrong with your digital status will impact your access to employment, housing and almost everything else,” she said.
The researchers say citizens should also be worried about the rise of AI in decisions about access to services and legal matters.
“Migrants are the population that governments can play with because they don’t have many rights. They don’t have the same capacity to resist as citizens would. This (AI) is being experimented on with migrants,” Dr Ozkul said.
“We need to make people understand that these same technologies can be applied to them as well in the very near future, like in courts, GP appointments or anywhere else.
“They can have an impact on their lives too. So, we have to think about what these tools are doing in general as a society. Even if they don’t care about asylum seekers, I think everyone should be thinking about what this tech is doing,” she said.