‘Smart’ border technology harming migrants – UN
The use of ‘smart’ border technology by governments and agencies is having negative impacts on refugees and exacerbating their pre-existing vulnerabilities, according to a new report from the United Nations.
The report says automated decision-making processes add risks such as bias, error, system failure and theft of data, all of which can result in greater harm to migrants and their families.
“A rejected claim formed on an erroneous basis can lead to persecution,” the report says.
Compiled by the UN’s Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, E Tendayi Achiume, the report says governments and UN agencies are developing and using emerging digital technologies in ways that are uniquely experimental, dangerous and discriminatory in the border and immigration enforcement context.
“By so doing, they are subjecting refugees, migrants, stateless persons and others to human rights violations, and extracting large quantities of data from them on exploitative terms that strip these groups of fundamental human agency and dignity,” it says.
The report claims digital technologies are being deployed to advance the xenophobic and racially discriminatory ideologies that have become prevalent in some countries.
It says this is in part due to widespread perceptions of refugees and migrants as threats to national security, and that in other cases discrimination and exclusion occur “as a result of the pursuit of bureaucratic and humanitarian efficiency without the necessary human rights safeguards”.
The report adds that the huge profits associated with border securitisation and digitisation are also part of the problem.
Titled Racial Discrimination and Emerging Digital Technologies: A Human Rights Analysis, the report recommends an equality-based approach to human rights governance of emerging digital technologies, with a focus on racial discrimination resulting from the design and use of these technologies.
The Rapporteur urged state and non-state actors to move beyond “colour-blind” or “race neutral” strategies that ignore the racialized and ethnic impact of emerging digital technologies, and instead to confront directly these forms of discrimination.
The report says new technologies are helping border agencies to stop and control the movement of migrants while ignoring people’s fundamental right to seek asylum.
These technologies also collect data without the consent of migrants – practices that in other circumstances would likely be criminal if deployed against citizens.
“This is a good example of how algorithmic technology more generally can be influenced through the biases of its creators to discriminate against the lower classes of society and serve the privileged ones,” the report says.
“In the case of refugees, people who have had to flee their homes because of war are now being subjected to experiments with advanced technology that will increase the risks carried by this already vulnerable population,” it says.
The report recognises many governments and UN agencies dealing with refugees increasingly prefer to employ tech-based solutions, for example to assess people’s claims for aid, cash transfer and identification.
Refugees can also benefit from the increasing use of digital technology: smartphones and social media can help them connect with humanitarian organisations and stay in touch with families back home.
But the use of such technology also creates a power imbalance and a loss of rights. And refugees do not have the same political agency as domestic citizens to organise and oppose government actions, the report says.