Governments' use of AI in the immigration and refugee system needs oversight.

By Petra Molnar
Special Report: Immigration Law

In an effort to bring innovations to its immigration and refugee system, Canada has begun using automated decision-making to help make determinations about people's applications.

A report released in September 2018 by the University of Toronto's International Human Rights Program and the Citizen Lab at the Munk School of Global Affairs and Public Policy finds that Canada is experimenting with using artificial intelligence (AI) to augment and replace human decision-makers in its immigration and refugee system. This experimentation has profound implications for people's fundamental human rights.

Use of AI in immigration and refugee decisions threatens to create a laboratory for high-risk experiments within an already highly discretionary system. Vulnerable and under-resourced communities such as non-citizens often have access to less robust human rights protections and fewer resources with which to defend those rights. Adopting these technologies in an irresponsible manner may only serve to exacerbate these disparities.

The rollout of these technologies is not merely speculative: the Canadian government has been experimenting with their adoption in the immigration context since at least 2014. For example, the federal government has been developing a system of "predictive analytics" to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. The government has also quietly sought input from the private sector in a 2018 pilot project for an "Artificial Intelligence Solution" in immigration decision-making and assessments, including for applications on humanitarian and compassionate grounds and applications for Pre-Removal Risk Assessment. These two application categories are often used as a last resort by people fleeing violence and war who wish to remain in Canada. They are also highly discretionary, and the reasons for rejection are often opaque.

In an immigration system plagued with lengthy delays, protracted family separation and uncertain outcomes, the use of these new technologies seems exciting and necessary. However, without proper oversight mechanisms and accountability measures, the use of AI can lead to serious breaches of internationally and domestically protected human rights, in the form of bias or discrimination; privacy violations; and issues with due process and procedural fairness, such as the right to have a fair and impartial...
