We live in mobile times. Never before in human history have people travelled so much and so fast from one part of the world to another. Globalisation — the international flow of goods and people — is one part of the story. Unprecedented levels of forced displacement are the other. According to the United Nations High Commissioner for Refugees, over 82 million people are now living as refugees or internally displaced people, a figure that has doubled in the last twenty years.
This makes borders, and the immigration and security apparatus behind them, busy places. And fraught. At borders, individuals knock on the door to the state. It decides whether to grant them entry and on what terms. Those decisions, which can have existential consequences, are increasingly being made with the help of machines.
It's easy to see why. In 2019, Canada received over 64,000 asylum claims, the highest number on record. In that same year, Canada admitted 341,180 new permanent residents, the most in recent history.
Each new application to come to Canada represents a major administrative undertaking: the collection and analysis of a vast amount of information about applicants' personal backgrounds, professional qualifications, language aptitude, security status and family connections.
Immigration services also issue, extend, monitor and revoke visas and work permits. Canada's immigration and refugee processing system, chronically backlogged and delayed, would seem ripe for the kind of optimisation that artificial intelligence is designed to deliver.
Automated decision systems are technologies that aid or even replace human decision-makers. Trained on existing data, these technologies can sort and combine input information to predict or shape outcomes, thus lightening the burden on human administrators.
Such systems are finding their place across the public service, for example in helping determine bail terms or sentences in the corrections system or identifying fraud within social services. These systems are also widely used in the private sector: by banks to decide who gets loans; by employers to sort through job applications; by universities to select certain types of students; or by property owners to filter out undesirable tenants.
In the context of an immigration system, automated decision systems can be trained to manage those mountains of applications by categorising them, raising red flags, assigning risk scores, proposing decisions or even making them.
Such systems are marketed on the basis of their objectivity. Machines, so the reasoning goes, know no bias. But this claim overlooks a fundamental truth of machine learning, captured in the phrase "garbage in, garbage out." When an automated system flags the immigration application of a 23-year-old Muslim male engineer from Anatolia, the system can't be charged with discriminatory bias. It has simply done its job. The bias, which is embedded in the historical data that was used to train the system, is now guiding the technology.
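The mechanism is simple enough to sketch. In this toy Python example (all data, group names and numbers are invented for illustration, not drawn from any real immigration system), the "model" is nothing more than per-group refusal rates learned from biased historical decisions; it then faithfully reproduces that bias as a "risk score":

```python
# Toy illustration of "garbage in, garbage out": a model trained on
# biased historical decisions reproduces the bias. All data invented.
from collections import defaultdict

# Hypothetical past decisions (group, outcome). Officers historically
# refused "region_X" applicants far more often, regardless of merit.
history = (
    [("region_X", "refused")] * 80 + [("region_X", "approved")] * 20
    + [("region_Y", "refused")] * 20 + [("region_Y", "approved")] * 80
)

def train(records):
    """Learn per-group refusal rates -- a crude 'risk score'."""
    counts = defaultdict(lambda: [0, 0])  # group -> [refused, total]
    for group, outcome in records:
        counts[group][1] += 1
        if outcome == "refused":
            counts[group][0] += 1
    return {g: refused / total for g, (refused, total) in counts.items()}

risk = train(history)
# The model has done its job: it mirrors the historical pattern,
# scoring region_X applicants as high risk because past officers
# refused them, not because of anything about the applicants.
print(risk["region_X"])  # 0.8
print(risk["region_Y"])  # 0.2
```

Nothing in the code mentions discrimination; the skew lives entirely in the training records, which is exactly why audits of training data, not just of the algorithm, matter.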
It's impossible to know to what extent automated decision systems have already been integrated into Canada's immigration system, as governments don't advertise their use. But investigations by journalists and researchers suggest that Canada, like many other jurisdictions, is testing out these technologies with a view to greatly expanding their use.
Since 2014, Immigration, Refugees and Citizenship Canada (IRCC) has been developing a "predictive analytics" system to classify cases according to their complexity. IRCC is also exploring the use of an automated system to screen cases for legal purposes. That system would use case law and litigation trends to determine, for example, what chance a rejected applicant would have of successfully challenging the decision.
According to the government tender for this particular "Artificial Intelligence Solution", the system could also be used by "front-end IRCC administrative decision-makers" (humans) "to aid in their assessment of the merits of an application before decisions are finalized."
This may sound innocuous enough. But human rights advocates would like to know more about what informs these systems: what data they have been fed, what they have been trained to consider "merits" and "liabilities". These questions are critical to ensuring procedural fairness; every immigration and refugee applicant to Canada has the right to a "fair, impartial and open process," as the Supreme Court put it in a 1999 decision.
Human rights advocates also worry about what they don't know. Between July and December of 2016, nearly three million people passed through border control in Terminal 3 of Toronto Pearson International Airport. None of them will have known that they were on camera. In a pilot project, the federal government positioned 31 cameras in the terminal, capturing travellers' faces and comparing them with a Canada Border Services Agency (CBSA) database of 5,000 previously deported individuals. When a match was made, the person was pulled aside for a secondary inspection.
No public notice was provided about the project, details of which only emerged in the summer of 2021 through freedom of information requests by The Globe and Mail. CBSA says that nobody was deported as a result of it, but the technology company that ran the "Faces on the Move" project claims that 47 "real hits" (matches) were made.
"Borders provide the ideal testing ground for surveillance technologies," says Petra Molnar, a Canadian human rights lawyer and international expert on migration and human rights. "The power differential is huge. People have little recourse. How can they challenge these technologies, especially if they don't even know they're in use?"
Molnar is currently researching the use of surveillance technologies on the most powerless: refugees from Africa and the Middle East who have ended up on the shores of Greek islands as they attempt to reach Europe. She spends her days visiting a new class of refugee camp that is becoming a prototype on the fringes of Europe, one that surveils its inhabitants completely, where meals are provided in exchange for fingerprint and eye scans. She considers this the dangerously sharp edge of a "global border industrial complex."
Canada's geography has spared it the mass migration challenge that Europe currently faces; the use of surveillance technologies at our borders is more discreet. But Petra Molnar argues that as one of the first countries to adopt artificial intelligence technologies in its public administration, Canada should play a leadership role in the global conversation over their use. And that conversation should begin with a recognition of the threat these technologies pose to human rights and dignity.