
Algorithms in policing

Police forces around the world, including many in Canada, are beginning to experiment with a new technology that uses automated surveillance and mass data processing to oversee populations and anticipate criminal activity. The approach, known as “algorithmic policing,” involves collecting large amounts of information about individuals — their faces, social media activity, networks they belong to — to better track and identify them, and to predict their behaviour.

An algorithm trained on when and where crimes have taken place in the past can anticipate where they may take place in the future. An algorithm fed with personal information — an individual's address, social media activity, circle of friends — can generate a “risk score” of how likely that individual is to be involved in crime. Police forces can then allocate resources on the basis of these predictions.
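To make the mechanics concrete, here is a minimal, purely hypothetical sketch in Python of how such a risk score might be computed. Every feature name and weight below is invented for illustration; real systems are proprietary and far more complex. The point is simply that whatever bias is encoded in the inputs flows directly into the output.

import math

# Hypothetical per-feature weights; these are assumptions for illustration,
# not drawn from any real policing system.
WEIGHTS = {
    "prior_police_contacts": 0.8,
    "lives_in_flagged_area": 0.6,    # in effect, encodes historical patrol patterns
    "flagged_social_contacts": 0.5,  # in effect, guilt by association
}

def risk_score(person: dict) -> float:
    """Map feature values to a 0-1 'risk score' via a logistic function."""
    z = sum(weight * person.get(feature, 0) for feature, weight in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

# Two people identical except for where they live receive different scores,
# showing how the over-policing of a neighbourhood feeds straight into the output.
print(risk_score({"prior_police_contacts": 1, "lives_in_flagged_area": 1}))  # ~0.80
print(risk_score({"prior_police_contacts": 1, "lives_in_flagged_area": 0}))  # ~0.69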

It is an issue playing out in many countries around the world. Those studying it in Canada, like Kate Robertson, a criminal defence lawyer in Toronto, are looking abroad to understand the risks. Take, for example, a recent case from the United States.

In February 2019, a 31-year-old Black man named Nijeer Parks was arrested in New Jersey and wrongfully charged with aggravated assault, unlawful possession of weapons, using a fake ID, possession of marijuana, shoplifting, leaving the scene of a crime and resisting arrest. He then spent 11 days in jail. Once released, he began the arduous process of proving that he was entirely innocent.

The technology that led to Parks' wrongful arrest was a facial recognition system that matched the photo on a fake driver's licence found at the scene of the crime with a photo of Parks' face. Parks had nothing to do with the crime; he was more than 40 kilometres away when it took place. But it was his misfortune to have facial features that biometric software decided belonged to the actual culprit, an error that cost Parks more than a year of his life. He remains shaken to this day. “You took being comfortable away from me,” he told CNN. The incident was the third documented case in the United States of a wrongful arrest based on flawed facial recognition technology.

Here in Canada, it serves as a cautionary tale, as our law enforcement agencies quietly integrate facial recognition and other forms of machine learning into their work.

While many believe surveillance technologies offer greater efficiency and accuracy, critics point out that they also pose a serious threat to fundamental human rights, including the rights to privacy, freedom of expression, equality and liberty.

One obvious danger is that the technologies are imperfect, as cases like Nijeer Parks' illustrate. Algorithmic policing is only as good, or as bad, as the data that informs it. Research conducted in 2019 by the National Institute of Standards and Technology, a federal laboratory in the US, concluded that facial recognition systems currently in use are up to 100 times more likely to misidentify non-white faces than white ones, a disparity attributed to the predominance of white faces in the data they were trained on.

A further issue is that algorithms fed with existing policing data will reflect, and potentially amplify, the historical over-policing of certain populations, typically racial minorities. A more subtle impact is the chilling effect that automated surveillance has on minority groups that have traditionally been targeted by police. The freedom of association grants everyone the right to gather without fear or hindrance. But if, for example, members of a Black protest movement know that their online mobilisation is being tracked and their participation recorded — and that an algorithm could use that data against them one day — they may be less inclined to take part.

“Law enforcement agencies argue that these technologies may be useful,” says Robertson. “But ‘being useful’ is not enough to justify a practice that infringes on human rights.” She compares the use of these technologies to wiretapping. While police could learn a lot by listening in on private conversations, they are not allowed to do so as a matter of course. Before violating an individual's right to privacy, police must demonstrate to a judge the legal necessity of doing so.


While Canada has been slower than the US to adopt algorithmic policing technologies, it's impossible to know the actual extent of their use in this country: police forces prefer not to discuss their techniques and can refuse to do so on grounds of legal privilege. Researching the question for a report published by the University of Toronto's Citizen Lab in 2020, Robertson found that many law enforcement agencies, federal and municipal alike, including police forces in Saskatchewan, Calgary, Vancouver and Toronto, have obtained, are testing or are already using algorithmic policing technologies.

In a 2021 report to Parliament, Privacy Commissioner Daniel Therrien demonstrated that the RCMP had, despite its claims to the contrary, been using facial recognition software purchased from the US technology company Clearview AI, in contravention of the federal Privacy Act. A subsequent joint investigation by federal and provincial privacy and access-to-information commissioners revealed that Clearview AI had accounts with 48 law enforcement agencies across Canada.

Further, some provincial governments are weighing the use of algorithmic assessments in their corrections systems. In this context, algorithms can be employed to make decisions on bail, sentencing and parole. It was one such algorithm that consigned Nijeer Parks to 11 days in a New Jersey jail before he was eventually released into a pre-trial monitoring program.

Algorithmic technologies have worked their way into Canada's policing landscape. Even their fiercest critics acknowledge that they may have a productive role to play, but only if the necessary safeguards are in place. Currently they are not, and the communities that have the most to lose are the ones that have historically faced discrimination and over-policing and that may fear speaking up.

“These technologies require government oversight. The lack of transparency, and the recognition that these tools are inaccurate, flawed and discriminatory, raise serious questions about whether they can ever be lawfully used,” says Robertson. Until those questions are resolved, she and many others are calling for a moratorium.

Robertson is aware of at least one racialized person in Ontario who was wrongfully arrested on the basis of a false facial recognition match. “We can't be reacting after the fact. We need to ensure that violations will be prevented, not remedied.”