Is It Unfair for AI to Make Administrative Decisions?
Overview
One of the tenets of procedural fairness is the right to reasons for an administrative decision.
Canadian tribunals and decision-makers have an obligation to explain how and why a particular outcome was reached.
The use of artificial intelligence (“AI”) as a tool in tribunal adjudication is challenging the common law standard for what amounts to adequate reasons for decision.
For example, does the right to reasons established in the Supreme Court of Canada’s 1999 decision, Baker v. Canada (Minister of Citizenship and Immigration), [1999] 2 S.C.R. 817, include the duty that a human tribunal member draft or arrive at the decision in question? To what extent can a tribunal delegate its decision-making authority to AI, in the interests of efficiency and to avoid delay? Will Courts scrutinize the inherent or potential biases of AI that could taint a particular tribunal result?
Most of these questions remain unresolved.
A recent decision of the Federal Court, Haghshenas v. Canada (Minister of Citizenship and Immigration), 2023 FC 464, sheds light on how Canadian Courts may approach the fairness or reasonableness of administrative decisions written with the assistance of AI.
A Powerful Chinook
Haghshenas involved an application for judicial review of a decision by an immigration officer (the “Officer”) at the Canadian Embassy in Turkey. The Officer denied the applicant a work permit designed for entrepreneurs and self-employed foreign nationals seeking to operate a business in Canada (the “Work Permit”).
One of the requirements for the Work Permit under paragraph 200(1)(b) of Canada’s Immigration and Refugee Protection Regulations, SOR/2002-227 (the “Regulations”) is that the Officer be satisfied that the applicant “will leave Canada by the end of the period authorized for their stay”.
In this case, the Officer concluded that the applicant would not leave Canada at the end of their stay under the Work Permit. In particular, the applicant’s plan to start an elevator/escalator business in Canada did “not appear reasonable” given the speculative revenue projections for the business and the fact that the company had not obtained the appropriate licenses, among other reasons.
In reaching this decision, the Officer employed Chinook, a Microsoft Excel-based tool developed by Immigration, Refugees and Citizenship Canada (“IRCC”).
According to the IRCC website, Chinook helps with “temporary resident application processing to increase efficiency and to improve client service”, with the goal of addressing the backlog of work permit applications. It “does not utilize artificial intelligence (AI), nor advanced analytics for decision-making, and there are no built-in decision-making algorithms”.
These statements notwithstanding, the applicant challenged the Officer’s use of Chinook on judicial review, arguing that employing AI to reach an administrative decision was both procedurally unfair and substantively unreasonable.
The Federal Court rejected the applicant’s position, largely on what appears to be an assumption that the Chinook tool constitutes a form of AI.
In so doing, the Court hinted at a number of important principles about how it may scrutinize the use of AI in administrative decision-making in future.
1. Decisions Made by Human Decision-Makers Are Not Procedurally Unfair
In rejecting the argument that the use of AI was procedurally unfair, the Court appears to have drawn a line in the sand about the proper role of AI mechanisms in administrative decision-making.
The Court noted that in the applicant’s case, AI did not reach the final decision regarding his Work Permit – the Officer did.
Inherent in the Court’s reasoning is the presumption that it is procedurally fair for AI to assist an administrative decision-maker in rendering reasons for decision. AI assists the administrative State in promoting more efficient and timely outcomes.
What appears unfair, however, is the State’s delegation of its decision-making authority to AI. The Court held:
As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with [the Supreme Court of Canada’s decision in] Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.
2. Not Unreasonable to Use AI in Administrative Decision-Making
Separate from the issue of whether the use of AI is procedurally unfair, the Court also rejected the argument that the Officer’s reliance on Chinook rendered the decision substantively unreasonable.
According to the Court, there is nothing inherently unreliable or ineffective about the use of AI, at least in this particular case.
The Court did not deem it necessary to delve into the inner workings of the Chinook software to determine whether its mechanics were inappropriate or would lead to unreasonable results in the immigration assessment process:
Regarding the use of the “Chinook” software, the Applicant suggests that there are questions about its reliability and efficacy … the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness and found the use of [AI] is irrelevant …
So in this particular context, the government’s use of AI survived reasonableness scrutiny.
Will AI Replace Tribunal Decision-Making?
The Court’s approach above reflects a willingness to accept machine learning as a limited component of administrative decision-making, with the caveat that ultimate adjudicative authority must reside in a human tribunal.
Haghshenas, however, only scratches the surface of the implications of the Canadian administrative State delegating its roles and responsibilities to machine learning in the interests of efficiency.
As we are learning, AI comes with its own set of inherent biases and problems.
There will no doubt be new circumstances in which an enterprising lawyer will argue that the tribunal’s reliance on AI tainted the outcome of the decision or rendered it procedurally unfair.
Tribunals and agencies across Canada must therefore approach the question of whether and how to adopt AI in the decision-making process with a degree of caution and with significant legal and ethical training.
This is the only way to ensure that the use of AI remains a fair and reasonable tool in administrative adjudication.
Marco P. Falco is a Partner in the Litigation Department at Torkin Manes LLP. For more information about the proper use of artificial intelligence in tribunal decision-making, you may contact Marco at mfalco@torkin.com. Please note that a conflict search will need to be conducted.