
Canadian Businesses and AI: Why an AI Policy is No Longer Optional
Overview
As AI becomes central to Canadian business strategy, having a robust AI policy is essential.
Recent advancements in artificial intelligence (“AI”) technology have led to significant scientific breakthroughs. Canadian businesses are well placed to take advantage of this progress by using AI tools to improve efficiency and decision making. However, these advancements carry potential risks, making a comprehensive AI policy critical for Canadian businesses. A well-drafted AI policy will not only reduce legal risk, but also enhance a company’s reputation as a responsible AI adopter. Below, we outline the reasons why Canadian businesses should consider implementing an internal AI policy.
1. Intellectual Property Protection and Confidentiality
AI tools used in the workplace have the potential to increase productivity and streamline workflows. However, AI tools must be used with caution. If an entity uses a publicly accessible AI tool, like ChatGPT, any data entered into that tool may be shared with the AI vendor, including proprietary company data. Without an AI policy to manage this risk, the use of public AI tools that have not been vetted by a company could lead to the unintentional disclosure of valuable company information, or the loss of its confidential status.
Additionally, AI tools are trained on large data sets and may use copyrighted information to train their systems and generate outputs. A company’s use of an output from an AI tool that contains copyrighted information belonging to a third party could result in the infringement of third-party intellectual property, for which the company may be liable.
It is critical that Canadian companies adopt robust data protection measures to safeguard confidential information that may be processed by AI tools. Companies should educate their employees on the risks of using AI tools, the precautions needed to protect intellectual property, and compliance with internal policies for vetting AI vendors. Finally, it is critical to regularly audit the data used by AI systems to maximize efficiency and minimize risk.
2. Compliance with Applicable Privacy Laws
An AI policy helps a company increase the likelihood of complying with applicable Canadian privacy law[1] by establishing clear guidelines that prioritize data protection and data minimization. An AI policy can set standards for anonymizing personal information before it is entered into an AI tool, reducing the risk of potential privacy breaches.
Employers may also implement AI tools to conduct surveillance and collect a range of data about their employees, which could include work-related performance metrics, behavioural patterns or even biometric data. In such cases, organizations should ensure that data collection through an AI tool is reasonable and necessary for the identified purpose, and that, where legally required, appropriate consent is obtained from employees allowing the organization to collect and process the data.
3. Cybersecurity and Risk Management
AI tools handle large volumes of sensitive data, including personal information, customer information and proprietary business data. Cybersecurity measures are essential to protect this data from unauthorized access or potential breaches. Implementing a robust AI policy can mitigate this risk and help ensure that appropriate security protocols are in place to safeguard company data and maintain privacy compliance when using an AI tool.
4. Prevention of Bias and Discrimination
AI tools may produce output that reflects bias and discrimination present in their training data. AI tools may also produce output that has no basis in reality, known as “hallucinations”.[2] An AI policy can mitigate these risks by limiting employees’ use of AI tools to only those that have been vetted by a company’s information technology department.
By way of example, if a business is using an AI tool to streamline hiring processes in its human resources department, the tool may be trained on historical data that reflects unconsciously biased hiring practices. Canadian human rights legislation does not yet expressly address the use of AI technology. Instead, employers must be guided by best practices to ensure hiring processes are non-discriminatory even when using an AI tool.
5. Liability for AI Decision-Making
Companies in Canada may be held accountable for the outcomes of decisions made by AI systems, especially where those decisions result in harm to individuals. It is best practice for employers to implement an internal AI policy requiring adequate due diligence before an AI tool is used, confirming that it is accurate, reliable, ethically designed and secure.
If you have any questions or require more information on AI policies, please contact any member of our Labour & Employment, Technology and/or Privacy & Data Management Groups at Torkin Manes LLP.