
Regulating Generative Artificial Intelligence: Balancing Innovation and Risks (Updated)

Torkin Manes LegalPoint

This article is an updated version of the article originally published on June 20, 2023, which can be found here. It has been revised to reflect the many developments in the legislation and laws discussed in the original article.

Introduction

In a matter of months, generative artificial intelligence (“AI”) has been eagerly adopted by the public, thanks to programs like ChatGPT. The increasing use (or proposed use) of generative AI by organizations has presented a unique challenge for regulators and governments across the globe: fostering innovation while mitigating the risks associated with the technology. This article summarizes some of the key legislation and proposed legislation around the world that attempts to strike that balance.

AI Regulation in Canada

1. Current Law

While Canada does not have an AI-specific law yet, Canadian lawmakers have taken steps to address the use of AI in the context of so-called “automated decision-making.” Québec’s private sector privacy law, as amended by Bill 64 (the “Québec Privacy Law”), is the first piece of legislation in Canada to explicitly regulate “automated decision-making”. The Québec Privacy Law imposes a duty on organizations to inform individuals when a decision is based exclusively on automated decision-making. 

Interestingly, this duty to inform individuals about “automated decision-making” is also found in Bill C-27, the federal bill to overhaul Canada’s private sector privacy legislation. Bill C-27 imposes obligations on organizations around automated decision systems. Organizations whose automated decision systems use personal information to make predictions about individuals are required to:

  • Deliver a general account of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have significant impacts on them; and
  • Retain the personal information related to the decisions for sufficient periods of time to permit the individual to make a request for access.

In addition to the privacy reforms, the third and final part of Bill C-27 introduced Canada’s first-ever AI-specific legislation, which is discussed in the next section.

2. Bill C-27: The Digital Charter Implementation Act

On June 16, 2022, Canada’s Minister of Innovation, Science and Industry (“Minister”) tabled the Artificial Intelligence and Data Act (“AIDA”), Canada’s first attempt to formally regulate certain artificial intelligence systems as part of the sweeping privacy reforms introduced by Bill C-27.

Under AIDA, a person (which includes a trust, a joint venture, a partnership, an unincorporated association, and any other legal entity) who is responsible for an AI system must assess whether that system is a “high-impact system”. Any person responsible for a high-impact system must then, in accordance with (future) regulations, do the following (a hypothetical record-keeping sketch follows the list):

  1. Establish measures to identify, assess and mitigate risks of harm or biased output that could result from the use of the system (“Mitigation Measures”);
  2. Establish measures to monitor compliance with the Mitigation Measures;
  3. Keep records in general terms of the Mitigation Measures (including their effectiveness in mitigating any risks of harm/biased output) and the reasons supporting whether the system is a high-impact system;
  4. Publish, on a publicly available website, a plain language description of the AI system and how it is intended to be used, the types of content that it is intended to generate, and the recommendations, decisions, or predictions that it is intended to make, as well as the Mitigation Measures in place and other information prescribed in the regulations (there is a similar requirement applicable to persons managing the operation of such systems); and
  5. As soon as feasible, notify the Minister if use of the system results or is likely to result in material harm.
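
AIDA does not prescribe any technical format for these records. Purely as an illustration, here is a minimal sketch in Python of how an organization might capture its Mitigation Measures and assessment rationale; every name and field below is hypothetical, not a prescribed structure:

```python
# Hypothetical sketch of an AIDA-style compliance record.
# AIDA prescribes *what* must be recorded, not *how*; all names here
# are illustrative only.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class MitigationMeasure:
    description: str         # measure to identify, assess and mitigate risk
    effectiveness_note: str  # record of how well it mitigates harm or bias


@dataclass
class HighImpactAssessment:
    system_name: str
    assessed_on: date
    is_high_impact: bool
    reasons: str  # reasons supporting the high-impact determination
    measures: list[MitigationMeasure] = field(default_factory=list)
    material_harm_observed: bool = False

    def must_notify_minister(self) -> bool:
        # AIDA requires notice "as soon as feasible" where use of the
        # system results, or is likely to result, in material harm.
        return self.is_high_impact and self.material_harm_observed


record = HighImpactAssessment(
    system_name="resume-screening-model",
    assessed_on=date(2024, 9, 1),
    is_high_impact=True,
    reasons="Automated decisions materially affect job applicants.",
    measures=[
        MitigationMeasure(
            description="Quarterly bias audit across protected attributes",
            effectiveness_note="Reduced disparate selection rates in testing.",
        )
    ],
)
print(record.must_notify_minister())  # False until material harm is observed
```

Keeping such records “in general terms,” as AIDA requires, also positions an organization to respond to an individual’s access request or a ministerial inquiry.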

It should be noted that “harm” under AIDA means physical or psychological harm to an individual, damage to an individual’s property, or economic loss to an individual.

If the Minister has reasonable grounds to believe that the use of a high-impact system by an organization or individual could result in harm or biased output, the Minister has a variety of remedies at their disposal. 

You can read more about AIDA here.  

Key AI Regulation, Frameworks, or Guidance Across the Globe

As of the writing of this article, AI-specific laws remain few and far between on an international scale. In most countries, AI regulation simply derives from existing privacy and technology laws that do not explicitly address AI or automated decision-making. Nevertheless, some countries and the European Union (“EU”) have made notable progress in addressing the dawn of AI. For example, on June 14, 2023, the EU passed the AI Act, the world's first comprehensive AI law. The EU Parliament adopted the AI Act in March 2024, and the EU Council gave its approval in May 2024. While the Act will be fully applicable 24 months after its entry into force, some parts will apply sooner; for example, the ban on AI systems posing unacceptable risks will apply six months after entry into force. For greater clarity, “entry into force” refers to the date the act becomes actual law, while “applicable” refers to when the law can be applied on a practical level.

The EU’s AI Act establishes obligations for providers and users depending on the level of risk from AI. It will be interesting to see whether countries will adopt a similar risk-based approach as they develop their own AI laws.

The following is a summary of the progress various countries and supranational unions have made in developing AI-specific legislation. Each entry lists the jurisdiction, the relevant law, regulation or framework (with its status), and brief commentary:

 

European Union

AI Act – adopted by Parliament in March 2024 and approved by Council in May 2024

Adopted on March 13, 2024, the EU’s monumental act aims to ensure that AI systems are safe, transparent, traceable, non-discriminatory, environmentally friendly and overseen by people rather than automation.[1] Similar to Canada’s proposed legislation, the extent to which AI systems are regulated is based on the level of risk posed by the system, from limited risk to unacceptable risk. Generative AI, such as ChatGPT, will now have to comply with certain transparency requirements.[2]

United Kingdom

AI regulation: a pro-innovation approach – March 2023 (framework for future efforts)

Departing slightly from its counterparts, the UK is taking a pro-innovation approach to future AI regulation efforts, with an emphasis on regulating use rather than the technology itself. It is guided by five principles: safety; transparency; fairness; accountability; and contestability and redress.[3] A draft form of regulations has yet to enter Parliament.

Brazil

Bill No. 2338 – May 2023

This bill, prepared by a commission of jurists convened specifically for the purpose of regulating AI in Brazil, establishes national rules for the development, implementation and responsible use of AI. It aims to reconcile the protection of rights and fundamental freedoms, the appreciation of work, the dignity of the person, and the technological innovation represented by AI.[4] AI systems are ranked by risk level, with “excessive” risk systems being completely banned. The bill is currently under deliberation.

United States

Blueprint for an AI Bill of Rights – October 2022 (voluntary framework); Algorithmic Accountability Act – February 2022 (draft bill); Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence – October 2023

The voluntary framework applies to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”[5] The Blueprint is meant to inform decisions where sector-specific privacy laws and oversight requirements do not already provide guidance. It follows a draft bill tabled in 2022, which takes a similar approach to the UK in its open-ended and more lenient provisions. While 2023 updates on either document have been scarce, multiple voluntary frameworks have since been introduced, such as the AI Risk Management Framework and the SAFE Innovation Framework for AI Policy.[6] Further, towards the end of 2023, President Joe Biden issued an executive order in an effort to mitigate the substantial risks of AI.[7] Recognizing the urgency of governing the development and use of AI, the executive order highlights the United States’ efforts to catch up with its international counterparts.

Australia

AI Standards Roadmap – March 2020 (framework for future efforts)

In 2019, Australia’s Prime Minister acknowledged that the country had “not been as involved as they could be” in regulating AI. In response, this roadmap was established, aiming to provide guidance to help Australians develop standards for AI internationally.[8]

China

Interim Measures for the Management of Generative Artificial Intelligence Services – took effect August 2023

These measures aim to “promote the healthy development and standardized application of generative AI, safeguard national security and social public interests, and protect the legitimate rights and interests of citizens, legal persons, and other organizations.” They apply to services that use generative AI technology to provide text, images, audio, video and other generated content to the public.[9] “Generative AI” in these measures refers to models and technologies that generate content such as text, pictures, sounds, and videos based on algorithms, models, and rules, which inevitably encompasses tools such as ChatGPT. AI systems are not ranked by risk level, making these regulations broad and all-encompassing.


Evidently, the EU is leading the pack, while China and Brazil follow closely behind. That so many of these documents specifically regulate generative AI shows increasing vigilance towards AI-driven tools such as ChatGPT.

Interestingly, while potential legislation addressing AI is developing slowly at the federal level in the United States, some states have already gone ahead and drafted their own state-specific regulations. In California, for example, up to 20 bills relating to the regulation of AI have been proposed in 2024. Of particular note is Bill SB-1047, enrolled this month, which requires developers, before beginning to initially train a covered model, to comply with various requirements, such as implementing a written and separate safety and security protocol that manages the risks of developing and operating such models.[10] Individual state efforts such as California’s show a growing recognition of just how pressing the need to regulate this technology is.

Takeaways

On a global scale, awareness of the risks associated with AI and generative models such as ChatGPT is evidently increasing. The inherent complexity and unpredictability of AI and its corresponding tools and models make regulating its use an ongoing challenge. Finding the perfect balance between allowing AI’s benefits to thrive, such as the early detection and diagnosis of diseases in medicine, and combatting AI’s risks, such as bias and discrimination, remains elusive.

While AIDA has yet to become official law in Canada, businesses that are using (or planning to use) AI and its various tools and models should be prepared to comply with upcoming AI laws. Here are some recommendations organizations can adopt to get ahead of laws such as AIDA:

  • Build a principle- and risk-based AI compliance framework that can evolve with the technology and regulatory landscape. The framework should be built with input from both internal and external stakeholders.
  • Part of the framework should set out clear guidelines around the responsible, transparent and ethical usage of AI technology.
  • Conduct a privacy and ethics impact assessment for the use of new AI technology (a simple way of recording such an assessment is sketched after this list). The assessment should answer the following questions:
    • What type of personal information will the AI technology collect, use and disclose?
    • How will the personal information be used by the AI technology?
    • Will the data set lead to any biases in the output?
    • What risks are associated with the AI technology’s collection, use and disclosure of the personal information?
    • Will there be any human involvement in the decision-making?
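
No law prescribes a format for such an assessment, but capturing it as a structured, reviewable record makes gaps easy to spot. Below is a minimal, purely hypothetical sketch in Python; the question list mirrors the bullets above, and the completeness check is an assumption about how one might enforce that every question gets answered:

```python
# Hypothetical sketch: the impact assessment above as a reviewable record.
# No statute prescribes this structure; it is offered only as illustration.
ASSESSMENT_QUESTIONS = [
    "What personal information will the AI technology collect, use and disclose?",
    "How will the personal information be used by the AI technology?",
    "Will the data set lead to any biases in the output?",
    "What risks are associated with the collection, use and disclosure?",
    "Will there be any human involvement in the decision-making?",
]


def assessment_is_complete(answers: dict[str, str]) -> bool:
    """True only if every question has a substantive (non-empty) answer."""
    return all(answers.get(q, "").strip() for q in ASSESSMENT_QUESTIONS)


answers = {q: "" for q in ASSESSMENT_QUESTIONS}
answers[ASSESSMENT_QUESTIONS[4]] = "Yes: a recruiter reviews every rejection."
print(assessment_is_complete(answers))  # False: four questions remain unanswered
```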

The core of any AI compliance framework should be the incorporation of privacy-by-design and ethics-by-design concepts. This means that data protection and ethical features are integrated into the organization’s engineering systems, practices and procedures, as illustrated below. These features will likely allow an organization to adapt to changing technology and regulations.
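
As one illustration of what an ethics-by-design feature can look like in practice (a hypothetical sketch, not a legal requirement or any particular organization’s implementation), an automated decision pipeline might refuse to finalize any decision with a significant impact until a human has reviewed it:

```python
# Hypothetical ethics-by-design gate: decisions with significant impact
# are routed to a human reviewer rather than finalized automatically.
from dataclasses import dataclass


@dataclass
class Decision:
    subject_id: str
    outcome: str              # e.g., "approve" or "deny"
    significant_impact: bool  # e.g., a credit denial or hiring rejection


def finalize(decision: Decision, human_approved: bool = False) -> str:
    # Significant decisions are never made exclusively by automation.
    if decision.significant_impact and not human_approved:
        return "PENDING_HUMAN_REVIEW"
    return decision.outcome


print(finalize(Decision("applicant-42", "deny", significant_impact=True)))
# -> PENDING_HUMAN_REVIEW
```

Building this kind of gate into the system itself, rather than relying on after-the-fact policy, is what distinguishes a “by-design” approach.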

For more information about the legal implications of the use of ChatGPT or other AI technology, please contact Roland Hung of Torkin Manes’ Technology and Privacy & Data Management Groups.

 

*The author would like to thank Torkin Manes’ Articling Student Yasmin Thompson for her invaluable contributions in preparing this insight.

 

[1] “EU AI Act: first regulation on artificial intelligence” (June 2024) online: European Parliament  <https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence>

[2] Ibid.

[3] Secretary of State for Science, Innovation and Technology “AI regulation: a pro-innovation approach” (2023) at 6, online (pdf): UK, Department for Science, Innovation & Technology  <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146542/a_pro-innovation_approach_to_AI_regulation.pdf>

[4] Senator Rodrigo Pacheco, “Bill N° 2338: Dispõe sobre o uso da Inteligência Artificial” (2023) at 29, online (pdf): Senado Federal  <https://legis.senado.leg.br/sdleg-getter/documento?dm=9347622&ts=1684441712955&disposition=inline&_gl=1*dfh5iw*_ga*NTE3Mjg1OTU4LjE2ODY3NzAyMzY.*_ga_CW3ZH25XMK*MTY4Njc3MDIzNS4xLjAuMTY4Njc3MDIzNS4wLjAuMA..>

[5] Office of Science and Technology, “Blueprint for an AI Bill of Rights” (2022) at 8, online (pdf): The White House <https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf>

[6] “AI Risk Management Framework: Second Draft” (2022) online (pdf): National Institute of Standards and Technology <https://www.nist.gov/itl/ai-risk-management-framework>; Chuck Schumer, “SAFE Innovation Framework” (2023) online (pdf): Senate <https://www.democrats.senate.gov/imo/media/doc/schumer_ai_framework.pdf>

[7] President Joe Biden, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (October 2023) online: The White House <https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/>

[8] “An Artificial Intelligence Standards Roadmap” (2020) online (pdf): Standards Australia <https://www.standards.org.au/getmedia/ede81912-55a2-4d8e-849f-9844993c3b9d/O_1515-An-Artificial-Intelligence-Standards-Roadmap-soft_1.pdf.aspx>

[9] “Interim Measures for the Management of Generative Artificial Intelligence Services” (生成式人工智能服务管理暂行办法) (2023) online: Cyberspace Administration of China <https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm>

[10] Senator Wiener, “SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (September 2024) online: California Legislature <https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047>