AI Technology and Privacy: Canadian Privacy Commissioner Launches Investigation into ChatGPT
Introduction
On April 4, 2023, Canada’s Privacy Commissioner, Philippe Dufresne, launched an investigation into OpenAI, the developer of ChatGPT, after receiving a complaint alleging the collection, use and disclosure of personal information without consent.
Background
Launched in November 2022, ChatGPT is a generative artificial intelligence (“AI”) chatbot developed by OpenAI, an AI research laboratory whose stated objective is to promote and develop friendly AI. Like other forms of AI, generative AI learns how to act from past data; unlike forms of AI that simply categorize or identify data, however, generative AI such as ChatGPT uses its training data to create brand new content, such as text, imagery or computer code.
Having gained popularity for its language-processing abilities, ChatGPT has become a buzzword in recent months, capturing the public’s attention and inspiring other technology companies, such as Microsoft and Alphabet, to launch their own AI technologies that they believe will change the nature of work.
Millions of users worldwide have accessed ChatGPT in the past few months, and the debate around AI has shifted from the realm of science fiction to its ethical and legal implications, including issues relating to privacy, intellectual property and security, among others. Governments around the world, including the Canadian government, are attempting to regulate AI in a way that strikes a balance between fostering innovation and managing the risks (including legal risks) associated with the technology.
Canadian Privacy Commissioner Investigation into ChatGPT
At this time, the Office of the Privacy Commissioner of Canada (the “OPC”) has not released any details about the investigation beyond what was disclosed in its announcement. The investigation is nonetheless timely, as it shines a light on the rapid evolution of AI technology and the need for regulation, something the Government of Canada has been attempting to address through Bill C-27.
In June 2022, the Government of Canada tabled the Artificial Intelligence and Data Act as part of Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 is currently at second reading in the House of Commons.
ChatGPT Banned in Italy over Privacy Concerns
The OPC’s announcement follows the investigation into ChatGPT by the Italian Data Protection Authority (the “IDPA”). On March 31, 2023, Italy became the first Western country to block the use of ChatGPT amid the IDPA’s investigation into ChatGPT’s compliance with the European Union’s General Data Protection Regulation. The IDPA ordered OpenAI to temporarily halt the processing of Italian users’ data due to a potential threat to users’ privacy. Italy is not, however, the first country to raise concerns about the use of ChatGPT: countries such as North Korea, Iran, Russia, Cuba, Syria and China have restricted its use. Moreover, the Government of the United Kingdom has begun publishing recommendations for the regulation of the AI industry, including guidance from the Department for Science, Innovation and Technology, the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority, among other entities.
ChatGPT and Privacy Issues
While the OPC has not provided any details about the investigation, it is likely looking into issues similar to those raised by the IDPA, including, among other things, the lack of a legal basis underpinning the massive collection, use and disclosure of personal information used to train the algorithms on which ChatGPT relies.
One of the cornerstone privacy issues associated with ChatGPT is its use of web scraping[1] and its collection of personal information without consent. Generally, federal and provincial private-sector privacy legislation in Canada requires organizations to obtain consent to collect, use and disclose personal information. For this reason, the large-scale collection, use and disclosure of personal information without consent to train ChatGPT’s algorithms may violate applicable privacy legislation in Canada.
Further, the OPC’s investigation into ChatGPT is reminiscent of its 2021 investigation into Clearview AI. In that case, the OPC found that Clearview AI was scraping images of people from across the Internet, which constituted mass surveillance and violated the privacy rights of Canadians. In particular, the OPC determined that the collection of this personal information violated individuals’ reasonable expectations of privacy and created the potential for harm, including the risk of misidentification and exposure to data breaches. It will be interesting to see whether the ChatGPT investigation builds upon the lessons learned from the Clearview AI investigation.
Recommendations
Here are some recommendations that organizations should adopt to protect personal information and limit privacy liability when using ChatGPT or similar AI technology:
- Build a principle- and risk-based AI compliance framework that can evolve with the technology and the regulatory landscape. The framework should be developed with input from both internal and external stakeholders.
- Part of the framework should set out clear guidelines around the responsible, transparent and ethical use of AI technology.
- Conduct a privacy and ethics impact assessment for the use of new AI technology. The assessment should answer the following questions (a simple way to record such an assessment is sketched after this list):
- What type of personal information will the AI technology collect, use and disclose?
- How will the personal information be used by the AI technology?
- Will the data set lead to any biases in the output?
- What risks are associated with the AI technology’s collection, use and disclosure of the personal information?
- Will there be any human involvement in the decision-making?
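For organizations that track these assessments in software, the questions above can be captured in a simple checklist structure. The Python sketch below is illustrative only; the AIImpactAssessment record and its field names are hypothetical and are not drawn from any OPC guidance or statutory template.

```python
from dataclasses import dataclass, field

# Hypothetical, illustrative structure for recording a privacy and ethics
# impact assessment of a new AI tool; not based on any regulator's template.
@dataclass
class AIImpactAssessment:
    tool_name: str
    personal_info_collected: list[str] = field(default_factory=list)  # what personal information is collected, used and disclosed
    intended_uses: list[str] = field(default_factory=list)            # how the AI technology will use that information
    known_bias_risks: list[str] = field(default_factory=list)         # data-set biases that may affect the output
    privacy_risks: list[str] = field(default_factory=list)            # risks tied to collection, use and disclosure
    human_in_the_loop: bool = False                                   # will a human be involved in decision-making?

    def open_items(self) -> list[str]:
        """Flag checklist questions that remain unanswered."""
        issues = []
        if not self.personal_info_collected:
            issues.append("Identify what personal information the tool collects.")
        if not self.privacy_risks:
            issues.append("Document the privacy risks of collection, use and disclosure.")
        if not self.human_in_the_loop:
            issues.append("Confirm whether a human reviews the tool's decisions.")
        return issues


# Example usage: a new assessment starts with every question flagged as open.
assessment = AIImpactAssessment(tool_name="ChatGPT-based support bot")
for item in assessment.open_items():
    print(item)
```

However an organization chooses to record its answers, the point is that each question in the list above is answered, documented and revisited as the technology changes.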
The core of any AI compliance framework should be the incorporation of privacy-by-design and ethics-by-design concepts, meaning that data-protection and ethical features are integrated into the organization’s engineering systems, practices and procedures. These features will likely allow an organization to adapt to changing technology and regulations.
For more information about the legal implications of the use of ChatGPT or other AI technology, please contact Roland Hung of Torkin Manes’ Technology, Privacy & Data Management Group.
The author would like to acknowledge Torkin Manes LLP Articling Student, Valerie Sedlezky, for her contribution to drafting this article.
[1] Web scraping, also known as data or screen scraping, is a method by which a program automatically extracts data from any source that produces human-readable output.
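To illustrate the technique at a high level, the following minimal Python sketch fetches a public web page and extracts its human-readable text using the widely used third-party requests and beautifulsoup4 libraries. The URL is a placeholder, and the snippet is illustrative only; it does not reflect how OpenAI or any other organization actually collects training data.

```python
# Minimal, illustrative web-scraping sketch (requires the third-party
# "requests" and "beautifulsoup4" packages). The URL is a placeholder;
# this does not describe how any particular AI system gathers data.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com")        # download the page
soup = BeautifulSoup(response.text, "html.parser")    # parse the HTML

page_text = soup.get_text(separator=" ", strip=True)  # extract human-readable text
print(page_text[:200])                                 # show the first 200 characters
```

Any personal information appearing in text gathered this way is collected without the individual’s knowledge or consent, which is what gives rise to the privacy concerns discussed above.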