Canadian privacy law 2.0: Artificial intelligence (AI) and Bill C-11, the Consumer Privacy Protection Act

(Available in English only)

December 7, 2020 | Myron A. Mallia-Dare, David Krebs

The Office of the Privacy Commissioner of Canada (“OPC”) recently released a report containing recommendations on how AI should be treated under Canadian privacy law, and on what protections need to be in place to ensure AI applications reach their potential without negatively impacting the privacy rights of Canadians. The report, entitled “A Regulatory Framework for AI: Recommendations for PIPEDA Reform,” is the result of the stakeholder consultations held earlier in the year, as discussed in our previous blog article. The Commissioner received 86 submissions and held two in-person consultations.

Almost concurrently, on November 16, 2020, the federal government announced the tabling of legislation that would overhaul Canadian privacy law: Bill C-11, “An Act to enact the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act and to make consequential and related amendments to other Acts.” We reported on Bill C-11 and the proposed replacement of Canada’s Personal Information Protection and Electronic Documents Act (“PIPEDA”) by the Consumer Privacy Protection Act in our first article in a series on the potential impact of Bill C-11. The Commissioner released a statement shortly after Bill C-11 was announced, commending many of the proposed changes, such as increased enforcement and order-making powers, but Commissioner Therrien also voiced significant concerns. In particular, the OPC is concerned that the new law does not place privacy in the context of individual and human rights and fails to entrench it as such in the proposed Bill C-11.

AI and Canadian Privacy 2.0

The first thing any business or organization developing, integrating or using AI should know is that AI is widely viewed as having immense potential to impact the privacy rights of individuals. With an impact on privacy rights comes the potential to impact democratic rights, the right to informational self-determination, and basic and enshrined human rights. Canada’s current privacy laws were not written in the age of big data and cybersecurity threats, ubiquitous computing, social media, or artificial intelligence. Rather, they were drafted in the aftermath of the dot-com boom, the emergence of the Internet, and the rise of e-commerce.

The overhaul of Canadian privacy law has been signaled for quite some time but was delayed by the pandemic and the election earlier in the year. With all this in mind, how should organizations prepare now? Bill C-11 is set to give the OPC enhanced enforcement powers and to highlight the importance of privacy rights. Companies should keep this in mind when designing and bringing to market tools, algorithms, and products. Users of AI should keep their current responsibilities in mind but also look ahead to the impact of what is being recommended.

In the Commissioner’s report “A Regulatory Framework for AI: Recommendations for PIPEDA Reform,” a number of key considerations are discussed. We would like to highlight some of the most salient aspects of these recommendations.

Consent

PIPEDA is a consent-based privacy protection law. That is, the law contemplates that a provider, as a collector of a user’s personal information, obtain the consent of that user before processing any personal information. This consent must be “meaningful,” and the user must have the right information in order to make that decision. Therein lies the issue when it comes to AI. The Commissioner notes:

“AI highlights the shortcomings of the consent principle in both protecting individuals’ privacy and in allowing its benefits to be achieved. Consent can be used to legitimize uses that, objectively, are completely unreasonable and contrary to our rights and values. Additionally, refusal to provide consent can sometimes be a disservice to the public interest when there are potential societal benefits to be gained from use of data.”

The upshot is that “consent” as a basis for processing personal information cannot always cope with present-day technology. The law must find other ways to protect privacy without stifling innovation. The Commissioner therefore proposes exceptions to consent. These exceptions are not intended to undermine privacy rights, but rather to enhance them. They are: research and statistical purposes, compatible purposes, and legitimate commercial interests. Similar exceptions can already be found in the European Union’s General Data Protection Regulation (“GDPR”). In fact, “legitimate interests” is usually the preferred basis for lawful processing of personal information under the GDPR. It is the most flexible approach, but it is not always well understood by those affected, and it is certainly not carte blanche to process data unreasonably, without consent or legitimacy. That is, legitimate interests must be legitimate, first and foremost, and must pass a balancing test: are the interests outweighed by more pressing privacy rights, which are in turn balanced against the interests of society as a whole?

Another exception to consent is for “complementary interests,” a concept already captured in Quebec’s Bill 64, An Act to modernize legislative provisions as regards the protection of personal information. The idea is that if a purpose is not the same as, but complementary to, the original purpose, fresh consent is not required. It is quite apparent why this would be of interest for AI, where data collection and processing are often exploratory in nature. Not all purposes will be known in great detail, but so long as an unknown purpose is “complementary” to the originally stated purpose, the law will not require fresh consent, since doing so would likely make the activities too complex, onerous and time-consuming.

The Canadian legal framework has failed to adequately address how AI systems collect and utilize data, or the modern cybersecurity threats facing organizations operating in this space. Compliance with applicable privacy laws continues to be one of the most significant challenges facing developers and customers of AI systems, because AI systems are, by their very nature, not designed to address the requirements that arise under privacy laws. Chief among these is the requirement that the individuals to whom the data relates provide meaningful consent prior to the collection, use and disclosure of their personal information. AI systems rely on data inputs to learn, make decisions and create outputs, and given the sheer volume of data they may utilize, ensuring that all relevant individuals have consented to the purposes for which the AI system uses their data can be challenging. Tracking those consents poses an even greater challenge, particularly because, for consent to be meaningful, the individual must have been made aware, at the time of collection, of the purposes for which the data was collected.
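
To illustrate the tracking problem, here is a minimal sketch of purpose-bound consent records. It is our illustration only, not a scheme drawn from PIPEDA or the OPC report; the class names and purpose labels (“service_delivery”, “model_training”) are hypothetical.

```python
# Hypothetical sketch: bind each individual's record to the purposes
# disclosed at collection time, and check that binding again before the
# data is used in an AI training run. Names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    subject_id: str
    purposes: frozenset        # purposes disclosed at collection time
    collected_at: datetime


class ConsentRegistry:
    """Tracks which purposes each individual consented to, and when."""

    def __init__(self) -> None:
        self._records: dict = {}

    def record_consent(self, subject_id: str, purposes: set) -> None:
        self._records[subject_id] = ConsentRecord(
            subject_id, frozenset(purposes), datetime.now(timezone.utc)
        )

    def permits(self, subject_id: str, purpose: str) -> bool:
        record = self._records.get(subject_id)
        return record is not None and purpose in record.purposes


registry = ConsentRegistry()
registry.record_consent("user-123", {"service_delivery", "model_training"})
registry.record_consent("user-456", {"service_delivery"})

# Only records whose collection-time consent covers training are used.
training_set = [s for s in ("user-123", "user-456")
                if registry.permits(s, "model_training")]
print(training_set)  # ['user-123']
```

The point of the sketch is simply that consent must be captured per purpose at collection time and consulted again at use time; a pipeline that cannot answer “did this individual consent to this purpose?” for every record cannot demonstrate meaningful consent.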

Automated Decision Making

In the words of the OPC, “The algorithms used to reach a decision concerning an individual can be a black box, leaving an individual in the dark as to how the decision was determined. […] Automated decisions run the risk of being unfair, biased, and discriminatory.”

The privacy issues arising from AI systems are compounded as those systems become more autonomous. Even today, there are countless examples of AI systems that autonomously select and collect data from their surroundings, including personal information of individuals. This is easily illustrated in the context of autonomous vehicles, which utilize numerous cameras and sensors to interact with their surroundings and make decisions. Operating in the public sphere, these vehicles collect and process significant amounts of personal information from individuals who have not provided consent for this collection or use. This concern is magnified as AI systems become more prevalent.

The OPC also notes the importance of a “Right to explanation” and a “Right to contest,” and that developers must be able to demonstrate accountability when it comes to automated decisions.
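
Demonstrating that accountability usually starts with being able to reconstruct each automated decision after the fact. The sketch below is one hypothetical way to keep such records; the log_decision function, its field names, and the credit-scoring scenario are our own illustrative assumptions, not requirements taken from the OPC report.

```python
# Hypothetical sketch: record every automated decision with its inputs,
# model version, and stated reasons, so the decision can later be
# explained to the individual, reviewed, or contested.
import json
from datetime import datetime, timezone


def log_decision(subject_id: str, model_version: str,
                 inputs: dict, outcome: str, reasons: list) -> str:
    """Serialize an auditable record of one automated decision."""
    return json.dumps({
        "subject_id": subject_id,
        "model_version": model_version,   # which model actually decided
        "inputs": inputs,                 # the data the decision relied on
        "outcome": outcome,
        "reasons": reasons,               # human-readable factors
        "decided_at": datetime.now(timezone.utc).isoformat(),
    })


record = log_decision(
    subject_id="applicant-42",
    model_version="credit-model-1.3",
    inputs={"income_band": "C", "tenure_years": 4},
    outcome="declined",
    reasons=["income band below threshold", "short credit history"],
)
print(record)
```

A log of this kind does not by itself make a decision fair or unbiased, but without one an organization can offer neither an explanation nor a meaningful avenue to contest.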

Privacy by Design: Designing AI for Privacy and Human Rights

As AI systems are upgraded and improved, the purposes for which they utilize data may also change. In addition, developers may wish to train a new AI system using data that was previously collected for another purpose. Yet, without meaningful consent from all applicable individuals, these developers may be prohibited from using data containing personal information without again seeking consent from every individual to whom the information relates.

Therefore, developers and customers of AI systems must have a good understanding of the restrictions placed on the data they intend to utilize, including the developer’s or customer’s obligations under applicable privacy laws. Developers must also ensure that AI systems are built to address privacy concerns and that privacy is integrated into the operation of the AI system from the beginning. Privacy cannot be an afterthought or a patch; it must be integral to the offering. This concept is referred to as “Privacy by Design,” and developers of AI systems must ensure that they have built these protections and procedures into the AI systems they develop. This is perhaps the most important piece, and the most difficult to implement, when designing processes and systems. Data minimization, privacy as the default, and transparency all form part of the seven “foundational principles” of Privacy by Design.
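
Two of those principles, privacy as the default and data minimization, can be expressed directly in an ingestion layer. The following is a minimal sketch under assumptions of our own making; the IntakePolicy class and the field names are hypothetical, not drawn from PIPEDA, Bill C-11 or the OPC report.

```python
# Hypothetical sketch of Privacy by Design at intake: only fields the
# stated purpose requires survive ingestion unless the policy is
# explicitly broadened (privacy as the default setting).
from dataclasses import dataclass


@dataclass(frozen=True)
class IntakePolicy:
    # Default allow-list: coarse, purpose-necessary fields only.
    allowed_fields: frozenset = frozenset({"postal_prefix", "age_band"})


def minimize(raw_record: dict, policy: IntakePolicy = IntakePolicy()) -> dict:
    """Drop every field the policy does not explicitly allow."""
    return {k: v for k, v in raw_record.items() if k in policy.allowed_fields}


raw = {
    "name": "Jane Doe",             # direct identifier: dropped by default
    "email": "jane@example.com",    # direct identifier: dropped by default
    "postal_prefix": "M5V",         # coarse location: retained
    "age_band": "30-39",            # banded age: retained
}
print(minimize(raw))  # {'postal_prefix': 'M5V', 'age_band': '30-39'}
```

The design choice worth noting is the allow-list: anything not affirmatively justified for the stated purpose never enters the system, rather than being collected first and filtered later.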

The Commissioner has expanded on this concept, highlighting that the design of AI should treat human rights as inexorably connected to privacy in this context. Any new law must recognize privacy as a human right. The failure of the proposed Consumer Privacy Protection Act to place privacy in the context of a human right is cited as a key deficiency of Bill C-11.

Conclusion

Prohibiting AI systems from collecting and using data that may include personal information is impractical and would render certain AI systems inoperable. If AI systems are developed relying on only limited data, whether in diversity or volume, there is a significant risk that their outputs will be biased as a result of the limited sample. A balance must be struck between protecting individual privacy, fostering innovation, and ensuring that AI systems can provide unbiased and meaningful results. As we are seeing with Bill C-11 and the reactions of the OPC, tensions remain between these objectives.

As part of a privacy by design approach, developers of AI systems should be looking to technological means to limit the personal information they collect. Anonymizing data at the time of collection can reduce or eliminate the risk of impacting privacy rights and of running afoul of privacy and data security laws. Developers should also ensure that the AI system is restricted to collecting only the information it requires, which will reduce the risk of utilizing an individual’s personal information without the appropriate consent or other legal basis.
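
As a rough illustration of reducing identifiability at the point of collection, consider the hypothetical sketch below. One honest caveat: salted hashing is pseudonymization rather than true anonymization, so this only illustrates the “collect and keep less” idea; the function names and fields are our own assumptions.

```python
# Hypothetical sketch: replace direct identifiers with one-way tokens
# and coarsen quasi-identifiers before anything is persisted.
import hashlib
import os

SALT = os.urandom(16)  # in practice, a carefully managed secret


def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a one-way token before storage."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]


def ingest(raw: dict) -> dict:
    """Persist only a token and coarse, task-relevant attributes."""
    lat, lon = (float(x) for x in raw["gps"].split(","))
    return {
        "token": pseudonymize(raw["device_id"]),
        # Rounding to one decimal place keeps only ~10 km precision,
        # rather than a pinpoint location.
        "coarse_location": (round(lat, 1), round(lon, 1)),
    }


sample = {"device_id": "AA:BB:CC:DD:EE:FF", "gps": "43.6532,-79.3832"}
print(ingest(sample))  # e.g. {'token': '...', 'coarse_location': (43.7, -79.4)}
```

The precise identifiers never reach storage, so a later breach or an unanticipated secondary use exposes far less personal information than collecting first and scrubbing later.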

Importantly, developers must closely monitor how the Consumer Privacy Protection Act makes its way through the legislative process, along with other developments in Ontario, British Columbia, and Quebec, to ensure their practices are aligned and risks are understood.

Our privacy and technology experts are happy to have a discussion about these topics and how you can prepare for these changes.
