
Standing Committee on Industry and Technology


NUMBER 096 | 1st SESSION | 44th PARLIAMENT

EVIDENCE

Thursday, November 9, 2023

[Recorded by Electronic Apparatus]

(1535)

[English]

     Good afternoon, everyone.

[Translation]

    I call this meeting to order.
    Welcome to meeting number 96 of the House of Commons Standing Committee on Industry and Technology.
    Today’s meeting is taking place in a hybrid format, pursuant to the Standing Orders.
    Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C‑27, an act to enact the consumer privacy protection act, the personal information and data protection tribunal act and the artificial intelligence and data act and to make consequential and related amendments to other acts.
    I’d like to welcome our witnesses today: Alexander Max Jarvie, partner, Davies Ward Phillips & Vineberg LLP; François Joli-Coeur, partner, Borden Ladner Gervais; Scott Lamb, partner, Clark Wilson LLP; Carole Piovesan, co‑founder and partner, INQ Law; and David Young, principal, privacy and regulatory counsel, David Young Law.

[English]

    Welcome, everyone, and thank you again for joining us this afternoon.
    Without further ado, I yield the floor to Mr. Jarvie for five minutes.
    Good afternoon, and thank you for the invitation to share my thoughts on Bill C-27 with the committee.
    I am a partner at Davies Ward Phillips & Vineberg LLP, practising as a lawyer in the firm’s technology group. I am appearing today in a personal capacity, presenting my own views.
    Recent years have seen significant technological developments related to machine learning. In part, these have come to pass because of another relatively recent development, namely, the vast amount of information, including personal information, that is now generated by our activities and circulates in our economy and our society. Together, these developments hold great promise for future innovation, but they also carry significant risks, such as risks to privacy, risks of bias or discrimination and risks relating to other harms.
    I am, therefore, encouraged that a bill has been introduced that seeks to address these risks while supporting innovation. I will begin by making some remarks on the proposed consumer privacy protection act, CPPA, and by suggesting changes to certain provisions of the bill that could better support innovation involving machine learning while introducing important guardrails. I will then share some observations in relation to the proposed artificial intelligence and data act, AIDA.
    In my view, there could be improvements made to the CPPA consent exception framework that would facilitate personal information exchange among, and collection by, private sector actors that wish to undertake socially beneficial projects, study or research. In particular, proposed sections 35, 39 and, in part, 51 could be combined and generalized so as to permit private sector actors to disclose and exchange personal information or to collect information from the public Internet for those purposes, study or research, provided that certain conditions are fulfilled.
    Those could include conducting a privacy impact assessment, entering into an agreement containing relevant contractual assurances where applicable, and providing notice to the commissioner prior to the disclosure or collection. Noting that de-identified data is sufficient for the training of machine learning models in many cases and noting that de-identification is a requirement in proposed section 39, as currently drafted, but not in proposed section 35, I would note only that whether the information should be de-identified in a given case should be a factor in the proposed privacy impact assessment.
    Suitably crafted, these changes could provide material but appropriately circumscribed support for section 21 of the proposed CPPA, which permits the use of personal information that has been de-identified for internal research and analysis purposes, and for proposed subsection 18(3), which permits use of personal information in its native form for legitimate interests, provided that an assessment has been undertaken.
    With respect to the AIDA, I begin with the definition of the term “artificial intelligence system”. This definition is of fundamental importance, given that the entire scope of the act depends upon it. The current definition risks being overbroad. The minister’s letter proposes to provide better interoperability by introducing a definition that seeks to align with a definition used by the OECD, but the text provided differs from the OECD formulation and introduces the word “inference” in a suboptimal way. We also do not have the final wording.
    There are also different definitions to consider in other instruments, including the European Union’s proposed AI act, the recent U.S. President’s executive order, and the NIST AI risk management framework, among others. Some of these do converge on the OECD’s definition, but in each case the wording differs.
    I would recommend to the committee—or, at least, I would urge the committee—when it begins clause-by-clause review, to make a survey of existing definitions to determine the state of the art and to ensure that the definition ultimately chosen indeed maximizes interoperability yet also remains extensible to account for new techniques or technologies.
    I would also recommend that the purpose clause of the AIDA, as well as other relevant provisions, be amended to include harms to groups and communities, as these may also be adversely affected by the decisions, recommendations or predictions of AI systems.
    Finally, there should be an independent artificial intelligence and data commissioner. The companion document to the AIDA notes that the model whereby the regulator would be a departmental official was chosen in consideration of a number of factors, including the objectives of the regulatory scheme. However, since the scope of what is being left to regulation is so extensive, the creation of an independent regulator to administer and enforce the AIDA will counterbalance skepticism concerning the relative lack of parliamentary oversight and thereby help to instill trust in the overall regulatory scheme.
    I will submit a brief for consideration by the committee, elaborating on the matters raised here. Machine learning technologies are poised to play a significant role in future innovation. Through legislation, we can achieve meaningful support for this potential while providing effective protections for individuals, groups and society.
(1540)
     Thank you for your attention. I welcome your questions.
    Thank you very much.

[Translation]

    We will now hear from Mr. Joli‑Coeur.
    Thank you for inviting me. I'm pleased to have the opportunity to share my thoughts on Bill C‑27 with the committee.
    I am a partner at Borden Ladner Gervais, BLG, and a member of the privacy practice group. I am also the national lead of BLG's artificial intelligence, AI, group. I am appearing today as an individual.
    My remarks will focus on the AI provisions in the bill, in both the artificial intelligence and data act, or AIDA, and the consumer privacy protection act, or CPPA.
    To start, I want to say how important it is to modernize the federal privacy regime, something Quebec, the European Union and some of the world's largest economies have done recently.
    I commend the government's commitment to AI legislation. In spite of the criticisms against AIDA, the bill has the advantage of putting forward a flexible approach. Nevertheless, some key concepts should be provided for in the act, instead of in the regulations. Furthermore, it is imperative that the government consult extensively on the regulations that flow from AIDA.
    The first point I want to make has to do with the anonymized data in the CPPA. The use of anonymized personal information is an important building block for AI models, and excluding anonymized information from coverage by the act will allow Canadian businesses to keep innovating.
    The definition of anonymization should, however, be more flexible and include a reasonableness standard, as other individuals and groups have recommended. That would bring the definition in line with those in other national and international laws, including recent amendments to Quebec's regime.
    The CPPA should explicitly state that organizations can use an individual's personal information without their consent to anonymize the information, as is the case for de‑identified information.
    Lastly, AIDA includes references to anonymized data, but the term isn’t defined in that act. The two acts should be consistent. AIDA, for instance, could refer to the definition of “anonymize” set out in the CPPA.
    The second point I want to make concerns another concept in the CPPA, automated decisions. Like most modern privacy laws, the proposed act includes provisions on automated decisions. On request by an individual, organizations would be required to provide an explanation of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have a significant impact on them.
    An automated decision system is defined as any technology that assists or replaces the judgment of human decision-makers. The definition should be amended to capture only systems with no human intervention at all. That would save organizations the heavy burden of having to identify all of their decision support systems and introduce processes to explain how those systems work, even when the final decision is made by a human. Such a change would increase the act's interoperability with Quebec's regime and the European Union's, which is based on the general data protection regulation.
    Turning to AIDA, I want to draw your attention to high-impact systems. The act should include a definition of those systems. Since most of the obligations set out in the act flow from that designation, it's not appropriate for the term to be wholly defined in the regulations. The definition should include a contextual factor, specifically, the risk of harm caused by the system. For example, it could take into account whether the system posed a risk of harm to health and safety or a risk of an adverse impact on fundamental rights. That factor could be combined with the classes of systems that would be considered high-impact systems, as set out in the act.
    Including a list of classes of systems that would de facto be considered high-impact systems, as the minister proposed in his letter, could capture too many systems, including those that pose moderate risk.
    My last point concerns general purpose AI systems. In his letter, the minister proposed specific obligations for generative AI and other such systems. While generative AI has become wildly popular in the past year, regulating a specific type of AI system could render the act obsolete sooner.
(1545)
    Not all general purpose AI systems pose the same degree of risk, so it would be more appropriate to regulate them as high-impact systems when they meet the criteria to be designated as such.
    Thank you very much. I would be happy to answer any questions you have.
     Thank you, Mr. Joli‑Coeur.
    We will now hear from Mr. Lamb.

[English]

     Thank you, Mr. Chair and members of the committee, for having me here today on the important matter of reform of our privacy legislation and Bill C-27.
    I'm a partner at the law firm of Clark Wilson in Vancouver, and I'm called to the bar in Ontario and British Columbia. I've been practising in the area of privacy law since approximately 2000. I've advised private sector organizations in a variety of businesses as well as public bodies such as universities. I've also acted as legal counsel before the Information and Privacy Commissioner for British Columbia in investigations, inquiries and judicial reviews.
    With the limited amount of time we have, I'll be confining my remarks to the proposed consumer privacy protection act, specifically the legitimate interest exception, anonymization and de-identification, and the separate review tribunal. Hopefully, I'll have a bit of time to get into the artificial intelligence and data act, AIDA, with respect to high-impact systems.
    I will of course be happy to discuss other areas of Bill C-27 and questions you may have. Also, subsequent to my presentation, I'll provide a detailed brief on the areas discussed today.
    Starting with the proposed consumer privacy protection act and the legitimate interest exception, it's important to point out that arguably the leading privacy law jurisdiction, the EU with its GDPR, provides for a stand-alone right of an organization to collect, use and disclose personal information if it has a legitimate interest. Accordingly, if Canada is to have an exception to consent based on an organization's legitimate interest, it's important to look, in detail, at how that will operate and the implications of that exception.
    First, to reiterate, the draft provisions in proposed subsection 18(3) are an exception to the consent requirements and not a stand-alone right for an organization as set out in the GDPR.
    What's the significance of this? From a purely statutory interpretation point of view, a stand-alone right generally is not interpreted by the courts as restrictively as an exception to an obligation. In short, the legitimate interest exception is very likely to be narrower in scope than the GDPR's legitimate interest provisions.
    A stand-alone right may be a means to circumvent or generally undercut the consent structure of our privacy legislation, which again is at the heart of our legislation and is a part of the inculcated privacy protection culture in Canada. Maintaining the legitimate interest provisions as an exception to the consent structure, on balance, is preferable to a stand-alone right.
    Second, the exception applies only to the collection or use of personal information and is not permitted for the disclosure of personal information to third parties. In my view, prohibiting the exception from applying to disclosures that are in the legitimate interest of an organization doesn't make sense. While I'm in favour, in the first instance, of an exception over a stand-alone right, I think you have to expand it to cover disclosure as well.
    The provisions in proposed subsection 18(3) expressly require that the legitimate interest of an organization “outweighs any potential adverse effect”. This is effectively a high standard of protection. Even limited to only collection and use, this exception is of significant usefulness for organizations. For example, a business may have a legitimate interest in the collection and use of personal information to measure and improve the use of its services or to develop a product. However, proposed subsection 18(3) prevents that organization from actually disclosing that personal information to a business partner or third party vendor to give effect to its legitimate purpose.
    Finally, the point is that other jurisdictions allow the legitimate interest of an organization to apply to the disclosure of personal information as well as to its collection and use—specifically, not only the EU GDPR but also the Singapore law. I note that when you look at those pieces of legislation standing side by side, Singapore also frames it as an exception, and Singapore has some case law that has moved the issue forward.
     I think it would give a lot of comfort to this committee if it were to examine some of the case law from Singapore, as well as some of the more current case law from the GDPR regime. It does give some sense of what this means as a legitimate interest, which I can appreciate at first instance may seem rather vague and could be seen as a giant loophole. However, my submission is that's not the case.
(1550)
     The next item I'd like to talk about is anonymization and de-identification. Clarity on this issue has been sought for some time, and it's reassuring that the change from Bill C-11 to Bill C-27 introduced the concept of anonymization as separate from de-identification. However, technologically and practically speaking, you're never going to reach the standard set out in the definition of anonymization, so why put it in the act in the first place? There's been some commentary on this, and I am generally in support of the recommendation to insert into that definition a standard of whether it is reasonable to expect, in the circumstances, that an individual can be identified after the de-identification process. If so, the data is not anonymized and is still caught by the legislation and the specific requirements for the use and disclosure of such data.
    In terms of use and disclosure, I also note that proposed section 21 confines the use of de-identified information to internal use by the organization. The utility of this provision could be markedly limited by that restriction, again compared with what our trading partners have, because modern research and development relies on data pooling and extensive partnerships in the use of data. If the provision is strictly for internal purposes, we could lose this important tool in a modern technological economy. Therefore, I recommend that the restriction be deleted as well.
    Also, proposed section 39 would limit the disclosure of de-identified personal information to, effectively, public sector organizations—this is very restrictive—and consideration should be given to disclosing to private sector organizations that are really fundamentally important to our modern economy and research and development.
    In terms of the separate review tribunal, I know that the Privacy Commissioner has been hostile to this and I recognize that the Privacy Commissioner performs an invaluable role in investigating and pursuing compliance with our privacy legislation. However, given the enormous administrative monetary penalties that may be awarded against organizations—the higher of 3% of gross annual revenue or $10 million—for breaches, clear appeal rights to an expert tribunal and review of penalties are required to ensure due process and natural justice standards and, frankly, to develop the law in this area.
    It is also noteworthy that judicial oversight of the tribunal's decisions would be according to the Supreme Court of Canada's test in Vavilov, which limits review to the reasonableness standard—a very deferential and limited form of review. It's been suggested that you try to keep these things from going on forever and ever; with judicial review, they would be limited. I know there was one suggestion that the ability to seek judicial review should jump right from the tribunal to the Federal Court of Appeal. I think that's fine if you want to expedite this and meet that concern. I think that's probably right, but I do like the structure of a separate review tribunal.
    Finally, on artificial intelligence and the high-impact systems, I think the focus of that, in terms of identifying the concept of high-impact systems, is sound in structure and potentially generally aligned with our trade partners in the EU. However, the concept cannot be left to further development and definition in regulations. This concept needs extensive consultation and parliamentary review.
    It is recommended that the government produce a functional analysis of a high-impact system from qualitative and quantitative impact, risk assessment, transparency and safeguards perspectives.
    It's further recommended that distinctions be made between artificial intelligence research and development for research purposes only and artificial intelligence that is implemented into the public domain for commercial or other purposes. What I would not want to see come out of our AIDA legislation is that we have some sort of brake on research in artificial intelligence.
(1555)
     We are vulnerable, and our allies are vulnerable, to other international actors that are at the forefront of research in artificial intelligence. We should not have anything in our legislation to brake that. However, we should protect the public when artificial intelligence products are rolled out to the public domain, and ensure that we are protected. I think that's a distinction that is missing in the discussion, and it's very important that we advance it.
    Those are my submissions.
    Thank you.
    Thank you, Mr. Lamb.

[Translation]

    We now go to Ms. Piovesan.

[English]

     Thank you, Mr. Chair and members of the committee, for the opportunity to speak to Bill C-27.
    I am the managing partner of INQ Law, where my practice focuses on data- and AI-related laws. I’m here in my personal capacity and the views presented are my own.
    Every day, we are hearing new stories about the promise and perils of artificial intelligence. AI systems are complex computer programs that process large amounts of data, including large amounts of personal information, for training and output purposes. Those outputs can be very valuable.
    There is a possibility that AI can help cure diseases, improve agricultural yields or even help us become more productive, so we can each play to our best talents. That promise is very real, but as you've already heard on this panel, it does not come without risk. Complex as these systems are, they are not perfect and they are not neutral. They are being developed at such a speed that those on the front lines of development are some of the loudest voices calling for some regulation.
    I appreciate that this committee has heard quite a bit of testimony over the last several weeks. While the testimonies you've heard have certainly run the gamut of opinions, there seem to be at least two points of consistency.
    The first is that Canada’s federal private sector privacy law should be updated to reflect the increasing demand for personal information and changes to how that information is collected and processed for commercial purposes. In short, it’s time to modernize PIPEDA.
    Second, our laws governing data and AI should strive for interoperability or harmonization across key jurisdictions. Harmonization helps Canadians understand and know how to assert their rights, and it helps Canadian organizations compete more effectively within the global economy.
    The committee has also heard opposing views about Bill C-27. The remainder of my submissions will focus on five main points to do with parts 1 and 3 of the bill.
    Part 1, the proposed consumer privacy protection act, or CPPA, introduces some important changes to the governance of personal information in Canada. My submissions focus on the legitimate interest consent exception and the definition of anonymized data, much of which you've already heard about on this panel.
    First, the new exceptions to consent in the bill are welcome. Not only do they provide flexibility for organizations to use personal data to advance legitimate and beneficial activities, but they also align Canada’s law more closely with those of some of our key allies and, within Canada, with Quebec’s Law 25 more specifically. Critically, they do so in a manner that is reasonably measured. I agree with earlier testimony you've heard in this committee that the application of the legitimate interest exception in the CPPA should align more closely with other notable privacy laws, namely Europe's GDPR.
    Second, anonymized data can be essential for research, development and innovation purposes. I support the recommendations put to this committee by the Canadian Anonymization Network with respect to the drafting of the definition of “anonymize”. I also agree with Mr. Lamb's submissions on inserting existing notions of reasonable foreseeability or a serious risk of re-identification.
    As for part 3 of the bill, the proposed artificial intelligence and data act, first, I support the flexible approach adopted in part 3. I caution and recognize that the current draft contains some major holes and that there is a need to plug those holes as soon as possible. As well, any future regulation would need to be subject to considered consultation, as contemplated in the companion document to AIDA.
    Our understanding of how to effectively promote the promise of AI and prevent harm associated with its use is evolving with the technology itself. Meaningful regulation will need to benefit from consultation with broad stakeholders, including, importantly, the AI community.
(1600)
     Second, Minister Champagne, in the letter he submitted to this committee, proposes to amend AIDA to define “high impact” by reference to classes of systems. The definition of high impact is the most striking omission in the current draft bill.
    The use of a classification approach aligns with the EU's draft artificial intelligence act and supports a risk-based approach to AI governance, which I support. When the definition is ultimately incorporated into the draft, it should parallel the language in the companion document and provide criteria on what “high impact” means, with reference to the classifications as illustrated.
    Finally, I support the proposed amendments to align AIDA more closely with OECD guidance on responsible AI—namely, the definition in proposed section 2 of AIDA, which the National Institute of Standards and Technology in the United States has also adopted in its AI risk management framework.
    To the extent that Canada can harmonize with other key jurisdictions where it makes sense for us to do so, we should.
    I look forward to the committee's questions, as well as to the comments from my fellow witnesses.
(1605)

[Translation]

    Thank you.

[English]

    Finally, Mr. Young, the floor is yours.
    Thank you for the invitation to appear before this committee for its important review of Bill C-27.
    This bill includes significant proposed amendments to Canada's privacy laws at the same time as it introduces a proposed oversight regime for artificial intelligence. The AIDA component warrants focused study by the committee. Certainly, as you've heard from my co-witnesses, there's a lot to consider there. However, I will restrict my comments to the privacy components.
    I am a privacy and regulatory lawyer. My practice over the past 25 years has included advising private sector organizations—both for-profit and non-profit—as well as government and Crown agencies. I address all relevant areas, including individual privacy, employee privacy and health privacy.
    In these introductory comments, I will focus on one impactful area of the bill, which you have heard some comments about already: de-identified and anonymized information. I'm hoping to provide some clarification as well as my thoughts on how the proposed provisions can be improved.
    The proposed treatment of such information in Bill C-27 is critically important. Firstly, it clarifies a category of information that, while not being fully identifiable and therefore available for specific uses without consent, is still deemed appropriate for protection under the law. Secondly, it provides for a category of anonymized information that can be used more broadly for research purposes, innovation and policy development.
    The first category, de-identified information, is governed by all of the law's privacy protections, subject to certain specific exceptions. Conversely, the second category, anonymized information, is stated not to be subject to the law. However, as I will mention, this stipulation—that it's not subject to the law—is not the end of the story. The law will and should continue to provide oversight over anonymized information. This is a point that is sometimes missed. I certainly heard it raised as a concern in previous comments. I think it's very important to understand that, however we define the term—and we've heard a number of comments here—anonymized information will continue to be subject to oversight under the law.
    I have a number of recommendations for improvement.
     First, with respect to de-identified information, the definition should be amended to stipulate appropriate processes to ensure no person can be directly identified from the information. Additionally, proposed section 74 of the CPPA, which addresses technical and administrative protections, should be amended to include, as an additional criterion, the risk of re-identification.
    Secondly, the definition of anonymized information should be amended to make more explicit the processes required for anonymization. With its Law 25, Quebec got it right in this area. I recommend aligning with Quebec's approach, which stipulates that the generally accepted best practices for anonymization should be those set out in appropriate regulations. Such regulations should include transparency, risks of re-identification, accountability and guardrails for downstream uses. The Quebec law also recognizes that it is not possible, from a practical perspective, to say that anonymized information cannot be re-identified. The CPPA provision should reflect the same approach. Additionally, there should be a requirement for the organization performing any anonymization process to conduct a re-identification risk analysis. This is a proposed requirement in Quebec's regulations governing anonymized information.
    Thirdly, the applicability of the law's protections for de-identified information is a bit of a complicated area. I can certainly go into it in more detail during questions, if you like. Currently, the CPPA provides that de-identified information is personal information, except for certain provisions, where it will not be considered personal information.
(1610)
     This is the wrong approach. Instead, as recommended by the OPC, a simple statement should be made that all de-identified personal information remains personal information. Also, the list of exceptions in the bill is confusing. To make it simpler and clearer, many of the exceptions should be omitted entirely—they are not needed. I can explain that in more detail if you wish.
    My final comment is to address, as I mentioned a couple of minutes ago, a concern voiced by some stakeholders that the statute's anonymization regime should be made expressly subject to oversight by the Privacy Commissioner. I know you've heard that from at least one witness and maybe others here. In my view, such a provision is not required. The commissioner will have oversight over an organization's compliance with the anonymization rules, whatever they are. Also, and very importantly, if anonymized information does become identifiable—and that's this whole risk of re-identification—all of the statute's protective provisions will again apply with full vigour, and the commissioner will have oversight. Actually, there are two routes whereby the commissioner will or may continue to have oversight.
    In sum, my recommendations are as follows.
    First, the definition of “de-identified” information should be made more rigorous, including addressing the risk of re-identification. Secondly, the definition of anonymized information should be amended to make more explicit the processes required to achieve anonymization, and these should be set out in regulations, including a requirement for risk assessment. Finally, the regime for applicability of the CPPA's protections for de-identified information should be made clearer, in particular by stating that all such information remains personal information.
    I will be happy to elaborate and answer any questions you have regarding these comments or any other provisions of the bill.
    Thank you very much, Mr. Young.
    To start the discussion, I will turn it over to Mr. Perkins for about six minutes.
    Thank you very much, Mr. Chair.
    Thank you, witnesses—those were great presentations. This has been a fascinating bill and discussion so far, so thank you very much. There were lots of good, new approaches, too, and reinforcement of others we've heard....
    My first question will be for Mr. Lamb. You won't be surprised to learn, if you've been following, that my belief is that this bill actually puts the interests of large corporations ahead of individuals' right to privacy.
    Starting in proposed section 5, even if it's changed to “fundamental right”, it still has the word “and”, which puts it on par with an organization's right to use the data.
    In my view, “fundamental right” is further watered down by proposed subsection 15(7), which allows implied consent, which I think is a thing that should have gone out with the dodo bird. I don't think there should ever be implied consent.
    Then there's proposed subsection 18(3), which you referenced, which says it has restrictions. When I read it, though, it says I can use somebody's data “without their knowledge” even if it harms them. You have to understand that I'm a marketer—I've been elected for only two years. I liked to push the envelope on data for the large corporations I did marketing for. I know a bit about how data is used in the retail space.
    I'd like to ask you if you really believe that putting a fundamental right and purpose on par with everything else doesn't still skew the bill totally towards large corporate exceptions in this bill to allow businesses basically to do the things that marketers want to do, which is use everything as an exception to use individual data to sell more product.
(1615)
    I understand your concern, and my sympathies are with the interpretation of this legislation as consumer protection legislation. I think the status of the current law to date is that that's what it is, so courts and potentially a tribunal will look at the facts of any case from that perspective. I think that should give you some reassurance. If there's some need to be more expressive about that and to bolster that, I would be in sympathy with that and your concerns.
    With proposed subsection 18(3), my suggestion was to start looking at some of the case law that's coming out of the GDPR, that jurisdiction, and Singapore. You get an idea of how that's to work.
    One of the potential things you can investigate is that, if you're going to get rid of implied consent, you're going to have to have a very robust “legitimate interest” exception and—
     Okay, I have only a little time. Could you table or share with the committee later some of that case law that you think would be helpful to us? That would be great.
    Mr. Scott Lamb: Sure, I'd be happy to do that.
    Mr. Rick Perkins: You're the first one I'm aware of, actually, who's raised section 21 of the proposed act, so thank you for doing that.
    When I read it, it's yet another clause that says that an organization can use somebody's information without their knowledge or consent for internal research.
    From a positive perspective, I used data for internal research all the time, and it says it has to be de-identified first. I often was looking at individual customer data through loyalty programs, or if I actually had a coalition program like Air Miles, I had a lot of collected data on individuals to do that analysis.
    Would this inhibit a company from doing what they've done in the past—a retailer, for example—in analyzing coalition loyalty rewards programs or their own in-house loyalty programs?
    Potentially it could be interpreted that way, and that's why I think there's an added reason to delete it.
    Thank you.
    I think Ms. Piovesan and Mr. Joli-Coeur both raised the issue of high-impact systems. It's something I've been struggling with too, and I know we'll probably get to more of it when we get to a deeper part of the AIDA study.
    I'm struggling with what a high-impact system is. Why is only the high-impact system being covered by legislation and not other levels, and what are those other levels, Ms. Piovesan?
    On high-impact systems, if you look at AI law in different parts of the world, you see the governance applies to systems that are likely to cause a significant risk of harm.
    There are lots of AI systems that we are using every day, like our GPS, that do not have a significant risk of harm to an individual, thus they should not be subject to the kind of governance oversight that we're talking about in AIDA.
    The importance of “high-impact” is that it is a trigger to determine when governance is required, as stipulated in the law.
    What would be high risk?
    High risk is defined in the EU as a criticality of risk or, in the letter from Minister Champagne, when you're touching on elements of harm or bias—unjustified, unlawful bias—that can harm an individual or property at scale. In the case of the amendment, you have a number of different classes of systems or classes of use cases.
    I want to be clear. We're not talking about a high-impact system; we are talking about a system used in a high-impact context.
    That's a nice distinction.
    Mr. Joli-Coeur.
    I agree with Carole's comments, essentially.
    Okay.
    Mr. Young, I have a quick question. I believe you mentioned that there were parts of proposed section 15 on consent that actually could or should be removed. Is that correct?
(1620)
    No. It is very confusing. There's a provision, I remember, in a proposed section of the bill, that says basically that de-identified information is not personal information except for these sections, and it's a laundry list of about 20. That's what I'm talking about.
    In the context of de-identified....
    That's right. Half of them in their language use the term “de-identified information”. You don't have to turn around and say it isn't “not personal information”. It just reads that way. That's really what I'm talking about.
    Okay, I have another question, if I can.
    I'll be generous, Mr. Perkins. You have one last question.
    Mr. Young, you also said all de-identified...needs to remain personal. Can you explain why that's important and it's not done in the act?
    It's essentially the same point. I was just leading on from the point I just made. It goes back to Bill C-11, which really tried to suck and blow at the same time. It defined the term “de-identified information” in a way that, if you read it, inherently put such information outside the statute, because it's not personal information—an individual cannot reasonably be identified from it. However, the statute went on to have several provisions, some of which are still here, saying that certain rules apply to de-identified information. That was crazy.
    I'm sorry, but I lost track. Ask your question again.
    De-identified information needs to remain classified as personal information.
     It's personal information. That's the point. The point is that there are really three levels: personal information, de-identified information and anonymized information. There are different levels of identifiability, let's call it.
    Personal information is fully identifiable. De-identified information, consistent with both the GDPR and the law in Quebec, does not include a direct identifier, but it may be re-identified. It has a risk of impacting individuals. The statute says that it is still personal information.
    However, it has certain specific exceptions that are totally taken out of the view of the statute. The way it reads now is that it continues to be personal information. You don't need to say that. You can just say that it is de-identified information.
    That means that, one, it is personal information and two, it's governed by the statute. That is de-identified information.
    The anonymized level is theoretically outside the application of the statute altogether, but as I mentioned, that isn't the whole story; there are still rules that apply to it.
    Thank you very much.

[Translation]

    Thank you very much.
    Over to you, Mr. Turnbull.

[English]

    This is really great testimony today. Thank you very much. I'm finding this very interesting and I really appreciated your opening remarks. Thank you all for being here.
    Mr. Lamb, I want to go back to something Mr. Perkins was asking you about. I know he was trying to budget his time and you didn't quite finish what you were saying, but I just want to go back to it for a second with regard to your point of view on implied consent.
    I think you were starting to say that if you didn't have implied consent, you'd need to have stronger legitimate interest clauses in there, perhaps. I don't want to put words in your mouth.
    Maybe you could just go back and finish your thought on that and just spell that out for us.
    I think you would have to get to the GDPR standard of a stand-alone right. If you don't have implied consent, you're going to go for a stand-alone right for that legitimate interest.
    Could I just follow up on that and ask if you do agree that implied consent needs to be in the current bill?
    I'm fine with the structure as it is, with legitimate interest as an exception. I think that's a balancing. If you get rid of implied consent, I think you're heading down the path, then, of a stand-alone right.
(1625)
    Ms. Piovesan, I appreciated your comments as well, with respect to the new exceptions to consent. It sounded as though you were very supportive of that approach. You mentioned legitimate and beneficial purposes as being reasonable. I think you said “reasonably measured”. Can you explain what you meant by that a bit more?
    Well, there are safeguards put in place for the use of that exception. There is a test that needs to be met, depending on the exception you're going to apply. Safeguards such as a legitimate interest analysis are detailed in the draft bill, in addition to requirements for a privacy impact assessment or something that looks like it.
    There is an analysis that has to be carried out. In addition, at first instance, you have to meet the threshold test of reasonableness: Is the use or collection within the reasonable expectation of the individual?
    Ultimately, you may have to submit that brief to the Privacy Commissioner if there is a question.
    This is all to say that there are reasonable safeguards put in place to prohibit the flagrant misuse of that consent exception.
    Mr. Young, I saw you nodding your head. Do you agree with that as well?
    I agree with it, yes, absolutely.
    That's good.
    Carole, you made comments as well about the AIDA portion of the bill. I know our focus has been on the privacy and the PIPEDA modernization portion, but I want to ask you about that, since we have you here today.
    I think what you were saying in your opening remarks was that the high rewards or high benefits of these AI systems also come with risks, and I take the point very well that risk and reward often go together.
    Can you speak to the risk-based approach and describe that a little more? You mentioned in your opening remarks how important that is to AI governance.
    Can you explain that a bit more, so that we have it on the record?
     It's consistent with the point I was making earlier. If you look at jurisdictions such as the United States and the EU, the EU has a robust artificial intelligence act that is to be passed, we're told, any day now.
    Look at the Canadian context. The application of a governance framework is triggered when there is a high-risk scenario, meaning that not every single AI system will be subject to the same kind of oversight and rigour as a system that would fall within that high-risk category. That allows for a little more flexibility in the way we manage high-risk use cases. It does put more emphasis on a thoughtful approach to the types of intended purposes these systems are put to.
    That makes a lot of sense to me. Otherwise, you might over-regulate and not get the benefit from some of these systems. I think that's a real risk in terms of this legislation, wouldn't you say?
    I would agree with that.
    Going back to what you were saying about the EU's legislation and work, which seems to be the gold standard that people keep referring to, how would our approach in AIDA align with the AI laws in the EU?
    At its core, it is similar in that it takes a risk-based approach to governing artificial intelligence. Our draft law is much more bare bones than what you see in the EU context. The EU AIA, the artificial intelligence act, is far more prescriptive than what we have in AIDA. There are some distinct differences between our approach and our draft law and theirs.
    At the core, we're looking at a risk-based approach that seeks to govern the data inputs, the models themselves and the outputs of those models throughout the life cycle of the AI system. At its core, that is consistent not only with the EU but with approaches we see in the U.S. as well.
     I'm concerned that AIDA is going to be out of date by the time it's developed, just because of how fast the generative AI space is evolving. You mentioned this in your opening remarks as well, the amount of data being processed, how complex these systems are and how fast they're evolving.
    Does that necessitate a really flexible approach? I think, from my perspective, that our approach started with a container, and then we heard from the minister that there were amendments coming. Is that the right approach to take, from your point of view, given how fast the space is evolving?
    If you look at the draft of AIDA, much of it is an accountability framework. It is a series of assessments, with accountability obligations that apply once that high-impact trigger is reached.
    I agree that a flexible approach is useful in a context in which you have so much changing so quickly, so I support that flexible approach, recognizing, as I said in my opening remarks, that there are some distinct holes in this particular draft.
(1630)
    Thank you, Chair. I think I'm good.

[Translation]

    Thank you, Mr. Turnbull.
    We now go to Mr. Lemire.
    My question is for Mr. Lamb, but the other witnesses can comment if they wish.
    A few weeks ago in Trois‑Rivières, the Bloc Québécois hosted a conference on AI. A number of people expressed concern that the copyright of creators would not be adequately protected under AIDA. That is vitally important in Quebec, where cultural and linguistic preservation are more timely than ever in the face of assimilation and the decline of the French language.
    Mr. Lamb, how could AIDA be specifically amended to better protect the copyright of creators, given the particular importance of preserving Quebec's culture and language?

[English]

    I think that one thing that has been discussed at length—and it's an important discussion—is bias. The legislation and the discussion have moved around issues of bias to ensure cultural protection. In that regard, the legislation should strive to ensure that bias does not prejudice any group, sector or community in our society. Of course, French language rights are very important and should be understood and protected.

[Translation]

    If creators do not have adequate copyright protection, how might cultural and linguistic creativity suffer, especially in Quebec? What measures could be taken to mitigate the potential impact?

[English]

     Are you speaking with respect to the privacy legislation or the artificial intelligence legislation?

[Translation]

    I'm interested in your comments on both. I'm especially interested in the AI angle, but also with respect to data protection.

[English]

    I didn't get the translation for that, unfortunately.
    It's particularly in the context of the artificial intelligence act, but it can be in whatever context you want.
    Again, I think that the issues are issues of bias. I think that's where your concern should be in ensuring that French language rights are protected and that artificial intelligence isn't in any way biasing French as a common language of discourse in our country.

[Translation]

    How could AIDA draw on international best practices to better protect copyright, while encouraging AI innovation?

[English]

    Innovation, as you saw from my remarks, and research and development are fundamental. We have to be very careful as we move forward with this legislation.
    My remarks were to carefully distinguish between research and development and artificial intelligence that's rolled out in a public domain or for consumer purposes. While I agree with my colleague that the benefits are enormous for the public, we also have to be very careful that, in restricting, managing and regulating how that is rolled out to the public, we do not foreclose leading-edge research into artificial intelligence.
    Again, I would emphasize that this is not a pleasant world we live in at times. The stakes for who has the commanding heights of artificial intelligence are extraordinarily high. We should do nothing to restrict that, and we should make sure that our country and our allies are at the front end of that research and development.
    We want to protect our public from the pernicious effects of artificial intelligence.
    You raised some issues of language rights and bias. That's a very important discussion to make sure that we protect our public, maintain our democratic values and ensure that the fundamental issue of privacy is preserved in our country.

[Translation]

    Thank you.
    Do you have anything to add, Ms. Piovesan?
(1635)

[English]

    I have two points to make in response to what you said.
    The first is that generative artificial intelligence, generative AI, is the type of artificial intelligence we see today through ChatGPT and other such tools. It is typically considered a general purpose AI, which the minister's letter proposes, by amendment, to define as a high-impact system, meaning it would fall within the governance structure of AIDA. Arguably, that would serve to protect the rights of authors and content creators, because there would be a necessity to govern the use of those systems.
    My colleague Mr. Joli-Coeur disputed that it should, in fact, be governed separately, and I have sympathy for his position. We are seeing measures that could be put in place to mitigate some of the risks coming up more and more—content provenance and the ability to trace watermarked content in its original form, so we can understand whether that content has ever been manipulated or changed in any way that could be problematic to an author or a content creator once they put their work online.
    In addition, through an entirely separate process, we are undergoing a Copyright Act review, so these types of conversations are very much alive in Canada and in other jurisdictions as well.

[Translation]

    I have just a few seconds left, Mr. Jarvie, so you can have them.

[English]

    I just want to add that this is one of the reasons I think AIDA should address the concerns of groups—the harms that could be inflicted on groups or communities. This is a great example.

[Translation]

    Absolutely.
    I see Mr. Young nodding. Did you have something to add?
    No, not really.

[English]

     I'd just like to make one sort of general comment. My expertise hasn't focused on AIDA. I've read it. I'm familiar with the European legislation. People have mentioned that it has holes and that it's like part of the EU legislation—part of it. It has the framework, but what it doesn't have.... I apologize; I'm not up to date on the minister's letter on this. What the European legislation does is define levels of risk, right down to no risk, right? They have something like four levels. Why don't we have that? It makes sense.
     It's like we're operating with one hand behind our back to build this. If I were going to say...and I'm not coming with any opinions on what to do. I think it's a huge quandary for this committee and Parliament as to what to do with AIDA. There's no question about that. It just strikes me: Why aren't we there? I mean, if you read the EU act, which isn't in force.... It's still in the process. There's quite a bit of process to get it into law, but it's there. It's the act. You read it and you see these levels, and they have responses and levels of oversight and care. We should be doing that.

[Translation]

    Thank you very much.
    I have to say that, as a legislator, I really appreciate today's discussion.
    Thank you.
    Go ahead, Mr. Masse.

[English]

    Thank you, Mr. Chair.
    To continue, I think it was Mr. Young who mentioned Bill C-27 and Quebec's Law 25. Can you give us a little more background as to why it's important to have consistency there?
    Also, potentially, could we inadvertently cause some damage to Quebec with regard to this bill if we don't handle this properly? I'm worried. We're looking at neutrality for Quebec at the very least, I think, as an objective, but I'm also worried about inadvertently damaging their system right now.
    Perhaps you could start us off on that conversation.
    Sure. The level of damage I would address is confusion if we don't align them—confusion for Quebeckers, primarily, if we end up with some different standards. This applies beyond just the categories of de-identified information; anonymized information is the main focus. It just makes sense to align across the country.
     There are two points. I mean, really, I guess I can describe it by “two points”.
     Data is collected primarily by national, cross-Canada organizations that intermingle the data. Are we going to come up with some rules that just somehow hive off Quebec data for these cross-national organizations? That's not going to happen. It's going to be intermingled. Yes, there are possible systems that could do it and put in rules, but it doesn't make sense. I'll rephrase that: organizations will not want to have to come up with a separate category of information for Quebec, different from the rest of the country. It's not exceptional to basically have the same rules apply. I'm talking about anonymized information.
    The other thing is that I have spent a lot of my career advising marketers. I know that they don't like to say, “Well, actually, our higher standard rules apply just to Quebec residents.” How will that happen? It's not going to happen. You're going to have those companies saying, “No, we're going to the highest standard.”
    Thirdly, I will say that I think Law 25 is going to inform privacy standards across the country de facto, and it is informing this committee and what we're doing here. That, hopefully—
(1640)
    At the end of the day, you're saying that if we don't have that alignment, there are two major injurious effects. One will be on the privacy of individuals and what they have to go through, and the other will be on economic consequences from companies not wanting to do business, set up shop or provide products and services in Quebec. Is that basically what—
    I don't buy into that, actually. I heard it at a conference today—that if we don't get these rules right, nobody is going to come to Canada. That isn't going to happen. Marketers know that.
     Yes, I know that. I was just trying to clarify.
    No. I would phrase it this way, Mr. Masse. It will impact companies, because it will be more costly, more complicated or whatever to come up with the compliance mechanisms. That's the biggest impact on organizations' collecting data if we don't align the standards.
    I'll let you finish on this, because I would like other witnesses to talk about this, as well, if they'd like to get in on it.
    I want your opinion on an AI commissioner being independent, almost like an officer of Parliament.
    This would be having an AI commissioner as an independent regulator, similar to the Auditor General, outside the political influence of the minister and able to make rulings. I would also like to invite comments from the other witnesses.
    What are your thoughts on that? I'll also turn it over to the other guests at the table.
    Are you asking me? Okay.
    Yes, the commissioner should be independent, absolutely, and appointed by Parliament.
    Is there anybody else who would like to weigh in on this?
    Sure, I will. This was part of my opening statement.
    It's actually imperative that we have some independence from the ministry for the artificial intelligence and data commissioner. A parliamentary officer sounds like a good way to effect this, as long as it's at arm's length in some way, so that there is someone outside the ministry who is going to be looking after all these regulations and actually calling them on it.
    Would you prefer that to be drafted as a component of the law, or should it be separate from that? That would be even more challenging at the moment. Do you think it can be done within the current construct of the law?
    I'll get legal advice on that, as well. I'm pretty sure it can be, from what I understand, but I want to make sure. The Privacy Commissioner and the Competition Bureau are a bit different, but this might be baked more into the data itself.
    I don't want to overstate what I know would work or wouldn't work as a mechanic in terms of embedding it in the law. I think it would, but take that with a grain of salt, because I'd have to do my own legal analysis on that.
    That's fair enough.
    I'd also like to ask whether anybody has any strong opinions on the ethics tribunal. I'd like to hear about that if somebody has an opinion on the ethics tribunal.
    Mr. Masse, I think you have ethics on your mind, because it's the privacy tribunal.
    I do. Thank you. After my last experience at the ethics committee, it's taking a long time to scrub that. I don't know if anybody watched it. Yes, it's the privacy tribunal.
    Thank you, Mr. Chair, for your assistance. I don't know how much time I have left—it's probably just a minute or so—but if anybody wants to jump in on that, that would be great. If not, we can move on.
(1645)
    I would just reiterate my initial comments that I support a separate tribunal. Despite the misgivings of the Privacy Commissioner, I think that would be a useful forum and protection for the development of the law in this area.
    Fair enough.
    Thank you, Mr. Chair.

[Translation]

    Thank you, Mr. Masse.

[English]

    Mr. Vis, the floor is yours, sir.
    Thank you, Mr. Chair, and thank you to all of the witnesses for some excellent testimony and comments today.
    Mr. Lamb, I have a quick point of clarification. I believe you said in your opening statement that under proposed subsection 18(1) of the bill, an organization would not be able to transfer that data to another organization.
    Is that correct?
    That is correct.
    In previous meetings, we have heard that there are some loopholes in this legislation regarding data portability, specifically as it relates to the transfer of data abroad.
    I have a hard time understanding that interpretation of the legislation in the context of proposed section 20, which is “De-identification of personal information”, and proposed section 19, which is “Transfer to service provider”, where an organization may transfer an individual's personal information to a service provider without that individual's knowledge or consent.
    Am I misunderstanding the legislation here? What is your understanding of what I just stated?
    I think your concern is correct, and I do worry about the wording, particularly in proposed subsection 18(3).
    The member of Parliament, Mr. Perkins, raised that issue. I think you should delete the word “internal” from proposed section 21. I am concerned about the ability to pool.
     Thank you.
    You touched upon the GDPR in some of your comments as well. This question relates to a debate that's starting to form—we haven't really touched on it too much—between privacy by design and.... Unlike the European Union's GDPR, the CPPA does not contain an explicit reference to the concept of privacy by design.
    In the Office of the Privacy Commissioner of Canada's submission on Bill C-27, the commissioner recommends that the CPPA require organizations to implement privacy by design measures for a product, service or initiative from the earliest stages of development.
    During their appearance before the committee, however, government representatives indicated that several elements of the CPPA, such as the fact that it requires organizations to develop a privacy management program, mean that the concept of privacy by design is already embedded in the legislation.
    Do we need something similar to the GDPR, where it's explicitly stated, or is the current approach of privacy management as contained in proposed section 9 going to work okay?
    I think the privacy management program effectively does deal with privacy by design. It forces organizations to develop their policies. It forces organizations to consider privacy impacts. I think overall that achieves what you want.
    If we didn't have privacy management programs, then I think you would have to have something like privacy by design to ensure that we're developing a culture in all organizations that recognizes the impacts of privacy on individuals.
    Thank you.
    Ms. Piovesan, I understand you played an active role in some of the consultations undertaken for this legislation prior to its being tabled.
    In previous meetings, I have focused very extensively on the rights of children in this legislation. Why was the fundamental right to privacy specifically for children not included in the legislation originally?
    My second question would be about the fact that, under the legislation as it stands right now, information related to children is deemed to be sensitive information, yet the legislation lacks a definition of what sensitive information is.
    Can sensitive information as it relates to children be de-identified and de-anonymized under this bill, or if the information is sensitive, can it never be touched?
(1650)
    To the first point, I don't know. I don't know why it wasn't included in the original draft, so I don't have an answer to that. It wasn't part of the consultations that I was part of.
    To the second point, sensitive information has always been, and remains, a contextual analysis of factors that look at things like health information, financial information and the degree to which there are elements of the information that should be protected.
    What about biometric data?
    Biometric data would constitute sensitive information as well. By and large, thinking about the contextual analysis, I would typically include that there.
    If you look at Law 25 in Quebec, you will see a definition of sensitive information that I think can be very informative, so to my fellow witness's point, I think that is important.
    Would it be your position that we should adopt a definition of sensitive information that is similar to the Quebec law and include it in Bill C-27?
    I would support that.
    Thank you.
    Thank you, Mr. Vis.
    MP Van Bynen is up next.
    Thank you, Mr. Chair.
    My first question is for Ms. Piovesan.
     We have talked about de-identified information, but there is a provision under proposed section 39 of the consumer privacy protection act that gives an organization the right to disclose an individual's de-identified personal information without their knowledge or consent to the entities listed in paragraph 39(1)(b), if the disclosure is made for a socially beneficial purpose.
    Do you think that the definition of socially beneficial purpose is precise enough to ensure that Canadians' privacy is protected?
     I want to start by saying I think there is real value to being able to use data for good, so the inclusion of this provision is important and I support it.
    The definition as provided relates to “health, the provision or improvement of public amenities or infrastructure, the protection of the environment or any other prescribed purpose”. To the extent that it is not sufficient, there are opportunities to include a more defined and precise definition in the long run.
     However, as an initial inclusion, I am comfortable with what we have here.
    Okay.
    Mr. Joli-Coeur, what are your thoughts on that?
    I agree that there should be a possibility to use de-identified data for socially beneficial purposes. I think some of my colleagues on this panel have suggested extending this permission to also disclose personal information for socially beneficial purposes to more than just, essentially, public sector organizations, and even to other private organizations.
    Thank you.
    Mr. Joli-Coeur, the government provided a written response that indicated that it expects the Quebec laws to take precedence over the federal ones, as they would be considered substantially similar. Do you believe this bill will generally align with Quebec law?
    Generally, yes. There are some discrepancies, which were outlined today.
     I think, from a general perspective, Quebec will meet the substantially similar threshold. There could be improvements for interoperability that would help organizations, as we've mentioned. The notion of anonymization and its definition are one example, but generally speaking, the bill is broadly aligned.
    I'll put this question to you first. If you could propose only two amendments to the act, what would they be and why do you feel they're important amendments?
     I could go back to the suggestions I put forward in my speaking notes.
     The definition of anonymization is too strict. It should be more flexible. There should be a reasonableness standard there that would align with Quebec's and other global standards.
    Perhaps the other one is the one I mentioned about automated decision-making. Restrict the definition a bit to cover only decisions that replace a human—only decisions where there is no human in the loop.
(1655)
    Okay.
    Mr. Jarvie, what are your thoughts on that?
    I agree with many of the other witnesses here today in supporting a change to the definition of anonymization to align more with Quebec's definition and maintain some interoperability there. Given the way it's drafted now, it's an impossible standard.
    The other—and I'll make reference to my opening remarks—would be to change the consent exception framework for public interest purposes. That includes proposed sections 35 and 39. I think, in this regard, we could take some inspiration from Law 25, which inserted a new framework for disclosures by private sector entities to other private sector or public sector entities. That includes undertaking a privacy impact assessment and entering into an agreement with the other party. In the case of Quebec, it's actually submitting the agreement to the Commission d’accès à l’information. In the case of Bill C-27, it's adapting the language from proposed paragraph 35(c), which suggests notice to the commissioner at the very least.
    In addition to allowing for information exchanges among private sector entities, which could be beneficial, I think it could also be extended to include taking information from the public Internet. As we know, machine learning technologies, in many cases, can benefit from having access to this, provided that some appropriate guardrails are in place, as suggested.
     Going back to the social purpose question I posed earlier, are there any other requirements of the organizations, other than those found in proposed section 39, that should be included? You made reference to notifying the commissioner.
    Yes. If we were to undertake the suggestion to combine or generalize proposed sections 35 and 39, to make it a bit more like the framework in Quebec's Law 25, which begins at section 21 of that law, then it would involve what is styled in that law as “privacy impact assessments”. That isn't a concept that figures, as such, in Bill C-27, but I think it's been discussed to some extent at this committee already. It's been broadly outlined. It's understood. You're examining the disclosure in this case, or the collection.
    I suggest that after seeing what kind of privacy impact it has, you do a proportionality analysis and many other things besides that. If an agreement is entered into between the parties to the exchange, it should have certain contractual assurances around how the information is to be handled throughout its life cycle for this purpose. Finally, notice should be given to the commissioner.
    As I said, in Quebec's case, you actually submit the agreement to the commissioner, and then you can activate or operationalize that agreement only after 30 days, giving the Quebec commissioner time to respond, presumably. Once the commissioner has notice, they can of course simply request the agreement. They can request the privacy impact assessment and undertake any other steps. The important thing is to provide notice, so that the commissioner is aware.
    What are your thoughts on that, Ms. Piovesan?
    It will have to be brief, Madam Piovesan.
    Can you just repeat the question?
    What are your thoughts on the statements that were made earlier about having a more robust requirement for privacy impact assessments?
    I fully agree with my colleague's point that you can combine proposed sections 35 and 39. It can be an effective way to more flexibly use personal information in a responsible manner, but you would do so with the governance of the privacy impact assessment in place, so that there is a thoughtful and measured approach to identifying the data that you'll be collecting, the justification for that data, the potential privacy risks associated with that use, and then a clear plan to mitigate those risks.
    I would very much support that proposal.
(1700)
    Would that include notification to the appropriate authority?
    Do you mean in advance of the collection?
    I mean in advance of the disclosure.
    I'm not necessarily sure about that. I don't know if it's reasonable that it would be tabled with the commissioner in advance, but I do support the application of the privacy impact assessment as a tool to mitigate risk.
    Thank you, MP Van Bynen.

[Translation]

    It's now over to Mr. Lemire.
    Thank you, Mr. Chair.
    Over the past few months, Ms. Piovesan, in our role as MPs, we've held a number of meetings with businesses that operate in Quebec, including small and medium-sized Quebec start-ups.
    Since AIDA contains little in the way of detail and imposes criminal liability on companies that use high-impact systems, you called Bill C‑27 an advanced draft in a podcast. You raised the issue of the criminality component.
    Can you explain what the bill is missing, and why that undermines businesses' confidence and comfort in operating both in Quebec and in Canada? How should Bill C‑27 be clarified to take it from a draft bill, as you put it, to a real one?

[English]

    If I've misunderstood anything, please let me know.
    Number one, I am very empathetic to the concerns of small and medium-sized businesses when it comes to the level of compliance. That is part of the reason it is very important that we harmonize wherever possible, because part of the challenge with compliance is the degree to which there is nuance.
    Let me just give you a very small example. Right now, if you look at AI law alone that is bubbling up in different jurisdictions, you have the EU, which has a robust law. Then you have Canada, and you have different jurisdictions as well. You have China, Singapore...a number are considering laws. Then, in the U.S. alone, there are 200 bills that were tabled from the city level to the federal level, so to the extent that we can minimize the complexity of compliance, we are far better off.
    When it comes to the kinds of enforcement measures that you find in AIDA, one of the most striking to me is the criminality component, which I am most concerned about, because it does set us apart as an outlier. We don't see that in other laws around the world. I would recommend striking that, to be honest. Get rid of the criminality component, and really align AIDA, from an enforcement and a penalty perspective, as closely as possible with other leading jurisdictions.
    I hope I fully answered your question.

[Translation]

    Yes, absolutely.
    I'd like to ask you another question out of curiosity.
    In your opening remarks, you said the AI act had holes that needed to be plugged. Can you tell us how we should plug those holes?
    What do you recommend to strengthen part III, the AI act?

[English]

     Most of the holes that need to be plugged are found in the minister's amendments, in the letter that was tabled by the minister and read together with the companion documents. If you were to take it all together and we were to see a bill that addresses all of those different points, my suspicion is that it would be a fairly good, fulsome bill. I haven't seen it, and I don't know, but I suspect that we would be in a position where we are much more comfortable with some of those gaping holes.

[Translation]

    Thank you very much, Ms. Piovesan.

[English]

    Thank you.
    Mr. Masse.
    Thank you, Mr. Chair.
    I'm going to continue with the criminality aspect.
    I guess what you're proposing, then, is basically a fining system. Doesn't that sound a little weak, though, when you're dealing with the personal privacy of individuals, where violations could affect them quite significantly? What I worry about is that it sends the message that you can buy your way out of anything. Why wouldn't we want to have some criminality as part of it, especially when corporate responsibility is already weak in Canada?
    I'm not sure I fully align with that.
    Where there is criminal activity, it's typically already covered in the Criminal Code. The issue with AIDA as it's currently constructed is that it is intended to be an accountability framework for high-impact systems. In this current construct, it sets Canada apart with the criminality component. If you are engaging in criminal activity, we already have a statute in place that covers that activity and provides the necessary oversight and actions that would be taken.
(1705)
    The fact that we would do this differently by ourselves.... Is that the only reason, or is...?
     I'm not sure if you got a chance to look at what the Biden administration released and what they're doing. It seems to me that if we are going to take that out of the section, where else could we strengthen corporate responsibility? If we take that out, do you have a suggestion as to what else could be added? I know that many organizations, individuals and privacy people have expectations for that. If they don't have that, do you have any suggestions on what we could counter with, to at least provide some substance to what they would lose?
    The Biden executive order is not law. We would have to actually see what the U.S. is capable of passing as law.
    In the Canadian context, you have fines that are stricter than what you would find under the EU context, so you already have a significant deterrent, from the monetary perspective.
    In terms of the level of criminality that I understand we are concerned about from the constituents you're speaking with, I would go back to the point that we have the Criminal Code in place, which would deter that behaviour already.
    Okay, thanks.
    I know that my constituents at least are very disappointed with that in terms of corporate responsibility with respect to the environment, protection of privacy and consumers...so to me, it's something to think about. I do appreciate your testimony, because we're thinking long and hard on this.

[Translation]

    Thank you, Mr. Masse.
    We now go to you, Mr. Généreux.
    Thank you to the witnesses as well.
    Today's discussion is fascinating. I am very interested in what you have to say.
    Ms. Piovesan, if I understood correctly, you helped draft Bill C-11, the predecessor to the bill before us today, Bill C-27.

[English]

    Did I participate in the...?

[Translation]

    Did you participate in the consultations or even in the drafting of the bill, itself?

[English]

    Okay.
    I participated in the national consultations on data and digital literacy, I think it was, in 2018. I participated as an innovator—as one of the innovation leads.
     I did not participate in the drafting of the digital charter, nor in the white paper to reform PIPEDA that came out at that time. I have not participated in the drafting of any of these laws, neither Bill C-11 nor Bill C-27.

[Translation]

    I'm going to turn to the other witnesses now.
    Did any of you participate in the consultations on Bill C-11 or the bill the committee is currently studying, Bill C-27? Please nod your head if you did.
    I see that no one was consulted. All right.
    In light of what we've seen since we began our study a few weeks ago, no one seems to have been consulted, but the Minister of Innovation, Science and Industry says that 300 individuals and organizations were consulted after the bill was introduced. I'd like to find those individuals and organizations. I don't know where they are.
    In a moment, I'll be giving notice of a motion, but I'd like to ask you a question, first, Ms. Piovesan.
    Mr. Balsillie appeared before the committee, and I'm sure you read his remarks. He likened the bill to a bucket that has holes. What witnesses have told us so far seems to suggest that the bucket basically has no bottom. That's what it seems like.
    You talked about the fact that the committee has heard opposing views from witnesses. Take the tribunal, for instance. Some suggested getting rid of it because we didn't need it, while others argued the opposite, that having a tribunal in the sector was important.
    Given how far apart on the spectrum people's views are, do you think the bill should have been split from the beginning? We've heard from the start that the bill is almost monstrous, that it's too big, that the privacy piece and the AI piece should have been dealt with separately.
    What do you think?

[English]

     I'm not sure I'm in the best position to comment on whether the two bills should be separated, but what I will say is that they are conceptually linked. Much of what goes into the AI bill does depend on the type of data that it relies upon, which then touches on part 1 of that bill. There are distinctions, for sure, and there are conceptual linkages that I think are important. I do appreciate that part 1 has been subject to substantially more consultation than part 3 has.
(1710)

[Translation]

    Mr. Young, you said earlier that Quebec's bill was superior to this bill, if I understood the gist of your comments correctly.
    Is that true? Do you believe that?

[English]

    I'm sorry. I'm listening, but could the translator repeat it? It was at low volume.

[Translation]

    My understanding is that you spoke favourably about Quebec's bill. It's actually in force now, Law 25, if I'm not mistaken.
    Given the complimentary way you spoke about Law 25, do you think it's a better bill than this one?

[English]

    That's a good question.
    I think it sets, largely, an excellent standard. That would be the way I would characterize it.
    In comparison with this bill.... I think the bill could learn from Law 25 in a number of areas. Certainly, anonymization and de-identification are one. There are others, which we haven't touched on today, in which Quebec provides what I will call a higher standard, a more rigorous standard of privacy.

[Translation]

    Very good. Thank you.
    Mr. Chair, I'd like to give notice of a motion, if I may.
    My apologies to the witnesses. This will take a few minutes.
That, in relation to Bill C‑27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, and given that,

(i) the Minister of Innovation, Science, and Industry gave evidence to the Committee on September 26, 2023, stating “My office and department have had more than 300 meetings with academics, businesses and members of civil society regarding this bill.”

the committee therefore requests, for the sake of transparency, that the minister’s office and department release the details pertaining to the more than 300 meetings held by his office and department with academics, businesses, and civil society groups, on Bill C‑27, broken down by each meeting, including,

(a) names of any and all meeting attendees, including the name of representing organizations if applicable;

(b) the title of each meeting and any agendas if applicable;

(c) material submitted by the meeting attendees or organizations to the department or minister’s office, including but not limited to amendment proposals, briefings, and/or letters;

and that such information be deposited to the clerk of the committee no later than November 20th, 2023, and be published on the committee website.
    Thank you, Mr. Chair.
    Are you putting the motion on notice, or are you moving the motion in order to debate it, Mr. Généreux?
    I'm giving notice of the motion so that we can debate it at a later time.
    Come to think of it, I think we're going to have to debate it now, since our next meeting isn't until November 20.
    It's up to you, Mr. Généreux. If you are giving notice of the motion—
    We have to debate the motion now. Sorry, but I think we have to because we need the information in question.
    Mr. Généreux has moved a motion that deals with Bill C-27, which is before the committee today, so the motion is in order.
    The motion is up for debate.
    Go ahead, Mr. Lemire.
    Thank you, Mr. Chair.
    I have a couple of friendly amendments. They are minor, so the mover could even incorporate them into the motion himself. The documents need to be translated into both official languages, which the motion doesn't specify. We could push back the November 20 deadline to allow time for the documents to be translated.
    Here's the other question I have. Why is it necessary to make the information available on the committee's website? I don't see the reason for doing so, so I have an issue with that.
    Nevertheless, I think everyone can agree on the importance of having the documents translated into both official languages.
    Mr. Lemire is proposing an amendment to the motion. I understand the purpose, but I think we need the specific language.
    I'll ask the clerk, who should be able to help us. How much time would it take to translate the documents in question? What deadline should we use?
    Without knowing how many documents will come in, I think it's a bit tough to say how much time is needed. I think it's safe to assume that it could take quite a while. Nobody knows.
(1715)
    As far as I'm concerned, the documents can't be submitted until they are available in both official languages. Therefore, I wouldn't specify a date, unless the Conservatives have something to suggest.
    Mr. Lemire is proposing an amendment to remove the wording “no later than November 20th, 2023”.
    The phrase “that this information be deposited with the Clerk of the Committee for distribution to members” should also be added.
    I imagine that includes publishing the information in the Committee’s digital binder.
    Here, then, is Mr. Lemire’s proposed amendment.

[English]

     Just to inform our witnesses, I apologize for that, but I warned you before the meeting that given there was a motion on the floor, it was up for debate.
    I understand that you have a flight to catch, Madam Piovesan.
    Actually, I just wanted to correct the record, because I understood that you were asking me, “Were you part of drafting?”, but if you were asking, “Are you part of any of the consultation processes?”, I was part of some of the consultations, but I was not part of drafting. I just wanted to correct the record.
    I appreciate that.
    In all likelihood.... Okay, we'll see if this will be quick. I'm always doubtful of that. I'm thinking of maybe discharging our witnesses, given that we have 15 more minutes—
    Mr. Rick Perkins: Is that all we have left?
    The Chair: I don't think this will be settled in 15 minutes, however optimistic you are, Mr. Perkins, so if I have your consent, colleagues, I will thank the witnesses for joining us today.
     If you wish to hear us debate this motion, you may very well stay. You're more than welcome. However, if you want to leave, know that your testimony has been appreciated, and if there are things that you want to submit to committee members, please do so via the clerk. The documents will be reviewed as we continue the study of Bill C-27.
    Thank you very much.

[Translation]

    Thank you all. I apologize.
    We have 15 minutes left of our two-hour meeting.
    Mr. Lemire’s amendment is under consideration.

[English]

    Is there—

[Translation]

    Order. An amendment is under consideration.
    Are there any further comments?

[English]

    If there aren't any more comments, we'll put the amendment to the vote.
    Our witnesses are leaving. Do you want me to jump right in?
    You can jump right in, yes.
    I guess what I'm trying to understand is whether people are disputing that the government has done consultation work. Is that what the whole point of this is? It seems to me that we're listening to—
    Pardon me?
    An hon. member: [Inaudible—Editor]
    The way this was brought forward, Mr. Généreux, your points seem to call into question whether the government had done the 300 consultations that were very clearly, in my view, conducted.
     I'm just not sure why you think this is important to us as members of the committee in order to be able to move forward. I think we're doing some great work. I'm sad that we had to let our witnesses go early here, because we were having such good, in-depth conversations. Could you maybe enlighten us as to why you think all these materials are necessary for this committee to continue the work that it's doing, which I think is going quite well?
    Just before I defer to other members, I want to remind everyone that right now we are debating the amendment proposed by Mr. Lemire, which is to remove everything after “be deposited to the clerk of the committee”. We remove “no later than November 20”.
    Instead, it would be “deposited to the clerk of the committee for communication to members”.

[Translation]

    Did you add “translated into both official languages”?

[English]

    “and also translated in both official languages”.
     It goes without saying. That would be added somewhere relevant in the lengthy text of the motion.

[Translation]

    I repeat: “and that this information be translated into both official languages and deposited with the Clerk of the Committee” for the purpose of being…
    Did you hear all that?

[English]

    That's the amendment we're debating right now, before we go on to the other aspects of the motion.
    Are there any comments on the amendment proposed by Mr. Lemire?
    Mr. Généreux.
(1720)

[Translation]

    Mr. Chair, we are heading back to our ridings for a week. Given all the staff at the House of Commons, I imagine it will be possible to have the documents translated within a week.
    Worst-case scenario, we can agree to Mr. Lemire’s proposal. However, I don’t want to receive these documents in February, after we’ve finished our work. That’s important to mention. We know what happens when you remove a date: it ends up at the bottom of the heap and other documents are dealt with first. That’s the risk. Obviously, we would want the documents to be bilingual.
    Are there any further comments?

[English]

    Are there any more comments on the amendment proposed by Mr. Lemire?
    If there are none, I will put it to a vote.
    I'm looking around the table to see if anyone wants to jump in.
    Mr. Van Bynen, were you raising your hand?
    You're on mute.
    I'm sorry.
    I was just concurring. I thought you were calling the vote.
     I'm about to, on the amendment proposed by Mr. Lemire.
    Do we need a recorded vote on the amendment by Mr. Lemire, or is there unanimous consent around the table?
    (Amendment agreed to)
    The Chair: We're back to the main motion as amended by Mr. Lemire.
    Mr. Généreux.

[Translation]

    In response to Mr. Turnbull, I have no doubt that the Minister has consulted.
    So far, we’ve all invited witnesses, and Lord knows the list is still quite long. Personally, I’d like to know who among all the witnesses appearing later was consulted, so that we can prepare for our future meetings.
    So far, among all the high-level experts, if I can call them that, who’ve appeared before us, it seems that no one was consulted. I’d like to know if we’ve missed anyone on the list of those who were consulted. If so, perhaps we should have invited them to appear before the Committee.
    I understand that it’s getting late to invite new witnesses. That said, the motion would at least allow us to gather references, find out what recommendations these people made that were not taken up, etc. In the set of documents we’re requesting, we may be able to find answers to these questions.
    I repeat, I’m not at all questioning the fact that the Minister consulted 300 organizations or individuals.
    Mr. Turnbull, you have our attention.

[English]

     Thanks for the clarification, Mr. Généreux. I appreciate that.
    I guess this is what I'm struggling with here. From my perspective, any of the meetings that were had would already be listed on the lobbying registry—that applies any time the ministry meets with a group on this—so there is already documentation of that. You could look it up yourself, if you want.
    We also have, obviously, a process for this committee. Many of the groups have already submitted briefs. They have every right and ability to submit a brief to the committee. When I look at this, we're talking about providing “material submitted”. They already have the option of submitting material.
    You've been asking all of the witnesses here as to whether they've been consulted. Some of them may not have been, but we know that there are lots of stakeholders out there, and over 300 have been.
    I'm not sure what this achieves when, in essence, committee members already have access to the information that's there. As well, stakeholders already have the ability to submit briefs to the committee.
    In essence, the information is largely already available. I'm just not sure why we would need a motion at this particular time.

[Translation]

    Mr. Williams, you have the floor.

[English]

    Just to Mr. Turnbull's point, if the information is there, then we can get it pretty easily. I think this is a pretty simple motion.
    Are there any more comments, or should we put this motion to a vote?
    Go ahead, Mr. Turnbull.
(1725)
    I'm really not sure; when I look at this, it says, “for the sake of transparency, that the minister's office and department release the details pertaining to”. We've already taken out the date, with Mr. Lemire's amendment. The specific date would not be in there. Really, we're asking for the “names of any and all meeting attendees, including the name of representing organizations”, plus “the title of each meeting and any agendas”—
    We can read.
    Yes, well, I'm processing; I don't see how this is absolutely necessary for—
    You already said that.
    He's repeating himself. Let's just go to a vote. He keeps repeating himself.
    I'm sorry, Mr. Turnbull. Were you done?
    Mr. Turnbull has the floor.
    Yes, but he's not supposed to repeat himself. I know how to do a filibuster.
    The Chair: Mr. Perkins—
    Mr. Rick Perkins: If you want to do a filibuster, you can't repeat yourself.
    Mr. Perkins, I am the chair, and I recognize Mr. Turnbull.
    Again, I'm not clear on why; I have concerns about what the intention of this is—
    Mr. Rick Perkins: You already said that.
    Mr. Ryan Turnbull: —and why we need to submit all this documentation when it's already clearly available.
    Mr. Rick Perkins: [Inaudible—Editor]
    Mr. Ryan Turnbull: Well, you haven't given a good rationale for it.
    I don't have to. I don't answer to you.
    Now you're being combative.
    Mr. Perkins, when one member has the floor—
    Mr. Rick Perkins: [Inaudible—Editor]
    The Chair: —that person speaks. I appreciate the heckling in the House, but this is not the House. This is committee. I don't accept—
    I'd like you to enforce the rules on relevancy and repetition.
    I am. Now I'm enforcing the rules on heckling around this table. For the sake of decorum, I would ask you to keep it down.
    Mr. Turnbull has the floor.
    Yes, Mr. Sorbara.
    On a quick point of order, perhaps we can avoid some of the crosstalk and focus on the issue at hand.
    Mr. Perkins, if Mr. Turnbull is repeating comments, then you can rise on a point of order and state that. Let's do it the proper way, and we can get through what we need to get through.
    Yes, Mr. Turnbull.
    I'd like to propose an amendment.
    I propose removing sections (b) and (c) of this. I think “names of any and all meeting attendees” is perfectly acceptable and could be produced in pretty short order.
    I would amend the motion by excluding (b) and (c).
     There's an amendment to remove (b) and (c). You've all heard the proposed amendment by Mr. Turnbull.
    Do we have unanimous consent for removing (b) and (c)?
    (Amendment agreed to [See Minutes of Proceedings])
    The Chair: We're back to the motion as amended by Mr. Lemire and subsequently by Mr. Turnbull.
    Do we have unanimous consent to adopt the motion, or should I put it to a vote? I see no more speakers.
    We'll go to a recorded division on the motion as amended.
    (Motion as amended agreed to: yeas 10; nays 0 [See Minutes of Proceedings])
    The Chair: Thank you, Mr. Masse. You scared me for a while. We're back to our traditional consensus style of work here at the committee. I'm happy about that. It's passed unanimously.
    Thank you, everyone, for your collaboration on this. It brings us straight to the hour.

[Translation]

    We stayed on time and on budget.
    Thank you very much to the interpreters, support staff and analysts.
     I wish you all a good week in your respective ridings, a good weekend and a good evening.
    The meeting is adjourned.