:
Good afternoon, everyone. I call this meeting to order.
Welcome to meeting no. 94 of the House of Commons Standing Committee on Industry and Technology.
Today's meeting is taking place in a hybrid format, pursuant to the standing orders.
Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C-27.
I'd like to welcome our witnesses today: Daniel Konikoff, interim director of the Privacy, Technology & Surveillance program at the Canadian Civil Liberties Association; Tim McSorley, national coordinator at the International Civil Liberties Monitoring Group; Matthew Hatfield, executive director of OpenMedia; Sharon Polsky, president of the Privacy and Access Council of Canada; John Lawford, executive director and general counsel at the Public Interest Advocacy Centre, who is joined by staff lawyer Yuka Sai; and Sam Andrey, managing director of The Dais at Toronto Metropolitan University.
Thank you for being here today.
I'm pleased that we are able to start on time.
Without further ado, Mr. Konikoff from the Canadian Civil Liberties Association, you have the floor for five minutes.
:
Good afternoon. Thank you for inviting us to appear before you today.
I am the interim director of the privacy, technology and surveillance program at the Canadian Civil Liberties Association, an organization that has been standing up for the rights, civil liberties and fundamental freedoms of people in Canada since 1964.
Protecting privacy and human rights in our tech-driven present is no small undertaking. We commend the government for trying to modernize Canada's legislative framework for the digital age, and we commend the work that this committee is doing to get this legislation right.
We also acknowledge the procedural hurdles that may make it challenging for us to speak completely to Bill C-27 and its potential amendments. However, I will highlight three amendments from CCLA's written submission that we believe must be adopted to make Bill C-27 more respectful of people's rights in Canada.
First, Bill C-27 does not give fundamental rights their due and frequently puts them in second place, behind commercial interests. It has been said before, but CCLA believes that it's worth emphasizing that Bill C-27 must be amended to recognize privacy as a human right, both in the CPPA and in AIDA, since privacy is something that should be respected at all points throughout data's life cycle.
This bill must also be amended to recognize our equality rights in the face of data discrimination and algorithmic bias, risks that grow exponentially as more and more data is gathered and fed into AI systems that make predictions or decisions of resounding consequence.
Privacy, data and AI legislation the world over, such as that in the European Union, already has stronger rights-based framing and protections. Canada simply needs to catch up.
Second, there are concerning gaps in Bill C-27 around the issue of sensitive information. Sensitivity is a concept that appears often throughout the CPPA; however, it is left undefined, allowing private interests to interpret its meaning as they see fit. A lot of personal information does qualify as sensitive, and although information's sensitivity often depends on context, there are special categories of information whose collection, use and disclosure carry inherent and extraordinary risks.
I want to draw your attention to one category in particular, the collection and use of which have implications for both the CPPA and AIDA, and that is biometric data.
Biometric data is perhaps the most vulnerable data we have, and its abuse can be particularly devastating to members of equity-seeking groups. Look no further than the prevalence of facial recognition technology. Facial recognition is used everywhere from law enforcement to shopping malls, and it relies on biometric information that is often collected without people's awareness and without people's consent. The Right2YourFace coalition, of which CCLA is a member, has advocated for stronger legislative safeguards with respect to facial recognition and the sensitive biometric data that fuels it. Bill C-27 must be amended not only to explicitly define sensitive information and its many categories but also to unequivocally define biometric information as sensitive information worthy of special care and protection.
Third and finally, we take issue with the number of consent carve-outs in proposed section 18 of the CPPA and how these can ultimately trickle down to AIDA. These carve-outs are, by and large, an affront to meaningful consent, and so to people's right to privacy. People should be able to meaningfully consent, or decline to consent, to how private companies gather and handle their personal data. Prioritizing a company's legitimate interest in overriding consumer consent over people's privacy is simply inappropriate, as is leaving room for more consent carve-outs to be added in regulations later on. Bill C-27 is, frankly, porous with these exemptions and exceptions, and these gaps come at the expense of people's privacy.
There is no shortage of concerns around this bill, and I haven't really spoken to the issues that CCLA has with AIDA's narrow conception of harm, its lack of transparency requirements and its dangerous exclusions of national security institutions whose public mandates are often performed with privately acquired artificial intelligence technologies. We address these issues in greater depth in our written submission to the committee, but I'd be happy to expand on them in questioning.
I'd also like to direct the committee's attention to our written submission, which flags some of these concerns and includes an AI regulation petition that received over 8,000 signatures.
Bill C-27 overall needs tighter provisions to prioritize people's fundamental rights. The CPPA needs to plug its gaps around information sensitivity and consent, and if AIDA is not to be scrapped outright, reset or just separated from this bill, it needs fundamental rethinking.
Thank you.
:
Thank you, Chair, and thank you for the invitation to share the perspectives of the ICLMG today regarding Bill C-27.
We're a Canadian coalition that works to defend civil liberties from the impact of national security and anti-terrorism laws. Our concerns regarding Bill C-27 are grounded in this mandate.
While we support efforts to modernize Canadian privacy laws and establish AI regulations, the bill unfortunately contains multiple exemptions for national security purposes that are unacceptable and undermine Bill C-27's stated goal of protecting the rights and privacy of people in Canada.
We have submitted a written brief to the committee with 10 recommendations and accompanying amendments. I'd be happy to speak in more detail about any of these during the question period, but for now, I'd like to make three specific points.
First, in regard to the CPPA, we are opposed to proposed sections 47 and 48 of the act, which create exceptions to consent by allowing an organization to disclose, collect or use personal information if it simply “suspects that the information relates to national security, the defence of Canada or the conduct of international affairs”. This is an incredibly low threshold for circumventing consent.
Proposed section 48 is particularly egregious. It allows an organization, on “its own initiative”, to collect, use or disclose an individual's personal information if it simply suspects that the information relates to these three areas. The concern does not even need to be connected to a suspected threat. Again, it only needs to relate, and that's not defined in the bill.
Not only are these sections very broad, they're also unnecessary. Other sections of the law would allow for more targeted disclosure to government departments, institutions and law enforcement agencies. For example, proposed section 45 allows an organization to proactively divulge information if it “has reasonable grounds to believe”—a much higher threshold—“that the information relates to a contravention” of a law that has been, is being or will be committed. We contrast that “reasonable grounds to believe” threshold with simply suspecting that it “relates”.
In that regard, we find proposed sections 47 and 48 unnecessary and overly broad. We propose, then, that proposed sections 47 and 48 simply be removed from the CPPA. Barring that, we've proposed specific language in our brief that would help to establish a more robust threshold for disclosing personal information.
Second, we're deeply concerned with the artificial intelligence and data act overall. In line with other witnesses, we believe it is a deeply flawed piece of legislation that must be withdrawn in favour of a more considered and appropriate framework. We have outlined these concerns in our brief, as well as in a joint letter shared with the committee and the minister, signed by 45 organizations and experts in the fields of AI, civil liberties and human rights.
AIDA was developed without appropriate public consultation or debate. It fails to integrate appropriate human rights protections. It lacks fundamental definitions. Egregiously, it would create an AI and data commissioner operating at the discretion of the minister, resulting in a commissioner with no independence to enforce the provisions of AIDA, as weak as they may be.
Finally, I'd like to address an unacceptable exception for national security that is found in AIDA as well.
Canadian national security agencies have been open regarding their interest in and use of artificial intelligence tools for a wide range of purposes, including facial recognition, surveillance, border security and data analytics. However, no clear framework has been established to regulate the development or use of these tools in order to prevent serious harm.
AIDA should present an opportunity to address this gap. Instead, it does the opposite in proposed subsection 3(2), where it explicitly excludes the application of the act to:
a product, service or activity that is under the direction or control of
(a) the Minister of National Defence;
(b) the Director of the Canadian Security Intelligence Service;
(c) the Chief of the Communications Security Establishment; or
(d) any other person who is responsible for a federal or provincial department or agency and who is prescribed by regulation.
This means that any AI system developed by a private sector actor that falls under the direction or control of this open-ended list of national security agencies would face absolutely no independent regulation or oversight.
It is inconceivable how such a broad exemption can be justified. Under such a rule, companies could create tools for our national security agencies without the need to undergo any assessment or mitigation for harm or bias, creating a human rights and civil liberties black hole. What if such technology were leaked, stolen or even sold to state or private entities outside of Canada's jurisdiction? All AI systems developed by the private sector must face regulation, regardless of their use by national security agencies.
Our brief includes specific examples of the harms that this lack of regulation can cause. I'd be happy to discuss these more with the committee. Overall, if AIDA does go ahead, we believe that proposed subsection 3(2) should simply be removed.
Thank you.
:
Good afternoon. I'm Matt Hatfield. I'm the executive director of OpenMedia, a grassroots community of nearly 300,000 people in Canada who work together for an open, accessible and surveillance-free Internet.
I'm speaking to you today from the unceded territory of the Tsawout, Saanich, Cowichan and Chemainus nations.
What is there to say about Bill C-27? One part is long-overdue privacy reform, and your task is closing its remaining loopholes and getting the job of protecting our data done. One part is frankly undercooked AI regulation that you should take out of Bill C-27 altogether and take your time to get right. I can't address both at the length they deserve. I shouldn't have to, but we are where the government has forced us to be, so let's talk privacy.
There are some great changes in Bill C-27. These include real penalty powers for the OPC and the minister's promised amendments to entrench privacy as a human right. OpenMedia hopes this change to PIPEDA will clearly signal to the courts that our ownership of our personal data is more important than a corporation's interest in profiting off that data, but any regulatory regime is only as strong as its weakest link. It does no good for Canada to promise the toughest penalties in the world if they're easy to evade in most real-world cases. The weaknesses of Bill C-27 will absolutely be searched for and attacked by companies wishing to do Canadians harm.
That's why it's critical that you remove the consent exceptions in Bill C-27 and give Canadians the right to ongoing, informed and withdrawable consent for all use of our data. While you're fixing consent, you must also broaden Bill C-27's data rules to apply to every non-governmental body. This includes political parties, non-profit organizations like OpenMedia and vendors that sell data tools to any government body. No other advanced democracy tolerates a special exception to respecting privacy rules for the same parties that write privacy law. That's an embarrassing Canada original, and it shouldn't survive your scrutiny of this bill.
Privacy was the happier side of my comments on Bill C-27. Let's talk AI.
I promise you that our community understands the urgency to put some rules in place on AI. Earlier this year, OpenMedia asked our community what they hoped for and were worried about with generative AI. Thousands of people weighed in and told us they believe this is a huge moment for society. Almost 80% think this is bigger than the smartphone, and one in three of us thinks it will be as big as or bigger than the Internet itself. “Bigger than the Internet” is the kind of thing you're going to want to get right, but being first to regulate is a very different thing from regulating right.
The minister is at the U.K.'s AI safety conference this week, telling media the risk is in doing too little, not too much. However, at the same conference, Rishi Sunak used his time to warn that we need to understand the impact of AI systems far more than we currently do, in order to regulate them effectively, and that no regulation will succeed if countries hosting AI development do not develop their standards in close parallel. That's why the participants of that conference are working through foundational questions about exactly what is at stake and in scope right now. It's an important, necessary project, and I wish them all success with it.
If they're doing that work there, why are we here? Why has this committee been tasked with jamming AIDA through within a critical but unrelated bill? Why is Canada confident that we know more than our peers about how to regulate AI—so confident that we're skipping the basic public consultation that even moderately important legislation normally receives?
I have to ask this: Is AIDA about protecting Canadians, or is it about creating a permissive environment for shady AI development? If we legislate AI first, without learning in tandem with larger and more cautious jurisdictions, we're not going to wind up with the best protections. Instead, we're positioning Canada as a kind of AI dumping ground, where business practices that are not permitted in the U.S. or the EU can be produced here in rights-violating and even dangerous ways. I'm worried that this is not a bug, but rather the point—that our innovation ministry is fast-tracking this legislation precisely to guarantee Canada will have lower AI safety standards than our peers.
If generative AI is a hype cycle whose products will mostly underwhelm, then this is much ado about not much and there is no need to rush the legislation. However, if even a fraction of it is as powerful as its proponents claim, failing to work with experts and our global peers on best-in-class AI legislation is a tremendous mistake.
I urge you to separate AIDA from Bill C-27 and send it back for a full public consultation. If that isn't in your power, at the very least, you cannot allow Canada to become an AI dumping ground. That's why I urge you to make the AI commissioner report directly to you, our Parliament, not to ISED. A ministry whose mandate is to sponsor AI will have a strong temptation to look the other way on shady practices. The commissioner should be charged with reporting to you yearly on the performance of AIDA and on gaps that have been revealed in it. I also urge you to mandate parliamentary review of AIDA within two years of Bill C-27's taking effect, in order to decide whether it must be amended or replaced.
Since PIPEDA reform was first proposed in 2021, OpenMedia's community has sent more than 24,000 messages to our MPs demanding urgent comprehensive privacy protections. In the last few months, we've sent another 4,000 messages asking our Parliament to take the due time to get AIDA right. I hope you will hear us on both points.
Thank you, and I look forward to your questions.
Thank you for inviting me to share some views about Bill C-27 on behalf of the Privacy and Access Council of Canada, an independent, non-profit and non-partisan organization that is not funded by government or by industry.
Our members in public, private and non-profit sector organizations work with and assess new technologies every day, as have I through my 30-plus-year career as a privacy adviser. For that entire time, we have all heard the same promise: Technology will provide great benefits. To an extent, it has.
We’ve also been nudged to do everything digitally, and data is now the foundation of many organizations that collect, analyze and monetize data, often without the knowledge, much less the real consent, of the people the data is about.
It's understandable that there's great support for Bill C-27, except that many of the people who support it don't like it. They figure, though, that it's taken 20 years to get this much, and we can't wait another 20 for something better to replace PIPEDA, so it's better than nothing at all.
With respect, we disagree. We do not share the view that settling for the sake of change is better than standing firm for a law that, at its heart, would definitively state that Canadians have a fundamental right to privacy. The minister's concession to add that into the bill itself, and not just the preamble, is very welcome.
We disagree that settling for bad law is better than nothing, and Bill C-27 is bad law because it would undermine everyone's privacy, including children's—however they're defined in each jurisdiction. It also does nothing to counter the content regulation laws that would undermine encryption, would criminalize children who try to report abuse and would make it impossible for even your private communications to be confidential, whether you consent or not.
Definition determines outcomes, and Bill C-27 starts off by defining us all as “consumers” and not as individuals with a fundamental human right to privacy. It promotes data sharing to foster commerce, jobs and taxes. It adds a new bureaucracy that would be novel among data protection authorities and would delay individuals' recourse by years. It does not require AI transparency or restrict AI use by governments, only by the private sector that has not yet been deputized by government, which then gets sheltered by our current ATIP laws.
It won't slow AI and facial recognition from infiltrating our lives further. It won't slow the monetization of our personal information by a global data broker industry already worth more than $300 billion U.S. It doesn't impose any privacy obligations on political parties. It doesn't allow for executives to be fined—only organizations that then include the fine as a line item in their financials and move on, happy that their tax liabilities have been reduced.
Bill C-27 does allow personal information to be used for research, but by whom or where in the world isn't limited. Big pharma using your DNA to research new medicines without your consent is just fine if it's been de-identified, although it can be easily reidentified, and larger and larger AI datasets make that more and more likely every day.
Bill C-27 would require privacy policies to be in plain language, and that would be great if it stated the degree of granularity required, but it doesn't. It allows the same vague language and generalities we now have, yet it still doesn't allow you to control what data about you may be shared or with whom, or give you a way to be forgotten.
It lets organizations collect whatever personal information they can from you and about you, without consent, as long as they say, in their self-interested way, that it's to make sure nothing about you is a threat to their “information, system or network security”, or if they say the collection and use “outweighs any potential adverse effect” on you resulting from that collection or use, and leaves it to you to find out about and to challenge that claim.
We've all heard industry's threat that regulation will hamper innovation. That red herring was invalidated when radio didn't kill newspapers, TV didn't kill radio and the Internet didn't kill either one. Industry adapted and innovated, and tech companies already do that with each new product, update and patch.
Companies that have skirted the edge of privacy compliance can adapt and innovate and can create things that, at their core, have a genuine respect for privacy, human rights, and sound ethics and morality. They can, but in almost half a century since computers landed on desktops, most haven't. Politely asking organizations to consider the special interests of minors is lovely but hardly compelling, considering that, 20 years after PIPEDA came into force, barely more than half of Canadian companies the OPC surveyed have privacy policies or have even designated someone to be responsible for privacy.
Those are basic and fundamental components of a privacy management program that do not take 20 years to figure out. We don't have time to wait, but we also cannot afford legislation that is inadequate before it's proclaimed, that's not aligned with Quebec's Law 25, the U.S. executive order on AI or other jurisdictions that are well ahead of Canada on this. We also can't afford something that further erodes trust in government and industry as it freely trades away the privacy rights of Canadians for the sake of commercial gain.
I will be happy to answer your questions, and we will be detailing our views in a submission to the committee. I hope you hear us.
The Public Interest Advocacy Centre is a national, non-profit and registered charity that provides legal and research services on behalf of consumers—in particular, vulnerable consumers. PIAC has been active in the field of consumer privacy law and policy for over 25 years.
My name is John Lawford. I'm the executive director and general counsel. With me today is Yuka Sai, staff lawyer at PIAC.
Bill C-27 reverses 25 years of privacy law in Canada. Businesses can now assume consent, and consumers must prove abuse. If this sounds uncomfortable from an individual rights perspective, that's because it is.
Firstly, with regard to consent, the new business activities exception to consent in proposed subsection 18(1) makes it legal for businesses to make full use of your personal information without your consent, or even your knowledge. Business activities are defined so widely and tautologically in proposed subsection 18(2) that only businesses will be able to define what a business activity is. It's ridiculous. Proposed section 18 completely reverses the default of an individual's informed consent for the collection or use of personal information under PIPEDA. Do Canadians really want that?
The addition of an exception to consent and knowledge in proposed subsection 18(3), for the collection or use of additional personal information for legitimate interests, is an import from European law but without the fundamental right to privacy that it modulates in Europe.
Secondly, with regard to de-identification, under proposed section 20, consumers also lose out on opportunities to scrutinize the use of their personal information when it is de-identified. De-identify is defined as:
to modify personal information so that an individual cannot be directly identified from it, though a risk of the individual being identified remains.
It is akin to saying that to kill means to take the life of a person directly, although a chance of their remaining alive remains. It is contradictory and meaningless.
De-identification was also clearly a “use” of personal information under PIPEDA. What that use approach stops is the indiscriminate filling of databases with personal information with only the most cursory removal of tombstone information identifiers from the data. Reidentification is therefore a real risk, but even de-identified information can harm individuals when they are profiled in databases that are then used to market to them or to deny them services. Bill C-27 supercharges this outcome.
Go ahead, Yuka.
:
Thank you for the invitation to address the committee today.
I'm Sam Andrey. I'm the managing director of the Dais, a think tank at Toronto Metropolitan University where we work to develop the policy ideas to advance an inclusive, innovative economy, education system and democracy for Canada.
I'm going to focus my remarks today on the AI and data act. As many of my colleagues have noted, AI has the potential to have a transformative impact on our economy and our daily lives, but it also poses significant risks, including systemic forms of discrimination, psychological harms and malicious use.
The latest data from StatsCan shows that only about 4% of Canadian businesses are using AI, so to reach AI's full potential and increase adoption, we need a responsible governance framework.
Unfortunately, we think the current bill fails to adequately do that. The bill's surprise introduction and lack of public consultation since have limited the ability of folks in civil society, experts, industry and equity-deserving communities to engage with this important legislation. Our team at TMU, led by Christelle Tessono, has partnered with McGill University's Centre for Media, Technology and Democracy to engage with many of these folks over the last year and has produced recommendations for improving the bill, which we'll be sending to the committee.
I'm just going to highlight three of those today that we hope can be addressed if AIDA is moving forward.
First, the bill's definition of “harm” is very narrowly focused on individuals, but the harms of AI systems also occur at broader community and group levels. Depending on the type and context of the system in question, harm to individuals can be difficult to prove and only evident when assessed at a population level. Moreover, there are types of manipulative and exploitative collective harms from AI that would likely not be captured by this definition. Things like election interference, harm to the environment and collective harms to children would not be captured by the definition, which is focused on individuals.
Second, as my colleagues have said, the proposed regulatory model does not create sufficient independence from the minister of ISED, who would have competing roles of championing the economic benefits of AI while regulating and enforcing its risks. We think that the proposed AI and data commissioner needs to be independent from the minister, ideally through a parliamentary appointment and certainly with sufficient resources to support their role.
We would also propose two additions. One is the ability for individuals to make complaints to the commissioner. Currently, to launch any investigation, the minister has to have reasonable grounds to believe that an investigation is warranted, which is a very high bar. The other is for the commissioner to be able to conduct pre-emptive audits.
Third, as has been mentioned, this bill currently only applies to the private sector. The minister's proposed list of high-impact systems that he's shared with this committee, which would potentially be subject to regulation, includes a number of AI systems commonly used by public sector actors, like facial recognition used by police and in health care, but it creates a double standard where the private sector developers of these systems are going to be subject to regulation and our public servants operating them will not be.
This double standard is unlike the EU, and it fails to position the Canadian government as leading by example through legal bans and guardrails for its own responsible development and use of AI. The current structure of the bill, particularly its commissioner being an ISED departmental official, makes it poorly structured to provide oversight for all public sector AI. We acknowledge that it would not be an easy amendment job, but I would just note that Parliament needs to prioritize the development of AI regulation for the public sector, which needs to include adequate public consultation and engagement.
I want to close by saying that Canada's investments in developing AI systems and research have not yet been matched by a comparable effort to regulate the quickly evolving risks of the technology. We're encouraged that the minister and this committee are open to amendments that will strengthen the bill, and there's really a large community across Canada who wants to help.
Thank you for the opportunity.
:
I'll start from the beginning.
Thank you for coming and for your excellent presentations on this important bill.
In the first number of meetings, we were calling this bill a broken bill for a lot of the reasons that all of you outlined. You've probably been following it.
The fundamental right in the purpose clause is critical from our perspective. It's certainly critical that, in the purpose section, it sit at a level of superiority to an organization's need to use personal information.
Perhaps I could start off by asking Mr. Konikoff if he believes that the words there need to be not personal privacy and an organization's right, but some other language that makes it superior to that.
:
I will move on then to Mr. Lawford.
One thing that's come up recently about the issues.... I've spoken a lot about the issues of proposed sections 12, 15 and 18, which you outlined. Proposed section 15 outlines plain language in consent, which obviously is not something we get a lot of when we do that.
I'm reading the latest terms and conditions from Zoom, which were released in the summer. It reached the news that Zoom was actually taking the right to transcribe and own everything that is said.
The thing that really bothers me is section 15.2 of their terms and conditions, which appears in almost every organization's terms. It says, “You agree that Zoom may modify, delete, and make additions to its guides, statements, policies, and notices, with or without notice to you, and for similar guides, statements, policies, and notices applicable to your use of the Services by posting an updated version on the...webpage.” They don't actually come out and say it: if they're changing the terms, they'll just post it somewhere on a mysterious web page and assume that you've consented to the fact that they're now going to transcribe and own everything you say on Zoom.
I'll leave proposed section 18 because that's a different discussion, but what can be done in proposed section 15 to fix that so that companies don't have the right to do whatever they want to the terms and conditions without an individual's knowledge?
Yes, the issue of consent has been a challenge the whole time. Even under PIPEDA and other privacy laws, an organization is not supposed to be able to refuse to provide the good or service just because you refuse to provide consent. However, they all do, and that hasn't been challenged and hasn't been enforced. As you say, with regard to Zoom and all the rest of them, they do collect the information.
We don't have a choice right now. It's all or nothing, a Faustian bargain. If you want to use this website, if you want to get a car loan online, if you want to do anything online, yes, you're supposed to read the privacy policy that Mark Zuckerberg admitted to Congress even he doesn't read. Therefore, organizations that acknowledge that no one reads their privacy policies, yet still collect personal information without, by their own admission, having received informed consent, are collecting personal information in violation of PIPEDA and the other laws. No one's ever challenged that.
The only way we're going to get around it is to give each one of us a control as to who gets our information and what they're going to do with it. If I say, “Yes, Zoom, you may collect these pieces of information about me, and you will give me a receipt, an automated receipt system, so that I have proof that this is what I consented to,” then I have something to challenge it with. If the companies were held to account.... It's a challenge because most of them are outside of Canada, but other laws do have extraterritorial reach. Perhaps this one could as well, because consent is the foundation of all of this. In the EU also, it's not a whole lot better. That's one that we absolutely have to tighten up.
Speaking to the point of urgency, like I said, our community also feels some urgency, but why is the urgency so much higher in Canada than in other jurisdictions that are looking at AI? Everyone is moving forward, of course, with delineating the risks, the impacts and how to address them appropriately, but the idea that Canada must move before many of our peers doesn't make a lot of sense to me. I don't think it's going to lead to the best possible rules.
I think that, if we're looking at the timeline, at the very least we should be taking the time for a full public consultation. Consultations like that are really how we stress-test legislation and tease out the different types of problems that can occur. It's a really critical step to improving the final product. We've seen it work with other legislation. It's done on most legislation. I don't see any good reason why we skipped it here.
I'd like to thank the committee for having me here today, even though it's not a committee I usually sit on. It's a pleasure to be here.
I want to thank the witnesses for their presentations.
Mr. Konikoff, if you don't mind, I'd like to talk about automated decision systems. As we know, Bill C-27 grants a new right, namely the right for an individual to receive an explanation about the use of these systems. However, unlike Quebec's Law 25, Bill C-27 does not contain provisions that would allow a person to object to the use of an automated decision system or to have a review of the decisions made by such a system.
In your opinion, what are the potential repercussions for consumers and users if Bill C‑27 does not include such provisions?
:
We base our opposition to this on the fact that the Privacy Commissioner presently does investigations and, although they are sometimes slow, the results are, in our opinion, fair.
We looked at the Competition Tribunal debacle this year with Rogers and Shaw, and the use of that extra step, if you will, by a company that felt like dragging out a process or winning.... We can't see any likelihood that companies using personal information won't take that extra step and go to the tribunal to challenge every commissioner decision. That very likely adds two years to any decision that goes against a company. You could say that presently you can go from the Privacy Commissioner's decision to the Federal Court, but you have to re-prove the case in front of the Federal Court.
It seems like an unnecessary step. When you add that along with our concerns that you can't bring a class action until after all the proceedings are done, including in front of the tribunal, that will discourage class actions. We believe that some private enforcement does change the behaviour of companies when there are egregious privacy violations.
Our concern is that this is just setting up a structure that is an extra step and may well be less favourable to complainants, much as the Competition Tribunal is to the competition commissioner.
:
Sure. I would be delighted. We think a lot about misinformation and online harm. The government has been considering legislation on online safety for a while and been consulting about it. We're urging that it move forward.
We were surprised, but I think pleasantly surprised, that the AI act now would be a potential vehicle to address some of the harms of content recommendation systems, or “social media”, as most people refer to it. It was in the minister's list. If the online safety legislation doesn't move forward, or if it really focuses heavily on content like child sexual exploitation and terrorist content more specifically, then I think this could be a vehicle in which we attempt to regulate the recommendation systems and their algorithmic amplification for potential harm. I think it's a good example of the type of thing that will take time to do correctly through the regulatory process, but I think it is a potential way.
Specifically on the generative AI component of it, in the voluntary code that was referenced, there's a proposed requirement for what's called watermarking. It's basically people being able to detect that it's a manipulated image or video or a deepfake. Especially as generative AI improves and our ability to trust anything we're seeing with our eyes breaks down, that type of technical and regulatory response will be very important.
That's just an example of how we can use this bill. I think that is very important.
:
It's a really core challenge, especially when it comes to misinformation, as opposed to some other content that's more clearly illegal, say things like hate speech.
With respect to misinformation, yes, we have to be very careful, but I tend to focus on a “more speech” approach rather than a censorship approach, which is building the systems where fact checkers are adding context to things we're seeing online and where things like deepfakes are being labelled so people know it. It's not to say there won't be manipulated imagery online, of course—that has always been the case—but people should know that what they're seeing is that. I think that's a way to balance freedom of expression and the real harms that are happening with respect to disinformation.
There are other pieces about algorithmic propagation and the financial motives that we can get into, but I think, at its core, any legislation or regulation through the AI act that tries to regulate speech needs to put freedom of expression at the forefront. Companies need to consider freedom of expression alongside the other aims.
:
That's a good question.
Most large online platforms use automated systems to do content moderation. Those can produce imperfect results. Right now, you're seeing legitimate pro-Palestinian expression being caught up in filters about Hamas, just as an example. These systems are imperfect, though given the scale of these platforms, they're often necessary.
We think, though, that a potential online safety bill, or potentially the AI act, could create additional recourse for users to challenge those systems. The EU Digital Services Act, which is their equivalent, provides the ability for users to receive an explanation as to why content was taken down and to appeal the decision. That's something we don't have here in Canada, just as an example.
Those kinds of content moderation systems are getting better over time. AI and large language models will undoubtedly help make them more effective, but I think, at the end of the day, basically the recourse for a human to be in the loop for those things that are grey is absolutely necessary.
I think the ability for the law to meaningfully prevent and outright ban bias in these systems, psychological harm, and misuse and malicious use depends on the context of the system we're talking about, but in financial services, in health care, in content moderation, which we were talking about, and in generative AI, there's a whole variety of ways in which harms and risks could manifest.
What is good about this bill is that it is comprehensive and wide in terms of its application, so the regulator, when it gets stood up, will have a big job in starting to prioritize which to focus on first. The minister's list provides some hints at that, but I think, to secure responsible adoption, we need to focus on the systems that are also going to be used by a lot of businesses.
Generative AI is a good example of that, in that, increasingly, businesses are starting to think about how they could embed those in their processes to make their businesses more efficient.
Just before I turn to Mr. Vis, I'd like to seek unanimous consent from committee members. As you all know, we received a Standing Order 106(4) request to study the SDTC affair. Based on the timeline, we would need to study it on Monday. I am asking for unanimous consent to do it on Tuesday instead. We have a committee meeting on Tuesday, but so far the invited witnesses have declined, so that would be a good use of committee time.
If all are in agreement, we would do it on Tuesday. Do I have unanimous consent?
Some hon. members: Agreed.
The Chair: Thank you so much.
Mr. Vis, the floor is yours.
:
Turn it around, so that it's no longer up to the companies. Make it so that—thee and me—we have the authority to grant permission to the companies.
When we look at that, I caution that, when it comes to companies or legislation, this will require age verification. We already have companies saying that, in order to make sure children aren't looking at this content, you must provide photo ID—government-issued photo ID of mom, dad and the kids.
All they're doing is collecting that information. Why should we trust that they're going to protect that any better than the information they already don't protect well? It's so complex. That's why, please, involve our organization, my colleagues' organizations and the people who actually understand this from an operational level.
I want to apologize. I had to go give a speech in the House, so I may have missed some things. I'd like to avoid repeating anything that may have already been asked in my absence. That said, Mr. Lawford and Ms. Sai, I'd like to ask you some questions to follow up on Mr. Gaheer's question about the tribunal that the bill aims to create.
I have great respect for Mr. Balsillie, whom the committee heard from on Tuesday, and for Mr. Geist, who appeared last week. I digress to say that, so far, no one has spoken positively about this bill. I think we have a serious problem.
Moreover, Mr. Lawford and Ms. Sai, you're saying that we should remove the provisions to create a tribunal from the bill because that could slow down the process should any lawsuits be filed after the bill comes into force.
Could you elaborate on that?
I'll go over to the Privacy and Access Council of Canada. As this committee meeting is going on today, in Europe there has been a large AI meeting. All the leaders were there—the U.K. leader, the Italian leader and so forth.
I want to ask your thoughts on the EU's artificial intelligence act. I think there was a document dated April 14, 2023, with regard to the EU likely becoming the de facto global standard for general-purpose and generative AI systems. I may be very humble about this, but with the speed at which AI and other new technologies are developing, I don't know how many people actually understand them.
We were over in Europe several months ago as chairs of the Canada-Europe Parliamentary Association. We had some folks actually from Montreal there, who gave us presentations.
It's very complicated and so forth, but I would like to hear your thoughts in terms of the EU's proposed AI act and where that will take not only the EU but the world, because it seems there is some “first mover” going on, if I can use that term.
:
I think each country wants to be the first. As was questioned earlier, is that the right choice? Canada is marching forward and pushing this through, but to what benefit and, more concerning, to what harm?
When it comes to the EU and the U.K., yes, they've given thought and lots of consultation, but I think it's important to not consider these pieces of legislation in isolation, because on one hand we have robust AI regulations coming out of the same country that just passed the euphemistically named “Online Safety Act” that requires all content to be monitored, including yours, because the Internet is global.
How do we protect anything when AI is behind the scenes? AI is used in these buildings, in airports and in shopping centres. It's everywhere already.
Yes, they have a jump on Canada. Is it the right direction? It's certainly better than what we have in Bill C-27. There is no disagreement on that, whether from today's meeting or from many of your previous witnesses. We can look to our European counterparts. They are on a better path. That's about as generous as I can get right now.
:
Thank you very much, Mr. Savard-Tremblay.
It would normally be Mr. Masse's turn, but he had to leave a little early. He agreed to give me his time. So, I'm going to take this opportunity to ask you a few questions, too.
[English]
I'll just echo some of the concerns my colleague Mr. Van Bynen has raised about consent fatigue and also what Mr. Perkins talked about when it comes to the Zoom contract, where the terms can be changed at the discretion of the organization.
In my mind, consent, when it comes to online activities, is a bit overrated, because there is such a big imbalance in power between the user and the organization. We cannot say that there is a meeting of the minds when privacy lawyers don't even bother to read the terms. I'm a lawyer. I haven't practised in a while, but I don't read the terms, and we need to use these apps in our day-to-day lives.
This is what, to me, the role of the legislator is: to strike that balance for consumers, kind of like in a landlord and tenant situation, where the terms are very clearly defined. I gather from your interventions that this balance has not been struck in this bill. What would be absolutely essential for us to strike that balance?
Go ahead, Mr. Hatfield.
:
First, I'd like to touch upon this idea of whether we are balancing business interests with the privacy interests of individuals. I think we have to remember that businesses, especially digital platforms, already exert an incredible amount of power and leverage over individual consumers. Already, there is no equal balancing there.
What we would like to see in this bill is a prioritization of consumer knowledge and consent, rather than a bill that seems to treat consumer consent as an inconvenience for businesses.
On the topic of consent fatigue, that's a concept we take umbrage with because it seems to be used by industry to push for a progressive paring down of consent. The question that seems to be asked right now is what types of business activities no longer need to be consented to because consumers are tired of the lengthy, repetitive consent requests. The question we should be asking is how we overcome consent fatigue by innovating how consumers can manage their preferences in an easy-to-understand and accessible way. Basically, it's retaining the same level of control over consent as before, but in new ways.
This term “consent fatigue” really shouldn't be the basis for getting rid of consent based on ever-changing consumer expectations that are, in truth, being shaped by the industry itself.