:
I call the meeting to order.
Good morning one and all. Welcome to meeting number 108 of the House of Commons Standing Committee on Industry and Technology. Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.
Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming its study of Bill , an act to enact the consumer privacy protection act, the personal information and data protection tribunal act and the artificial intelligence and data act and to make consequential and related amendments to other acts.
Today's witnesses are all joining us by video conference. We have with us Ignacio Cofone, Canada research chair in artificial intelligence law and data governance at McGill University; Catherine Régis, full professor at Université de Montréal; Elissa Strome, executive director of pan-Canadian AI strategy at the Canadian Institute for Advanced Research; and Yoshua Bengio, scientific director at Mila - Quebec Artificial Intelligence Institute.
Welcome and thank you all for being with us.
Since we are already a bit behind schedule, I'm going to turn the floor right over to you, Mr. Cofone. You have five minutes for your opening statement.
:
Thank you very much, Mr. Chair.
Good morning, everyone, and thank you for the invitation to share with the committee my thoughts on Bill .
I'm appearing today in my personal capacity. Mr. Chair has already introduced me, so I'm going to skip that part and say that it is crucial that Canada have a legal framework that fosters the enormous benefits of AI and data while preventing its population from becoming collateral damage.
I'm happy to share my broad thoughts on the act, but today I want to focus on three important opportunities for improvement while maintaining the general characteristics and approach of the act as proposed. I have one recommendation for AIDA, one for the CPPA and one for both.
My first recommendation is that AIDA needs an improved definition of “harms”. AIDA is an accountability framework, and the effectiveness of any accountability framework depends on what it is that we hold entities accountable for. AIDA currently recognizes property, economic, physical and psychological harms, but for it to be helpful and comprehensive, we need one step more.
Consider the harms to democracy that were imposed during the Cambridge Analytica scandal and consider the meaningful but diffuse and invisible harms that are inflicted every day through intentional misinformation that polarizes voters. Consider the misrepresentation of minorities that disempowers them. These go unrecognized by the current definition of “harms”.
AIDA needs two changes to recognize intangible harms beyond individual psychological ones. It needs to recognize harms to groups, such as harms to democracy, as AI harms often affect communities rather than discrete individuals, and it needs to recognize dignitary harms, like those stemming from misrepresentation and the deepening of systemic inequalities through automated means.
I therefore urge the committee to amend subsection 5(1) of AIDA to incorporate these intangible harms to individuals and to communities. I would be happy to propose suggested language.
This fuller account of harms would bring Canada in line with international standards, such as the EU AI Act, which considers harms to “public interest”, to “rights protected” by EU law, to a “plurality of persons” and to people in a “vulnerable position”. It would also better align with AI ethics frameworks, such as the Montreal declaration for responsible AI, the Toronto declaration and the Asilomar AI principles. You would also increase consistency within Canadian law, as the directive on automated decision-making repeatedly refers to “individuals or communities”.
My second recommendation is that the CPPA must recognize inferences as personal information. We live in a world where things as sensitive and dangerous as our sexuality, our ethnicity and our political affiliation can be inferred from things as inoffensive as our Spotify listens, our coffee orders or our text messages, and those are just some of the inferences that we know about.
Inferences can be harmful even when they are incorrect. The credit rating agency TransUnion, for example, was sued in the United States a couple of years ago for mistakenly inferring that hundreds of people were terrorists. By supercharging inferences, AI has transformed the privacy landscape.
We cannot afford a privacy statute that focuses only on disclosed information and builds a back door into our privacy law that strips it of its power to create meaningful protection in today's inferential economy. The CPPA doesn't rule out inferences being personal information, but it doesn't incorporate them explicitly. It should. I urge the committee to amend the definition of personal information in one of the acts to say that “personal information” means disclosed or inferred information about an identifiable individual or group.
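To make “inferred information” concrete, here is a minimal sketch in Python. The data, the rule and the user ID are entirely hypothetical; the point is only that the output is sensitive information about an identifiable person that the person never disclosed.

```python
# Hypothetical example: a platform holds only innocuous disclosed data,
# but a simple model turns it into a sensitive, undisclosed attribute.
disclosed = {
    "user_42": {"playlists": ["folk", "protest songs"], "coffee": "oat-milk latte"},
}

def infer_political_leaning(profile: dict) -> str:
    # Stand-in for a trained model; real systems learn such correlations
    # at scale from far richer behavioural data.
    if "protest songs" in profile["playlists"]:
        return "likely progressive"
    return "unknown"

for user, profile in disclosed.items():
    inference = infer_political_leaning(profile)
    # Under the proposed amendment, this output would itself be personal
    # information, even though the user never disclosed it.
    print(user, "->", inference)
```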
This change would also increase consistency within Canadian law, as the Office of the Privacy Commissioner has repeatedly stated that inferences should be personal information, and also with international standards, as foreign data protection authorities emphasize the importance of inferences for privacy law. The California attorney general has also stated that inferences should be personal information for the purposes of privacy law.
My third recommendation, briefly, concerns a consequence of this bill: reforming enforcement. As AI and data continue to seep into more aspects of our social and economic lives, one regulator with limited resources and personnel will not be able to keep an eye on everything. It will need to prioritize. If we don't want all other harms to fall through the cracks, both parts of the act need a combined public and private enforcement system, taking inspiration from the GDPR, so that we have an agency that issues fines without preventing the court system from compensating for tangible and intangible harm done to individuals and groups.
We have also submitted a brief elaborating on the suggestions outlined here.
I'd be happy to address any questions or elaborate on anything.
Thank you very much for your time.
:
Good morning, Mr. Chair and members of the committee. Thank you for the opportunity to comment on the AI portion of Bill .
I am a full professor in the faculty of law at Université de Montréal. I am also the Canada research chair in collaborative culture in health law and policy, as well as the Canada-CIFAR chair in AI, affiliated with Mila. From January 2022 to December 2023, I co-chaired the Working Group on Responsible AI for the Global Partnership on AI.
The first point I want to make is to reaffirm not only the importance, but also the urgency of creating a better legal framework for AI, as proposed in Bill . That has been my view for the past five years, and I am now more convinced than ever, given the dizzying pace of recent developments in AI, which you are all familiar with.
We need legal tools that are binding. They must clearly set out our expectations, values and requirements in relation to AI, at the national level. During the citizen consultations that culminated in the development of the Montréal Declaration for a Responsible Development of Artificial Intelligence, the first need identified was for an appropriate legal framework that would enable the development of trusted AI technologies.
As you probably know, that trend has spread across the world, the most obvious example definitely being the European Union's efforts. As of last week, the EU is now one step closer to adopting a regulatory framework for AI.
In addition to these national requirements, the global discussions around AI and the resulting decisions will have repercussions for every country. In fact, the idea of creating a specific AI authority is being discussed.
In order to ensure that Canadian values and interests are taken into account in the international space, Canada has to be able to influence the discussions and decisions. Setting out a national vision with strong and clear standards is vital to playing a credible, meaningful and influential role in the global governance of AI.
That said, I think Bill could still use some improvements. I will focus on two of them today.
The first improvement is to make the artificial intelligence and data commissioner more independent. Although recent amendments have resulted in improvements, the commissioner is still very much tied to Innovation, Science and Economic Development Canada. To avoid any conflict of interest, real or apparent, the government should create more of a wall between the two entities. This would address any tensions that might arise between the government's role as a funder on one hand, and its role as a watchdog on the other.
Possible solutions include creating an office of the artificial intelligence commissioner that is totally independent of the department, and empowering the commissioner to impose administrative monetary penalties or require that corrective actions be taken to address the accountability framework. In addition, the commissioner could be asked to recommend new or improved regulations informed by their experience as a watchdog, mainly through the annual public report.
Other measures could also be taken. Once the legislation is passed, for instance, the government could give the commissioner the financial and institutional resources, as well as the qualified staff necessary to successfully carry out the duties of the commissioner. Making sure that the commissioner has the means to achieve their objectives is really important. Another possibility is to create a mechanism whereby the public could report issues directly to the commissioner. That would establish a relationship between the two.
The second major improvement that's needed, as I see it, is to further strengthen the crucial role that human rights can play in analyzing the risks and impacts of AI systems. The importance of taking into account human rights in defining the classes of high-impact AI systems is specifically mentioned. However, the importance of then incorporating consideration of those rights in companies' assessments, which could include an analysis of the risks of harm and adverse effects, is not quite so clear.
I would also recommend adding specific language to address the need to conduct impact assessments for human rights in relation to individuals or groups of individuals who may be affected by high-impact AI systems. A portion of those assessments could also be made public. These are sometimes called human rights impact assessments.
The Council of Europe, the European Union with its AI legislation, and even the United Nations Educational, Scientific and Cultural Organization are working on similar tools, so exploring the possibility of sharing expertise would be worthwhile.
The second recommendation is fundamental. While the AI race is very real, there can be no winner of the race to violate human rights. The legislation must make that clear.
Thank you.
[English]
Hello. My name is Elissa Strome. I am the executive director of the pan-Canadian AI strategy at the Canadian Institute for Advanced Research, CIFAR.
[Translation]
Thank you for the opportunity to meet with the committee today.
[English]
CIFAR is a Canadian-based global research organization that brings together brilliant people across disciplines and borders to address some of the most pressing problems facing science and humanity. Our programs span all areas of human discovery.
CIFAR's focus on pushing scientific boundaries allowed us to recognize the promise of an idea that Geoffrey Hinton came to us with in 2004—to build a new CIFAR research program that would advance the concept of artificial neural networks. At the time, this concept was unpopular, and it was difficult to find funding to pursue it.
Twenty years later, this CIFAR program continues to put Canada on the global stage of leading-edge AI research and counts Professor Hinton, Professor Yoshua Bengio—who is here with us today—Professor Richard Sutton at the University of Alberta and many other leading researchers as members.
Due to this early foresight and our deep relationships, in 2017, CIFAR was asked to lead the pan-Canadian AI strategy. We continue to work with our many partners across the country and across sectors to build a robust and interconnected AI ecosystem around the central hubs of our three national AI institutes: Amii in Edmonton, Mila in Montreal and the Vector Institute in Toronto. There are now more than 140,000 people working in the highly skilled field of AI across the country.
However, while the pan-Canadian AI strategy has delivered on its initial promise to build a deep pool of AI talent and a robust ecosystem, Canada has not kept up in our regulatory approaches and infrastructure. I will highlight three priorities for the work of this committee and ongoing efforts.
First is speed. We cannot delay the work of AI regulation. Canada must move quickly to advance our regulatory bodies and processes and to work collaboratively, at an international level, to ensure that Canada's responsible AI framework is coordinated with those of our partners. We must also understand that regulation will not hinder innovation but will enhance it, providing greater stability and ensuring interoperability and competitiveness of Canadian-led AI products and services on the global stage.
Second is flexibility. The approach we take must be able to adapt to a fast-changing technology and global context. So much is at stake, with the potential for AI to be incorporated into virtually every type of business or service. As the artificial intelligence and data act reflects, these effects can have a high impact. This means we must take an inclusive approach to this work across all sectors, with ongoing public engagement to ensure citizen buy-in, in parallel with the development and refinement of these regulations.
We also must understand that AI is not contained within borders. This is why we must have systems for monitoring and adapting to the global context. We must also adapt to the advances and potentially unanticipated uses and capabilities of the technology. This is where collaboration with our global partners will continue to be key and will call upon the strengths of Canada's research community, not only in ways to advance AI safety but also in the ethical and legal frameworks that must guide it.
Third is investment. Canada must make significant investments in the infrastructure, systems and qualified personnel needed for meaningful regulation of AI when it is used in high-impact systems. We were glad to see this defined in the amendments to the act.
Just like those in the U.S. and the U.K., our governments must staff up with the expertise to understand the technology and its impacts.
For Canada to remain a leader in advancing responsible AI, Canadian companies and public sector institutions must also have access to the funding and computing power they need to stay at the leading edge of AI. Again, the U.S., the U.K. and other G7 countries have a head start on us, having already pledged deep investments in computing infrastructure to support their AI ecosystems, and Canada must do the same.
I won’t pretend that this work won't be resource-intensive; it will be. However, we are at an inflection point in the evolution of artificial intelligence, and if we get regulation right, Canada and the world can benefit from its immense potential.
To conclude, Canada has tremendous strengths in our research excellence, deep talent pool and rich, interconnected ecosystem. However, we must act smartly and decisively now. Getting our regulatory framework, infrastructure and systems right will be critical to Canada's continued success as a global AI leader.
I look forward to the committee's questions and to the comments from my fellow witnesses.
Thank you.
Good morning.
First, I want to say how much I appreciate this opportunity to meet with the committee.
My name is Yoshua Bengio, and I am a full professor at Université de Montréal, as well as the founder and scientific director of Mila - Quebec Artificial Intelligence Institute. Here's a fun fact: I recently became the most cited computer scientist in the world.
[English]
Over the past year I've had the privilege of sharing my perspective on AI in a number of important international forums, including the U.S. Senate; the first global AI Safety Summit; an advisory board to the UN Secretary-General; and the U.K. Frontier AI Taskforce; in addition to the work I'm doing here in Canada co-chairing the government's advisory committee on AI.
In recent years, the pace of AI advancement has accelerated to such a degree that I and many leaders in the field of AI have revised downwards our estimates of when human levels of broad cognitive competence, also known as AGI, will be achieved—in other words, when we will have machines that are as smart as humans at a cognitive level.
This was previously thought to be decades or even centuries away. I now believe, with many of my colleagues, including Geoff Hinton, that superhuman AI could be developed in the next two decades, and even possibly in the next few years.
If we look at the low end, we're not ready, and this prospect is extremely worrying.
[Translation]
The prospect of the early emergence of human-level AI is very worrisome.
[English]
As discussed in the above international forums, without adequate guardrails, the current AI trajectory poses serious risks of major societal harms even before AGI is reached.
To be clear, progress in AI has opened exciting opportunities for numerous beneficial applications that have motivated me for many years, yet it is urgent to establish the necessary guardrails to foster innovation while mitigating risks and harms.
With that in mind, we urgently need agile AI legislation. I think this law does that and is moving in the right direction, but initial requirements must be put in place even before the consultations to develop the more comprehensive regulatory framework are completed. With the current approach, it would take something like two years before enforcement would be possible.
[Translation]
I therefore support AIDA broadly and would like to formulate recommendations to this committee on ways to strengthen its capacity to meaningfully protect Canadians. They are laid out in detail in my submission, but there are three things that I would like to highlight.
The first is the urgency to adopt legislation.
[English]
Upcoming advances are likely to be disruptive, and the timeline for them is very uncertain. In this situation, an imperfect law whose regulations can be adapted later is better than no law, and better than postponing a law too long. We would do best to move forward with AIDA's framework and rely on agile regulatory systems that can be adapted as the technology evolves.
Also, because of the urgency, the law should include initial provisions that will apply as soon as it is adopted to ensure the public's protection while the regulatory framework is being developed.
What would we do as an initial step? I'm talking about a registry.
Developers of systems beyond a certain level of capability should report to the government and provide information about their safety and security measures, as well as safety assessments. A regulator will be able to use that information to form best-in-class requirements for future permits to continue developing and deploying these advanced systems. This would put the burden of demonstrating safety on the developers with the billions required to build these advanced systems, rather than on taxpayers.
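To make the registry idea concrete, here is a minimal sketch in Python of what a registry entry might capture. The field names and the use of training compute as a capability proxy are assumptions for illustration; AIDA specifies no such schema, and the 1e26 FLOPs figure is borrowed from the reporting threshold in the U.S. executive order.

```python
from dataclasses import dataclass, field

# Hypothetical reporting threshold, using training compute as a proxy for
# capability. AIDA specifies no such figure; 1e26 FLOPs is the reporting
# line in the U.S. executive order, borrowed here only for illustration.
REPORTING_THRESHOLD_FLOPS = 1e26

@dataclass
class RegistryEntry:
    developer: str
    system_name: str
    training_compute_flops: float
    safety_measures: list[str] = field(default_factory=list)
    security_measures: list[str] = field(default_factory=list)
    safety_assessments: list[str] = field(default_factory=list)

    def must_report(self) -> bool:
        """Whether the system crosses the capability proxy and must file."""
        return self.training_compute_flops >= REPORTING_THRESHOLD_FLOPS

entry = RegistryEntry(
    developer="Example Lab",            # hypothetical
    system_name="frontier-model-v1",    # hypothetical
    training_compute_flops=3e26,
    safety_measures=["red-teaming", "refusal training"],
    security_measures=["weights encrypted at rest", "strict access controls"],
    safety_assessments=["third-party dangerous-capability evaluation"],
)
print(entry.must_report())  # True: the developer files safety information
```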
Second, another important point to add to the law is that national security risks and societal threats should be listed among the high-impact categories. Examples of harmful capabilities include being easily modified to help bad actors design dangerous cyber-attacks and weapons, deceiving and manipulating as well as or better than humans, and finding ways to self-replicate in spite of contrary programming instructions.
Finally, my last main point concerns the need for pre-deployment requirements. Developers should be required to register their system and demonstrate its safety and security even before the system is fully trained and developed, and before deployment. We need to address and target the risks that emerge earlier in an AI system's life cycle, which the current law doesn't seem to do.
[Translation]
In conclusion, I welcome the committee's questions and look forward to hearing what my fellow witnesses have to say. All of their comments thus far have been quite interesting.
At this point, I would like to thank you for having this important conversation.
:
Of course. The directive on automated decision-making explicitly recognizes that harms can be done to individuals or communities, but when it defines harm in proposed subsection 5(1), AIDA has repeated references to individuals for harm to property and for economic, physical and psychological harm.
The thing is that harms in AIDA, by their nature, are often diffuse. Oftentimes they are harms to groups, not to individuals. For a good example of this, think of AI bias, which is covered in proposed subsection 5(2), not in 5(1). If you have an automated system that allocates employment, for example, and it is biased, it is very difficult to know whether a particular individual got or didn't get the job because of that bias. It is easier to see that the system may be biased against a certain group.
The same goes for representation issues in AI. An individual would have difficulty in proving harm under the act, but the harm is very real for a particular group. The same is true of misinformation. The same is true of some types of systemic discrimination that may not be captured by the current definition of bias in the act.
What I would find concerning is that by regulating a technology that is more likely to affect groups than individuals under a harm definition that specifically targets individuals, we may be leaving out most of what we want to cover.
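A short sketch can illustrate how bias that no individual could prove becomes visible at the group level. The hiring data here is hypothetical, and the four-fifths rule used as the flag is a U.S. employment-selection convention, not anything in AIDA.

```python
from collections import Counter

# Hypothetical outcomes from an automated hiring system. No single record
# proves that one person was rejected because of bias, but aggregate
# selection rates can reveal harm to a group.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60 +
            [("B", True)] * 20 + [("B", False)] * 80)

selected = Counter(group for group, ok in outcomes if ok)
total = Counter(group for group, _ in outcomes)
rates = {group: selected[group] / total[group] for group in total}

# Four-fifths rule, used here only as an illustration: flag a group whose
# selection rate falls below 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Group {group}: {rate:.0%} vs best {best:.0%} -> possible group-level bias")
```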
:
I would like to ask this of perhaps all of the witnesses, maybe starting with Ms. Strome.
We've had a great debate about Dr. Bengio's saying that having an imperfect bill is better than not having a bill. The challenge for parliamentarians is in two aspects of that.
One, I never like passing an imperfect bill, especially one as important as this. I don't think there's any merit in sort of saying that we're number one because we got our first bill through. The way Parliament works is that it's five to 10 years before legislation comes back.
I also don't like giving the department a blank cheque to basically not have to come back to Parliament on an overall public policy framework for how we're going to govern this. This bill lacks that. It just talks about the specifics of high-impact, general-purpose and machine-learning systems. It doesn't talk about the overall picture, the way the Canada Health Act does in referring to five principles.
What are the five principles of AI, such as transparency and that kind of thing? The bill doesn't speak to that, and it governs all AI. I think that's an issue going forward. I also think that it's an issue to give the bureaucracy, while maintaining flexibility, total control over future development without having to seek approval from Parliament.
I would like to ask all of the witnesses about the five things, four things or three things that are high-level philosophies about how we should govern AI in Canada, which this bill does not seem to define.
I'll start with Ms. Strome, and then we'll go from there.
There's a broad international consensus about what constitutes safe and trustworthy AI. Whether it's the OECD principles or the Montreal declaration, many organizations have come to a shared understanding of what constitutes responsible AI.
These principles include having fairness as a primary concern. That ensures that AI delivers recommendations that treat people fairly and equitably and that there's no discrimination and no bias.
Another principle is accountability, which means ensuring that AI systems and developers of AI systems are accountable for the impacts of the technologies that they are developing.
Transparency is one that you mentioned. It ensures that we understand and have the opportunity to interrogate AI systems and models, and get a better understanding of how they arrive at the decisions and recommendations they produce.
Privacy is a principle that is very deeply interconnected with the bill that's before you today. Those questions are deeply intertwined with AI as well, to ensure that the fundamental principles and rights of privacy are also protected.
Thanks to all the witnesses for being here today. It seems that we have some really important testimony, so thank you for making the time. Thank you for lending your expertise to this important conversation.
I think we've all heard the phrase or the cliché that “perfection can be the enemy of the good”. I wonder if this is one of those instances.
We have a very fast-evolving AI space and lots of expertise here in Canada, but then we have people with differing opinions. Some people say that we should split the bill up and do the AIDA portion over again. We have others saying that we need to move forward. In a lot of the opening testimony that I heard from you today, speed is of the essence.
Mr. Bengio, maybe you can comment on whether you think that we should start over with AIDA and maybe comment on the importance of moving quickly.
:
Maybe the shortest-term concern, which was a priority, for example, for the experts consulted by the World Economic Forum just a few weeks ago, is disinformation. An example is the current use of AI deepfakes to imitate people: imitating their voices, rendering their movements in video and interacting with people through text and dialogue in a way that can fool a social media user and change their mind on political questions.
There's real concern about the use of AI in politically oriented ways that go against the principles of our democracy. That's a short-term thing.
The one that I would say is next, maybe a year or two later, is the threat of these advanced AI systems being used for cyber-attacks. These systems have been making rapid progress in programming ability in recent years, and that is expected to continue even faster than any other ability, because we can generate an infinite amount of data for that, just as in playing the game of Go. When these systems get strong enough to defeat our current cyber-defences and penetrate our industrial digital infrastructure, we are in trouble, especially if they fall into the wrong hands. We need to secure those systems. One of the things the Biden executive order insisted on is that these large systems be secured to minimize those risks.
Then there were other risks that people talk about, such as helping bad actors to develop new weapons or to have the expertise that they wouldn't have otherwise. All of these things need a law as quickly as possible to make sure that we minimize those risks.
:
Absolutely. You are exactly right.
Voluntary codes are useful to get off the ground quickly, but there's no guarantee that companies will follow them. Also, the voluntary code is very vague. We need more precision about the criteria for what is acceptable and what is not. Companies, I think, need to have that.
We've seen that some companies have even declared publicly in the U.S. that they wouldn't follow the Biden voluntary code, so I think we have no choice. We have to make sure that there's a level playing field. Otherwise, we're favouring the corporations that don't go by the voluntary code. For them it means less expense [Technical difficulty—Editor] with the public. We really need to have regulations and not just [Technical difficulty—Editor].
Professor Bengio, in your opening statement, you talked about provisions that could be implemented right away, given the urgent need for action. You described something along the lines of a registry, whereby large generative AI systems and models would be registered with the government and include a risk evaluation.
Basically, you're saying that we should do the same thing we do for drugs: before a drug is allowed on the market, the manufacturer has to show that it is safe and that the benefits outweigh the risks.
Are you likening the challenge with AI systems to a public health issue, thereby warranting that companies submit substantial evidence about their products to a government agency?
Thanks to our witnesses.
There are a couple of things from this committee's past work that shed light on where we are right now and how it relates to the voluntary code. One of them is that it used to be legal in Canada for businesses to write off fines and penalties in environmental or anti-consumer court cases. They would actually get a tax deduction of up to 50% of the fines and penalties. Drug companies were fined for being misleading, and companies were fined for environmental damage that was done.
It led to an imbalance that actually created an incentive, as a business-related expense, to go ahead with bad practices that affected people and the environment, because it actually paid off for them. It created an imbalance for innovation and so forth.
The other one is my work on enacting the right to repair, which passed through this committee and was in the Senate. We ended up taking a voluntary agreement in the auto sector. We basically said that we got a field goal instead of a touchdown. This has now emerged again as an issue, because some of the industry will follow the voluntary agreement and some won't. Some wouldn't even sign on to the voluntary agreement, including Tesla, until recently. There are still major issues, and now they're back to lobbying here on the Hill. We knew about that vulnerability 10 years ago, when we started this: as the issue moved toward electronics and the sharing of information and data, things changed again, and there was nothing in place to cover it.
My question is for Ms. Strome, Ms. Régis and Mr. Bengio.
With this voluntary agreement, have we created a potential system right now whereby good actors will come to the table and follow a voluntary agreement while bad actors might actually use it as an opportunity to extend their business plans and knock out competition? I've seen that happen in those two examples that we've had there.
I'll start with you, Ms. Strome, because you haven't been on yet. Then we can hear from Ms. Régis and Mr. Bengio, if we can go in that order, please.
I'll go to Mr. Cofone, but first, I guess, the trouble we're in right now is that we have this voluntary code out there already. There are the actions and deliberations of companies that are making decisions right now, some in one direction and some in another, until they're brought under regulatory powers. I think the ship has sailed, to some degree, in terms of where this can go. We're left with this bill and all the warts it has on a number of different issues.
One thing that's a challenge—and maybe Mr. Cofone can highlight a little bit of this with his governance background—is that I met with ACTRA, the actors guild, and a lot of their concerns on this issue have to be dealt with through the Copyright Act. If we don't somehow deal with it in this bill, though, then we actually leave a gaping hole for not just abuse of the actors—that includes children—and their welfare, but we also leave a blind spot for how the public can be manipulated and so forth in everything from consumer society to politics to a whole slew of things.
What do we do? Do you have any suggestions? How do we fix those components that we're not even...? It's a separate act.
:
Yes. Perhaps I can quickly add something on your prior question, beyond my agreement with the prior three responses.
The environmental example you brought up is a great one. In environmental law, years ago, we thought that regulating was challenging because we mistakenly believed that the costs of regulation were local while the harms were global. Not regulating seemed to mean developing the industry locally while the harms were spread globally.
With AI, it's the same. We think sometimes the harms are global and the costs of regulating are local, but that is not the case. Many of the harms of AI are local. That makes it urgent for Canada to pass a regulation such as this one, a regulation that protects its citizens while it fosters industry.
On the Copyright Act, it's a challenging question. As Professor Bengio pointed out a bit earlier, AI is not just one technology. Technologies do one thing—self-driving cars drive and cameras film—but AI is a family of methods that can do anything. Regulating AI is not about changing one variable or the other; AI will actually change or affect all of the law. We will have to reform several statutes.
What is being discussed today is an accountability framework plus a privacy law, because that's the one that's most intimately affected by AI. I do not think we should have the illusion that changing this will account for all AI and for all the effects of AI, or think that we should stop it because it doesn't capture everything. It cannot. I think it is worth discussing an accountability framework to account for harm and bias and it is worth discussing the privacy change to account for AI. It is also possibly warranted to make a change in the Copyright Act to account for generative AI and the new challenges it brings for copyright.
Really quickly, maybe I could get a yes-or-no answer on whether it's a good idea or a bad idea, maybe in the long term, if eventually we got to a joint House and Senate committee that oversaw AI on a regular basis, similar to what we do for defence. Would that be a good thing or a bad thing? It would cover all those bases of other jurisdictions, rather than just the industry committee, if we had both houses meet and oversee artificial intelligence in the future.
I know it's a hard one—yes or no—but I don't have much time.
Could we go in reverse order? Thank you.
:
Thank you very much, Mr. Chair.
I'd like to thank all the witnesses. Today's discussions are very interesting.
I'm not necessarily speaking to anyone in particular, but rather to all the witnesses.
Bad actors, whether they be terrorists, scammers or thieves, could misuse AI. I think that's one of Mr. Bengio's concerns. If we were to pass Bill tomorrow morning, would that prevent such individuals from doing so?
To follow up on the question from my Bloc Québécois colleague earlier, it seems clear to me that, even in the case of a recorded message intended to scam someone, the scammer will not specify that the message was created using AI.
Do you really believe that Bill will change things or truly make Quebeckers and Canadians safer when it comes to AI?
:
I think so, yes. What it will do, for example, is force legitimate Canadian companies to protect the AI systems they've developed from falling into the hands of criminals. Obviously, this won't prevent these criminals from using systems designed elsewhere, which is why we have to work on international treaties.
We already have to work with our neighbour to the south to minimize those risks. What the Americans are asking companies to do today includes this protection. I think that if we want to align ourselves with the United States on this issue to prevent very powerful systems from falling into the wrong hands, we should at least provide the same protection as they do and work internationally to expand it.
In addition, sending the signal that users must be able to distinguish between artificial intelligence and non‑artificial intelligence will encourage companies to find technical solutions. For example, one of the things I believe in is that the companies making cameras and recorders should embed an encrypted signature in the content they capture, to distinguish what is generated by artificial intelligence from what is not.
For companies to move in that direction, they need legislation to tell them that they need to move in that direction as much as possible.
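As one illustration of the kind of technical solution described here, the following sketch signs captured content with a device key using Ed25519 from the Python cryptography library. It is a simplification: it assumes the camera can protect a private key, and it omits the certificates and metadata that real provenance standards such as C2PA build in.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A device maker provisions each camera with a private key; the matching
# public key is published so anyone can verify the device's output.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

recording = b"raw sensor bytes of a captured video frame"
signature = device_key.sign(recording)  # shipped with the file as metadata

def verify_capture(content: bytes, sig: bytes) -> bool:
    """True only if the content is bit-identical to what the device signed."""
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(verify_capture(recording, signature))              # True: authentic capture
print(verify_capture(b"AI-generated frame", signature))  # False: no valid signature
```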
:
I would like to raise a few small points. There was a question about whether the Canadian legislation will be sufficient. First, it will certainly help, but it won't be enough, given the other legislative orders that must be taken into account. The provinces have a role to play in this regard. In fact, as we speak, Quebec is launching its recommendations report on regulating artificial intelligence, entitled “Prêt pour l'IA”. The Government of Quebec has mandated the Conseil de l'innovation du Québec to propose regulatory options, so we have to consider that the Canadian legislation will be part of a broader set of initiatives that will help solidify the guarantees and protect us well.
As for the United States, it's difficult to predict which way it will go next. However, President Biden's executive order was a signal of a magnitude few expected. So it's a good move then, and one to watch.
Your question touches a bit on the really important issue of interoperability. How will Canada align with the European Union, the United States and others?
As for the European case, the final text of the legislation was published last week. Since it's 300 pages long, I don't have all the details; however, I will tell you that we certainly have to think about it, so as not to penalize our companies. In other words, we really need to know how our legislation and Canadians are going to align with it, to a certain extent.
Furthermore, one of the questions I have right off the bat is this. European legislation is more focused on high‑risk AI systems, and their legal framework deals more with risk, while ours deals more with impact. How can the two really work together? This is something that needs more thought.
:
Thank you for the question. I'll give it a shot for you.
That report was authored by Professor Gillian Hadfield, who's the director of the Schwartz Reisman Institute at the University of Toronto. She's a Canada CIFAR AI chair, and I believe she was a witness at this committee last week or the week before.
As you can understand, Professor Hadfield is an expert in regulation and in particular has developed significant expertise in understanding AI regulation in Canada and internationally.
The policy brief that we published at CIFAR represented ideas that came from Professor Hadfield and her laboratory, her research associates and her colleagues, really looking directly at the need to be more innovative as we think about regulating AI. This is a technology that's moving very quickly. It's a technology with so many dimensions that we haven't explored previously in other regulated sectors.
I believe the point that Professor Hadfield and her colleagues were making was that as we think about regulating AI, we also need to be incredibly flexible, dynamic and responsive to the technology as it moves forward.
The pan-Canadian AI strategy at its inception was really designed to advance Canada's leadership in AI research, training and innovation. It really focused on building a deep pool of talented individuals with AI expertise across the country and building very rich, robust, dynamic AI ecosystems in our three centres in Toronto, Montreal and Edmonton. That was the foundation of the strategy.
As the strategy evolved over the years, we saw additional investments in budget 2021 to focus on advancing the responsible development, deployment and adoption of AI, as well as thinking about those opportunities to work collaboratively and internationally on things like standards, etc.
Indirectly, I would say that the pan-Canadian AI strategy has at least been engaged in the development of the AI and data act through several channels. One is through the AI advisory council that Professor Bengio mentioned earlier. He's the co-chair of that council. We have several leaders across the AI ecosystem who are participants and members on that council. I'm also a member on that council. The AI and data act and Bill have been discussed at that council.
Second—
Professor Régis, I don't want to make any assumptions or age us unnecessarily. However, when I was young, I watched films on television. After 10 minutes or so, there would be advertisements. Persuasive tactics were used to try to sell me products. It was obvious that persuasion was involved and that, if I watched these things, I was explicitly consenting to having all sorts of items sold to me.
With all the artificial intelligence or non‑artificial intelligence algorithms out there, I find that it's now getting harder and harder to identify a persuasion tactic. This issue will become increasingly widespread. We're often asked to agree to something. However, the fine print makes it incomprehensible to the average person, or even to a highly educated person.
First, do you agree that it's becoming harder and harder to consent to these tactics? Second, how can the quality of consent be improved in this situation? Third, is there any way to improve the current bill in order to enhance the quality of consent?
That's a lot of questions. You have one minute and 15 seconds left. You can answer the questions in quick succession.
:
Yes. It's easier than ever to be persuaded. That's actually one of the strategies. It can involve a very personalized approach based on your history and some of your personal data. This is indeed a problem. In the case of children, the issue gives me even greater cause for concern.
This issue not only affects consumers, as you pointed out, but democracies in general. I'm concerned about being locked into bubbles where we receive only information that confirms certain things or that exposes us to less diverse viewpoints. This issue raises a wide range of concerns, which must be taken into account. That's my answer to the first question.
Now, what more can we do? As I was saying, we need to think about this. Consumer protection is also at stake. We could do more work on the provincial component. In a recent study carried out by the Canadian Institute for Advanced Research, millions of tweets were analyzed to find out how people across Canada viewed artificial intelligence. Contrary to what you might think, people sometimes have an extremely positive view of artificial intelligence. However, they're less critical and less aware of what this technology actually does in their lives and of its limits. We often hear about legal issues, for example, but this awareness is in its infancy.
One recommendation in the Quebec innovation council's report is to encourage people to develop a critical mindset and to think about what artificial intelligence is doing, how it can influence us, and how we can create a guide for defending ourselves against it. This must start at an early age.
[English]
Ms. Strome, you mentioned in your opening testimony that we need to invest in subject matter experts in the Department of Industry. I'm very concerned about this. We know that artificial intelligence operates not just in Canada; it's also global. Even if we have a regulatory approach in Canada, if this bill is indeed passed, I don't think we can isolate ourselves from the potential societal and individual harms that will come from AI actors in other parts of the world.
I remember a few years ago that the Government of Canada—and I'm not trying to make a political point here—had a hard time operating its pay system for public servants.
Mr. Brian Masse: It still does.
Mr. Brad Vis: It still does.
How in the world is Industry Canada going to regulate online harms from AI? They can't even manage their own pay systems. I just don't know if our public service is nimble enough right now to do the job we need it to do in the format suggested thus far.
We have many opportunities to work with like-minded peer nations around the world. Obviously, we are close allies with the U.S., the U.K. and other G7 countries. All of these countries are grappling with the same issues related to the risks associated with AI.
There are some good steps in the right direction. New systems are being developed and considered around international collaboration on the regulation of AI. One is the one I just mentioned, the U.K. AI Safety Summit, which is now a collection of like-minded countries that are coming together on a regular basis to explore and understand those risks and how we can work together to mitigate them.
It was really telling in the Bletchley declaration, which was published following that meeting, that there was a recognition even in the statement that different countries will have different regulatory approaches, laws, and legislation around AI. However, even within those differences, there are, first of all, opportunities to align, and even opportunities for interoperability. I think that's one great example, and it's an opportunity for Canada to actually make a really significant contribution.
:
One of the areas we're deeply concerned about right now is the lack of investment in computing infrastructure within our AI ecosystem. Right now, there is really and truly a global race for computing technology. These large language models and advanced AI systems really require very advanced and significant computational technology.
In Canada and the Canadian AI ecosystem, we don't have access on the ground to that level of computational power. Companies right now in Canada are buying it on the cloud. They're buying it primarily from U.S. cloud providers. Academics in Canada literally don't have access to that kind of technology.
For us to be able to develop the skills, tools and expertise to really interrogate these advanced AI systems and understand where their vulnerabilities are and where the safety and risk concerns are, we're going to need very significant computational power. As we talk about regulating AI, that goes for the academic sector, the government sector and the private sector as well. That's a critical component.
:
I actually believe that patents aren't the only measure of the value in our AI ecosystem. In fact, I believe that talent is one of the strongest measures of the strength and the value of our AI ecosystem.
Patents, absolutely, are important, particularly for start-up companies that are trying to protect their intellectual property. However, much of the AI that's developed is actually released into the public domain; it's open-sourced. We derive really significant value and really innovative new products and services that are based on AI through the very highly skilled people who come together with the right resources, the right expertise, the right collaborators and the right funding to actually develop new innovations that are based on AI.
Patents are one measure, but they're not the only one, so I think that we need to take a broader view on that.
When we look at where Canada stands internationally, it's true that AI is on a very competitive global platform and stage right now. One index is called the Global AI Index. For many years, Canada sat fourth in the world, which is not bad for a small economy relative to some of the other players there. However, we are slipping on that index. Just this year, we slipped from fourth to fifth position, and when you look deeply into the details of where we're losing ground on AI, you see that much of it comes from the lack of investment in AI infrastructure. Other countries are making significant pledges, significant commitments and significant investments in building and advancing AI infrastructure, and Canada has not kept pace with that.
In the most recent index, we actually dropped from 15th to 23rd in the world on AI infrastructure, and that affects our global competitiveness.
:
With regard to venture capital, it is usually early-stage venture capital that is being invested there.
I'll go to Elissa first and then to Ignacio and anybody else who wants to jump in at that time.
In terms of the AI regulations, you want them to be.... It's like accounting: you have principles, and you have very prescriptive regulations. You want to make sure that the regulations are not so tight that they limit growth and the capacity to evolve and innovate, but you also want to make sure that they are not so loose that there are holes and loopholes in them, if you want to use that word.
Are we striking that right balance in terms of getting it done? That's very difficult to achieve. I've spoken to colleagues in Europe on AI, both at the subnational and the European level. All parliaments are grappling with this issue.
Where are we in striking that right balance?
:
Thank you very much, MP Sorbara.
Thanks to all of our witnesses today. It was a fascinating discussion.
We have a bit of committee business to attend to. I'll let you all go.
Some hon. member: Hear, hear!
The Chair: I don't know if you can hear through Zoom, but you're receiving applause from our members here. It was really interesting. Thank you for your work on this and for sharing your insights with us as we go forward on this legislation.
You are free to go, and thanks again.
[Translation]
Colleagues, this brings us to committee business. I know that a few notices of motion have been tabled, including Mr. Williams' notice.
Mr. Williams, you have the floor.
I'm sorry to be a dog with a bone on this topic, but I think it's really important that we continue to look at cellphone bills on behalf of Canadians. We're not going to stop until we get these cellphone bills down.
We had a motion that we talked about last week, and I want to revisit it. Specifically, what we have changed in this motion to make it a little different is to ensure that we get not only Rogers and Bell to the committee but also Vidéotron. I think it's important that we get CEOs of these companies here to talk about what's happening with cellphone bills for Canadians.
Second, it's that we will have the minister here. I think it's important for him. I know from talking to him in the past that he's always talked about how he wants to come in front of Canadians and talk about lowering cellphone bills.
I'm going to go back to March 5, 2020. The then industry minister, Navdeep Bains, announced that they were going to lower cellphone bills by 25% in the next two years, by 2022. It would save families $690 a year. With the announcements we had last week by Bell—I guess it's three weeks ago now—the average cellphone bill in Canada is $106. Rogers and Bell are going to follow suit with a $9-per-month increase, which will mean those bills are going to $115.
It comes down to one thing, and that's data. That's the question we want to ask the CEOs. Canadians used three times more data in 2022 than they did in 2015. You know that when you look at Instagram, download Reels or use YouTube or Netflix, you consume more data. If you were consuming only five gigabytes a month, your cellphone bill would have gone down 25%, but Canadians consume more data, and cellphone bills are going up. These are good questions to ask on behalf of all Canadians.
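The arithmetic in that argument can be checked with a quick sketch. Only the $106 average bill, the $9 increase, the 25% drop for a fixed five-gigabyte plan and the tripling of data use come from the remarks above; the baseline plan price and the assumption that spending scales with usage are hypothetical.

```python
# Figures from the remarks above: $106 average bill, a $9/month increase,
# a 25% drop in the price of a fixed 5 GB plan, and data use tripling
# between 2015 and 2022. The $60 baseline plan price is hypothetical.
avg_bill, increase = 106, 9
print(f"Average bill after the increase: ${avg_bill + increase}")  # $115

price_5gb_2015 = 60.00                 # hypothetical baseline
price_5gb_now = price_5gb_2015 * 0.75  # down 25%, as pledged
usage_multiplier = 3                   # 3x the data of 2015

# If spending scales roughly with usage, a cheaper fixed-size plan can
# still coexist with a rising total bill once consumption triples.
print(f"5 GB plan: ${price_5gb_2015:.2f} -> ${price_5gb_now:.2f}")
print(f"Spend at 3x usage: ${price_5gb_now * usage_multiplier:.2f}")
```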
I move as follows:
That, as Canadians already pay the highest cellphone bills in the world, and Rogers and Bell have indicated an increase to cellphone bills of $9 a month, the committee call for two meetings to be held by February 15, 2024, one with the CEOs of Rogers, Vidéotron and Bell and the second one with the Minister of Innovation, Science and Industry, to explain why prices are going up; and that this committee also condemn pricing increases and report back to the House.
Thank you, Mr. Chair.
:
We had a great subcommittee meeting. Obviously it was in camera, so I won't speak to the conversation, but the subcommittee report we did is public. It includes a very substantive study that was put forward originally and that we agreed to commence with. It also included point 5 here, which includes the CEOs of all of the companies, including Telus. We have already agreed to ensure that we would bring forward the CEOs to question them on cellphone bills. I'm looking at point 5 in the subcommittee report that came back to this committee, which had consensus. I also understand, based on our current committee schedule, that we have dates and times to make sure those meetings happen.
The way I feel about this is that it seems that Mr. Williams is just trying to push up something that's going to happen anyway, that we already agreed to. I don't see the rationale for that when we've already come to consensus on this. We've all agreed it's an important topic. We've all agreed there are concerns around cellphone price increases that are planned by Rogers and/or others. We also can get more acquainted with the facts, because there is lots of other information we need to look at. There is a whole spectrum of other issues we can talk about, but all of those are already included in the subcommittee report and its motion.
From my understanding, we agreed that meetings would start as early as February 26 on this topic. Mr. Williams' motion, I believe, just bumps it up and is now calling to have those meetings about a week or 10 days earlier.
What's the rationale for that? Why would this committee need to bump that up by two weeks or 19 days when we've already agreed to do it in due course? We agreed with that.
We also have other studies that we've talked about. We've had that conversation together and agreed. We came to consensus.
This seems like it blows up the consensus we had. We had a very constructive conversation to achieve consensus, and I thought we had a way forward, and now we have a motion that tries to bump this study up by 10 days. What is the rationale for that? I can't understand it.
Please, someone clarify that for me. Maybe Mr. Williams can clarify what the rationale is.
I know that we've agreed to a broader study on telecommunications, which talks about infrastructure and the problems we've had with companies. This is specific to price increases by Rogers, as announced by Rogers, and by Bell. Of course, we're agreeing with the committee to bring other witnesses—the four horsemen or others—together in front of the committee, because this is needed now and is pertinent now. This is the third time that we're trying this motion to get committees together.
There is a broader study in telecommunications. It's talking about infrastructure and it's going to talk about wireless and about the many Canadians who do not still have access to cellphone and signal. I know there are seven million Canadians who have been promised high-speed Internet access. Fifty per cent of Canada still does not have that access.
At the end of the day, this is about one topic only—increases that have been announced by Rogers and getting those CEOs and the minister together on that increase.
Why is that important? I'll tell you why.
Just this morning at 11 o'clock, Manulife, which had announced last week that it was going to cover specialty drug medications only through Loblaws—an exclusive deal, which was going to be a problem—actually backed off today, because of pressure. They announced that they are not going to follow through with that deal. That's what happens when we work together and put political pressure on these companies.
Rogers needs to answer now, not in four weeks, not in six weeks. They need to answer within two weeks why they're increasing prices to Canadians now. We should be out doing this now and not waiting.
Thank you.
:
No one is disagreeing with the fact that cellphone companies should be called before the committee and questioned about any planned increases. I think we've all agreed to that. That's actually in the subcommittee report. I think it's more substantive. It already includes the CEOs of Telus and Quebecor Media, etc. It includes all the CEOs of all the companies that have been mentioned. It also includes a focus on increased customer cellphone bills, so any.... It's already there.
I think we've already agreed to do this work, so I still can't understand the rationale for an additional motion that just bumps it up. If you're asking for additional committee resources to start that component of the broader study earlier, okay, that's fine, but then isn't it subject to committee resources? If we've asked for additional resources to study Bill , why shouldn't that be the first priority, which is what we agreed to?
We've already agreed to that. We've already had that debate and that conversation. We agreed to what's in the subcommittee report, so why is this now...? Even though we've already agreed to it, somehow it's now an even higher priority because you just decided it in the last week or so.
It doesn't make sense to me when we've already agreed to do a broader study. We've agreed to call all the witnesses. We've agreed to focus on cellphone prices and bills and we've agreed that it can be the first priority in that broader study. We've also agreed to a report of findings and recommendations back to the House.
I just can't understand what the.... In a way, isn't this a redundant motion? We've already done this.
Isn't there some rule in the Standing Orders that a motion has to be substantively different in order for it to be considered? This doesn't seem different at all. I don't see anything that's different here. I really can't understand the rationale for this, other than a bit of a grandstand.
I think a couple of things have overtaken this from when we had our schedule earlier, but you made a really good point. If we have witnesses lined up, that also involves travel for them and so forth, so I think we can find consensus here.
If we get further resources for this committee, we can start the study earlier. That's what I would like to see. I think this is an issue that is significantly important. There are good interventions, but if we're going to then create a problem for other witnesses.... That was something we didn't have before this was even tabled. At one point, it didn't look like we had some of those witnesses coming forth, but we do now.
I would offer that we leave this in your hands, Mr. Chair, to find out if we can actually get some additional resources to start this a bit more quickly, if possible. That's the way I would like to approach it, and I think it's a fair compromise for what we're trying to do here.
I think it's a heightened environment with regard to what the cellphone companies have been doing. We all feel it. It's the number one correspondence that I get in my constituency on a regular basis, aside from Gaza and a few other situations that are taking place.
At any rate, my position would be to leave it in your hands to see if we can actually get additional resources in this committee to start this study a bit earlier and go from there so that we don't disrupt what our clerk has done and any other flow of the work that you have to do.
If that's okay with the rest of the committee, I think that's a good way to go forward.
:
Okay. I can definitely see....
If there's consensus around the room to say that we'll start this study on telecoms earlier than planned if we have the additional resources, and we'll keep Bill as planned.... The clerk is here by my side, so we'll be looking for additional resources.
There is still a motion before this committee, though. I don't know how colleagues want to proceed with this motion or if there's an agreement that we just start the telecoms study earlier.
I'm looking at Ryan and Brian.
Ryan, I'll yield the floor to you.
:
I would just say I agree with Mr. Masse. I think if the committee can get the additional resources to start this study on telecoms a bit earlier and prioritize the CEOs, that's a good way forward.
One difference is that in our subcommittee report, we've allocated more time to have those CEOs come before us, which I think is important. I think the subcommittee report gives us more of an opportunity to scrutinize the CEOs, as is the intention here, so maybe I could suggest that Mr. Williams....
Mr. Williams, I know you won't like this, but maybe you want to withdraw the motion, and then we can come to a consensus to get the additional resources to hopefully start a bit earlier. We'd be happy to do that. We could reach consensus.