:
Colleagues and friends, I call this meeting to order.
Welcome to meeting number 107 of the House of Commons Standing Committee on Industry and Technology.
Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.
Before I introduce our witnesses, we have one quick piece of business, the election of the second vice‑chair. Pursuant to Standing Order 106(2), the second vice‑chair must be a member of an opposition party other than the official opposition.
I am now prepared to receive motions for the second vice‑chair. Can someone submit Mr. Garon's name?
[English]
Colleagues, I need someone to....
It's Mr. Bittle.
[Translation]
It has been moved by Mr. Bittle that Jean‑Denis Garon be elected as second vice‑chair of the committee.
Since there are no other motions, do I have the unanimous consent of the committee to elect Mr. Garon as second vice‑chair?
Some hon. members: Agreed.
(Motion agreed to)
Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C-27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other acts.
I would now like to welcome the witnesses. We have Vass Bednar, executive director of the master of public policy in digital society program at McMaster University, who is joining us by videoconference. Also, from the University of Toronto, we have Andrew Clement, professor emeritus, Faculty of Information, who is also joining us by videoconference, as well as Nicolas Papernot, assistant professor and CIFAR AI chair.
Thank you to all three of you for being here.
[English]
I want to apologize for our being late to the committee. We had about 10 votes in the House of Commons. Because of the delay, we have until about 7 p.m. for the testimonies and the questions.
Without further ado, we will start with you, Madam Bednar, for five minutes.
:
Thank you, and good evening.
My name is Vass Bednar. You heard that I run the master of public policy program in digital society at McMaster University, where I'm an adjunct professor of political science. I engage with Canada's policy community broadly as a senior fellow at CIGI, a fellow with the Public Policy Forum, and through my newsletter “Regs to Riches”. I'm also a member of the provincial privacy commissioner's strategic ad hoc advisory committee.
Thank you for the opportunity to appear. I appreciate the work of this committee. I do agree there is an urgent need to modernize Canada's legislative framework so that it's suited to the digital age. I also want to note that I've been on a sabbatical of sorts for the past year and have not followed every element of the debate on this bill in detail. That made me a little anxious about appearing, but then I remembered that I am not on the committee; I am appearing before the committee, so I decided to be as constructive as I could be today.
As we consider this framework for privacy, consumer protection and artificial intelligence, I think we're fundamentally negotiating trust in our digital economy, what that looks like for citizens, and articulating what responsible innovation is supposed to look like. That's what gets me excited about the direction we're going.
Very briefly, on the privacy side, it has been well said that this is not the most consumer-centric privacy legislation compared with what we see from other jurisdictions. It does provide clarity for businesses both large and small, which is good, especially for small businesses. I don't think the requirements for smaller businesses are overly onerous.
The elements on consent have been well debated. Zooming in on the language “beyond what is necessary” is, I think, a major hinge of the debate. Who gets to decide what is necessary, and when? The precedent of consent, of course, is critical. I think about a future where, as people experience our online world and exchange information with businesses, there is far more autonomy for consumers.
For example, there's being able to search without self-preferencing algorithms dictating the order of what you see; seeing prices that aren't tailored to you, or at least knowing when there's personalized dynamic pricing; accessing discounts through loyalty programs without trading your privacy to use them; or simple things like returning to an online store you've shopped at before without seeing so-called special offers based on your browsing or purchase history.
That tension, I think, is probably going to be core to our continued conversation about organizations' need to collect.
On algorithmic collusion, recent reporting from The New Statesman elaborated on how the prices of most goods are now set not by humans but by automatic processes designed to maximize their owners' gains. There's a big academic conversation about the line between what's exploitative and what's efficient. Our evolving competition law may soon begin to consider algorithmic collusion, which may also garner more attention through advancements on Bill as it prompts the consideration of the effects of algorithmic conduct in the public interest.
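To make the dynamic concrete, here is a minimal sketch, with entirely hypothetical repricing rules and numbers, of how two automated repricers that never communicate can still ratchet prices upward:

```python
# Minimal sketch (hypothetical rules, invented for illustration): two
# automated repricers that never talk to each other, yet jointly drive
# prices upward with no human decision and no explicit agreement.
seller_a, seller_b = 10.00, 10.00  # starting prices in dollars

for day in range(10):
    # Rule A: price just above the competitor to signal "premium" stock.
    seller_a = round(1.27 * seller_b, 2)
    # Rule B: price just below the competitor to win the sale.
    seller_b = round(0.998 * seller_a, 2)
    print(f"day {day}: A=${seller_a:,.2f}  B=${seller_b:,.2f}")

# The combined multiplier per cycle is 1.27 * 0.998 > 1, so prices
# ratchet up indefinitely.
```

No agreement is ever made between the two owners, yet the combined effect can mimic collusion; that is exactly the exploitative-versus-efficient line the academic conversation is wrestling with.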
Again, very briefly on the AI side, I agree with others that the AI commissioner should be more empowered, perhaps as an officer of Parliament. That office needs to be properly funded in order to do this work. Note that the provinces may want to create their own AI frameworks as a way to solve for some of the ambiguities or intersections. We should embrace and celebrate that in a Canadian federalist context.
In the spirit of being constructive and forward-looking, I wonder if we should be taking more inspiration from very familiar policy areas like labelling and manufacturing, just to achieve more disclosure. For the layer of transparency that's proposed for those who manage a general purpose AI system, we should ensure that individuals can identify AI-generated content. This is also critical for the output of any algorithmically generated system.
We probably need either a nutrition-facts-label approach to privacy or a registration requirement. I would hope we can avoid onerous audits, or spurring strange secondary economies that sprout up and maybe aren't as necessary as they seem. Having to register novel AI systems with ISED, so the government can keep tabs on potential harms and on the justifications for their entering the Canadian market, would be helpful.
I will wrap up in just a moment.
Of course, we should all be thinking about how this legislation will work with other policy levers, especially in light of the recently struck Digital Regulators Forum.
Much of my work is rooted in competition issues, such as market fairness and freedom. I note that in the U.S., the FTC held a technology summit on artificial intelligence just last week. There it was noted, “we see a tech ecosystem that has concentrated...power in the hands of a small number of firms, while entrenching a business model built on constant surveillance of consumers.” Canadian policy people need to be more honest about connecting these dots. We should be doing more to question that core business model and ensure we're not enshrining it, going forward.
I have a final, very quick worry about productivity, which I know everyone is thinking about.
I have a concern that our productivity crisis in Canada will fundamentally act, whether implicitly or explicitly, to discourage regulation of any kind over the phantom or zombie risk of impeding this elusive thing we call innovation. I want to remind all of you that smart regulation clarifies markets and levels the playing field.
Thanks for having me.
:
Thank you, Mr. Chair and committee members.
I am Andrew Clement, professor emeritus in the faculty of information at the University of Toronto. As a computer scientist who started in the field of artificial intelligence, I have been researching the computerization of society and its social implications since the 1970s.
I'm one of three pro bono contributors to the Centre for Digital Rights' report on Bill C-27 that Jim Balsillie spoke to you about here.
I will address the artificial intelligence and data act, AIDA, exclusively in my remarks.
AI, better interpreted as algorithmic intensification, has a long history. For all of its benefits, AI misapplication has already hurt many people, from well before the current acceleration around deep neural networks.
Unfortunately, the loudest voices driving public fear are coming from the tech giant leaders, who are well known for their anti-government and anti-regulation attitudes. These “move fast and break things” figures are now demanding urgent government intervention while jockeying for industry dominance. This is distracting and demands our skepticism.
Judicious AI regulation focused on actual risks is long overdue and self-regulation won't work.
The minister wants to make Canada a world leader in AI governance. That's a fine goal, but it's as if we are in an international Grand Prix. Apparently to allay the fears of Canadians, he abruptly entered a made-in-Canada contender. Beyond the proud maple leaf and him smiling at the wheel, his AIDA vehicle barely had a chassis and an engine. He insisted he was simply being “agile”, promising that if you just helped propel him over the finish line, all would be fixed through the regulations.
As Professor Scassa has pointed out, there's no prize for first place. Good governance isn't even a race but an ongoing, mutual learning project. With so much uncertainty about the promise and perils of AI, public consultation informed by expertise is a vital precondition for establishing a sound legal foundation. Canada also needs to carefully study developments in the EU, U.S. and elsewhere before settling on its own approach.
As many witnesses have pointed out, AIDA has been deeply flawed in substance and process from the get-go. Jamming it onto the overdue modernization of PIPEDA made it much harder to give that legislation and the AI legislation the thorough review they each merit.
The minister initially gave himself sweeping regulatory powers, putting him in a conflict of interest with his mandate to advance Canada's AI industry. His recent amendments don't go anywhere near far enough to achieve the necessary regulatory independence.
The minister claimed to you that AIDA offers a long-lasting framework based on principles. It does not.
The most serious flaw is the absence of any public consultation, either with experts or Canadians more generally, before or since introducing AIDA. It means that it has not benefited from a suitably broad range of perspectives. Most fundamentally, it lacks democratic legitimacy, which can't be repaired by the current parliamentary process.
The minister appears to be sensitive to this issue. As a witness here, he bragged that ISED held “more than 300 meetings with academics, businesses and members of civil society regarding this bill.” In his subsequent letter providing you with a list of those meetings, he claimed that, “We made a particular effort to reach out to stakeholders with a diversity of perspectives....”
My analysis of this list of meetings, sent to you on December 6, shows that this is misleading. Overwhelmingly, ISED held meetings with business organizations. There were 223 meetings in all, of which 36 were with U.S. tech giants. Only nine meetings were with Canadian civil society organizations.
Most striking by their complete absence are organizations representing those whom AIDA is claimed to protect most, i.e., organizations whose members are likely to be directly affected by AI applications. These are citizens, indigenous peoples, consumers, immigrants, parents, children, marginalized communities, and workers and professionals in health care, finance, education, manufacturing, agriculture, the arts, media, communication and transportation—all of the areas where AI is claimed to have benefits.
AIDA breaks democratic norms in ways that can't be fixed through amendments alone. It should therefore be sent back for proper redrafting. My written brief offers suggestions for how this could be accomplished in an agile manner, within the timetable originally projected for AIDA.
However, I realize that the shared political will for pursuing this option may not currently be achievable. If you decide that this AIDA is to proceed, then I urge you to repair its many serious flaws as well as you can in the following eight areas at the very least:
First, sever AIDA from parts 1 and 2 of Bill C-27 so that each of the sub-bills can be given proper attention.
Position the AI and data commissioner at arm's length from ISED, appropriately staffed and adequately funded.
Provide AIDA with a mandatory review cycle, requiring any renewal or revision to be evidence-based, expert-informed and independently moderated with genuine public consultation. This should involve proactive outreach to stakeholders not included in ISED's Bill C-27 meetings to date, starting with the consultations on the regulations. I'm reminded here of the familiar saying that if you're not welcome at the table, you should check that you're not on the menu.
Expand the scope of harms beyond individual harms to include collective and systemic harms, as you've heard from others.
Base key requirements on robust, widely accepted principles in the legislation and not solely in regulations or schedules.
Ground such a principles-based framework explicitly in the protection of fundamental human rights and compliance with international humanitarian law, in keeping with the Council of Europe's pending treaty, which Canada has been involved with.
Replace the inappropriate concept of high-impact systems with a fully tiered, risk-based scheme, as the EU AI Act does.
Tightly specify a set of unacceptably high-risk systems for prohibition.
I could go on.
Thank you for your attention. I welcome your questions.
:
Thank you for inviting me to appear here today. I am an assistant professor of computer engineering and computer science at the University of Toronto, a faculty member at the Vector Institute, where I hold a Canada CIFAR AI chair, and a faculty affiliate at the Schwartz Reisman Institute.
[Translation]
My area of expertise is at the intersection of computer security, privacy and artificial intelligence.
I will first comment on the consumer privacy protection act proposed in Bill C-27. The arguments I'm going to present are the result of discussions with my colleagues, professors Lisa Austin, David Lie and Aleksandar Nikolov.
[English]
I do not believe that the act in its current form creates the right incentives for the adoption of privacy-preserving data analysis standards. Specifically, the act's reliance on de-identification as a privacy protection tool is misplaced. For example, as you know, the act allows organizations to disclose personal information to certain other organizations for socially beneficial purposes if the personal information is de-identified.
As a researcher in this field, I would say that de-identification creates a false sense of security. Indeed, we can use algorithms to find patterns in data, even when steps have been taken to hide those patterns.
For instance, the state of Victoria in Australia released public transit data that was de-identified by replacing each traveller's smart card ID with a unique random ID. The logic was that no IDs means no identities. However, researchers showed that mapping their own trips, where they tapped on and off public transit, allowed them to reidentify themselves. Equipped with that knowledge, they then learned the random IDs assigned to their colleagues. Once they had knowledge of their colleagues' random IDs, they could find out about any other trip—weekend trips, doctor visits—all things that most would expect to be kept private.
[Translation]
As a researcher in this area, that doesn't surprise me.
[English]
Moreover, AI can automate finding these patterns.
With AI, such reidentification can happen for a large portion of individuals in the dataset. This makes the act problematic when trying to regulate privacy in an AI world.
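To make this concrete, here is a minimal sketch of such a linkage attack; the data, stop names and schema are entirely synthetic, invented for illustration:

```python
# Minimal sketch (synthetic data, hypothetical schema): a "de-identified"
# transit log that replaces card numbers with random IDs can still be
# re-identified from a handful of known trips.
released_log = [
    # (random_id, stop, timestamp) -- what the agency publishes
    ("7f3a", "Union Stn",   "2023-05-01 08:02"),
    ("7f3a", "King Stn",    "2023-05-01 08:15"),
    ("9c1d", "Union Stn",   "2023-05-01 08:03"),
    ("9c1d", "Clinic Stop", "2023-05-06 14:10"),  # a doctor visit
]

# Step 1: I know my own trips, so I can find which random ID is mine.
my_known_trip = ("Union Stn", "2023-05-01 08:02")
my_id = next(rid for rid, stop, t in released_log if (stop, t) == my_known_trip)

# Step 2: I rode with a colleague once, so one shared trip reveals their ID...
colleague_trip = ("Union Stn", "2023-05-01 08:03")
colleague_id = next(rid for rid, stop, t in released_log
                    if (stop, t) == colleague_trip and rid != my_id)

# Step 3: ...and with their ID, every other trip of theirs becomes visible.
their_trips = [(stop, t) for rid, stop, t in released_log if rid == colleague_id]
print(their_trips)  # includes the clinic visit they expected to stay private
```

No name ever appears in the release; two known trips are enough to expose the rest of the card's history.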
Instead of de-identification, the technical community has embraced different approaches to privacy-preserving data analysis, such as differential privacy. Differential privacy has been shown to work well with AI, and it can demonstrate privacy even if some things are already known about the data. It would have protected my colleagues' privacy in the example I gave earlier. Because differential privacy does not depend upon modifying personal information, there is a mismatch between what the act requires and emerging best technical practices.
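As a rough illustration of the alternative, here is a minimal sketch of the Laplace mechanism for a counting query; the dp_count helper, the epsilon value and the data are all illustrative, not a reference implementation:

```python
# Minimal sketch: differential privacy adds calibrated noise to query
# answers rather than modifying the underlying records.
import random

def dp_count(records, predicate, epsilon=1.0):
    """Answer 'how many records satisfy predicate?' with epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    # A count changes by at most 1 when one person is added or removed
    # (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

riders = [{"used_clinic_stop": True}, {"used_clinic_stop": False}] * 500
print(dp_count(riders, lambda r: r["used_clinic_stop"]))
# The answer is useful in aggregate, but no single rider's presence or
# absence changes its distribution much -- and the guarantee holds even
# if an attacker already knows everything about everyone else.
```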
[Translation]
I will now comment on the part of Bill C-27 that proposes an artificial intelligence and data act. The original text was ambiguous as to the definition of an AI system and a high‑impact system. The amendments that were proposed in November seem to be moving in the right direction. However, the proposed legislation needs to be clearer with respect to data governance.
[English]
Currently, the act does not capture important aspects of data governance that can result in harmful AI systems. For example, improper care when curating data leads to a non-representative dataset. My colleagues and I have illustrated this risk with synthetic data used to train AI systems that generate images or text. If the output of these AI systems is fed back to them, that is, used to train new AI systems, the new systems perform poorly. The analogy one might use is how the photocopy of a photocopy becomes unreliable.
What's more, this phenomenon can disparately impact populations already at risk of being the subject of harmful AI biases, which can propagate discrimination. I would like to see broader considerations at the data curation stage captured in the act.
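A toy simulation makes the photocopy analogy concrete. This is a minimal sketch in which the "model" is deliberately simplistic, just a fitted mean and standard deviation, and the generation count and sample sizes are arbitrary:

```python
# Minimal sketch of the "photocopy of a photocopy" effect: a toy model
# retrained each generation only on the previous generation's outputs.
import random, statistics

def fit(samples):
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n=200):
    return [random.gauss(mu, sigma) for _ in range(n)]

real_data = [random.gauss(0.0, 1.0) for _ in range(200)]
mu, sigma = fit(real_data)

for gen in range(1, 11):
    synthetic = generate(mu, sigma)   # model output becomes training data
    mu, sigma = fit(synthetic)        # retrain on synthetic data only
    print(f"gen {gen}: mean={mu:+.3f}  stdev={sigma:.3f}")

# Sampling error compounds each round: the fitted distribution drifts
# away from the real data, and rare tail cases -- often the ones that
# describe already at-risk populations -- are the first to disappear.
```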
Coming back to the bill itself, I encourage you to think about producing support documents to help with its dissemination. AI is a very fast-paced field and it's not an exaggeration to say that there are new developments every day. As a researcher, it is important that I educate the future generation of AI talent on what it means to design responsible AI. In finalizing the bill, please consider plain language documents that academics and others can use in the classroom or laboratory. It will go a long way.
[Translation]
Lastly, since the committee is working on regulating artificial intelligence, I'd like to point out that the bill will have no impact if there are no more AI ecosystems to regulate.
[English]
When I chose Canada in 2018 over the other countries that tried to recruit me, I did so because Canada offered me the best possible research environment in which to do my work on responsible AI, thanks to the pan-Canadian AI strategy. Seven years into the strategy, AI funding in Canada has not kept pace. Other countries have larger funding for students and better computing infrastructure, both of which are needed to stay at the forefront of responsible AI research.
[Translation]
Thank you for your work, which lays the foundation for responsible AI. I thought it was important to highlight these few areas for improvement in the interest of artificial intelligence in Canada.
I look forward to your questions.
:
I'll start with our online guests to get them involved in the conversation, Mr. Clement first and then Ms. Bednar.
Mr. Clement, you mentioned the number of meetings, 223 meetings, overwhelmingly with the business sector. One of the interesting questions brought up, by Mr. Gaheer, was on algorithms. I'm wondering, with consultations focusing only on the companies.... We've seen at this committee in the past, with gas pricing, that where there's vertical integration in the industry there's no real competition, because refining is all done by a select group of corporations. In fact, you have some brand-name gas that has basically moved from market to market. We've also seen, specifically, bread price-fixing. We've had the Competition Bureau in on that. We've even seen CEOs admit to us that they didn't have to collude to get rid of hero pay for grocery store retail staff. They got rid of it all on the same day. Miraculously, they came to the same conclusion.
The question I have specifically is this: Is there a potential, I guess through the private sector, to create algorithms that actually further reduce competition? You don't even have to have collusion if you have a lack of competition, which we have in many markets in Canada.
I'll start with Mr. Clement on the concerns about more algorithms being used to define the Canadian marketplace against consumers.
:
The other point that has been raised a number of times is that the act has two complementary yet completely separate components: artificial intelligence, and everything to do with privacy. So there are links to be made between the two.
On the other hand, as you mentioned, we have to adapt to new AI technologies, which are evolving rapidly, as well as to the regulations put in place in Europe, the United States and around the world.
Most of the experts who have come to testify, as you have, since the study of this bill began, have told us that there should have been consultations much earlier and that, in light of those consultations, those two elements would probably not have been combined in the same bill.
Today, however, we're studying a bill that contains two elements that most people feel should be separated. Do you also believe that they should be separated, that AI is an extremely important element that should be dealt with independently, and that there should be much broader consultations than what has been done so far?
I'm going to turn to you, Professor Bednar. In the 19th and 20th centuries, industries extolled the virtues of a free market with no regulation. That led to huge fortunes and huge monopolies, as well as abuses against consumers.
All of this led to historic regulations, including the antitrust laws we know today and the big consumer protection laws. However, with the artificial intelligence industry advancing at an exponential rate, I get the impression that we need a framework for the market to work.
I will quote you in English, a language I rarely use. You said earlier, “Smart regulation clarifies markets”.
In French, we would say that smart regulations make markets work better. As we know, that is the basis of economics, in a way.
Do you think that, in this context, the best solution is for this industry and the market to regulate themselves? In your opinion, are we at a stage in the development of artificial intelligence where regulation, viewed from a historical perspective, is as important as antitrust legislation may have been at one time?
:
Thank you, Mr. Chair, and good afternoon, honourable members. I guess it's good evening.
As noted, my name is Leah Lawrence, and I'm the former president and CEO of Sustainable Development Technology Canada. I served there from 2015 to 2023.
When I started at SDTC, it was on the brink of shutdown, but I was able to put in place a team that transformed the organization. We took it from being consistently 20% over budget to under budget. We were formally commended by the Auditor General of Canada and the Treasury Board Secretariat for our increased flexibility, our diverse funding streams and overhead costs that were half of those of comparable federal programs. Given this, ISED increased SDTC's funding during my tenure by over 200%.
Over the last year, I spent a lot of time responding to various inquiries as a result of the actions of a whistle-blower. This, and the resulting media attention, took a big toll on me and the organization. I felt that my leadership had become a distraction that would prevent SDTC, an organization that I had dedicated myself to for over eight years, from fulfilling its mandate, so, despite having the continued confidence of SDTC's board of directors, I resigned. I note that in resigning I received no severance, and because SDTC's employees are not civil servants, no government pension.
I decided to resign two days after appearing before the House of Commons ethics committee, after I listened in disbelief to ISED CFO Doug McConnachie testifying on the same panel. He told the committee he had spent 30 hours talking to SDTC's whistle-blower, speculating, as evidenced by recordings obtained by the media, on the outcomes of the various investigations while they were still under way, including saying that these investigations “could have been done in a way that exonerated the board and scapegoated Leah”.
As the ISED overseer of the investigation, Mr. McConnachie's actions were unethical and compromised the investigation. Despite his actions, the investigation still found no wrongdoing or misconduct and made several administrative recommendations that the team and I were implementing when I decided to resign. However, I am here today to talk primarily about governance and conflict of interest.
The SDTC Act and the Government of Canada set the public policy framework. The board of directors sets the governance framework. In the case of SDTC, half of the board is appointed by the Government of Canada. Also, an assistant deputy minister from ISED attends all board meetings and is privy to all materials. That includes all funding recommendations and all discussions of conflict of interest.
The CEO's and the management's job is to take that policy direction from the government and the governance direction from the board and turn it into operating practices for the organization. I note the board also approves all project funding.
From 2015 to 2019, I did a lot of work on governance reform with the previous chair and the chair of governance, Jim Balsillie and Gary Lunn.
A key change—to harmonize the conflict of interest rules for our two categories of board members, including limiting and eliminating direct conflicts and implementing cooling-off periods—was blocked when a non-government appointee got a ruling from the Ethics Commissioner that they did not need to follow the same governance standards as government appointees. This made it impossible for management to hold all board members to the same rules.
Early in 2019, it became very apparent to me that the government wanted to replace the chair of the board. In May or June of 2019, I was informed by ISED's official representative, ADM Andy Noseworthy, that Ms. Annette Verschuren was going to be appointed to replace Mr. Balsillie.
I expressed concern that SDTC was funding a project for her company. I expressed concern that there was a potential for both conflict of interest and the perception of conflict of interest. I expressed concern that both Ms. Verschuren and SDTC could be damaged by the appointment.
In the days that followed, our government relations lead contacted the minister's staff to reiterate our concerns about Ms. Verschuren's appointment, noting that no previous chair had direct or perceived conflicts of interest and that, further, it was previously a condition of the chair's appointment to be conflict-free. ADM Noseworthy subsequently told me that in the absence of a written policy explicitly prohibiting a beneficiary of funds from becoming chair, the appointment would go ahead.
I fear that my ongoing efforts to continue to strengthen the governance regime at the board level were largely stymied from this point on. From then on, it became largely an exercise in managing conflicts rather than precluding or eliminating them.
I continued to work on governance reform with legal advisers, and was pleased when the board finally adopted a policy of post-directorship cooling-off periods and hired a board ethics adviser. These are good developments; however, they are not enough. Another important reform remains outstanding: that any appointee to the board be free of conflicts of interest.
My second recommendation is that the Treasury Board Secretariat convene a group of chairs and CEOs from the many independent agencies that provide funds on behalf of the government and ask them what supports they need to discharge their mandates from a governance and public accountability point of view.
In closing, independent government-funded organizations like SDTC play an important role. They have access to people and resources that the government does not, and can deliver on outcomes that complement and support government policy.
The action plan that ISED has required of SDTC, which they have implemented, does not address the matters of governance and conflict of interest that I have raised here today and that I advocated for throughout my time at SDTC.
I thank you for your time today, and I am happy to take your questions.
:
Welcome to the committee, Ms. Lawrence. Thank you for being with us and taking the time to answer our questions.
Perception is extremely important in our world. Sustainable Development Technology Canada, SDTC, is governed by the Canada Foundation for Sustainable Development Technology Act and by regulations. The organization distributes public funds and is therefore subject to scrutiny by parliamentarians and the public, which is legitimate.
Given that, I'd like to pick up on the example of Ms. Verschuren, who accepted the position of chair of SDTC's board even though some of her companies were receiving funding from the organization. We also know that they got more funding subsequently, but we were told that legal opinions had been produced.
So, on the one hand, there was a determination that it wasn't illegal to do that, but, on the other, there were significant concerns about the ethics of doing so. Normally, when a legal opinion is sought in a situation like that, wouldn't it just make sense to step back and turn down the position or decline the funds?
:
There are two things—or probably three things.
When I started at SDTC in 2015, I was surprised to find that the conflict of interest policies for the employees and the board were the same. That is to say, employees at that point were allowed to have direct conflicts of interest. One of my first acts as CEO was to separate those two policies and ensure that there were no direct conflicts for employees. That continues to this day.
With respect to boards of directors, all a CEO can do is advise. The ethics policy for the board of directors had always allowed a board member to have conflicts. By the time I was appointed, most of the GIC appointees did not have them, but it had been a long-standing practice that the non-governmental appointees had managed conflicts, as we were discussing earlier.
The process in place is as follows: before we go into an investment committee round, there are standing conflicts, declared when people are appointed, that are managed and always looked at by the investment vice-president and director when we're going into a round.
The next thing that happens is that, well in advance of the meeting, a circulation goes out for board members to declare whether they have a conflict or a perceived conflict with a potential recipient of funds or the consortium partners related to them. That has to be received back before the board members receive any of the materials. If they declare a conflict, they do not receive the materials related to that declared conflict in the board package.
In the board package, all of the declarations are summarized, and the chair would then call for any additions or changes at the board meeting. Anybody who became aware of something between those dates could raise it at that time.
That's the process.
The idea is that these things would be minuted and followed up on as the case may be.
That's the process. It was followed in most cases. This was the managed conflict that we had in place.