:
Colleagues, I call this meeting to order.
Welcome to meeting No. 102 of the House of Commons Standing Committee on Industry and Technology. Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.
Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill , an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.
I'd like to welcome our witnesses this afternoon. With us is Ana Brandusescu, AI governance researcher with McGill University.
Good afternoon, Ms. Brandusescu.
I would also like to welcome Alexandre Shee, industry expert and incoming co‑chair of Future of Work, Global Partnership on Artificial Intelligence.
Good afternoon, Mr. Shee.
From Digital Public, we have Bianca Wylie.
Thank you for being with us, Ms. Wylie.
Lastly, from the International Association of Privacy Professionals, we have Ashley Casovan, managing director of the AI Governance Centre.
I'd like to thank you, too, Ms. Casovan.
[English]
Without further ado, I will yield the floor for five minutes to Ms. Brandusescu.
:
Good afternoon. Thank you for having me here today.
My name is Ana Brandusescu. I research the governance of AI technologies in government.
In my brief, co-written with public participation and AI expert Dr. Renee Sieber, we argue that the AIDA is a missed opportunity for shared prosperity. Shared prosperity is an economic concept where the benefits of innovation are distributed equitably among all segments of society. Innovation is taken out of the hands of the few—in this case, the AI industry—and put in the hands of the many.
Today, I will present four problems and three recommendations from our brief.
The first problem is that AIDA implies but does not ensure shared prosperity. The preamble of the bill states, “Whereas trust in the digital and data-driven economy is key to ensuring its growth and fostering a more inclusive and prosperous Canada”. However, what we see is a concentration of wealth in the AI industry, especially for big tech companies, which does not guarantee that the prosperity will trickle down to Canadians. Being “data-driven” can just as easily mean mass data surveillance and more opportunities to monetize data.
Trust, too, can be easily conflated in Canada with social acceptance of AI, telling people over and over that AI is invariably good. You may have heard the phrase “show, don't tell”. Repeating that AI is beneficial will not convince marginalized people who are subject to AI harms, such as false arrests. AI harms are extensively covered by the Canadian parliamentary study titled “Facial Recognition Technology and the Growing Power of Artificial Intelligence”.
The second problem is the AIDA's centralization of power in ISED and the Minister of Industry. The current set-up is prone to regulatory capture. We cannot trust ISED—an agency placed in the position of both promoting and regulating AI, with no independent oversight for the AIDA—to ensure shared prosperity. These dual roles with dual responsibilities, as seen with nuclear regulatory agencies, are often incompatible, so the arrangement will inevitably favour commercial interests over accountability in AI development.
The third problem is that public consultation is absent. To date, there has been no demonstrable public consultation on AIDA. Tech policy expert Christelle Tessono and many others have raised this concern in their briefs and in articles. ISED's consultation process thus far has been selective. Many civil society and labour organizations were largely excluded from consultation on the drafting of the AIDA.
The fourth problem is that the AIDA does not include workers' rights. Workers in Canada and globally cannot share in the prosperity when their working conditions to develop AI systems include surveillance in the workplace and mental health crises. Researchers have extensively documented the exploitative nature of AI systems development on data workers. For instance, there is a huge toll on their mental health, in some cases even leading to suicide.
In 2018, I learned from digital governance expert Nanjira Sambuli about Sama, which is a Silicon Valley company that works for big tech and hires data workers all over the world, including in Kenya. The contracts that Sama held with Facebook/Meta and OpenAI have been found to traumatize workers.
We have also seen many cases of IP theft from creators, as AI governance expert Blair Attard-Frost has written about in their brief on generative AI.
To share in the prosperity promised by AI, we propose three recommendations.
First, we need a redraft of the AIDA outside of ISED to ensure public and private sector accountability. Multiple departments and agencies that are already involved in work on responsible AI need to co-create the AIDA for the private and the public sector and prevent the use of harmful technologies. This version of the AIDA would hold companies like Palantir, as well as national security and law enforcement agencies, accountable.
Second, we need AI legislation to incorporate robust workers' rights. Worker protection means unions, lawsuits and safe spaces for whistle-blowers. Kenyan data workers unionized and sued Meta due to the company's exploitative working conditions. The Supreme Court ruled in their favour. Canada can follow the lead of the Kenyan government in listening to its workers.
Similarly, in the actors' union strike, American workers prevented production companies from unilaterally deciding when AI could and could not be used, showing that workers can indeed drive regulation. Beyond unions and strikes, workers need safe and confidential channels to report harms. That is why whistle-blower protection is essential to workers' rights and responsible AI.
Third and lastly, we need meaningful public participation. Government has a responsibility to protect its people and ensure shared prosperity. A strong legislative framework demands meaningful public participation. Participation will actually drive innovation, not slow it down, because the public will tell us what's right for Canada.
Thank you.
:
Thank you, members of the committee, for the opportunity to speak with you today.
My name is Alexandre Shee. I'm the incoming co-chair of the future of work working group of the Global Partnership on AI, of which Canada is a member state. I'm an executive at a multinational AI company, a lawyer in good standing and an investor and adviser to AI companies, as well as the proud father of two boys.
Today, I'll speak exclusively on part 3 of the bill, which is the artificial intelligence and data act, as well as the recently proposed amendments.
I believe we should pass the act. However, it needs significant amendments beyond those currently proposed. In fact, the act fails to address a key portion of the AI supply chain—data collection, annotation and engineering—which represents 80% of the work done in AI. This 80% of the work is manually done by humans.
Failing to require disclosures on the AI supply chain will lead to bias, low-quality AI models and privacy issues. More importantly, it will lead to the violation of the human rights of millions of people on a daily basis.
Recent amendments have addressed some of the deficiencies in the act by including certain steps in the AI supply chain, as well as requiring the preservation of records of the data used. However, the law does not consider the AI development process as a supply chain, with millions of people involved in powering AI systems. No disclosure mechanism is put in place to ensure that Canadians are able to make informed decisions on the AI systems they choose, ensuring that they're fair and high-quality, and that they respect human rights.
If I unpack that statement, there are three takeaways that I hope to leave you with. The first is that the act as drafted does not regulate the largest portion of AI systems: data collection, annotation and engineering. The second is that failing to address this fails to protect human rights for millions of people, including vulnerable Canadians. In turn, this leads to low-quality artificial intelligence systems. The third is that the act can help protect those involved in the AI supply chain and empower people to choose high-quality and fair artificial intelligence solutions if it is enacted with disclosure requirements.
Let me dive deeper into all three of these points, with additional detail on why these considerations are relevant for the future iteration of the act.
Self-regulation in the AI supply chain is not working. The lack of a regulatory framework and disclosures of the data collection, annotation and engineering aspects of the AI supply chain is having a negative impact on millions of lives today. These people are mostly in the global south, but they also include vulnerable Canadians.
There is currently a race to the bottom, meaning that basic human rights are being disregarded to diminish costs. In a recent well-documented investigative journalism piece featured in Wired magazine, entitled “Underage Workers Are Training AI” and published on November 15, 2023, a 15-year-old Pakistani child describes working on tasks to train AI models that pay as little as one cent. Even in higher-paying jobs, the amount of time he needs to spend doing unpaid research means that he needs to work between five and six hours to complete an hour of real-time work—all to earn two dollars. He is quoted as saying, “It’s digital slavery”. His statement echoes similar reporting done by journalists and in-depth studies of the AI supply chain by academics from around the world, and international organizations such as the Global Partnership on Artificial Intelligence.
However, while these abuses are well documented, they are currently part of the back end of the AI development process, and Canadian firms, consumers and governments interacting with AI systems do not have a mechanism to make informed choices about abuse-free systems. Requiring disclosures—and eventually banning certain practices—will help to avoid a race to the bottom in the data enrichment and validation industry, and enable Canadians to have better, safer AI that does not violate human rights.
If we borrow from recently passed legislation Bill , Canada’s “modern slavery act”, creating disclosure obligations helps foster more resilient supply chains and offers Canadians products free from forced or child labour.
Transparent and accountable supply chains have helped respect human rights in countless industries, including the garment industry, the diamond industry and agriculture, to name only a few. The information requirements in the act could include information on data enrichment and specifically how data is collected and/or labelled, a general description of labelling instructions and whether it was done using identifiable employees or contractors, procurement practices that include human rights standards, and validating that steps have been taken so that no child or forced labour was used in the process.
Companies already prepare instructions for all aspects of the AI supply chain. The disclosure would formalize what is already common practice. Furthermore, there are options in the AI supply chain that create high-quality jobs that respect human rights. The Canadian government should immediately require these disclosures as part of its own procurement processes of AI systems.
Having a disclosure mechanism would also be a complement to the audit authority bestowed on the minister under the act. Creating equivalent reporting obligations on the AI supply chain would augment the current law and ensure that quality, transparency and respect of human rights are part of AI development. It would allow Canadians to benefit from innovative solutions that are better, safer and aligned with our values.
I hope you will consider the proposal today. You can have a positive impact on millions of lives.
Thank you.
:
My name is Bianca Wylie. I work in public interest digital governance as a partner at Digital Public. I've worked at both a tech start-up and a multinational. I've also worked in the design, development and support of public consultations for governments and government agencies.
Thank you for the opportunity to speak with you today about AIDA. As far as amendments go, my suggestion would be to wholesale strike AIDA from Bill . Let's not minimize either the feasibility of this amendment or the strong case before us to do so. I'm here to hold this committee accountable for the false sense that something is better than nothing on this file. It's not, and you're the ones standing between the Canadian public and further legitimizing this undertaking, which is making a mockery of democracy and the legislative process.
AIDA is a complexity ratchet. It's a nonsensical construct detached from reality. It's building increasingly intricate castles of legislation in the sky. It's thinking about AI that is detached from operations, from deployment and from context. ISED's work on AIDA highlights how open to hijacking our democratic norms are when you wave around a shiny orb of innovation and technology.
As Dr. Lucy Suchman writes, “AI works through a strategic vagueness that serves the interests of its promoters, as those who are uncertain about its referents (popular media commentators, policy makers and publics) are left to assume that others know what it is.” I hope you might refuse to continue a charade that has had spectacular carriage through the House of Commons on the back of this socio-psychological phenomenon of assuming that someone else knows what's going on here.
This committee has continued to support a minister basically legislating on the fly. How are we writing laws like this? What is the quality control at the Department of Justice? Is it just that we'll do this on the fly when it's tech, as though this is some kind of thoughtful, adaptive approach to law? No. The process of AIDA reflects the very meaning of law becoming nothing more than a political prop.
The case to pause AIDA and reroute it to a new and separate process begins at its beginning. If we want to regulate artificial intelligence, we have to have a coherent “why”. We have never received a coherent why for AIDA from this government. Have you, as members of this committee, received an adequate backstory procedurally on AIDA? Who created the urgency? How was it drafted, and from what perspective? What work was done inside government to think about this issue across existing government mandates?
If we were to take this bill out to the general public for thoughtful discussion, a process that ISED actively avoided doing, it would fall apart under the scrutiny. There is use of AI in a medical setting versus use on a manufacturing production floor versus use in an educational setting versus use in a restaurant versus use to plan bus routes versus use to identify water pollution versus use in a day care—I could do this all day. All of these create real potential harms and benefits. Instead of having those conversations, we're carrying some kind of delusion that we can control and categorize how something as generic as advanced computational statistics, which is what AI is, will be used in reality, in deployment, in context. The people who can help us have those conversations are not, and have never been, in these rooms.
AIDA was created by a highly insular, extremely small circle of people—tiny. When there is no high-order friction in a policy conversation, we're talking to ourselves. Taking public engagement on AI seriously would force rigour. By getting away with this emergency and urgency narrative, ISED is diverting all of us from the grounded, contextual thinking that has also been an omission in both privacy and data protection thought. That thinking, as seen again in AIDA, continues to deepen and solidify power asymmetries. We're making the same mistake again for a third time.
This is a “keep things exactly the same, only faster” bill. If this bill were law tomorrow, nothing substantial would happen, which is exactly the point. It's an abstract piece of theatre, disconnected from Canada's geopolitical economic location and from the irrational exuberance of a venture capital and investment community. This law is riding on the back of investor enthusiasm for an industry that has not even proven its business model out. On top of that, it's an industry that is highly dependent on the private infrastructures of a handful of U.S. companies.
Thank you.
:
Thank you for inviting me here to participate in this important study, specifically to discuss AIDA, a component of the digital charter implementation act.
I am here today in my capacity as the managing director of IAPP's AI governance centre. IAPP is a global, non-profit, policy-neutral organization dedicated to the professionalization of the privacy and AI governance workforces. For context, we have 82,000 members located in 150 countries and over 300 employees. Our policy neutrality is rooted in the idea that no matter what the rules are, we need people to do the work of putting them into practice. This is why we make one exception to our neutrality: We advocate for the professionalization of our field.
My position at IAPP builds on nearly a decade-long effort to establish responsible and meaningful policy and standards for data and AI. Previously, I served as executive director for the Responsible Artificial Intelligence Institute. Prior to that, I worked at the Treasury Board Secretariat, leading the first version of the directive on automated decision-making systems, which I am now happy to see included in the amendments to this bill. I also serve as co-chair for the Standards Council of Canada's AI and data standards collaborative, and I contribute to various national and international AI governance efforts. As such, I am happy to address any questions you may have about AIDA in my personal capacity.
While I have always had a strong interest in ensuring technology is built and governed in the best interests of society, on a personal note, I am now a new mom to seven-month-old twins. This experience has brought up new questions for me about raising children in an AI-enabled society. Will their safety be compromised if we post photos of them on social media? Are the surveillance technologies commonly used at day cares compromising their privacy?
With this, I believe providing safeguards for AI is now more imperative than ever. Recent market research has demonstrated that the AI market size has doubled since 2021 and is expected to grow from around $200 billion in 2023 to nearly $2 trillion in 2030. This demonstrates not only the potential impact of AI on society but also the pace at which it is growing.
This committee has heard from various experts about challenges related to the increased adoption of AI and, as a result, improvements that could be made to AIDA. While the recently tabled amendments address some of these concerns, the reality is that the general adoption of AI is still new and these technologies are being used in diverse and innovative ways in almost every sector. Creating perfect legislation that will address all the potential impacts of AI in one bill is difficult. Even if it accurately reflects the current state of AI development, it is hard to create a single long-lasting framework that will remain relevant as these technologies continue to change rapidly.
One way of retaining relevance when governing complex technologies is through standards, which is already reflected in AIDA. The inclusion of future agreed-upon standards and assurance mechanisms seems likely, in my experience, to help AIDA remain agile as AI evolves. To complement this concept, one additional safeguard being considered in similar policy discussions around the world is the provision of an AI officer or designated AI governance role. We feel the inclusion of such a role could both improve AIDA and help to ensure that its objectives will be implemented, given the dynamic nature of AI. Ensuring appropriate training and capabilities of these individuals will address some of the concerns raised through this review process, specifically about what compliance will look like, given the use of AI in different contexts and with different degrees of impacts.
This concept is aligned with international trends and requirements in other industries, such as privacy and cybersecurity. Privacy law in British Columbia and Quebec includes the provision of a responsible privacy officer to effectively oversee implementation of privacy policy. Additionally, we see recognition of the important role people play in the recent AI executive order in the United States. It requires each agency to designate a chief artificial intelligence officer, who shall hold primary responsibility for managing their agency's use of AI. A similar approach was proposed in a recent private member's bill in the U.K. on the regulation of AI, which would require any business that develops, deploys or uses AI to designate an AI officer to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business.
History has shown that when professionalization is not sufficiently prioritized, a daunting expertise gap can emerge. As an example, ISC2's 2022 cybersecurity workforce study discusses the growing cyber-workforce gap. According to the report, there are 4.7 million cybersecurity professionals globally, but there is still a gap of 3.4 million cybersecurity workers required to address enterprise needs. We believe that without a concerted effort to upskill professionals in parallel fields, we will face a similar shortfall in AI governance and a dearth of professionals to implement AI responsibly in line with Bill and other legislative objectives.
Finally, in a recent survey that we conducted at IAPP on AI governance, 74% of respondents identified that they are currently using AI or intend to within the next 12 months. However, 33% of respondents cited a lack of professional training and certification for AI governance professionals, and 31% cited a lack of qualified AI governance professionals as key challenges to the effective rollout and operation of AI governance programs.
Legislative recognition and incentivization of the need for knowledgeable professionals would help ensure organizations resource their AI governance programs effectively to do the work.
In sum, we believe that rules for AI will emerge. Perhaps, more importantly, we need professionals to put those rules into practice. History has shown that early investment in a professionalized workforce pays dividends later. To this end, as part of our written submission, we will provide potential legislative text to be included in AIDA, for your consideration.
Thank you for your time. I am happy to answer any questions you might have.
Ms. Wylie, the minister talked a lot about 300 consultations after he tabled the bill, not before. Looking at the list that he provided after we asked for it, I see that 28 were with academics and 216 were basically with big business and not really with people who are impacted, so it was sort of the converted talking to the converted.
I'd like you to talk a little more, if you could, to expand on your belief about why you think a proper consultation, with this bill defeated and reintroduced in a new format, would produce a better result.
I think, even with academics, they're not working in operations. The reason I listed the examples I gave is that I think AI starts to make sense when we talk about it in a specific context: as mentioned, in manufacturing, in health care, in dentists' offices. We could go through all of society here. We need to talk about people who are working in those spaces, not general specialists.
This is what I mean. Even within the critics, people have a vested interest in going way down into the complexity instead of zooming out and looking at this to ask why we are doing this. What are we trying to accomplish? The answers to those questions are going to be very different per sector. What looks beneficial and harmful per sector is a totally different thing.
I think that's why we need to restart the conversation from the point of what we are trying to do here, and then we can talk about how we would do it. You can't start the “how” before you get your “why” clear.
There are two things on this point. One of them is that harm is always contextual. Something can seem absolutely safe in one setting, say, data your doctor has collected, and then you turn around and someone else has that same data, and it's dangerous. Harm is never absent context and use, ever, so I would argue that structural categorization is incorrect.
The reason we look to Europe all the time and ask what Europe is doing.... I know it's appealing to say that what they are doing over there may be thoughtful, but geopolitically, from an economic perspective, they want their own Google, Amazon and Microsoft. When you gin up all this complexity, you protect your national industry. This is a way to enable the economy to grow, based on domestic rules.
There is, then, that broad harmonization conversation you're hearing. How well has that worked to date globally with data protection law? It has not. It has not worked with privacy either.
Those are the two pieces of a response to that.
:
Thank you very much, Mr. Chair.
One thing that I'm enjoying very much about this committee is the divergent perspectives that we're hearing, the level of engagement and the level of intelligence in approaching the issue.
The reality is that the genie is out of the bottle. My concern is that we're not going to go back to where we were before.
My first question is for Ms. Casovan.
In April 2023, you and 75 other researchers co-signed a letter calling on the government to move forward with the artificial intelligence and data act and saying that further postponing the act would be out of sync with the speed at which technology is being developed. Is your position the same today as when you co-signed that letter?
:
It's an excellent question.
The purpose of the group is to bring world-renowned experts and policy-makers together around the table to actually think about the practical applications of artificial intelligence.
One of the artifacts that recently came out from the working group on the future of work was 10 policy recommendations about what we have identified with the International Labour Organization as the “great unknown”, the idea that 8% of the working population, going forward, will be impacted in an unknown way by artificial intelligence, and there is an opportunity to act.
It's an incredible organization that brings together stakeholders from around the world. We discuss, in very practical terms, how to apply legislation. It would be very open to continuing to be consulted in this process, and it can help give concrete examples of how AI can be built responsibly and benefit humanity.
:
There are two aspects to consider.
The first one is how it's impacting work today in Canada and beyond. That's the first element. Then, how will it impact society and the place of work, going forward?
If we think about today, we see there are millions of people who actually work behind the scenes in AI systems to make them operate effectively. They are not protected under this law, nor are they protected under any legislation that's coming out on AI; therefore, there's an opportunity to legislate the AI supply chain for what it is, a supply chain with millions of people working on it.
In the second phase—the impact on workers going forward—there are a lot of unknowns around what will happen to workers and how their work will be influenced.
One of the advantages of the Global Partnership on Artificial Intelligence is that we have representatives from academia, industry and worker unions, as well as governments. The statement that was put out was essentially that we need to put in place studies on the impact of AI on future work. We need to invest in retraining. We need to invest in making sure we're transitioning some roles. We need to be aware, even most recently with the advent of generative AI, that there already are economic impacts on low-skilled workers, who will need to be retrained and given other opportunities.
The future of work needs that, and the Global Partnership on AI has a policy brief that is available online.
I'd like to thank all the witnesses.
I'll start with Ms. Casovan.
Ms. Casovan, during your time in the Government of Canada, you led the development of the first‑ever artificial intelligence policy, namely, the directive on automated decision‑making. This directive imposes a number of requirements on the federal government's use of technologies that assist or replace the judgment of a human decision‑maker, including the use of machine learning and predictive analytics. These requirements include the requirement to provide notice when the automated decision‑making system is being used, as well as the existence of recourse methods for those who wish to challenge administrative decisions.
In your opinion, should this type of notice or recourse provision be included in the Artificial Intelligence and Data Act?
As Ms. Wylie said, I could give you so many examples, right now, of specific types of harms, real-world implications and everything that's changing all the time, but I want to zoom out a little and talk about why labour is important to look at.
Before getting into who can do this, it seems paradoxical to me to want agility in technologies that are so complex. We don't understand them. Most people don't. The black box is still there. Engineers don't understand them still, to this day. Workers are being continuously impacted. When I say “impacted”, I mean negative impacts and harms. I submitted a brief to your committee with Dr. Renee Sieber, and we discuss those at length. You have multiple studies to look at, from multiple years. I've been following Sama for five years now, the company that is a self-proclaimed “ethical AI” company. When we look at who says they're ethical, and what ethical is, we should really question that, as well.
In my first five minutes, I said that AI being a societal benefit is being shoved down our throats. That is the case. “We need digital literacy. We need AI literacy. We know it's good and it's here to stay.” I'm here to sometimes reject that. We should be able to ban AI when we need to. We should be able to listen to the workers and see what they want and what they think. What does their day-to-day job look like? Do they have enough breaks? Look at what Amazon is doing, micromanaging every millisecond of their lives. The factory workers are living in a limbo space. I wouldn't even say “a limbo space”. They're in hell.
How do we prevent that? Why not go to labour departments that know those strengths? This is why ISED is not fit to do this alone. Earlier, I was asked what other agency could do this. It cannot just be one. It has to be multiple. This is a team effort. This goes back to democracy. Slow it down a bit and listen to the public. We don't know what the public wants, because the public wasn't involved. We need to listen to labour organizations, departments that deal with labour everywhere in this country, and the workers themselves. This is why we cannot just have people in these rooms. We cannot just have this televised. We need to have people come to you. We need you to come to the people. We need to look at town halls. We need to look at off-line methods. We need to look at different times and places to do public participation, because we live in a digitized world.
You're saying we need to change everything for AI. No. As Ms. Wylie said before, AI needs to change for us.
:
Thank you to all the witnesses here today.
I'm very concerned about this broken bill. As legislators, we around this table understand what's at stake here, but it's very disconcerting. For the second time since we started doing this bill, we received massive packages of information from the that completely changed the bill in front of us. I'm saying, “Minister, why did you screw up so badly, and where the heck was your department for years? Where were you?”
In the last meeting, I asked a number of experts whether Industry Canada or the Government of Canada even has the capacity. This was one of the first things I raised in Parliament when I got elected. I was on the HUMA committee reviewing data systems for the Department of Human Resources, because they were still using a binary code method from the 1970s. I think that's still in effect today. The Government of Canada has proven that, generally, they get a lot of things wrong and they're not up to date in the 21st century. I am so apprehensive about giving this department any more power over something most experts are still contemplating how to get right.
That said, I think that, despite the 's incompetence in this, his heart may be partly in the right place. He's trying to bring forward amendments and do something to fix his own mess. However, it is very scary that he's so incompetent that we're just getting thrown this information.
I'm sorry for that rant, but part of me is thinking now—
:
I'm in a position where the Liberal members of this committee may make a decision with the Bloc Québécois to support this going through. I'm not sure where we're going to land on that. We're openly having this deliberation about whether this part of the bill deserves to go forward. That's where we are right now, in good faith.
That said, if it does go through, is it worth it for committee members to look at some of the other amendments that we'll be putting forward in the first part of the bill, like really enshrining some protections for kids?
I am so concerned about the innocent. I have a 10-month-old daughter, a four-year-old son and an eight-year-old son. I'm so concerned about their innocence and the manipulation. The bill, I will admit, does address psychological harms, but I don't think one or two clauses are good enough when it relates to a data-driven economy that impacts kids from birth to death in today's day and age.
Could you comment on that a bit?
:
Sure. Actually, the reason I included my personal note was that I heard your line of questioning. It is concerning. It is not something that I typically speak to, but it was quite surprising, having the experience of working in this space for almost a decade—which is scary—to really think about the evolution of different types of technologies and therefore the societal impacts they have.
I was also nodding my head when you were mentioning some of the challenges that exist internally. Working inside government, I saw them up close and personal. Definitely, as with all organizations, there are concerns when we're using old technologies to try to fix modern problems. That said, the reality is that it does take a significant amount of time.
On the children's perspective, the fact that I had kids recently completely opened my aperture in terms of the harms. It made it more real and visceral than I could have ever imagined. Everything was abstract before.
I not only think that this should be included, but I think that when we see potential new classes of high-impact systems get added into these amendments, it would be nice to see something related to the protection of youth, similar to what we're seeing south of the border in the U.S.
Welcome, everyone.
Thank you for your respective testimonies on AI. It's fascinating. It's very complex, and it's given a lot of us as MPs and not specific subject matter experts a lot to chew on.
I do wish to go to the gentleman who is here virtually, Alexandre.
You mentioned several times the AI continuum and the idea of data collection, engineering and annotation in the AI supply chain. Can you elaborate on that point? Your first point was that we should go forward with the bill. If you can comment on both aspects, that would be great.
:
Essentially, when we look at artificial intelligence, there are many steps in that.
The first step is collecting data for an AI system. The second step is annotating that data. For example, if you have an image where you see a nose and eyes, there is somebody annotating that. Then there is the feedback loop where that data is enriched, so it goes through a software model, and ultimately the outputs of that are revalidated by a human. That's packaged into a proof of concept that's oftentimes launched, and then it becomes a product that's used by consumers or in the business context. That's the whole supply chain.
Right now, this legislation is geared only around the outputs, so we're missing all of the work done by humans to create the AI systems. I think it's important to have a law in place, because we need to start regulating the outputs as much as we need to regulate the supply chain.
My recommendation [Technical difficulty—Editor].
:
You're not the only one. It's something that I think is quite complicated.
One note that came in the amendments was related to the role of auditing within the commissioner's office. Something I'd like to see is more proactive use of auditing to ensure compliance, as opposed to the commissioner only having the power to require an audit once something sufficiently problematic surfaces. It would be good to see that. It would work much like a financial audit: you require those proactively from companies every year.
In this case, one thing we need to understand better is the scope of an AI system and, based on that, what those harms are and how you comply with that. What does that “good” look like, again, doing that through a public process? From there, you would require third party audits in a similar way that we have professional auditors in financial services to do the same thing.
What I would say is that, first, while AI systems look very impressive to consumers, millions of people on a daily basis are working behind the scenes to make them work. That spans from our interactions with social media to automated decision-making systems.
The scope of what I'm asking for is very simple. By having a disclosure mechanism in the law that requires companies to give information about the data they've collected and how they collected it, we essentially ensure that millions of people around the world who are annotating daily and interacting with AI systems in the back end are protected from exploitative processes and procedures.
Right now, nothing is in place in any jurisdiction in the world. Right now, this is a wild west and nobody is protecting these people. These are youth in Pakistan and women in Kenya. These are vulnerable Canadians who are trying to have a side job to make a bit more money. In all of these circumstances, they have nothing protecting them.
Mr. Shee, I'd like to continue with you.
Yesterday, CBC presented a report on artificial intelligence in the service of war. The report referred to the use of artificial intelligence and the Gospel software by the Israeli army to better target facilities attributed to Hamas. However, this technology increases the number of civilian casualties, according to experts, because there is less human involvement behind each decision made before going on the offensive.
In that case, is there some slippage in artificial intelligence? How can we regulate these practices to save human lives?
:
That's a great question.
I have no experience with artificial intelligence in war or defence situations. I can just comment on that as a sophisticated citizen.
I think we need a very clear framework that takes into account the rules of war that have already been established. Unfortunately, AI systems are used in war situations and they kill a lot of people. We have to be aware of the risk and take measures to manage it.
Very humbly, this is a bit outside my area of expertise. However, I think you raise an important point. Indeed, artificial intelligence will be used in war situations and systems [Technical difficulty—Editor].
Ms. Wylie, you didn't get a chance to get into the last conversation, so let me ask you this. If we had an AI commissioner or data commissioner, whatever it might be called, would the model of the Privacy Commissioner, an independent model like that, be something we should be looking toward?
Second to that, maybe you have another suggestion. How do we bring some independence and accountability to the table here that would also be empowered?
:
I just want to go back to my remark about making the same mistake for the third time. It's the same mistake that we saw with privacy and data protection, which is to treat these topics as objects that are independent from the rest of the world as it exists. We've seen the failure that thinking like this has gotten us to. While we talk about privacy a lot, what we're dealing with is a deeply privatized space where the control and power of the infrastructures—particularly with AI, never mind with data and software—are privately held.
If we think about our failures in access to justice for things like privacy and data protection, and we think about the failures of this sort of model, with privacy or data protection it's never about whether we should do it; it's always about “how”. If we want to turn the corner into a different world so that we have control over technologies, we have to talk about them in context.
For me, I go back to this. Who is the minister in charge of X, Y or Z sector? Who is in charge of making sure forestry is operating in a certain way, environmental protections are operating in a certain way and cars are operating in a certain way? Go from there every time. If we keep scaffolding more and more complexity, more and more compliance, and more and more of these sorts of complexities out into the sky, it doesn't serve justice. We have a fundamental access to justice problem as it stands right now. How many people have the time and energy to file a complaint with the Privacy Commissioner? What is the profile of someone or the demographic of someone who can bring that kind of a complaint forward?
In the same way that we're talking today about how you would even know if you were harmed by artificial intelligence, I recently heard the concept that in some cases it's like asbestos: It's in things and you don't know it's there. Whom will you go to and ask to hold them accountable? If you get hit by a car, there is a clearly accessible track of where you go to deal with that problem. I do not understand why we think it's a good idea to build an entirely new construct when we have a perfectly good physical and material world and a perfectly good set of governance standards. That's a place where we have public power. To me, the only people who benefit from scaffolding all this additional complexity are those with private interests. In a democracy—at this point in time we're 30 years in—public power has to be increased.
Do I want to see a commissioner for AI? No. I don't want to see a new regime for AI.
Thank you to all the witnesses.
As they say in Quebec, I am “sur le cul”.
[English]
I don't know if you know what that means. It means “I'm on my ass.”
[Translation]
I don't know if that translates into that.
I apologize to the interpreters.
Ms. Wylie, you're giving us a particularly interesting lesson.
Bill has been on the table for almost two years. It has been evaluated. It was created by public servants, obviously, in Ottawa. Some politicians have done some work to try to put in place legislation that would frame a problem that you don't really see. In fact, you are saying that all the legislation we need already exists. We simply have to proceed by sector to correct the elements that will be related to artificial intelligence.
At the committee, we have heard from people. Over the past few years, we have conducted studies on blockchain, the automotive industry, the right to repair, and so on.
Today, you are telling us that what we are doing is not working at all. You are telling us to take back the studies we have conducted and the existing legislation and to correct what will affect artificial intelligence, because it is already in all these sectors, let's face it.
My question is still for you, Ms. Wylie, but I would also like to know what Ms. Brandusescu and Ms. Casovan think of your position.
To build on Bianca's point, I think we need to regulate AI. We need to slow down. We can't move fast and break things with regulation. Again, AI is being regulated, but it's private regulation. It's self-regulation, and that's not working. Mr. Shee already said that in his first five minutes.
We need something different. We need it to be like the EU in the way that it needs to be for both the public and the private sector, and it cannot be centralized. I insist, because there's too much at stake to keep all of the power in one agency. I'm going to move on to also say that it can't just be the OPC. It cannot just be the Privacy Commissioner, because AI is more than privacy. AI is also about privatization.
What we see right now is the risk of regulatory capture, because every time there's a new summit being done, as in the U.K., at Bletchley Park, the major governments, including ours, get together and announce collaborations with a top firm. Now, we have the usual suspects—Amazon, Google and Microsoft—and then the new kids on the block, but it cannot be that.
Again, this isn't about perfection at all; it's that the process to get here was one and a half years of almost no public consultation, participation or understanding, even when, as Bianca said, we do have specific examples of harms over and over again. We do need to make sure that AI is regulated. We can use our imagination to do that with law.
:
I think I've shared repeatedly that I don't think AI is one monolithic thing. I do think that it needs to be broken down into sector-specific regulation.
I think what AIDA does is provide a framework that is then dependent on other types of sector-specific regulation. There is no contesting that how this was done is problematic. There needs to be more public consultation. I was really happy to see in the amendments that at least it speaks to what was heard and then how that's being addressed.
I think if we just put that aside—the process is for you guys to debate—it's very important to have regulation of AI systems. I've seen and experienced, by doing a lot of interventions with civil society organizations, harms that are occurring. I don't think that having rules or just leaving it up to self-regulation from companies to say, “We're doing the best we can do” is going to prompt the appropriate behaviour. I think legislators need that.
We need to be able to set the homework, too. We can't say, “You go and write your test, and then you mark it yourself.” I think it's very important that we as civil society organizations, in combination with industry and with government and academics, write what those tests are, the standards that I'm talking about, and then use that to assess industry.
Thanks to all of the witnesses for being here today. We have a great juxtaposition of perspectives. We've been hearing a diverse cross-section of perspectives during this undertaking.
I think we can all admit that this is a very big and important piece of legislation that is complex and challenging for all of us, both as legislators and as.... I'm not sure that any one stakeholder has the full view on how this should move forward. I think it's good to have conversations like this that are push-and-pull. There are lots of challenges here. I appreciate that.
I wanted to just say, first off, that this bill was initiated due to recommendations from the minister's AI advisory committee, which consisted of industry experts. The Facebook whistle-blower was also part of the context that led to this work.
I'd also say that, from my perspective, there were consultations of over 300 stakeholders, which included universities, institutes, companies, industry groups, associations, privacy experts and consumer protection groups. I think there are some other categories, but those are the ones that I can see. I have the list here. It has been provided publicly and to committee members.
I would also say, in terms of the way that parliamentary practice goes, that usually amendments aren't provided in advance, during a study where you hear from witnesses. The government has provided the amendments in advance. We've also heard from some witnesses.
There are varying perspectives on what the process should look like. We've heard from some witnesses that tabling a framework piece of legislation was a good way to get something on the Order Paper and then undertake a lot of consultation to inform amendments to that. Some people feel like that process is very justified.
I just wanted to make those statements off the hop.
Ms. Casovan, we've heard the point that you made, about balancing innovation and protection, from some other witnesses. What I've heard is that having responsible guardrails for AI will allow people to benefit from it while protecting them at the same time. I know that's a challenge. Like any legislation that we work on, it is a balancing act that we're constantly confronting.
Could you speak to how we will know if we get that balance right, from your perspective?
:
It would be if no one is harmed.
It's really difficult to address that. I think that, first, we need to try. We need to recognize that just leaving it to the free market is probably not going to result in the conclusion we want to see.
There's an amazing resource called the AI Incident Database. I don't know if you've seen it. It tracks different types of harms that exist. I'd love for that to be compiled and then we'd understand better, so we can articulate in more common ways what those are.
It's a difficult question to answer in the absence of having any of these in place. I think the requirement for collecting data through a commissioner's office that would have those use cases reported is important.
:
I think there are two key points here.
One is that we really need to have one point of accountability. There's a lot of interoperability between different types of AI systems, so knowing exactly.... If it's an automated vehicle, it might be very clear that this is going to fall into transportation, but if it's a health care system, it might have issues related to consumer protection or it might have issues related to the health and safety of somebody. Breaking those apart is difficult, so what I think this bill does is require those different types of regulators and regulations to work hand in hand with each other.
There are also gaps that exist.
Maybe, third, I would add—as I said in my opening statements—having the professionalization of an individual who would be responsible and accountable for the governance of these systems. You would then have some consistency across all of these different regulations.
:
Thank you very much, Chair.
Ashley, I want to follow up with you on a couple of things.
This has been a great discussion, by the way, especially on AIDA today.
We talk about the value of public and private data, especially for AIDA, and where this bill right now exempts that. Right now, under this bill, DND, CSIS and CSE are exempt from AIDA and there's provision for any federal or provincial department or agency to be exempted via regulation. That's the entire federal government and Crown corporations that are exempt.
When we talk about AIDA as a whole in this bill, in your opinion, is it right that we've exempted all of the public government from AIDA as a whole?
:
However, privacy law, and an act that would govern the data of AI and AI as a whole, would certainly look at that. Procurement would look only at other statutes, like the Investment Canada Act or other acts.
It's interesting to me that that's not in there. I think that is a glaring hole that I've just noticed today.
I want to switch to either Ms. Wylie or Ms. Brandusescu.
I really focus a lot on opposition to competition. We look at big, bossy conglomerates that exist within the system.
Ms. Wylie, you made an interesting comment that this seems to be going forward only for industry, because capital is looking for a place to go. The examples you gave are that it seems to be benefiting Amazon, Microsoft and Google. They're big, bossy conglomerates. They're huge companies that are only looking to get bigger, and obviously to benefit from this.
When it comes to competition, as the industry committee, we want small, scrappy competitors and companies to be able to enter the space and to ensure that they can compete and enter the market.
I agree with your arguments on where we are with AIDA. Let's talk about if we started anew. How do we create competition? Where do we start in terms of making sure that we get all the players in, not just the big ones but some of the smaller ones included within the discussions?
:
I have just two comments on this.
One, it's partially why, if we had a proper public engagement and started from the beginning, you'd have to map the infrastructural assets that make up artificial intelligence. There is no AI without big tech, full stop. You can't spin it up in your garage. You can't go and do your little software company because code is available to you. That's not how this industry works. This is what I mean. I'm concerned about the lack of homework that has been done to make sure we're starting from a place of material, physical, infrastructural reality, and how it relates to this industry. That's one thing.
The second thing I want to say, which relates back to the conversation we were having about centralization or decentralization, is that not only does the Canadian government not have much clout in terms of telling the heart of this infrastructure what it can and can't do.... When we think about privacy legislation, if we start up here with an umbrella called “privacy”, and then we look at how that works in different sectors, we might know what that looks like sector to sector. If our umbrella is called “artificial intelligence”, it's artificial intelligence what? What exactly are we trying to do if our umbrella is called “artificial intelligence”? Are we trying to use it everywhere?
I just want to keep returning us to the fact that we're having a conversation within a frame that does not track to the reality of how this industry is set up, nor how our pre-existing legislation is set up.
I also want to say something about how smaller companies might come in on this. The start-ups are hoping no one is going to ask about their two- or three-year revenues, because all start-ups have to do is show scale. That's how the venture capital industry works. You just have to show that your thing is getting big; you don't have to show that it's making money. That's how it resembles a casino.
That's why I think the fact that we're building into this sector without looking at the consequences on the rest of our whole economy is also a grave error.
:
To add to Bianca's point, I want to take us back four years ago, when Element AI was heavily invested in by the public and the private sector. It's a case that we just do not speak about anymore in Canada and Quebec. This is to Bianca's point about who owns the infrastructure and who owns the data centre versus the datasets. Again, without big tech, there may not be AI, but I would argue that without the military there would be no AI, because that's where it comes from, like most technology.
Element AI was a darling of Canada. In the end, the space that we had in the regulatory framework for competition did not allow it to survive. What happened? It was acquired by ServiceNow, a Silicon Valley company that does, frankly, worker surveillance.
I would like to know exactly, when we move on to this new ideation, what more shared prosperity in competition looks like across SMEs and big companies. I would like to reflect on the failures of AI in Canada within the industry space, and see where we went wrong and what happened to the massive amount of funding and government spending to prop up our industry with all the AI research expertise we have, with all of the centres of excellence. We should reflect on this before we even go and ideate on how competition should look. We should reflect on what happened, especially with Element AI.
:
Certainly. I've heard over and over again witnesses talk about scale, but not violence at scale. That's what we see—how AI is being used in the military. We have to go back to something I spoke about when Parliament did a study on facial recognition technology—that's companies that are defence contractors, which are now spun up as AI and data analytics firms. A famous one is Palantir. You may know of them.
Palantir is interesting, because it started in defence, but now it's everywhere. The NHS in the U.K. just gave them a contract of millions of dollars, despite so much opposition to it. Palantir promised that the U.K. government would be in charge of the data of the people, but in the end it is not so. We have past examples of Palantir abusing human rights. Let's bring that into context. For example, an Amnesty U.S.A. study showed how, in the U.S., government planned mass arrests of nearly 700 people and “the separation of children from their parents...causing irreparable harm”.
I'll go back to the military. What does this mean? The military is the biggest funder of AI. We see rapid, exacerbating killing at scale. When we are racing to move forward with making more AI, making it faster and creating faster regulation just so we can justify to ourselves that we use it, we are not thinking about what should be banned, what should be decommissioned—
:
Thank you, Ms. Brandusescu. I'll have to cut you off here. I was just interested in more information on that. To my knowledge, most of the biggest players in AI remain in the private sector, but thank you for the examples you provided.
We have bells ringing, colleagues, which means we do need unanimous consent to continue. I'm looking around the room to see if we have it, given that we're going to about 35 hours of voting, thanks to our friends to my left, but definitely to my right politically.
Do I have unanimous consent to continue for 10 more minutes?
Some hon. members: Agreed.
The Chair: I'll now yield the floor to MP Gaheer.
:
Facial recognition technology, as we know, is hopefully the low-hanging fruit of dangerous AI. It seems like harm is getting out of context. I will call it dangerous because that's what it is. Yet we need to have the imagination to ban certain technologies, and facial recognition technology should be banned.
The public sector can make that choice because it is responsible to the public in the end. The private sector, as it stands, is responsible to the shareholder and to the business model of making more money. This is how capitalism works. This is what we're seeing.
That's not the job of the government. Again, when I say that AIDA should be out and reflected upon as public and private, that is exactly what I'm thinking about. I'm thinking about facial recognition technology used by law enforcement, national security, in IRCC and in immigration. Now it can be used maybe in Service Canada, or maybe in the CRA the way the IRS wanted to use facial recognition for doing taxes. Again, these technologies aren't domain-bound. Just like Palantir went from the military to health, FRT, facial recognition technology, works the same way. The public sector needs to be involved and to be publicly accountable to its people.
I really am coming back to Bianca's points about democracy. Participation is messy, but we need to participate in a way that there is dissent, discussion, non-compliance across the board and consensus, because it is important to make sure that these technologies will no longer be used because they are too dangerous. We saw what happened with Clearview AI. That is a privacy case, but it is also a mass surveillance case, besides the obvious, which are the dangers and harms it has done to so many marginalized groups.