:
I will call the meeting to order.
Welcome to meeting number 88 of the House of Commons Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities. Pursuant to Standing Order 108(2), the committee is resuming its study on the implications of artificial intelligence technologies for the Canadian labour force.
Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders. Members are appearing in person and remotely using the Zoom application.
For the benefit of everyone, I would ask that, before you speak, you wait until I recognize you. For those appearing virtually, use the “raise hand” icon to get my attention. For those in the room, simply raise your hand.
You have the option of speaking in the official language of your choice. If interpretation services are interrupted, please get my attention, and we'll suspend while they are being restored. For those appearing virtually, use the globe symbol at the bottom of your screen to select interpretation. For those in the room, interpretation is provided via the earpiece.
I would also remind those in the room to please keep your earpiece away from your microphone for the protection of the interpreters, who can incur hearing injuries from the feedback.
All of our witnesses are appearing virtually today. We have James Bessen, professor of technology and policy—
:
Thank you very much for this opportunity to share my thoughts with all of you.
Generative AI renews concerns about job stability, education and the future of work, because generative AI is capable of things that were unimaginable from AI systems just 10 years ago. The conventional wisdom from labour economics recognizes that technology does not automate occupations wholesale, but instead automates specific activities within a job.
The challenge is that workplace activities and AI applications vary across the entire economy. Therefore, efforts to predict automation and job stability need to rely on simplifying heuristics. Cognitive, creative and white-collar workers are assumed to be safe from automation, for example, because creativity is difficult to assess objectively and because the creative process is difficult to describe algorithmically.
However, generative AI tools, including large language models like ChatGPT and image generators like Midjourney, are doing creative work when they write essays, poetry or computer code, or when they generate novel images from just a prompt. This means that today's AI shatters the conventional wisdom that has been used to inform economic policy and economic research.
For example, unlike past automation studies, a recent report from OpenAI and the University of Pennsylvania found that U.S. occupations with the most exposure to large language models tended to be the occupations requiring the most education and earning the highest wages. Departing from a heuristic-based approach to predicting automation will require some new data that reflects the more direct implications of generative AI.
However, just like past technologies, generative AI performs specific workplace activities, which means that AI's most direct impact on occupations is through a shift in workers' skills and activities towards other skills that complement AI. If workers fail to adapt, then a job separation can occur. These separations include workers quitting or being fired by their employer. Job separations will lead workers to seek new employment, but if they struggle to find a job, then they may receive unemployment benefits to support them while they continue job seeking.
This lays out a pipeline of AI impact that identifies the most and least direct implications of AI. It highlights that data better reflecting shifting skill demands, job separations by region, industry or occupation, and even the unemployment risk experienced by occupations across the economy will improve efforts to predict AI's impact on workers.
There are some emerging data sources, including job postings, workers' resumés and data from unemployment insurance offices, that offer new options for describing details of the labour market that are often missed by traditional government labour statistics.
Finally, because shifts in skills are the most direct consequence from exposure to generative AI, prudent policy should focus on the mechanisms for skill acquisition. If generative AI will mostly impact white-collar jobs, then we should focus on the skills taught during a college education since a college education is the typical mechanism for getting students into those white-collar jobs.
While labour statistics abound, insight into college skills is more difficult to find. If college skills are quantified, then just as we study generative AI in the workforce, we can also assess the colleges, students and major areas of study with the greatest exposure to AI. However, educational exposure to generative AI should not be shied away from. Recent case studies find that generative AI tools do not out-compete or significantly improve the performance of experts, but they do make a big difference in raising the performance of non-experts to be more comparable to that of experts in those applications.
If this observation holds across contexts, then incorporating generative AI into learning curricula has the potential to improve learning objectives, especially for underperforming students, and therefore could strengthen educational programs.
In summary, generative AI is new and exciting and will impact the workforce in ways different from previous technologies. In fact, generative AI shatters the conventional wisdom used to predict automation from AI in the past, because it does the work of occupations that were previously thought to be immune to automation.
A better path forward would focus on the data and insights reflecting what AI can actually do from the perspective of workplace skills and activities as well as the sources of those skills among workers in the workforce.
With that, thank you.
:
Thank you very much for this opportunity to speak here today.
I'm an associate professor in information and communication technology policy at Concordia University. My research addresses the intersection of algorithms and AI in relation to technology policy. I submit these comments today in my professional capacity, representing my views alone.
I'm speaking from the unceded indigenous lands of Tiohtià:ke, or Montreal. The Kanien'kehá:ka nation is recognized as the custodians of the lands and waters from which I join you today.
I want to begin by connecting this study to the broader legislative agenda and then provide some specific comments about the connections between foundation models trained on public data or other large datasets and the growing concentration in the AI industry.
Canada is presently undergoing major changes to its federal data and privacy law through Bill C-27, which grants greater exemptions for data collection classified as being for legitimate business purposes. These exemptions enable greater use of machine learning and other data-dependent classes of AI technologies, putting tremendous pressure on a late amendment, the artificial intelligence and data act, to mitigate high-risk applications and plausible harms. Labour, automation, workers' privacy and data rights should be important considerations for this bill, as seen in the U.S. AI executive order. I would encourage this committee to study the effects of Bill C-27 on workplace privacy and the consequences of a more permissive data environment.
As for the relationship between labour and artificial intelligence, I wish to make three major observations based on my review of the literature, and a few recommendations. First, AI will affect the labour force, and these effects will be unevenly distributed. Second, AI's effects are not simply about automation but about the quality of work. Third, the current arrangement of AI is concentrating power in a few technology firms.
I grew up in Saint John, New Brunswick, under the shadow of global supply chains and a changing workforce. My friends all worked in call centres. Now these same jobs will be automated by chatbots, or at least assisted through generative AI. My own research has shown that a driving theme in discussions of AI in telecommunication services is automating customer contact.
I begin with call centres because, as we know through the work of Dr. Enda Brophy, that work is “female, precarious, and mobile.” This example serves as an important reminder that AI's effects may further marginalize workers targeted for automation.
AI's effects already seem to be affecting precarious outsourced workers, according to reporting from Rest of World. Understanding the intersectional effects of AI is critical to understanding its impact on the workforce. We are only beginning to see how Canada will fit into these global shifts, how Canada might export more precarious jobs abroad, and how it might find new sources of job growth across its regions and sectors.
Finally, workers are increasingly finding themselves subjected to algorithmic management. Combined with a growing turn toward workplace surveillance, as studied by Dr. Adam Molnar, there is an urgent need to understand and protect workers from invasive data-gathering that might reduce their workplace autonomy or even train less skilled workers or automated replacements. According to the OECD, workers subjected to algorithmic management report a greater loss of autonomy.
All the promises of AI hinge on being able to do work more efficiently, but who benefits from this efficiency? OECD studies have found that “AI may also lead to a higher pace and intensity of work”. The impact seems obvious and well established by past studies of technologies like the BlackBerry, which shifted workplace expectations and encouraged an always-on expectation of the worker. Other research suggests that AI has the biggest benefits for new employees. The presumed benefit is that this enables workers to make a contribution more quickly, but the risk is that AI contributes to a devaluing or deskilling of workers. These findings emphasize the need to consider AI's effects not just on jobs but on the quality of work itself.
The introduction of generative AI marks a change in how important office suites like Microsoft Office, Google Docs and Adobe Creative Cloud function in the workplace. My final comment here is less about AI's particular configuration now than about a growing reliance on a few technology platforms that have become critical infrastructure for workplace productivity and are rapidly integrating generative AI functions. AI might lock in these firms' market power, as their access to data and cloud computing might make it difficult for others to compete and for workers to opt out of these products and services. Past examples demonstrate that communication technology favours monopolies without open standards or efforts to decentralize power.
I am happy to discuss remedies and solutions in the question and answer period, but I encourage the committee to do a few things.
One, investigate better protection of workers and workers' rights, including greater data protection and safeguards and enforcement against invasive workplace surveillance, especially to ensure workers can't train themselves out of a job.
Two, consider arbitration and greater support in bargaining power, especially for contracts between independent contractors and large technology firms.
Three, ensure that efficiency benefits are fairly distributed, such as by considering a four-day workweek, raising the minimum wage and ensuring a right to disconnect.
Thank you for the time and the opportunity to speak.
I want to thank our witnesses for joining us today. It's pretty exciting to hear from experts on this very interesting subject matter. Thank you for your time.
Maybe I'll start with Mr. Frank.
You suggested that generative AI may impact white-collar workers more than those we traditionally refer to as blue-collar workers, and I'm assuming that automation would affect white-collar workers less than AI would.
Can you explain a bit more about those two technologies? I know that this is a study on AI, but I think it's important, because the second part of the question is this: How is the integration or intersection of those two technologies going to impact both workforce sectors?
:
Thank you so much for that response.
Mr. McKelvey, I have a quick question for you. You talked about protecting workers' rights or just protecting workers in general when it comes to the adoption of AI as it's integrated more and more into the space that workers operate within.
How do we as policy-makers, as folks who build regulation, develop policy, regulations and legislation that keep up with the changes that are happening so quickly? Even in your space and with your expertise, I would say that you probably couldn't predict with accuracy what's going to happen even next year.
How do we get ahead of it by making sure that the policy, legislation and regulations we put in place actually are aligned with where we're going? Do you have any thoughts on that when it comes to workers' rights?
:
Honourable member, I can assure you that I can't predict what will happen next, nor when the light will go out in my office.
I will tell you that I think there are actually long-term trends here. One thing that is important to recognize is that generative AI is arriving in a pretty well-established policy context, in which there is growing debate and concern across the government about the influence of large technology firms.
Really, two things come to mind as key points. One is an approach that governments elsewhere have been trying, around arbitration and being able to support collective bargaining power when there's such asymmetry between a large platform and a worker on that platform. I would add that many of the creative sectors working online are now front and centre on the impacts of algorithms and how they will affect content creation.
I would think that one part is trying to figure out how, in places, to step in to alleviate bargaining asymmetries. The second is trying to deal with contracts and contract law, because in many ways you're dealing with service arrangements with large institutions and cloud providers. This is another key point where we need symmetries in place. I think those are the two key sites I'd identify.
I think the third thing is just being mindful of the changes that are taking place in workplace surveillance. This is a long-standing trend. Certainly things like the turn toward algorithmic management and employee monitoring programs are not going away. I think sustained attention could be dedicated there.
I'd like to thank the witnesses.
I was pleased to see Quebec hosting an important forum on framing artificial intelligence last week, with a number of players in attendance. Even though the data is lacking, we're starting to see some interesting impact studies. I wanted to point that out.
My first question is for you, Mr. McKelvey.
In your speech, you talked about Bill C-27. I should point out that our committee is not studying this bill; another committee is studying it. One of my colleagues told me that the committee had only reached the data protection portion in its study of the bill. Therefore, the committee hasn't yet gotten into the real challenges posed by artificial intelligence.
You have made us aware that the Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities could study the effects of Bill C‑27. In your opinion, should the two committees do it simultaneously rather than one after the other? Can you tell us more about that?
:
First, some of my comments were also drawn from the AI forum in Montreal and some of the panel discussions around labour. I actually think it is important to recognize Quebec's leadership on addressing the social impacts of artificial intelligence. That forum was an important milestone in pushing an agenda that treats AI not simply as economic policy but also as social policy.
The challenge, presently, with Bill C-27 is that it's complex enough in itself, and then there is the added AIDA amendment. It's a really challenging moment to make very important legislation work, so having more eyes on it, particularly attention from your committee on the labour impacts of Bill C-27, would be welcome.
Given the multitude of changes that committee will have to investigate, I don't think there is going to be enough time to address them all effectively. This is an important way of coordinating AI policy across the government, something which in my own research I have found lacking.
:
I want to first clarify that artificial intelligence is a complicated term presently.
I appreciate Dr. Frank's work in distinguishing between the present discussions of generative AI and the broader term that we use for artificial intelligence. Certainly, there is a wholesale conversation about AI's impact, but I think in this moment right now what we're talking about is generative AI.
The two parts that stand out to me are that, one, Canada's position, at least in the generative AI landscape, is different from its position in the broader AI ecosystem. You've really seen movement from a few large American firms to launch some of the main products—you hear about ChatGPT and the other ones—which I think are not necessarily part of the Canadian ecosystem. That, I think, raises the first question about where we fit in our own workplace autonomy, what tools we are able to use and how much we are kind of following. I think that's an important shift.
The second thing is that my background is largely in studying media systems. My closest proxy to understanding the distributive effects of artificial intelligence is looking at creators online and around platform regulation. I would say that a lot of the impacts of artificial intelligence are around automated ad generation.
Facebook is launching new features to auto-generate ads with AI. A lot of the content is this kind of high-level creative stuff, and I think the daily churn of information production is an important place where this impact is going to take place. Partially, I think our information systems are really primed for high-volume, low-quality content. That's been a wide concern, and certainly one of the impacts we see in journalism presently is that workers are attuned to generating press and stories for the algorithm.
My first concern is that you could see a kind of devaluing of the type of labour that's being done, because it could be done quicker or more efficiently. The second of my comments is that I think—and this is from my read of the OECD literature—there is also a potential for deskilling when we automate certain types of tasks. I think that's specific to generative AI, and to generative AI being approached in a top-down way: it's being embedded in key productivity suites and rolled out with the expectation that people will figure out how to use it.
I think an important point to make is how OpenAI, which launched ChatGPT, has been deliberately trying to hack and disrupt the workplace. That open demo of ChatGPT demonstrates a business strategy we want to attend to.
Thank you to the witnesses.
I'm going to ask my question initially of Mr. McKelvey.
We're obviously in the very early stages of this, as legislators, and I'm sure that it's going to evolve over time. Right now I want to focus on the obvious traps that we should be legislating against. I really appreciate the three recommendations that you brought forward.
I'm interested in your expanding a little bit more on this “efficiency benefits are fairly distributed”. With the intersectional lens that you brought to this discussion, which is gender.... There could be other intersections, of course. This committee looks at disability inclusion, so I'm also very interested in how that would benefit or harm persons with disabilities and bring them into the workforce.
With that lens, I'm wondering if you could explain a little bit how workers can be protected and benefit from the obvious evolution.
:
I do want to acknowledge that there are opportunities here. One of the parts that I think is important with generative AI in these opportunities is thinking about how they're changing the barriers to access, particularly when it comes to things like passing as a native English speaker.
If we're adapting and trying to understand the multiple layers, I think one part is acknowledging a potential benefit: some of the proxies we have for workplace competency, like English writing, might ultimately become more accessible, allowing people who are non-native speakers to actually acquire those skills. That kind of goes back to things like grammar.
Part of what we're looking at here is attending to the different.... There are two parts that I think are coming up. One is dealing with changes in the precarious workforce, when you're talking about more contract work, shift work or gig work. AI doesn't change that, but I think AI adds to the importance of studying shifts in the labour market towards platform arrangements, like what we see with Uber.
That's really where I feel there's going to be one potential point of impact: whether you're going to see AI as part of what we call the “algorithmic management” of those platforms. Those are often people turning to those as jobs of last resort or jobs that they're looking to.... I think that in one sense it's an important way of protecting workers who are in those kinds of gig jobs.
The second part, then, I think, is looking more broadly at the silent arrangement we have with a few large technology firms that provide critical infrastructure, and at how conscious they are of the ways their data collection practices are affecting the workforce.
I think those are my best guesses. There is a challenge here about this deeper question: Where is this kind of productivity going to be adopted, and what are the drivers? Part of what I see is that generative AI is incentivizing further automation in places that already seem automatable, like content creation. There is, I think, a way of saying that jobs that have already been deskilled or marginalized are going to be further marginalized by this turn towards generative AI.
:
In preparation for this, I was trying to look for evidence of where these impacts would be coming from. I wasn't able to find anything published about the gender-based impacts in Canada specifically. I'd add that this is really an important part of what's going on in these discussions about artificial intelligence, especially with generative models: the biases they embed and reproduce.
I would like to acknowledge that, when you're talking about voices, it's also important to recognize which voices these systems reproduce. There is really fantastic work on this. When you ask a generative AI model to depict a doctor, is it more likely to depict a man than a woman? It's the same thing when it comes to depicting people from different countries: how do these models reproduce certain key stereotypes?
One part, to add to what I clearly agree is a need to identify how automation and generative AI will impact jobs from an intersectional framework, is that this is clear investigative work that needs to be done. There is also a clear concern about the biases baked into these technologies that are being rolled out as solutions to workplace productivity.
:
Thank you so much. I appreciate the time.
I just wanted to say that I think what's interesting is that, in the writers' strike at least, what was being negotiated was access to data and trying to ensure that workers were able to understand their place in the organization, which I think is an important thread.
There was also a concern that, if you're talking about franchise models and about generating the next Marvel movie, then you're talking about a type of cultural production that is really oriented towards churning out the same type of content. I think that's where workers were concerned that their scripts or their content would be used to train models that would ultimately either undermine their bargaining power or replace them. I point that out only to note the kind of context in which there is a benefit or perceived value in this kind of automated content generation.
The third thing is what actors are negotiating for—and this seems like a clear split—which is whether they have a right to their own faces and whether studios have, in perpetuity, access to modify their images. All of that speaks, I think, to the idea that workers need to have data rights and privacy rights. I think the actors guild and the writers guild have really been at the forefront of demonstrating what is a broad concern, not just in Hollywood.
:
Okay. Thank you for that.
I'm one of those people who are still sort of struggling to grasp what exactly the scope of AI is, I guess, but there's no question that, in the last 50 years, technology has advanced and changed in an exponential fashion.
I'm wondering if you can give me an example of a technology from even a generation ago that had a similar kind of impact on our labour markets and on our society, about which there was this level of concern or caution or interest.
The question is for both witnesses.
:
I was just thinking back about that. When we were talking about the early days of the dot-com boom, and stuff like that, we weren't talking about the same magnitude or influence of companies. If anything, we've learned, and I think we can.... Partially, what I'm here for is that I'm trying to be more conscious about how those technologies have been rolled out in a more thoughtful way.
When the Internet was coming about, I think there was this idea that it was connectivity and it was going to bridge digital divides, and some of those privacy concerns fell by the wayside.
Those concerns have become more prominent, at least with mobile technology and the way mobile phones are part of a fairly elaborate ad tracking and surveillance network. Where we are now is that I hope we have learned from our debates and from the challenges we have with platform governance, and know that, when I'm talking about a procurement hack with OpenAI, it's the type of strategy we've seen companies use time and time again. I hope we're better and quicker at raising concerns about privacy and users' data than we were in the past.
I think that's something I'd trace back at least to the BlackBerry. It was a cool gimmick, but now I have to check my email all the time because I've been trained to do it, and I regret, in some way, that I didn't think about that sooner.
Perhaps I'm starting to reflect on my age, but building on the transformation of technology that Mr. Aitchison referenced, I think about Netflix and how that has eliminated video stores. I think about Apple Music and how that has eliminated record stores and tapes. I think about how Uber has transformed the taxi business. I think about Zoom as opposed to a phone call. The technology has changed consumer behaviours and consumer demands, so it will have a very dramatic impact, I think, on the labour force.
My first question is for Mr. Frank. I've often been concerned that regulatory bodies regulate through the rear-view mirror as opposed to through the windshield, which is where we should be focusing our attention. That raises the dilemma of what we can predict with a reasonable degree of certainty and what we cannot.
:
I really wish I had five minutes' speaking time, Mr. Chair.
My next question is a short one and it's for Mr. McKelvey.
In your opening remarks, you gave the example of call centres. Personally, I am in contact with many unions, and I have to say that when it comes to telecommunications, it's pretty appalling. I wasn't aware of some of the current realities. We don't need to look any further than Bell, Videotron or Telus; call centres are being relocated around the globe. That's causing fairness issues in labour relations and working conditions.
What difficulties will generative AI add to all that?
:
Sure. That's a wonderful question.
The same volume of research that exists about AI and its implications for skills in the workforce hasn't been carried out on the mechanisms by which workers get skills. Education is one of the major mechanisms by which people get skills before entering the workforce, but I think AI is a tool that will really help educate students today.
I'll give you a simple example. I'm a professor. Right now I have to field every email from every student when they have questions that need clarification from me. You could imagine that, with some of the clarifications that I or my teaching assistant provide, having an AI tool available to students instantly, in real time, at any hour of the day, could help them reach an understanding. If there's still confusion, then they could submit a question to me or their TA.
The other thing we see, at least in the few studies I've seen that are actually randomized controlled trials, in which some workers have access to generative AI and others do not, is that generative AI's biggest effect is in bringing non-expert performance up to the level of expert performance. If this observation holds in a variety of cases, what it could mean in the classroom is that underperforming students are able to reach the levels of high-performing students with access to these tools. That could be a great result that helps everyone reach the same bar in higher education.
I am going to follow up on this topic or vein with Mr. Frank. We've talked for decades about intellectual property and how the intellectual property belongs to the company. It doesn't necessarily belong to the worker. It belongs to the company. We're now having conversations about cognitive property. A lot of the data that's already captured by large organizations came from someone's ideas, their education, their thoughts and their opinions, and it's now being monetized by someone else.
I'm very interested in how we protect workers' cognitive property, especially now, in situations where we're starting to build a lot of that cognitive property into AI tools. Do you have some thoughts on how we can protect workers, Mr. Frank, when it comes to their opinions, education, skills, knowledge and talents?
:
All right. I understand better.
I would say that this is not new to AI, this dynamic where the ideas, the thoughts and the perspectives of workers are getting coded into AI, just as the perspectives of programmers who build social media websites get encoded into the programming and the code behind the website.
I would say that this maybe isn't a new topic. I think that having workers who are thinking about these issues—for example, representation and how we account for different viewpoints—and having people with those ideas embedded into the engineering side of these tools is really powerful for exactly that reason.
Another thought that comes to mind is that the generative AI tools making big waves now, things like ChatGPT and Midjourney for image generation, are not things that I could produce here with my laptop or even with the computers I have in my lab at the university. These really are things that require collaboration between smart people who can write very effective code and huge amounts of resources on the computing and training side of these AIs. I don't think that something like ChatGPT would have emerged without a collaboration between the smart people who do the coding and the resources that a company can put behind a project like that.
Thank you to all the witnesses for being here today.
Before I get into my lines of questioning, I would like to move the following motion:
The committee immediately undertake a five-meeting review on the disproportionate impact the carbon tax has on low-income individuals.
This has been circulated to committee members.
We know that the carbon tax is impacting vulnerable Canadians by raising the cost of basic goods like gas, home heating and groceries. The Liberal government has admitted that it's doubling down on its carbon tax plan, including quadrupling the carbon tax on Canadians. The temporary pause the Liberal government has announced for the carbon tax on home heating oil won't help 97% of Canadians. The committee needs to study how proceeding with the government's carbon tax policy adds costs to the lives of the most vulnerable.
This is relevant to this committee specifically, because the mandate of this committee talks about studies that this committee can do and should prioritize. Our mandate includes income security and disability issues. The carbon tax affects income security by raising the price of basic necessities. As well, the cost increases from the carbon tax hit the most vulnerable in our society, especially persons with disabilities. We heard a lot of testimony at this committee during our study of the legislation from persons with disabilities who were finding it hard to pay for basic necessities. We even heard of people considering medical assistance in dying, MAID, because they couldn't afford to live. All of that testimony came before the most recent carbon tax increase that happened this summer.
I have moved this motion. I hope the committee will support it.
Thank you very much, Mr. Chair.
I actually appreciate the comments around the mandate of this committee.
We do know that many families are struggling and many people are struggling, and the Canada disability benefit is something we'd all like to see advanced much more quickly.
I want to discuss something. In March of this year I brought forward a motion that I didn't table. I just sent it out to committee. Really, I'm interested in tax credits. What are the tax credits like? How can we increase income for people?
I know that one thing for sure is that seniors and persons with disabilities often don't file their taxes. They don't get their taxes in on time, and then they lose their GIS and they lose some of their income supports and entitlements. I found out over the past two years that there are students coming out of school who don't understand what entitlements they have and what income supports they have.
Although I'm all for trying to understand how we can increase income for people, I'm concerned that this one is too narrow in its scope, in that it's just looking at the carbon tax. I would like this committee to sit together. Maybe we can have a discussion about really taking a look at the income supports that vulnerable people need and the entitlements they are allowed and deserve but haven't been able to access because of different barriers, maybe even because they haven't filed their taxes.
Something I am thinking about is automatic tax filing. It would be a great opportunity to increase income.
Although I like the spirit of it, I think we need to have a wider discussion about how we support vulnerable people in this country.
I'll just leave it there.
That was really unfortunate, considering how much people are hurting, but I'll go into the questions I have for the witnesses here today.
I have the same question for both of the witnesses. I'm wondering if I can get your feedback. The U.S. has just released their AI rules. I'm wondering if you have had an opportunity to go through those. Specifically, do you believe there's a benefit to having Canada potentially harmonize our rules with the AI rules that the U.S. is using and perhaps other countries, like those in the EU, are using? I'm wondering if you can comment on that.
Maybe we'll start with Mr. Frank.
It's a good question. I haven't reviewed all of the details of the Biden administration executive order. I know there's a lot of concern there about jobs, about data privacy and about IP and ownership. I think there is a big risk that each country having its own regulations on each of these dimensions would create a real problem, such that no country's regulations would end up being effective.
The thing about AI is that it's digital, so it's easy to ship data from one country to anywhere else in the world, to use that in an AI system and to ship the results or even the code base for the AI itself. It's easy to share across borders.
I would expect that it would be much more effective if countries could collaborate to agree on a standardized set of regulations along all the dimensions they think are of concern.
:
Yes, I've been able to review it briefly, but not in complete depth. I'd say that it certainly demonstrates the clear gaps that I see in Canada's approach to the artificial intelligence and data act. You see a much more comprehensive treatment of potential harms and a willingness to engage with the sector-specific issues around artificial intelligence. I think it's a document worth studying just to demonstrate the complexity of the challenges facing regulators and legislators...and then in comparison to AIDA.
I would agree with Dr. Frank that there is probably a need for a harmonized approach. Canada is quite active in that to some degree, whether it's participating in the Global Partnership on AI or in some of its bilateral agreements with France or the United Kingdom. I think Canada is going to have to position itself so that it's at least working in parallel with the United States, and I know there are efforts to talk about treaties with the EU around AI.
The one thing I would say is that with Bill and Quebec's Law 25, I think there is a big test about GDPR compliance. Really, what should be front and centre when we are talking about our legislative agenda for AI is understanding it in relation to the movement happening in Europe around the AI act, and to a lesser degree in the United States, although I commend what that order has been able to accomplish.
You answered part of the next question that I was going to ask.
I'll pose this to Mr. Frank, then.
When we're looking at future trade negotiations, how do you see that this might fit in? Are there any trade issues that we should be aware of now—anti-competitive effects for Canada?
You only have a minute to respond, so what are your thoughts on that?
:
I love this question. I spend a lot of time researching this question.
What you'll find from research on automation is a lot of use of the word “exposure”. Workers are exposed or tasks are exposed to AI. There is not a lot of commitment to what “exposure” means. That is because some workers are freed up by technology to do other things that complement AI, so they become more productive and more valuable with AI. In an extreme case, where many tasks are automated by AI, then you can be completely substituted for, and that would be a negative outcome for the worker.
I think we need to be more specific than just saying that a worker or a task is exposed moving forward. The way to do that is to get data on how skill sets shift in response to the introduction of AI. When a new tool is introduced, in a dream world, we would have data that reflects what every worker is doing all the time.
Of course, there are a lot of privacy concerns with that, but for the sake of conversation, let's just imagine that world. We would have very good information on what changes when a worker is introduced to a new tool. You can even imagine having these little natural experiments, where there's randomization in who does and does not have access to a technology. You could start to get at the causal impact of technology shifts.
That would be the ideal. I think there are some things that are a few steps away from the ideal that would also be very useful.
I'm much more familiar with the labour statistics we get in the U.S. than in Canada. Those of you who read my brief probably picked that up very quickly.
Very important labour dynamics like job separation rates or unemployment are not typically described by industry, firm or job title. Getting at those concepts at a more granular level would bring us much closer to the consequences of shifts in skills and would allow for more proactive policy interventions, not just for AI but for any labour disruption moving forward.
I have a question for Professor McKelvey.
We just completed a summit here, a caregiving summit, on Parliament Hill. In terms of caregiving, there are eight million Canadians who are caregivers. A lot of them are unpaid caregivers. We also have a large paid caregiving part of our economy. We're talking about nurses, PSWs, home care, child care and whatnot.
I wanted to ask you if you could talk about how AI might impact caregiving in Canada. If you're not able to speak about that in particular, what questions could we be asking to find out what the potential impact of AI is on caregiving in Canada?
:
Obviously this is not my area of expertise, but I will say that I have been pulled into some of these discussions because, in Montreal, there was a proposal to introduce a robot into a seniors' home as a way of providing care. That led me into a bit of an investigation into what the effect of this is.
The best parallel to look at is Japan. There have been a lot of efforts at the automation of caregiving in the Japanese context, but it largely hasn't been effective, because many of these technologies cost as much to maintain as it would to properly resource the caregivers already in place.
I think there's a kind of shifting of values here, which again speaks to the cultural impacts of artificial intelligence: the belief that the technology is going to do a better job than simply paying a nurse or a caregiver properly for that function. I would say that, in anything I have seen, the benefits are overstated compared to the potential. This is also something that has to fit within a larger holistic system of care, one that weighs whatever benefits there are (and I'm not saying there are none) against making sure there are actually proper resources to support our fundamental frontline caregivers.
:
Acknowledging my poor powers of prediction, I can't draw a direct correlation between a rise in precarious work and artificial intelligence.
I would say that where we'll see a significant effect, and where we want to attend to AI's impacts, is around precarious workers and gig workers, because we know that these are workers who are already subject to algorithmic management, already subject to new forms of workplace surveillance, and who ultimately have complicated data arrangements with their platform providers, which are often trying to figure out ways of managing them.
The other part, I would say, is that if we're looking at a shift towards more hybrid environments and changing ways that organizations are designed, there is certainly, I think, a push towards creating more on-demand services, which potentially invites a kind of precarious, gig-worker relationship. What's partially at risk here is that the way platforms are reorganizing workforces could give rise to more plug-and-play types of jobs, which wouldn't carry the same security, because largely those workers would be contractors.
Yes, I think the impact of generative AI, like that of any other technology, will be biased towards certain industries. It's not usually a blanket impact across the whole economy. In the case of generative AI, I imagine that we'll see a lot of advances that are a boon for workers and for capital in tech, but we'll also see new opportunities, as a consequence of these new tools, in areas not necessarily involved in their development, such as medicine, communications and media.
I think a lot of spillover effects are yet to materialize. There are a lot of people working on it, and I expect that they will produce something.
:
Thank you very much, Chair.
Thank you to both presenters today.
I'll ask you both the same question. It's a very general question. I like to do this, because it does help summarize what we hear from witnesses, especially when the topic is broad and also very important for public policy. Obviously, we're looking at AI with specific reference to labour, but there are many ways to look at that.
How can we take from your testimony the most important parts? What would you say are the key things that you would want us as a committee to keep in mind when looking at this issue going forward and when we ultimately provide recommendations to the government on the way forward?
:
First, there is a need to consider this around Bill and the ways in which we're trying to understand privacy and data. Partially what is really important now is recognizing data power. What AI demonstrates is that there's power in collecting large amounts of data, because you can now mobilize it. Really, it's about thinking of privacy law and data as bigger than the traditional concerns about personal information. That's an important broader shift that we've been witnessing, and AI just drives it home.
I think the second thing is then trying to understand these uneven and disparate impacts. Certainly we're going to hear ample evidence about the benefits of artificial intelligence. I think it's incumbent on the government to understand and protect those marginalized and precarious workers who might be on the outside of those benefits.
That's certainly part of what's going on with generative AI, and it's why there's so much attention right now: we're trying to understand a different class of workers, typically white-collar creative workers, who are potentially now facing greater competition from automated solutions. That's not to say that the effects are going to be easy to predict, but we are seeing a marked shift, and that needs to be taken into consideration in how we understand this relationship between AI and the labour market.
Finally, it's to ensure that we have strong protections for workers, and that this is something we value as a society and make part of how we frame our legislative agenda.
My question is for both witnesses. I should have time to get brief responses.
The study we're doing specifically looks at the impact of artificial intelligence on the workforce. We could also ask whether these technologies have a greater impact on women and people with disabilities, and whether that constitutes discrimination against them.
At this stage of our study, if you had one or two recommendations for us, what would they be, Mr. Frank?
:
Mr. Frank, I'm going to ask you a question about data, because I want to be sure I understood you correctly.
In your testimony, you said that more data was needed. That was also part of your recommendations.
You said that unemployment would provide us with more data, if I understood what you meant correctly. That worries me somewhat. I'm all for using technology to perform certain tasks, but not to replace employees. If it puts jobs at stake, in our opinion, it shouldn't be a solution.
I want to hear your thoughts on this. When we use a new technology, shouldn't we be aiming for requalification rather than unemployment?
:
I would say two things briefly. Bill builds in large exemptions for what types of data can be collected, such as when data is anonymized or used for legitimate business purposes. I feel that actually warrants more consideration of what it entails and of the potential impacts it has on workers.
The second part is that these exemptions are backstopped by AIDA, the artificial intelligence and data act, which comes at the end of the bill. That raises some notable concerns, because it puts a lot of the investigative powers in a loosely defined data commissioner role. I actually feel that part of the task ahead, on the legislative agenda, is changing the framing of AI from being simply a matter of economic strategy to also thinking about ways of addressing its potential social impacts, both negative and positive.
Yes, I think finding ways of addressing how this impacts labour, and making sure there is targeted legislation, would be a boon, because this is not something that is going to be addressed by an omnibus bill.
:
Thank you, Ms. Zarrillo.
Dr. McKelvey and Dr. Frank, if you want to provide a written response to Ms. Zarrillo's question on companies that would be of interest for the committee to hear from, you can provide that in writing to the clerk of the committee.
With that, I want to thank both of you for appearing before the committee today and providing very informative testimony on this emerging topic that will be discussed for some time.
We will conclude this portion of the meeting, suspend for a few moments and come back in camera for committee business.
Dr. McKelvey and Dr. Frank, you can exit Zoom whenever you wish. Again, thank you so much.
We are suspended.
[Proceedings continue in camera]