:
Members, the clerk has advised me that we have a quorum and all witnesses have been sound-tested and are okay. With that, I call to order meeting number 89 of the House of Commons Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities.
Pursuant to Standing Order 108(2), the committee is resuming its study on the implications of artificial intelligence technologies for the Canadian labour force.
Today’s meeting is taking place in a hybrid format, meaning that members are attending in person in the room and virtually.
You may participate in the official language of your choice. Those appearing virtually can select it using the globe icon at the bottom of their screens. If there is an interruption in the interpretation services, please get my attention by using the “raise hand” icon, and we'll suspend while it's being corrected.
I will remind members appearing in the room to keep their headsets away from the microphones to avoid feedback, which can harm the interpreters. I would also ask members to speak clearly and slowly for the benefit of the interpreters.
We have two panels today.
With the first panel, we have, as an individual, appearing by video conference, James Bessen, professor and director of the technology and policy research initiative at Boston University; in person in the room, Angus Lockhart, senior policy analyst at the Dais at Toronto Metropolitan University; and, appearing by video conference, Olivier Carrière, executive assistant director to the Quebec director of Unifor.
Welcome back. I believe, Mr. Carrière, that there were issues the last time. Thank you for coming again.
We will begin with opening statements, beginning with you, Mr. Bessen, if you are ready with your opening statement for five minutes or less.
:
Sure. I'll say just a few words.
AI has gotten an awful lot of media hype, and I think that makes it very hard to understand what its impact will be.
I tend to view it as much more continuous with the kinds of changes that information technology has been bringing about for the last 70 years, particularly regarding the role of automation.
There are tremendous and exciting things that AI can do. Some of them are very impressive. Many of them, unfortunately, are still very far removed from the point at which they can replace labour.
In fact, what tends to happen—and this has been true throughout the period—is that automation mainly automates specific tasks of a job rather than the entire job, and a lot of people misunderstand that. There are very few jobs that have been completely automated by technology. I looked at the U.S. census and identified occupations that had been completely automated by technology. I found only one, which was elevator operator. Other jobs were lost and other occupations disappeared because their technology became obsolete or tastes changed, so we no longer have telegraph operators and we no longer have boarding house keepers.
That's been over a period in which technology has had a tremendous impact on automating tasks and affecting labour and productivity. What it means, basically, is that there's been a lot of fearmongering about AI causing massive unemployment. We've been using AI since the 1980s, and we're not seeing massive unemployment. I don't think we're going to see massive unemployment any time in the next couple of decades, but we are going to see many specific jobs being challenged or disappearing, and new jobs being created.
The real challenge of AI for the labour force is not that it will create mass unemployment but that it will require people to change jobs, to acquire new skills, to maybe change locations or to learn new occupations. These transitions are very costly, can be burdensome and are a major concern.
There's a second thing I'll point out, but I don't want to be long here. Another major impact—and this has been true of information technology for the last two decades—is that AI has done a lot to increase the dominance of large firms. We see that large firms are acquiring a larger share of the markets. They're much less likely to be disrupted by innovators in the traditional Schumpeterian fashion, where the start-up comes along with the bright new idea and replaces the incumbent. That's happening less frequently.
That's important for a number of reasons, but it also affects the labour force in a couple of ways. One is that large firms tend to pay more, in part because they have advanced technology, and this tends to increase wage inequality. Information technology has been boosting differences in pay, even for the same occupations. We'll see big differences, so that the same job description will pay much more at a large firm.
The second thing is that partly because of that, there's a really significant talent war, with these new technologies requiring specific skills that work with the technology. I'm talking not just about STEM skills but about all sorts of people who have experience adapting their skills to work with the technology. They're in great demand, and large firms have an upper hand in the talent wars. They'll pay more; therefore, they can recruit more readily.
There's nothing wrong with their paying more—we want labour to earn more—but at the same time, it means that smaller firms, particularly innovative start-ups, are having a harder time growing.
We see that the growth of start-ups declines in areas where large-firm hiring is predominant. That becomes sort of an indirect concern for labour.
I will just wrap it up with that. Thank you.
:
Thank you, Mr. Chair, for the invitation to address this committee today.
My name is Angus Lockhart. I'm a senior policy analyst at the Dais, a policy think tank at Toronto Metropolitan University, where we develop the people and ideas we need to advance an inclusive, innovative economy, education system and democracy for Canada.
I feel privileged to be able to contribute to this important conversation today. In addition, I have a brief I co-authored with Viet Vu. I'm submitting it, and it will, hopefully, be available soon.
Today I would like to talk about three things—what we know about past waves of automation in Canada, what the Dais has learned from our research into the impact of automation on workers, and how the current wave of automation is different from what we have seen before.
First, I want to set some context for my remarks. The concern for workers in an age of automation is not new. In fact, it has been ongoing for more than 200 years, since machines started to enter the economy. What we have seen through many waves of automation, in the end, is not mass unemployment for the most part, but increased prosperity.
Our research at the Dais suggests that AI is much like past waves of automation. The risk from AI to those whose jobs are likely to be impacted is smaller than the risks to Canada of not keeping pace with technological change, both on productivity and on remaining internationally competitive. This, however, does not mean that there aren't any bad ways to use this technology, or that adoption won't hurt at least some workers and specific industries. The question has to be how we can support workers and be thoughtful about how we adopt AI, not whether we should move ahead with automation.
The good news is that we're still in the early stages of AI adoption in Canadian workplaces. Our recent research shows that just 4% of businesses employing 15% of the Canadian workforce have adopted AI so far. Less than 2% of online job postings this September cited AI skills. Most people are not yet exposed to AI in their workplace. This is likely and hopefully going to change over the next decade, making now the time to act and put in place frameworks that support responsible adoption and workers.
In order to do so, we ought to understand how this technology differs from what came before it. Probably the biggest change in the latest wave of large language models is how easy they are to use and how easy it is to judge the quality of their outputs. Both the inputs and outputs of tools like ChatGPT are interpretable by workers without specialized technical skills, unlike previous waves of automation, which required technical skills to implement in the workplace and produced outputs that were often not interpretable by lower-skilled workers.
This means that the new wave of AI tools is uniquely positioned to support lower-skilled workers rather than automating away the tasks they previously did. Evidence from some initial experimental research suggests that in moderately skilled writing tasks, the support of a GPT tool helps bridge the gap in quality between weaker and stronger writers.
That said, we also want to acknowledge that previous waves of automation and digitization in Canada have not had fully equitable outcomes. While, in general, increased prosperity has improved quality of life for all Canadians, the benefits have nonetheless been disproportionately concentrated among historically advantaged groups. With AI, we run the risk of this again being the case. It's currently being adopted most quickly by large businesses in Canada, and those tend to be owned by men. However, because we are still in the early stages of AI adoption in Canada, there is time to make sure that's not the case. We can't afford to miss out on the prosperity that AI offers, but we need that prosperity to uplift all Canadians, not just a select few.
I want to end by saying there's still a lot of work to do here. At the Dais we're going to continue to research and try to understand how generative AI can be and already is used in the Canadian workplace and what the impacts for working Canadians are.
Our work relies on data collected by Statistics Canada in surveys like the “Survey of Digital Technology and Internet Use”. We're glad to see that this committee is taking a serious look at this issue. Continued support for and interest in this kind of research puts Canada in a better position to tackle these challenges.
Thank you again for the opportunity. I will be happy to answer questions when we get there.
The fundamental problem with algorithmic management is that we have no information. There’s no framework for any of this. There seems to be a wish to pass this problem on to unions and employers, but unions can’t be the solution for managing artificial intelligence in the workplace when we know that the unionization rate is around 15% in the private sector. This will require a regulatory framework deployed by every level of government.
Nothing is known. No doubt the clauses in collective agreements relating to technological change were used to address artificial intelligence issues, and that was a mistake. It was a mistake because, often, the triggers for technological change clauses are related to job losses or potential job losses. Unfortunately, that doesn’t address issues related to artificial intelligence, which involves a multitude of situations that don’t entail any job loss.
We hear about artificial intelligence as if it’s something positive that will lighten the load on workers. Unfortunately, there’s a downside, such as reduced autonomy and increasingly intrusive surveillance. Workers are constantly being monitored, since algorithms need data to do their jobs. We don’t know how this data is stored, how it’s analyzed or how it’s reused. The ability to collect data is not regulated. We therefore need to regulate data and what is done with it, but above all we need to regulate and mandate dialogue between employers and employees to understand the whole issue of explainability and transparency. There isn’t any.
For years now, we’ve been using tools that make decisions on behalf of workers, but they haven’t been presented as algorithmic management or artificial intelligence tools. They were simply described as new tools. For example, at Bell Canada, there’s the Blueprint tool for customer service staff. When speaking with a customer, workers are required to follow a decision tree that tells them what to do based on the customer’s stated problems. The employee’s judgment is completely removed from the process. What’s more, the employee must enter data into the tool to ensure that the various interpretation scenarios are effective and appropriate for the customer.
This is done in various industries, such as transportation, where algorithms make decisions for truckers, whether it’s about the best route or the best driving practice to use. This completely eliminates the individual’s judgment and ability to drive their vehicle. They are required to follow the tool’s instructions. They must be managed.
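To make the mechanism concrete, here is a minimal sketch of the kind of scripted decision tree being described. It is a hypothetical illustration only: the prompts, branches and actions are invented for this example and are not taken from Bell’s Blueprint tool or any routing software.

```python
# Hypothetical sketch of a guided-workflow decision tree of the kind
# described above. All prompts, branches and actions are invented; none
# come from Bell's Blueprint tool.
from dataclasses import dataclass, field


@dataclass
class Node:
    prompt: str                                   # what the worker is told to ask or do
    children: dict = field(default_factory=dict)  # answer -> next Node


script = Node(
    "Is the customer's issue billing or service?",
    {
        "billing": Node("Read the billing-dispute script, then open a ticket."),
        "service": Node(
            "Ask the customer to restart the modem. Did that fix it?",
            {
                "yes": Node("Log the call as resolved."),
                "no": Node("Escalate to tier-2 support."),
            },
        ),
    },
)


def run(node: Node) -> None:
    """Walk the tree: the worker supplies answers; the tree supplies the judgment."""
    while node.children:
        answer = input(f"{node.prompt} {sorted(node.children)}: ").strip().lower()
        node = node.children.get(answer, node)  # unrecognized answer repeats the prompt
    print(node.prompt)  # leaf node: the prescribed action


if __name__ == "__main__":
    run(script)
```

Every branch and action is fixed in advance; the worker’s role is reduced to supplying answers, which is the loss of judgment being described.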
The Organisation for Economic Co-operation and Development, or OECD, has laid down four principles: artificial intelligence must be oriented towards sustainable development, it must be human-centred, it must be transparent and explainable, and the system must be robust and accountable. At present, we have none of those things, because there’s no disclosure obligation. In our view, this is the first step that needs to be taken. It’s about knowing the tools, understanding their effects and then implementing solutions that truly benefit from the efficiency or added value of technological tools in the company.
We’re in a period marked by a shortage of workers. It is simply untrue that we’re going to transform a customer service operator into someone who will program or manage algorithmic tools. In any case, in Quebec, there’s currently a shortage of 9,000 to 10,000 workers in the IT sector, and our workers can’t fill the gap. There’s a kind of vicious circle that has to stop, and it has to start with the implementation of mandatory disclosure or mandatory dialogue between employers and their employees.
Thank you very much.
:
Thank you, Mr. Chair, and thank you to all the witnesses for being here.
My first questions are for Angus Lockhart from Toronto Metropolitan University.
You stated, as part of an article, that, “While some medical practices benefit from the inclusion of AI, there are serious privacy risks in feeding private medical data into a computer model that must be addressed.”
I just want to confirm that this was something you wrote.
Do you believe Canada's privacy laws are adequate to address these privacy issues?
:
Thank you very much, Mr. Chair.
Thank you to our witnesses today. I found each of your testimonies interesting. They complemented one another as well.
I'm going to start off with the gentleman from Unifor. Mr. Carrière, you spoke about the way unions will be positioned in this as we further adopt AI. I thought it was interesting the way you spoke about a regulatory framework being necessary. I understand that part of it.
The piece that is interesting to me is, outside of the government regulations, if unions are not involved in the big private sector jobs that are growing.... Mr. Bessen talked about how these big corporations will dominate a lot of the space. Outside of the public jobs, what is the strategy for organized labour, to make sure they and their workers are protected through the collective agreement process if they're not necessarily part of the growth that's taking place? Do you have any thoughts on that?
:
That makes total sense.
We saw that just 2% of job postings ask for any kind of AI skills. You're exactly right in saying those AI skills are traditional tech-based skills—things that require advanced training to use. There is a generation of new, generative tools that take natural language inputs and don't require the same technical skills to use.
That said, there is still a whole range of technologies that require those digital and technical skills to use. The new technologies aren't necessarily replacing them. They're more additive. They're operating in new areas in which the old technologies didn't help. There is still going to be increased demand and need for AI skills, broadly.
The workers who don't have those traditional AI skills but are being asked for them will be able to adopt the new tools, but they won't necessarily be able to use the older, existing ones.
I was very fascinated, Mr. Bessen, with how you started off your conversation.
You said there was a lot of media hype around AI and that this is just a continuation of a 70-year process. Hopefully, over the course of the remainder of the time, I can get a little more detail on that. It is a very fascinating and popular subject. I'd like to hear more about why you think it's part of a long story, rather than something new.
Thank you so much, Mr. Chair.
:
Thank you very much, Ms. Chabot.
Presently, we seem to want unions and employers to find the magic bullet or wave the magic wand. Instead, I think it’s going to take the federal and provincial governments to put regulations in place, according to their respective areas of jurisdiction.
The first step to understanding the effects of algorithmic management is being aware of what’s going on. Employees must be informed and consulted. This will ensure transparency and explainability. The only effects of algorithmic management that we are currently seeing are negative ones. We see work being diminished rather than augmented.
What we see is a decision-making tool, a computer application, making decisions and diagnosing anomalies instead of the individual. Our impression is that, in unionized workplaces that apply an algorithmic management program, workers find themselves dehumanized. Dehumanization is a strong word. In fact, the individual is clearly told that their judgment is no longer needed, because a computer tool does the thinking for them. That demotivates people, since they become automatons; that is, they perform tasks without thinking.
Currently, people are unaware that they are being replaced. What’s more, they’re being asked to feed data into the tools that are going to replace them. We need to get back to basics. We need to impose, probably through the Labour Code, a conversation about the kinds of technology companies want to use, and we need to determine its impact.
:
Yes, there have been numerous consequences. This is hardly a new phenomenon. Technological changes have had such repercussions for many years, even decades.
Take Bell Canada, for example, in the telecom sector. For 15 years, surveillance tools have been capturing and recording all data relating to workers’ production in order to measure and analyze their performance or incompetence, as the case may be. At Bell Canada, for instance, a performance management system based on forced ranking was introduced. Under this system, an individual ranked in the bottom quartile is called in to meet with the employer because algorithmic tools have determined that their performance is weaker than that of others. Because an employee is weaker than others, a performance management plan is applied, regardless of the manager’s judgment. The manager relies on the algorithmic tool to make a decision. That’s what we’ve seen in the telecom sector.
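For illustration, here is a minimal sketch of bottom-quartile forced ranking of the kind described; the names, scores and flagging policy are hypothetical, not Bell’s actual system.

```python
# Hypothetical sketch of "forced ranking": given per-employee scores produced
# by some monitoring system, flag the bottom quartile for a performance
# management plan. Names, scores and policy are invented for illustration.
import statistics

scores = {"A": 71, "B": 88, "C": 64, "D": 93, "E": 59, "F": 77, "G": 82, "H": 68}

# statistics.quantiles returns the cut points dividing the data into n groups;
# with n=4, the first cut point is the 25th percentile.
q1 = statistics.quantiles(scores.values(), n=4)[0]

flagged = sorted(name for name, score in scores.items() if score <= q1)
print(f"25th percentile = {q1:.1f}; flagged for a performance plan: {flagged}")

# Note: by construction, some employees always land in the bottom quartile,
# no matter how well the whole group performs -- the critique raised above.
```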
In the transport sector, every single driver is monitored 24/7. All data is captured and recorded. Once again, algorithmic tools are superseding the judgment and expertise of individuals. These tools will tell a truck driver, for example, where to go to get from point A to point B, because it’s more efficient. We’re completely removing the worker’s judgment and replacing it with an algorithm.
There are several similar examples, but, in general, we’re unaware of it, because it hasn’t been disclosed. If it doesn’t involve employers cutting jobs, it isn’t discussed. And yet, many jobs disappeared five, six or eight years after this kind of tool was integrated. So this dialogue never happens. That’s why we first need to develop mechanisms to inform and consult employees. Then, we need to work together to build the tools. Finally, we need to give ourselves the means to adapt them, if necessary.
:
I think that's probably a very challenging question to answer in a short time.
What we certainly view as part of a responsible framework is making sure that when artificial intelligence is implemented, it's not being done in a way that's explicitly harmful to the workers who are using it. There are always risks of increased workplace surveillance and facial recognition being used in the workplace, and we definitely want to avoid any kind of negative impacts from that.
Beyond that, there's a huge risk from AI that businesses will be able to implement AI and reduce labour, and that the increased productivity and benefits from that could be concentrated among just the ownership of the business. That runs the risk, obviously, of increasing wealth inequality in Canada. At the Dais we strongly believe that prosperity and GDP growth are beneficial for Canadians, but only when they are distributed among all groups.
I don't think I have an answer for how to make sure the benefits that come from increased productivity for workers are distributed among all of the workers and the people in the firm, but I do know that's going to be an important part of keeping up with AI adoption.
:
As a union, we see that a number of tools exist to make the employee’s job easier. As I mentioned, there are negative impacts. Workers are being stripped of their autonomy and capacity for judgment. We’re turning individuals into automatons following a recipe previously determined by an algorithm.
I’ll use Bell’s Blueprint as a case in point. Communication systems installation technicians are required to enter their objective and all the steps involved in their task into the program. This is a basic step. It’s not a complex process, but workers have to explain what they want to do, and the program tells them how to do it. Workers become mere implementers.
In the job categories we represent, no one holds intellectual property on their ideas, because they’re already performing a job as an implementer. Workers are reduced to the bare minimum. They are stripped of their ability to judge, their expertise and the benefit of their long experience in the sector, under the pretext that an algorithm can take anyone and have them do the same job. The impact is negative for workers. Work is becoming boring and so easy that there’s no challenge. As a result, people are leaving the company to work elsewhere. Artificial intelligence is being used as a partial solution to a labour shortage, but by making the work uninteresting, it’s causing turnover. It’s driving attrition.
It’s not so much a question of protecting workers’ ideas, but of ensuring that human beings are contributing their skills, values and knowledge to their business. Currently, we’re seeing that tools aren’t having that effect.
Mr. Bessen, I'm going to start with you.
I'm actually just going to ask a question about housing, frankly. That's my portfolio. I know that there's a huge challenge with housing in the United States, as well as here in Canada. A big part of the problem is the lack of supply and the pace at which things get approved—with plans and all of that kind of stuff.
I'm wondering if you could speak a little to the application of tools like AI to speed up the approvals process, for example, in municipal zoning and that kind of thing. When you made your comments, I kept thinking about how this is a tool to be used, not to be afraid of. It presents opportunities. I'm hopeful that maybe it presents some opportunities in the housing sector.
:
I will refrain from answering that particular question about the use of artificial intelligence and housing issues. I don’t think I have anything new to add.
However, I will reiterate that we need to learn more about these tools. The way to better understand them is to talk about them, to provide a framework that forces employers to explain to their employees what they want to do, the goal they’re trying to achieve, the changes that will be made to their workplace and the repercussions on people’s autonomy.
In a context where augmented work will occur, that’s terrific. In a context where we’re only getting diminished work results, it’s problematic. It all begins with knowledge. We need to know what we’re dealing with. We don’t even know whether we’re dealing with algorithmic tools for automated decisions or semi-automated decisions or whether they’re symbolic algorithms or machine learning algorithms. Those are things we simply don’t know. Workers don’t know if the algorithmic tool is capable of thinking for itself or if it’s just following a decision tree.
We’re a long way from understanding. We need to develop mechanisms to learn more. Once we do…
:
Thank you for the question.
Presently, this is not something that’s openly and clearly discussed at the bargaining table. We aren’t discussing it. For example, recently, the St. Lawrence Seaway was closed for eight days. Could an algorithmic management tool one day manage the locks remotely? Very likely. Will this lead to job losses? Quite possibly. Is this being discussed at the bargaining table? No, it’s not on the table at all. There is no disclosure.
It’s like asking workers to use up all their bargaining capital, an expression we use. Instead of seeking to improve their working conditions, they’d be asked to use all their bargaining capital to ask for transparency about artificial intelligence. That’s not something workers are interested in. Employers are not disclosing how such tools are being integrated, or what their future impact will be. Workers are heavily called upon to populate the databases of these tools and to correct their margins of error, but they’re not told how this will affect their jobs or the evolution of their jobs.
So the dialogue is non-existent. We have to start somewhere. Of course, the bargaining table is a start, but for all the sectors that are not represented, there have to be mechanisms in place for that dialogue to take place.
:
Yes, there are plenty of conversations between the groups, because unions are sharing what little knowledge they’ve acquired. We realize that all of this is in its infancy. Certain aspects of technology were introduced 15 years ago, and today, with the advent of artificial intelligence, they’re taking on incredible dimensions.
Unions, not just American and Canadian unions, but international unions too, are exchanging best practices or examples of framework measures that could be included in collective agreements or in legislation.
So there are discussions, but the observation remains the same: our knowledge on this subject is in its infancy. We know nothing. This dialogue needs to take place with employers to devise solutions. The aim is not to limit or reduce the effect of AI-related technologies, but to ensure that they represent a positive addition to the workplace, rather than the opposite.
Mr. Carrière, I’d like to ask you a question about the employer-employee relationship.
When an algorithm that has built a decision tree is used to perform a function, what happens if something goes wrong? Who’s the boss in such a situation? I think this changes the employer-employee relationship.
I’m quite surprised to see that currently, there isn’t more upstream dialogue about what’s going on. At the same time, I’m not surprised either. If we take the concrete example of Bell, what does this mean for a worker?
:
Bell Canada uses a tremendous amount of data and conducts extensive monitoring in all types of jobs. Everything is recorded. Every activity is recorded in a computer. Every action taken and every gesture made by a worker is known. It’s the same for technicians on the road and people working on the networks. Everything is analyzed and everything is known.
People’s performance is managed on the basis of targets to be achieved. Those are determined by the outcome of data analysis. If a technician is told that it takes 25 minutes to connect a line, but he in fact takes 35 minutes to make the connection, he will be penalized. The vagaries of weather, for example, are not anticipated by the algorithm. The technician will be told that he’s doing a bad job because he’s not meeting the targets set by the algorithm. That’s where we stand now.
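As a toy illustration of that target-versus-actual logic, consider the sketch below; the 25-minute standard comes from the testimony, while the field names, data and flagging rule are hypothetical.

```python
# Toy sketch of algorithmic target-setting as described: each task type has a
# standard time, and a worker is flagged whenever the actual time exceeds it.
# The 25-minute figure is from the testimony; everything else is invented.
TARGET_MINUTES = {"connect_line": 25}

jobs = [
    {"task": "connect_line", "actual_minutes": 35, "conditions": "freezing rain"},
    {"task": "connect_line", "actual_minutes": 24, "conditions": "clear"},
]

for job in jobs:
    target = TARGET_MINUTES[job["task"]]
    if job["actual_minutes"] > target:
        # The rule sees only the overrun; context such as weather is recorded
        # but never enters the decision -- the gap the witness points to.
        print(f"FLAGGED: {job['task']} took {job['actual_minutes']} min "
              f"(target {target}); unmodelled condition: {job['conditions']}")
```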
Has the manager’s judgment been replaced with a ready-made solution from an algorithm? The answer is yes, and it has been for quite some time. Again, this is an unknown for us, because we can’t really measure what the tool takes into account. When we ask the employer to share the criteria used in their management tool, we don’t get an answer, because it’s so specific. We’re not given the information.
The manager is being replaced by an algorithmic management tool. At the end of the day, what is the basis for challenging the decision? This is where the question you raised, Ms. Chabot, is significant. You can’t go before an arbitrator or the courts and ask an algorithmic management tool why it made this decision rather than another. That’s why I mentioned earlier that we need to give ourselves the necessary means to correct the effects of algorithmic management decisions. This is the impression we get from people in the field. Managers today pass on messages, but all the tasks that involve judging a worker’s performance are carried out by this tool.
I'm going to ask Mr. Carrière.... Hopefully, we can keep it to about a minute, because I would also like to ask Mr. Lockhart about equity.
Thank you so much, Monsieur Carrière, for bringing back the humanity part of this discussion. We are a committee that has “human resources” at the beginning of its title.
I want to revisit something. The CLC—the Canadian Labour Congress—testified in front of this committee and recommended an advisory council on artificial intelligence.
I'm wondering whether you agree with this recommendation—that the federal government should have an advisory council that looks at the impacts on human resources. If so, who should be on that advisory council? Who should be represented?
:
AI has the potential both to promote equity and to harm it.
If we look specifically at persons with disabilities, there are examples in which AI has been used to improve the capacity of people with disabilities to operate in a workplace. There is a café that recently opened in Tokyo that uses robots to help increase the motor function of people with disabilities in order to help them fully operate within that workplace.
At the same time, if you don't take an equity lens when you're implementing artificial intelligence, those marginalized groups—people with disabilities and other groups like them—are going to be the first people harmed by the introduction of AI in the workplace.
You have to start from a place of asking how AI can help uplift and increase the participation of everyone, and use that as your framework, instead of starting with, “We have AI. What can we get rid of with it?”
:
Thank you for having me. My name is David Autor, and I am the Ford Professor of Economics in the MIT Department of Economics and also co-director of the MIT Shaping the Future of Work Initiative. I am honoured to speak with you today about my research on artificial intelligence and the future of work, and I apologize for my cold.
AI presents obvious threats to workers and the labour force. While machines of the past could only automate routine tasks with clear rules, AI can quickly adapt to problems that require creativity and judgment. It seems reasonable to worry that AI will suddenly make huge swaths of human work redundant. I believe these concerns are somewhat misplaced, however. Strong demand for labour has persisted throughout past periods of technical change, like the industrial or computing revolutions, and all signs point to growing labour scarcity, not the opposite, in most industrialized countries, including Canada.
Instead, the important question to ask is how AI will impact the value of human expertise, by which I mean the skills and judgment in specific domains like medicine, teaching and software development, or modern crafts such as electrical work or plumbing. Will new technologies augment the value of human expertise, or will they make human judgment valueless?
In industrialized economies, expertise is the primary source of labour’s market value. Consider the jobs of air traffic controllers in comparison with crossing guards, both of whom have the job of protecting lives by preventing vehicle collisions. Air traffic controllers in the U.S. are paid four times more than crossing guards. Why? It's because they have scarce expertise, painstakingly acquired and necessary for their important work. The value of that expertise is augmented by tools: without GPS, radar and two-way radio, an air traffic controller is basically a person in a field staring at the sky. Crossing guards provide a similarly valuable social service, but most able-bodied adults can serve as crossing guards without formal training and without any expertise, and this virtually guarantees low wages.
While technology makes air traffic controllers' expertise valuable, it can also make human expertise redundant. London cab drivers used to train for years, memorizing all the streets of London. GPS made this expertise economically irrelevant. It's no longer necessary. You might ask, why isn't all expertise eventually made superfluous by automation? The answer is that human expertise stays relevant because its domain expands with social needs. Jobs like software developers, laparoscopic surgeons and hospice care workers emerged only when technological or social innovations made them necessary. In fact, my co-authors and I estimate that around 60% of all jobs that people do in the U.S. today didn’t exist in 1940. Technology and other social forces can just as readily create opportunities for high-quality work as they can automate it.
I believe that AI can create novel opportunities for non-college workers—low and middle-educated workers. With the support of AI tools, these workers could perform tasks that had previously required more costly training and highly specific knowledge. For example, medical professionals with less training than doctors could tackle more complicated tasks with the assistance of AI. In the U.S., in part due to technological innovations such as software that prevents the dispensing of harmful drug interactions, nurse practitioners have proven effective at tasks formerly reserved for doctors with five more years of medical education. AI could push this further, helping workers with less training deliver high-quality care. This is not to say that AI makes expertise irrelevant. It's just the opposite: AI can enable valuable expertise to go further. AI tools enable less experienced programmers to write better code faster. They help awkward writers to produce more fluid prose.
This positive future of which I'm speaking is not guaranteed. We must make collective decisions to build it. For example, China has made substantial investments in AI technology, in part to create the most effective surveillance and censorship systems in human history. This is not a preordained consequence of AI, although it depends on it, but the result of a particular vision of how to use this new tool. Similarly, it is far from inevitable that AI will automate all of our jobs. That's a vision that many AI pioneers are pursuing. I think this would be a mistake. To shape this protean technology, AI, to constructive ends, political leaders must work with industry, NGOs, labour and universities to build a future in which machines work in service of minds.
Let me end by saying what government can do. I don't claim to have complete answers here, but let me say a couple of things. First, governments should germinate and fund human-complementary AI research. The current path of private sector development has a bias towards automation. Government can correct this by supporting the development of worker-augmenting AI in industries like health care, education or skilled crafts work.
Second, I would prioritize protections for workers. Using AI for undue surveillance, for high-stakes decisions like hiring and firing, and to appropriate workers' creative works without compensation should be disallowed. Empowering workers to collectively bargain and including them in rule-making is a critical step.
I'm also concerned about AI safety. I think governments are comparatively well equipped to regulate safety.
Let me end by saying that rather than asking, “What will AI do to us?”, we should ask, “What do we want AI to do for us?” Answering that question thoughtfully and acting decisively will help us build a future that we all will want to inhabit and that we will want our children to inherit.
Thank you very much. I welcome your questions.
:
Thank you very much. Good afternoon.
My name is Gillian Hadfield. I'm a professor of law and of strategic management at the University of Toronto, where I hold the Schwartz Reisman chair in technology and society and the Canada CIFAR AI chair at the Vector Institute for Artificial Intelligence. I'm appearing in a personal capacity.
Thank you for this opportunity to speak to you on this subject of such critical importance.
I want to highlight four key aspects of the impacts of AI on the labour market.
First, AI is a general-purpose technology that is likely to transform almost all aspects of our economy and our society.
Second, the latest advances in AI can be adopted relatively quickly, but Canadian businesses to date have been slow to adopt AI.
Third, current AI systems are rapidly evolving to perform highly sophisticated tasks, meaning that high-income and high-education occupations may face the greatest exposure to this latest round of automation.
Fourth, the profound impacts of AI across our economy and society demand regulatory shifts to ensure that the full benefits of AI can be realized.
Let me go through each of these in a little more detail.
First, AI is a general-purpose technology. This means it will transform almost all aspects of our economy and society, similar to the impact of the steam engine or information technology. For example, publicly available large language models such as generative pre-trained transformers, GPTs, demonstrate the potential for AI to radically reshape the nature of work. These systems are designed to understand and generate human-like text, including computer code, on a massive scale, and increasingly to reason and problem-solve, facilitating an almost unlimited range of applications.
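For concreteness, this is roughly what invoking such a system looks like in code. The sketch below assumes the OpenAI Python client (v1 or later) and an API key in the environment; the model name and prompts are placeholders, not recommendations.

```python
# Minimal sketch of calling a hosted large language model, assuming the
# OpenAI Python client (openai >= 1.0) and OPENAI_API_KEY set in the
# environment. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a concise workplace assistant."},
        {"role": "user", "content": "Draft a two-sentence status update on the Q3 report."},
    ],
)
print(response.choices[0].message.content)
```

The point is that both the input and the output are ordinary natural language, which is why such systems can be folded into almost any workflow.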
Second, the latest advances in AI can be adopted relatively quickly. ChatGPT's swift integration into everyday applications over the last year demonstrates this, outpacing the adoption rates seen with earlier iterations of the technology. This presents an opportunity for Canadian business and policy-makers to boost productivity and economic growth; however, the committee should take note that Canada has to date been slow to adopt AI. According to a study by Statistics Canada, only 3.7% of companies were using AI at the end of 2021. Studies conducted by IBM and the OECD also suggest that Canada lags behind other economies on AI adoption metrics.
Third, AI systems are rapidly evolving to perform highly sophisticated and complex tasks. Specifically, AI is being fine-tuned in sector-specific software applications. A notable instance from my own field is CoCounsel, an LLM system built on top of GPT-4 that functions as an AI legal assistant for tasks such as legal research, writing and document analysis. CoCounsel has managed to achieve a higher score on the American Uniform Bar Exam than the average test taker—in fact, than 90% of test takers. It is also designed to address inherent risks such as AI hallucinations.
Other examples beyond LLM systems include things like AlphaFold, which has solved the protein folding problem, described by a leading computational biologist as the first time an AI system has solved a major scientific problem. These advancements mean that AI can be harnessed more safely and effectively, particularly in sensitive and cognitively complex domains like law, science and health care.
In one study, OpenAI researchers found that GPT exposure was higher at higher income and education levels. That's something for us to take into account when thinking about how this wave could look different from previous innovations.
This brings me to my final and crucial point. The profound impacts that AI will have across our economy and society demand regulatory shifts to ensure that the full benefits of AI can be realized. Our current legal and regulatory frameworks were designed for a pre-AI era and may restrict innovative and productive uses of AI in workplaces. To harness the benefits of AI, we must update these frameworks to address the unique challenges and opportunities that AI presents. Furthermore, given that AI is a rapidly developing technology, effective governance demands that policy-makers move quickly to adopt an AI-enabling regulatory posture that seeks to properly regulate risks, as we do with all other economic activities, while supporting innovation and investment.
In conclusion, we stand at the cusp of a transformative era, and we should be acting to ensure that the benefits of AI are realized equitably and responsibly.
Thank you.
:
Good morning, and thank you very much.
My name is Théo Lepage‑Richer, and I'm a post-doctoral researcher at the University of Toronto.
First, I want to thank you for the opportunity to share a few thoughts with you today. These are the product of my research on artificial intelligence governance, a topic I address by combining historical research with public policy analysis.
In previous meetings, several members of the committee raised the following question: how can we develop governance frameworks adapted to technologies that are evolving as quickly as artificial intelligence? This is indeed a legitimate issue, and one that is regularly raised by the providers of this technology to encourage some restraint by public policy-makers. However, I'd like to qualify this question by pointing out the broader trends that the history of artificial intelligence in Canada highlights.
The first federal AI programs provide a useful historical precedent to examine the impact of this technology on the organization of work.
Starting in the 1960s, the Pearson government identified artificial intelligence as a promising technology to reduce the costs associated with hiring qualified public servants.
In 1965, the National Research Council of Canada was mandated to develop a first artificial intelligence program to address the translation of official documents from English to French. As a strategy, program managers opted for the development of software tools that would allow the translation process to be broken down into simple sub-tasks. One of those tools, for example, was designed to produce literal translations of the common nouns and verbs in a text, with the idea that operators would then refine them, adding the necessary determiners and revising the whole. The purpose of these tools was to standardize specialized tasks such as translation, to the point where they could be assigned to workers without prior training and, above all, at a lower level on the pay scale.
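A minimal sketch of the word-by-word approach being described follows, with a hypothetical toy lexicon; it illustrates the general technique, not the actual 1965 NRC system.

```python
# Toy sketch of a 1960s-style literal translation aid as described above:
# known content words (nouns, verbs) are replaced from a bilingual lexicon,
# and everything the machine cannot handle -- determiners, agreement, word
# order -- is flagged for a human operator. The lexicon is hypothetical.
LEXICON = {"committee": "comité", "studies": "étudie", "report": "rapport"}


def literal_draft(sentence: str) -> str:
    """Produce a rough draft; bracketed words are left for the operator."""
    draft = []
    for word in sentence.lower().split():
        draft.append(LEXICON.get(word, f"[{word}?]"))
    return " ".join(draft)


print(literal_draft("Committee studies report"))
# -> "comité étudie rapport": a draft the operator then inflects and revises
```

The machine does the rote substitution; the judgment-heavy residue is pushed onto a cheaper operator, which is exactly the restructuring the witness describes.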
Although inconclusive, this program launched a series of reforms aimed at reducing the federal government's dependence on skilled workers and, above all, restoring a certain level of control over the federal machinery.
Under Pierre Elliott Trudeau, initiatives such as the CANUNET network and the Télidon system were put in place to create the infrastructure needed to produce new data on the work of federal employees. In a recent article, Fenwick McKelvey and I suggest that the objective of these programs was to quantify the work of public servants so that it could be framed more narrowly using new data analysis tools developed in government and elsewhere.
Fifty years later, the applications of artificial intelligence in the Government of Canada and elsewhere have changed. However, the broader trends foreshadowed in those early programs can still be identified. Rather than completely replacing positions, artificial intelligence tends to be deployed in ways that restructure tasks so that they can be assigned to workers with more precarious status, limit the opportunities workers have to exercise their judgment, reduce the dependence of organizations on certain forms of expertise and replace investments in training and workforce development.
These trends go beyond artificial intelligence, of course. However, as Paola Tubaro and her colleagues point out, these trends nevertheless tend to characterize the platforms, management practices, reforms and business models that depend on the deployment of this technology.
It is therefore urgent that the impact of artificial intelligence on the workforce become a key perspective for developing tailored policy responses. This position is shared by a number of people, including Emanuel Moss and Valerio De Stefano, who point out that the risk-based approach characterizing the current regulatory instruments is unable to account for issues related to worker protection. To reflect the impact of artificial intelligence on the workforce, these instruments would have to take into account the impact of this technology on the distribution of wealth, the quality of jobs and the loss of salaried jobs to precarious or subcontracted positions.
Until now, artificial intelligence has been perceived in Canada as an industrial policy issue, and not without success. However, it is crucial that investments in the AI industry complement, rather than replace, similar investments in human capital.
While future applications of AI are difficult to predict, the structural effects of AI on the organization of work remain stable and can therefore inform policy responses that will stand the test of technological change.
Artificial intelligence is a challenge both in labour law and in industrial policy. I therefore encourage the members of this committee to consider the trends in which artificial intelligence has been embedded over the past 60 years to put in place the necessary safeguards to ensure that workers also benefit from the deployment of this technology.
Thank you very much.
:
Thank you for the invitation to share my thoughts with the committee today. My name is Nicole Janssen. I am the co-founder and co-CEO at AltaML. It is the largest pure-play applied AI company in Canada. We create custom AI software solutions for enterprise-level clients in both the private and public sectors. We're not quite six years old, but we've already worked with over 100 companies on over 400 AI use cases.
I base my thoughts today on my observations of those projects and the current and near-term capabilities of AI, as well as my knowledge of the AI ecosystem.
I want to start by saying that AI will absolutely disrupt the Canadian labour force at all levels and professions and across all sectors, and that the work this committee is doing to understand those impacts is incredibly important.
I will address the elephant in the room around massive job losses due to AI, which seems to be the largest concern we hear around jobs and AI. Of the 100 companies we have worked with, not one has implemented an AI solution and then made resulting job cuts. What we are seeing is that the AI tools are being used to increase productivity and, in most instances, are capable only of augmenting humans, not fully capable of replacing them.
The fear of job losses from AI is consistent with the fear that has come for hundreds of years when a new technology is introduced. It can be traced as far back as the introduction of the mechanical loom. However, every single new technology advance in history has led to more jobs at higher wages.
Overall, there will be net gains in the job market from AI, but certain jobs will absolutely see disruption. In fact, we're already starting to see that disruption. Jobs that require consuming large amounts of information and synthesizing it—content creation, for example, or paralegal work in the legal profession—will see significant disruption, as will jobs that require manipulating large amounts of numerical data, like research analysts or financial analysts. AI can identify trends in the market much faster than a human being can. Jobs that provide some form of external, repeatable assistance—like call centres, receptionists or even executive assistants—will see disruption. Software engineering will be disrupted and already is being disrupted as AI becomes more and more capable of writing high-quality code.
From what I have seen, these jobs won't disappear, but rather the individuals in them will need to adapt how they do their jobs, and their time will be focused on the higher-value work.
You may have noticed that lots of those jobs I just mentioned are white-collar jobs. AI is designed to mimic cognitive function, and it's likely that higher-paying white-collar jobs will have the most exposure to the technology. That said, industrial robots and drones also use AI technology, and that's the one place I see a high likelihood of replacement of jobs, such as in the factory line or for warehouse workers. This is a transition that we have been seeing for many years, though, through automation. Then there's a likelihood of delivery persons over time being replaced by drones.
Jobs that require a human touch, relationships or people skills will become more and more important. While these professions will likely use AI to support them, they will not see the same kind of disruption—professions such as teachers, nurses, doctors, therapists, human resource managers, sales managers and public relations professionals.
Then we'll also have new jobs emerge in AI development, cybersecurity, ethical oversight, change management for AI integration, data labelling, AI hardware and prompt engineering. These jobs didn't exist a few years ago, and now we see job ads for them everywhere.
What's clear is that the people of any profession who choose not to adopt and use the new technology will be the ones who lose their jobs to the individuals who do adopt it, because the adopters will be far more productive. Sectors that adopt AI will be more productive and put more people to work faster. If we use AI to approve building permits significantly faster, that puts a lot of people to work a whole lot faster. If we use AI to perfect the preventative maintenance in our plants, that will ensure more uptime and more work for more people.
The productivity growth that AI has the potential to create is a huge advantage for Canada, as we currently face both a long-term labour shortage challenge and incredibly low productivity as a country.
We must embrace AI with careful consideration and proactive measures, investing in education and training programs that equip individuals with the skills needed in an AI-driven economy. We must also implement policies that ensure inclusivity, diversity and ethical use of AI.
I'm here today to share my thoughts, partly because I know how important this topic is, but also because I knew I could rely on ChatGPT to formulate version one of my comments, which I could then edit and add my own thoughts to, allowing me the efficiency to say “yes” to this request.
Thank you. I welcome your questions.
:
Thanks, Mr. Chair, and thank you to our witnesses for being here today to testify in our study on AI and its implications for employment and the labour force.
Ms. Janssen, if I can, I'll start with you. I really enjoyed your testimony. I loved that you used ChatGPT to help you through this. It's an interesting tool. I see it happening as well with our students in education, using it as a tool.
One of the things you talked about is fear of the change, and that is almost a human psychology thing. We've seen this, as you mentioned, throughout time. It's natural evolution; it's inevitable progression to move forward.
In the past, what was the defining factor that pushed it forward? Do you have any sort of historical reference for that? You talked about the mechanical loom. When we look at vehicles and airplanes, we see that so many people resisted that change, but it ultimately increased prosperity. It did not replace jobs.
:
I'll speak maybe to the projects we've done in AI where we have had pushback from fear. There are a lot of them.
The key is that the individual whose job will be impacted, the person whose workflow will change, has to be a part of the process of developing AI from the outset. They have to feel like they're part of the solution, not that this is being done to them.
They also need to see that the goal of implementing the AI is not to replace them. As soon as end users feel that losing their job will be the outcome, they will absolutely not implement the new technology.
That's really helpful. I love that you said on record “augment” and not “replace”. I think that's really important to have on record.
If I may, I'll jump to Mr. Autor.
I liked your testimony in regard to health care, as we have this massive doctor shortage, and we have a lot of advancements happening with AI in health care.
I'm curious as to whether you have any input on privacy. One of the biggest concerns a lot of people have is how this is going to impact privacy, with health care being one of those areas of privacy.
What do you think the government should be keeping an eye on in terms of ensuring citizen privacy when using AI?
:
I think it's a very big issue. Depending on the regulatory regime, privacy is not guaranteed in terms of what can and cannot be tracked. Our phones are full-time surveillance devices that not only know all the things we do but report that information to third parties for money. That information is then resold. Privacy will be compromised unless regulation prevents it and unless people have ownership of the right to privacy. I think it's a very serious concern.
If I may, Mr. Chair, I'll respond very quickly to something that Ms. Janssen just said about AI and jobs. I do not think we should take it as a historical fact that technology has always improved jobs. The Luddites were absolutely correct that power looms wiped out their employment. Not only that, but wages didn't rise for six decades, growth was stunted and starvation increased.
I'm not saying that these advances weren't ultimately beneficial, but these technological changes are never uniformly an improvement for all jobs or all people. There are almost always losers—people whose expertise is devalued—and when we make these big transitions, we should be prepared to help people adjust to those transitions. This will not be costless—
:
I'm sorry. I'm going to intervene. Thank you.
I think you bring up valid points about learning from history. Ultimately, however, I think prosperity prevails in advancement. That's why these committees and studies are critical.
On that point, I will go back to Ms. Janssen.
You referred to a lot of white-collar jobs having first access to AI. How do you think we prevent that divide between the haves and have-nots, which could happen—and probably will happen—and the creation of a more polarized society? People will have access to technology that will further them economically, socially, etc.
It is moving very quickly, and we need to be thinking about agile methods for gaining increased visibility for government. You want to be very careful not to say you'll do another two-year study, because it's moving much faster than that.
I think there's a lack of visibility for government into how these technologies are developing, because for the first time in history, it's almost entirely behind corporate walls.
I do think it's really important to get that ground level. Again, Ms. Janssen's testimony is very helpful in terms of what this looks like on the ground level.
For the CoCounsel example I gave you, I spoke to law firms that were implementing this and asked if they had laid off all their junior lawyers yet. They said that they actually had more work than they knew what to do with, because they could now take somebody's call one afternoon and be ready by the next day to give them good advice and take steps.
There's actually a lot of unmet demand for effort, but you need to be at the ground level to find that out.
I would say that developing agile methods for increasing government visibility into how things are changing on the ground is critical—like SWAT teams.
:
That's a very good question. I can't think of an international example off the top of my head.
In fact, we have to think about the hidden costs that are often associated with artificial intelligence. When you interact with a platform like ChatGPT, the human work behind it tends to be invisible. However, behind a system like ChatGPT and all the other artificial intelligence systems trained on large amounts of data, humans have to label that data, format it and organize it, among other things, and that is not well-paid work.
Many countries, especially emerging economies, are training entire workforces to do these tasks at very low cost. When we talk about low-paying jobs associated with artificial intelligence, data labelling comes to mind. This work is essential to all the artificial intelligence systems we use and develop, yet it depends on thousands of workers who manually process data for pennies. The international examples that come to mind are not necessarily ones that Canada wants to emulate, but they are worth keeping in mind as a reminder of the social and human costs associated with the development of these technologies.
:
I'd like to thank all the witnesses.
Mr. Lepage‑Richer, according to the OECD, and as the union representative reminded us during the first hour, artificial intelligence objectives must be oriented toward sustainable development and must be human-centred, and we must act responsibly. You talked about fairness and accountability.
Everyone agrees that artificial intelligence will be deployed, as was the case with robotics and automation. Things are going to change, but I want to talk about what happens once we hit cruising speed. What does it take upstream, from both a regulatory and an ethical standpoint, for this to have a positive rather than a negative effect on the workforce?
:
The approach currently used in Canada to assess and anticipate the risks and impact of this technology is based mainly on self-assessment. The proposed artificial intelligence and data act promotes the idea of creating a model so that businesses can govern themselves, taking certain parameters into account while making sure that the effects on workers are as limited as possible.
One of the problems I see with this approach is that AI is deployed in a very wide variety of sectors. Therefore, at some point, these tools need to be tailored to each sector and industry in which AI is deployed. This will allow us to properly represent the reality of workers and users whose quality of life, work and well-being are directly influenced by this technology.
One of the first ideas that comes to mind is that risk assessment tools should be developed separately for different industries. At all levels of government, there are specific frameworks to assess environmental, financial, social or human impacts, yet we do not see the same degree of precision in evaluating this technology when it is deployed.
Off the top of my head, I would say that we need to develop more sector-specific analytical tools.
:
I have only six minutes, and I have some committee business I want to speak to first.
Just so that the witnesses can prepare, I'm going to ask witness Autor and witness Janssen this question. It has been proposed in this committee that a federal advisory council be struck. After I finish my other committee business, I would like to ask both of you, first, whether you think that's a good idea and, second, what top three topics each of you feels need to be considered at such a council.
Mr. Chair, before I go to the witnesses, I want to respond to the letter the committee received back from Air Canada on our request for Mr. Rousseau, the CEO, to appear before committee. We received a letter that, I think, outlines that Mr. Rousseau does not plan on coming to committee.
I was wondering if I could get consensus from the committee that we reach back to Air Canada and say that we strongly encourage Mr. Rousseau to come, because we don't want to have to summon him.
:
Does that address...? You still have time to get back.
I think it's very clear that the committee is unanimous. It is the CEO the committee wants to have appear before it at the earliest opportunity, and the CEO can bring support staff, as Mr. Aitchison pointed out, but the committee wishes to have the CEO.
Seeing no dissent on that, I will ask the clerk to clearly get back to Air Canada on the wishes of this committee.
Madam Zarrillo, you can go back to.... You still have several minutes.
:
I thank the committee members, and I apologize to the witnesses.
I wonder if I could go to witness Autor and then witness Janssen. If we run out of time, if either of those witnesses would like to respond to the committee in writing, that would be great.
My question is around a federal advisory council that was proposed by past witnesses. I wonder, Mr. Autor, if you could talk about the top three topics that you think should be considered in a federal advisory council, and then Ms. Janssen could respond to the same question.
:
I do support the idea of a federal advisory council, as everyone here today has testified. This is moving very fast. It poses new opportunities and new challenges. Bringing in top expertise in an advisory role is an excellent idea.
Of the three topics I would address, the first is how to use the technology to augment labour rather than automate it. I don't think we should take it as a given that augmentation necessarily occurs. Countries steer technologies. Nuclear energy is used by North Korea solely for offensive weapons; it is used by Japan solely for energy generation, and Japan has no offensive nuclear weapons. That's a choice a country makes; it's not a characteristic of the technology.
How to use it well to augment workers is the first thing.
The second thing is protection for workers. As I noted, undue surveillance, high-stakes decision-making by opaque algorithms, and AI's appropriation of workers' creative work without compensation should be regulated. We have fair use when it comes to intellectual property, but the laws were not written for AI.
The final thing I would raise is visibility into these technologies. They are opaque. They're making high-stakes decisions, and often the creators of these technologies will not even disclose what sources of data were used for training. I don't think that's acceptable.
I think there's a public interest in making sure that machines making important decisions—and valuable decisions; I use and support AI—are understandable to regulators and to consumers.
:
The cat's out of the bag with AI. We're not putting it back.
By the way, I am fully supportive of the council. I would have it focus on the education, upskilling and supports we need to provide to our workforce as this begins to roll out. This means identifying the professions that are disrupted earliest, following them, seeing what lessons can be learned from those professions, and then perfecting that change management as it rolls out across all sectors and professions.
Then there is the responsible AI piece: transparency, accountability, privacy and everything else that comes with responsible AI. That and the direct impacts on workers would be the areas I would focus on.