:
Committee members, the clerk has advised me that we have a quorum and that those appearing virtually have been sound-tested. All are okay except for one witness, but there is another witness from the same group who is okay.
With that, I will call the meeting to order. Welcome to meeting number 90 of the House of Commons Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities. Pursuant to Standing Order 108(2), the committee is resuming its study on the implications of artificial intelligence technologies for the Canadian labour force.
Today's meeting is taking place in a hybrid format, meaning there are members and witnesses appearing virtually and in the room. You have the option of choosing to participate in the official language of your choice by using interpretation services. There are headsets in the room. As well, if you are virtual, please use the "world" icon at the bottom of your Surface tablet and choose the official language of your choice.
If there is an interruption in translation, please get my attention. We'll suspend while it is being corrected.
I would ask those participating to speak slowly, if possible, for the benefit of the interpreters. To those in the room, keep your earpiece away from the mike to protect the hearing of the interpreters.
Again, if you're appearing virtually, to get my attention use the “raise hand” icon at the bottom of your Surface.
Today we will be meeting from 4:30 to 6:00 with witnesses.
As an individual, we have David Kiron, editorial director, Massachusetts Institute of Technology Sloan Management Review, by video conference. From the Canadian Union of Public Employees—Quebec, we have Danick Soucy, president, political official, committee on new technologies, by video conference; and Nathalie Blais, research representative. Nathalie does have issues with sound. If those cannot be resolved, she will not participate. From SAP Canada Incorporated we have Yana Lukasheh, vice-president, government affairs and business development.
With that, we will start with five minutes for each group, beginning with David Kiron. You have five minutes or less, please.
:
Distinguished members of the committee, thank you for organizing this important study and inviting me to participate in it.
I'll discuss how AI is influencing four categories of work: designing work, supplying workers, conducting work and measuring work and workers. AI-related shifts in each category have policy implications. In aggregate, these shifts raise questions about how to optimize producer flexibility, worker equity and security. More broadly, these trends create policy opportunities for increasing productivity at the national level and strengthening social safety nets.
Although we need policies to address worker displacement from AI-related automation, policy also needs to address AI's influence on a wide range of business activities, including human-machine interactions, surveillance and the use of external or contingent workers. Policy addressing AI in workforce ecosystems should balance workers' interests in sustainable and decent jobs with employers' interests in productivity and economic growth. The goal should be to allow businesses to meet competitive challenges while avoiding dehumanizing workers, discrimination and inequality.
I refer to "workforce ecosystems" rather than "workforces". Our ongoing research on workforce ecosystems demonstrates that more and more organizations rely on workers other than employees to accomplish work. These include contractors, subcontractors, gig workers, business partners and crowds. Over 90% of managers in our global surveys view non-employees as part of their workforce. Many organizations are looking for best practices to ethically orchestrate all workers in an integrated way.
I'll start with designing work. The growing use of AI has a profound effect on work design and workforce ecosystems, including greater use of crowd-based work designs and disaggregating jobs into component tasks or projects. Consider modern food delivery platforms like Grubhub and DoorDash that use AI for sophisticated scheduling, matching, rating and routing, which has essentially redesigned work within the food delivery industry. Without AI, such crowd-based work designs wouldn't be possible.
AI is also driving recent trends to create work without jobs. On the one hand, this modularization of work can facilitate mobility within the firm and improve employee satisfaction by efficiently matching workers with tasks. On the other hand, designing work around tasks and projects can increase reliance on contingent workers for whom fewer benefits are required. Greater numbers of Canadian contingent workers can increase burdens on government-sponsored safety nets.
Now I'll move to supplying workers. On the one hand, AI is transforming business access to labour pools. On the other hand, workers have more opportunities to work across geographic boundaries, creating opportunities for more workers. Using AI to find suitable workers can have both negative and positive consequences. For example, AI can perpetuate or reduce bias in hiring. Similarly, AI systems can help ensure pay equity or contribute to inequity through the workforce ecosystem by, for example, amplifying the value of existing skills while reducing the value of other skills. It remains an open question whether AI-driven work redesigns in the global economy will increase or decrease the supply of workers for Canadian businesses.
I'll go to conducting work. In workforce ecosystems, humans and AI work together to create value, with varying levels of interdependency and control over one another. As MIT Professor Thomas Malone suggests, people have the most control when machines act only as tools. Machines have successively more control as their roles expand to assistants, peers and finally, managers. Emergent uses of generative AI in each category raise a variety of policy questions regarding worker liability, privacy and performance management, among other considerations.
The last category where AI is influencing work is measurement. Firms increasingly use AI to measure behaviours and performance that were once impossible to track. From biometric sensors to corporate email analysis to sentiment analysis, advanced measurement techniques have the potential to generate efficiency gains and improve conditions for workers, but they also risk dehumanizing workers and increasing discrimination in the workplace.
That's about five minutes. I'm happy to continue. I have a conclusion, but I'm also happy to stop there.
:
Mr. Chair and committee members, thank you for inviting us.
My name is Danick Soucy, and I am the political representative of the Committee on New Technology, Quebec division of the Canadian Union of Public Employees, CUPE for short. CUPE Quebec's Committee on New Technology is attempting to gain a better understanding of emerging technologies that could impact the work of our members, including artificial intelligence, or AI. The committee's objective has never been to oppose technological breakthroughs, but, instead, to find ways of adapting to them.
One year ago, rapid advances by ChatGPT surprised the world and even AI specialists. We now know that generative AI systems are able to perform a variety of tasks. Not only can they allow for the automation of manual labour, but they can also perform numerous professional creative tasks or those normally undertaken by office staff. Generative AI has immense possibilities and could cause serious upheavals in the working world and in Canadian society if no guardrails are put in place.
We believe that it is imperative that action be taken immediately to regulate AI before companies undertake large-scale implementation, so that everything possible is done to avoid bringing in systems that cause problems for workers or for society at large. The old saying "an ounce of prevention is worth a pound of cure" certainly pertains to AI, which, in spite of its usefulness, can pose dangers on many different levels.
One of the dangers is that many AI systems were trained using the Internet. As a result, they have incorporated biases and inaccurate data that can lead to discrimination or disinformation. However, commercial AI systems are more opaque than ever, and their suppliers do not always reveal what data sets they were trained on. In addition, the autonomy of AI systems makes it more complex to determine who or what is responsible when harm is done. The public and employers must be educated on this issue.
In the workplace, this can mean rejections of either applications or promotions, or non-compliance with workers' fundamental rights in terms of privacy or the protection of personal information. AI systems used to assign duties to workers can also impact their health and safety by intensifying their work or by limiting, for example, their decision-making leeway, which is recognized as a work-related psychosocial risk.
AI should not lead to discrimination, result in increased occupational health and safety problems or jeopardize an employee's privacy or personal information.
Available data on the possible impacts of AI systems on labour vary greatly. However, a shocking study published by Goldman Sachs, a U.S. investment bank, estimated last March that AI could result in the automation of 300 million full-time jobs worldwide. This estimate includes the disappearance of a quarter of the work currently done in the U.S. and Europe. This is, by far, the most alarming assessment of which we are aware.
In such a scenario, what would happen to laid-off workers? Would employment insurance be all they could count on?
Would companies be responsible for their retraining?
Would they be required to train their staff whose work was transformed by AI?
Would they compensate governments for income tax revenues lost because of the use of AI to protect our public services?
The government cannot consider the use of AI solely from the angle of innovation, productivity and economic growth. It must also take into account the adverse impacts that AI systems would have on citizens and their ability to contribute to Canadian society more generally.
To this end, CUPE Quebec recommends that governments maintain a dialogue with all groups in civil society, including unions, on the subject of AI and that the government entrust Statistics Canada with the mandatory collection of information on the progression of AI and its impacts on work and on labour.
Furthermore, the regulations to be implemented quickly should at least address the following four elements.
First, employers should be obligated to declare any use of AI in the workplace and involve workers or their union representatives prior to the design and implementation of AI systems.
Second, employers should be required to train or requalify personnel affected by the adoption of AI.
Third, implementation of a legal framework is necessary to protect the fundamental rights of workers and to identify those responsible for AI systems.
Fourth and finally, requirements should be imposed relating to the responsible development of AI for the granting of any public funding.
Thank you for your attention.
:
Thank you, Mr. Chair and members of the committee. We appreciate the opportunity to appear before you today to contribute to the study regarding the implications of artificial intelligence technologies for the Canadian labour force.
SAP is an enterprise application software company with long-standing operations in Canada spanning over 30 years. We work with organizations of all sizes across the public and private sectors to enable them to become part of a network of intelligent and sustainable enterprises.
Our secure and trusted technologies run integrated AI-powered business processes in the cloud. More specifically, our applications cover enterprise resource planning, human resources and procurement and finance management, including travel and expense claims.
SAP is a global enterprise present in 140 countries, with Canadian operations of strategic importance. We contribute $1.5 billion annually to the Canadian GDP and have a total of 7,000 jobs in our ecosystem from coast to coast to coast. Our innovation labs where our R and D is conducted are located in Montreal, Waterloo and Vancouver.
We understand that Canada's labour force today is confronted by the fast-paced evolution of AI technology, and workers are increasingly faced with a series of complex decisions related to implementation and training as organizations are evolving within this new digital era. As AI is increasingly used to automate decisions that have a significant impact on people's lives, health and safety, we recognize that governments have an important role to play in promoting innovation while safeguarding public interest.
As part of our testimony today, we hope to discuss some common but often overlooked concerns associated with a general lack of AI integration, which we have seen impact many industries, including Canada's public sector. For example, I'm referring to the use of disconnected or complex legacy systems across organizations, outdated manual processes, limited interoperability and few end-to-end processes across human capital management functionalities. When not addressed, these have implications for recruitment, retention and skills training, not to mention the cost associated with operating such legacy systems.
The boundless potential of generative AI could bolster Canada's economy by $210 billion, greatly boosting Canadian workers' productivity. It's important that organizations seek experienced industry partners that are equipped to guide operations and organizations through their digital transformations, leveraging technologies like AI to level up the workforce. At SAP, we see that potential and opportunity to unlock productivity and value across our economic sectors. For example, AI can address some of the top workforce challenges of our times from recruiting and training to increasing employee engagement and retention.
I'll run through a few examples. Recruiting AI software can remove unconscious biases in job descriptions. Recruiting automation can lighten the administrative burden by automating the delivery and receipt of necessary documents. Specialized AI-enabled training is interactive; it's continuously learning and adapting to each worker's learning style, whether it's visual, auditory or written. AI analytics, specifically sentiment analytics, can identify how workers are feeling. AI performance analytics allow managers to extract bias-free insights from continuous real-time assessments via multiple sources.
Another area where technology can support is accessibility. Software solutions can enable the inclusion of members of the disability community into today's workforce. As a co-founding member of the ministerial advisory board that established the Canadian business disability network, SAP advocates for the acceleration of the adoption of technologies that embed tools like AI to onboard members of the disability community into today's workforce.
Canada's potential in this space is vibrant and remains globally competitive, with a diverse AI ecosystem that attracts more AI talent and brings more women into AI-related roles than all of our G7 peers.
The high concentration of talent in Canada contributes to a rising volume of AI patents filed nationally and the highest number of AI publications per capita in 2022. It is even more important that public policy favour retention of top AI talent in this country to uphold our competitive edge and support sustained innovation.
The impact of AI on Canada's labour force remains undeniable, and public policy must allow for better digital integration with Canada's industrial base to strengthen our local ecosystem that is inclusive of SMEs, minority-owned businesses and indigenous businesses.
Mr. Chair, thank you. I'm happy to take questions.
I think you'll notice that within our customer base, they realize the value that AI brings into their business processes, and they see the value it can unlock.
I'll probably use, at a very high level, a few examples. Take banks, for example. They have a lot of financial reports and data that they have to manipulate through different data sources. AI can automate a lot of these tasks and summarize a lot of that data. That would provide a lot of efficiency for the workforce in that particular bank to dedicate the time to a lot more strategic work, instead of a lot of the data analysis.
Another example would be within manufacturing. Some of our customers are leveraging AI technologies to look at sales performances, identifying where the underperforming regions are, looking at their procurement and their supply chain, and looking at their HR and trying to find efficiencies across....
That's probably what I would give as an example.
:
Thank you so much, and thank you to the witnesses for being here.
This is our last day listening to witnesses. All of the witnesses have really contributed to a very interesting conversation on the subject. Many and various points have been brought forward that have complemented each other.
I want to speak to a point that's been brought up by more than one person, specifically about how machine learning works and how data.... I think you just referenced data coming from many, many different sources. If the data that we're using is building the AI through machine learning, there's no question that bias will be embedded into the technology we're building.
Technology mirrors society as a whole. Here's a good example. If AI were being used in the judicial system, it would look at, let's say, the last 70 years of court cases. If that were the case, and if we acknowledged that the AI would be built through machine learning on datasets spanning those 70 years, we would now be making decisions based on that data, and there would be a bias embedded in it if we acknowledged that the system had systemic barriers in place.
The big question is this. I think Mr. Soucy brought up the fact that we need to be careful that the technology we're putting forward doesn't set bias against some workers. I guess my question for the union representative is, how do we use the collective agreement process and how do we hold companies accountable when the datasets they're using are often kept in a black box that's not shared with the public? These algorithms are private.
How do we ensure that we can find a balance between what's being built and how it serves workers in general?
That question is for Mr. Soucy.
Thank you for your question. It's a good one.
I was at a telecommunications symposium recently, and one of the issues discussed was how reliable AI systems were when trained on data that aren't entirely reliable. For example, when the Internet is used to train an AI system, it really captures everything out there, even though some of that information is false and some is true.
How do you make sure an AI system trained on those data is reliable?
When that question was put to business people in the telecommunications sector, they all evaded the question. The reason I'm telling you that story is that, afterwards, I spoke with the person moderating the panel discussion. She, herself, is a technology expert, and she said that the only way to make sure the data are high quality is to require companies to disclose where the data used to train their systems came from. Developers would have to tell companies purchasing AI software whether the systems were trained on data pulled from the Internet, private corporate data, academic data or government data.
:
Thank you for that response.
I will move over to SAP, based on the response to the question I asked. If we're going to use technology like AI for recruitment and training, which you mentioned earlier in your testimony.... Part of a company's competitive edge is making sure that the algorithms and software it's using are private, because that's intellectual property. At the same time, we need to make sure that the datasets that are being used are fair and come from reliable places. Many big organizations like the Amazons and the Microsofts may not be unionized, so there's a disconnect with that collective agreement process.
How do we make sure that big companies like SAP are bringing forward AI based on machine learning that is equitable and transparent? How do we go about doing that? At the same time, how do you keep your competitive edge? That's a tough question, but maybe you have some thoughts on that.
There are compliance requirements we have to address and abide by, as do other industry members in the different sectors as well. The data that goes behind it.... SAP does not own that data. It is the customer's data. We provide the technology, the tools, and the customer maintains that data. It's hard for me to answer that question from that perspective, but there's a lot that goes into these tools and how they're used.
The data, depending on the sources it's coming from, yes, has to be validated. It has to be verified to make sure that it does not cause bias and unintended consequences. The developers in our industry are consistently looking at how to improve that technology and how to improve leveraging of the good or clean data, I should say.
Thank you to the witnesses for being here.
This is our last day hearing from witnesses on the implications of artificial intelligence for the Canadian labour force, and I'm not sure we've gone as far as we need to. We are actually still missing quite a bit of the information we need to measure the impact.
Mr. Soucy and Ms. Blais, thank you for your input. Some of your fellow union representatives told the committee that it is detrimental to workers when they aren't told ahead of time about the implementation of new technologies like AI or the purpose of those technologies.
You said that AI could even cause upheavals in the working world—hence the importance of regulating AI.
What do you mean by regulating? My Liberal colleague pointed out that not all workers are unionized. How do we regulate AI in a practical and effective way?
:
Thank you, Mr. Chair, and thank you to all of our witnesses for being here.
My first question is for Mr. Kiron.
You stated in an article you co-wrote that “These analytic systems, which we call smart KPIs, can learn, and learn to self-improve, with and without human intervention.”
Do you believe that due to this, AI would be able to collect private data? If so, are there gaps in privacy legislation that you would recommend the government amend or implement?
:
That's a fascinating question.
On the whole issue of KPIs and acquiring private data to improve KPIs and help them learn, to the extent that businesses use private customer data and that's part of their datasets, there's definitely regulation that constrains how businesses can use that personal data outside of the organization.
Within the organization, there's obviously.... You can't see social security numbers outside of HR. Given that KPI data is being used to help train new or better KPIs, and that the KPIs themselves can learn from this data, its use could be limited to whatever is appropriate within the enterprise, if that makes sense.
:
Oh, yes, and it already has.
The large language models, for example, have been trained on datasets that include published works by writers around the world. I think there's a class action suit going on with writers like Stephen King saying, “Look, your tool that you're making billions of dollars from—you have like a $90-billion capital valuation—is piggybacking on my work and it's completely uncompensated.”
There's that kind of rip-off of intellectual property—absolutely.
Then, in terms of privacy, there are so many different ways that AI is going to interfere with people's privacy. If you just take generative AI, we've talked about ChatGPT. There's Claude, and Bard from Google. There are all of these LLMs that are out there.
These companies are trying to stay ahead of the issue by putting in guardrails that are ethical and all that, but what we haven't talked about is that there is going to be a grey market for large language models that are free of these constraints that governments and companies in the public eye are focused on. What do you do with that?
Ms. Chabot, it's very hard to do a moratorium on that kind of thing.
:
Thank you very much. Thank you for that explanation. It was very helpful.
Mr. Chair, I would like to move in a different direction here for a moment and pause.
I would like to move a motion. This has been circulated to the committee. I'll just read the motion here:
the Auditor General of Canada recently issued a scathing report on the Liberal Government's Benefits Delivery Modernization program, identifying delays, cost overruns and concerns on the viability of increasingly outdated technology;
this project was budgeted for $1.75 billion when launched in 2017 but has nearly doubled in cost, to $3.4 billion;
new reports from ESDC project a revised cost estimate of $8 billion, marking a 357% increase from the original price tag;
the completion date for the project has been pushed to 2034;
That the committee undertake a study of no less than four (4) meetings to review the government's Benefits Delivery Modernization program and the Auditor General of Canada's report on this matter and that the Auditor General of Canada, the Minister of Citizens' Services, the Minister of Employment, Workforce Development and Official Languages, the President of the Treasury Board, and all relevant officials from these departments be invited to appear before the committee on this matter for two hours each; and that the committee report its findings and recommendations to the House.
Mr. Chair, just to put this into perspective, the benefits delivery modernization program is the largest IT project ever taken on by the Canadian government. It was projected, as I said, to cost $1.75 billion. According to reports, it's now projected to cost an estimated $8 billion.
Costs have gone up. Expensive consultants have been hired. Timelines are extended. Liberal ministers need to answer questions to be held accountable for this chronic pattern of lack of oversight and mismanagement with yet another IT project. As an example, the ArriveCAN app project didn't work. It cost taxpayers $54 million and is now under criminal investigation. The Liberals recently paid over $600,000 to consultants to advise on how to reduce spending on consultants.
The government does not deserve the benefit of the doubt here. This is a massive spending project of taxpayer dollars. This human resources committee needs to scrutinize this.
I hope to have support of all members of this committee.
Thank you, Mr. Chair.
:
We've looked at this. We did a study with the Boston Consulting Group and a professor from Boston College, Sam Ransbotham, on this very topic.
The ways that machines and humans interact fall into different categories. I'll try to keep this as concise as possible, but you can have the machine doing.... Take decision-making. The machine makes the decision all by itself, and it's an automated thing. Take fraud detection. These AI technologies are sifting through so many parameters that no human could do it, possibly. It's making decisions about what constitutes fraud.
There are other kinds of things where the AI would contribute to a decision, but the human would have final decision-making authority over it. Similarly, the human could contribute to the AI making a final decision. Take fraud. There's another fraud instance, but it reaches a level where it's not really clear whether or not it's fraud, so the human might play a role in that kind of decision.
There's a whole spectrum, and what we found is that AI at a very high level, when humans are working with AI, emboldens and strengthens teamwork on the part of humans. Humans are more satisfied working with AI than teams not working with AI. It increases collaboration.
:
Indeed, as co-chair of the Canadian Chamber of Commerce's Future of AI Council, we did have our first executive summit today, and it was a successful one.
We had members from all sizes of companies and from all different industries come together. We discussed AI technology as an emerging new technology, where it's going and where it's headed. We all came to a consensus that it is fast-paced. It is consistently evolving, and it is going to continue evolving in all our different sectors.
Currently, there is legislation before Parliament that looks at how to regulate AI. The conversation around whether Canada is going in the right direction, around legislating and regulating AI, is a mixed bag in terms of the sentiment around the current legislation. Overall, we can all agree on the fact that we do need some level of principles and regulations in this space.
We appreciate that the different companies currently leveraging this AI technology are unlocking value and benefits from it. They're seeing those benefits realized fairly quickly.
Looking at the productivity of AI and looking at how we in Canada can create an ecosystem that is both domestically and globally competitive was also an interesting conversation that we broached, in terms of how AI can play a factor in that.
I'll stop there, but I'm happy to speak more about it.
:
I can talk about the telecommunications sector. Canada's big telecom companies outsource work overseas, to workers in countries that don't have the same laws we do. That's true for call centres, IT helpdesks, planning and design and so on. That alone raises concerns around the privacy of Canadian customers and the employees of those companies.
We've also noticed that AI tends to enhance the capabilities of other technologies. For instance, when combined with AI, 5G technology, which is currently being deployed, will allow for the automation of numerous activities in telecom companies, possibly leading to the demise of highly skilled jobs.
I don't know how those employees would be retrained. Companies are reluctant to do that as of now. That's what we have realized. Companies prefer to use contractors to do all the work within the company or hire people straight out of school.
The government talks a lot about the middle class. What's going to happen to middle-class workers whose jobs are in the process of being automated? Will they be retrained to do work equally as technical as the jobs being taken over by AI? That's something to consider.
:
Great. Thank you for your comments on that and for that comparison. I appreciate that.
Mr. Chair, I would like to go forward with moving another motion here. This has been circulated to the committee.
I will read the motion:
That, pursuant to the Order of Reference of Thursday November 9th, 2023, the Minister of Employment, Workforce Development and Official Languages, the Minister of Housing, Infrastructure and Communities, the Minister of Diversity, Inclusion and Persons with Disabilities, the Minister of Labour and Seniors, the Minister of Families, Children and Social Development, and the Minister of Citizens’ Services, appear before the Committee for no fewer than 2 hours each to consider the Supplementary Estimates (B) before Friday, December 1st, 2023.
It is a normal practice for us to have ministers come to the committee, so this is formally requesting them to do this. This is also particularly important considering that motion I previously put forth, which was not successful, to look at the benefits delivery modernization program.... In fact, the Liberal member opposite noted that it would be something that could be brought up when the ministers come here to talk about estimates, so this is perfect timing. Therefore, this should be easily supported by the members here.
This is really important considering that we're looking at the numbers; we're looking at the extra spending of the government. We also have a new minister here as well with a new portfolio, and so this is really timely to have this minister come forth. We haven't had this minister before the committee.
As I mentioned earlier, we also have the Auditor General's report, which hasn't been addressed yet, and we can question the ministers on that as well.
Thank you very much, Mr. Chair.
We now have, on the motion, Mr. Aitchison, and I believe, Mrs. Falk, Ms. Ferreri and Mr. Kusmierczyk.
Again, to the witnesses, this is in order before the committee.
Mr. Aitchison, if you don't get the floor soon, I will go to Mrs. Falk.