:
I call the meeting to order.
Good afternoon everyone, and welcome to meeting No. 101 of the House of Commons Standing Committee on Industry and Technology.
Today’s meeting is taking place in a hybrid format, pursuant to the Standing Orders.
Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C-27.
I’d like to welcome our witnesses today, Mr. Jean-François Gagné, an AI strategic advisor, who will be given an opportunity to give his opening address when he joins us a little later. We also have with us Ms. Erica Ifill, a journalist and founder of the podcast Not in My Colour, and, from AlayaCare, Mr. Adrian Schauer, its founder and chief executive officer.
[English]
I want to thank you, Mr. Schauer, for making yourself available again today. I know we had some technical difficulties before, but the headset looks fine this afternoon. Thanks for being here again.
Thank you, Madam Clerk, for the help, as well.
We have, from AltaML Inc., Nicole Janssen, co-founder and chief executive officer; and from Gladstone AI, we have Jérémie Harris.
[Translation]
And last, we will have Jennifer Quaid, associate professor and vice-dean of research, civil law section, Faculty of Law, University of Ottawa, along with Céline Castets-Renard, full professor of law, Faculty of Civil Law, University of Ottawa.
As we have several witnesses, we will begin the discussion immediately. Each of you will have five minutes for an opening statement. Mr. Gagné, please begin.
[English]
Madame Ifill, the floor is yours.
:
Good afternoon to the industry and technology committee, as well as to their many assistants and to whoever may be in the room.
I am here today to talk about part 3 of Bill C-27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts. Part 3 is the Artificial Intelligence and Data Act.
First, there are some issues and challenges with this bill, especially with respect to its societal and public effects.
Number one, when this bill was crafted, there was very little public oversight. There were no public consultations, and there are no publicly accessible records accounting for how these meetings were conducted by the government's AI advisory council, nor which points were raised.
Public consultations are important, as they allow a variety of stakeholders to exchange and develop innovative policy that reflects the needs and concerns of affected communities. As I raised in the Globe and Mail, the lack of meaningful public consultation, especially with Black, indigenous, people of colour, trans and non-binary, economically disadvantaged, disabled and other equity-deserving populations, is echoed by AIDA's failure to acknowledge AI's characteristic of systemic bias, including racism, sexism and heteronormativity.
The second problem with AIDA is the need for proper public oversight.
The proposed artificial intelligence and data commissioner is set to be a senior public servant designated by the minister and, therefore, is not independent of the minister and cannot make independent public-facing decisions. Moreover, at the discretion of the minister, the commissioner may be delegated the “power, duty” and “function” to administer and enforce AIDA. In other words, the commissioner is not afforded the powers to enforce AIDA in an independent manner, as their powers depend on the minister's discretion.
Number three is the human rights aspect of AIDA.
First of all, how it defines “harm” is so specific, siloed and individualized that the legislation is effectively toothless. According to this bill:
(a) physical or psychological harm to an individual;
(b) damage to an individual's property; or
(c) economic loss to an individual.
That's quite inadequate when talking about systemic harm that goes beyond the individual and affects some communities. I wrote the following in The Globe and Mail:
“While on the surface, the bill seems to include provisions for mitigating harm,” [as said by] Dr. Sava Saheli Singh, a research fellow in surveillance, society and technology at the University of Ottawa's Centre for Law, Technology and Society, “[that] language focuses [only] on individual harm. We must recognize the potential harms to broader populations, especially marginalized populations who have been shown to be negatively affected disproportionately by these kinds of...systems.”
Racial bias is also a problem for artificial intelligence systems, especially those used in the criminal justice system, and it is one of the greatest risks.
A federal study done in 2019 in the United States showed that Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.
A study from the U.K. showed that the facial recognition technology the study tested performed the worst when recognizing Black faces, especially Black women's faces. These surveillance activities raise major human rights concerns when there is evidence that Black people are already disproportionately criminalized and targeted by the police. Facial recognition technology also disproportionately affects Black and indigenous protesters in many ways.
From a privacy perspective, algorithmic systems raise issues of construction, because constructing them requires data collection and processing of vast amounts of personal information, which can be highly invasive. The reidentification of anonymized information, which can occur through the triangulation of data points collected or processed by algorithmic systems, is another prominent privacy risk.
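To illustrate the reidentification risk described above, here is a minimal sketch in Python of triangulating quasi-identifiers across two "anonymized" datasets. All records, field names and values are invented for illustration and are not drawn from any real dataset or system discussed by the witnesses.

```python
# Hypothetical illustration of reidentification by triangulating quasi-identifiers.
# Both datasets are "anonymized" (no names in the health data), yet joining on
# postal area, birth year and gender re-attaches an identity to a diagnosis.
health_records = [  # released without names
    {"postal_area": "K1A", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"postal_area": "M5V", "birth_year": 1992, "gender": "M", "diagnosis": "diabetes"},
]
voter_list = [  # public, with names
    {"name": "A. Tremblay", "postal_area": "K1A", "birth_year": 1985, "gender": "F"},
    {"name": "B. Singh", "postal_area": "M5V", "birth_year": 1992, "gender": "M"},
]

KEYS = ("postal_area", "birth_year", "gender")

def reidentify(records, public_list):
    """Link 'anonymized' records back to named individuals via quasi-identifiers."""
    index = {tuple(p[k] for k in KEYS): p["name"] for p in public_list}
    return [{"name": index.get(tuple(r[k] for k in KEYS)), **r} for r in records]

for row in reidentify(health_records, voter_list):
    print(row)
# Each diagnosis now carries a name, even though neither dataset alone revealed it.
```

Neither dataset is sensitive on its own; it is the combination of ordinary data points that produces the privacy harm.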
There are deleterious impacts or risks stemming from the use of technology concerning people's financial situations or physical and/or psychological well-being. The primary issue here is that a significant amount and type of personal information can be gathered that is used to surveil and socially sort, or profile, individuals and communities, as well as forecast and influence their behaviour. Predictive policing does this.
In conclusion, algorithmic systems can also be used in the public sector context to assess a person's ability to receive social services, such as welfare or humanitarian aid, which can result in discriminatory impacts on the basis of socio-economic status, geographic location, as well as other data points analyzed.
I think this will be an interesting perspective side-by-side with Erica's.
I'm the founder and CEO of AlayaCare, a home care software company. We deliver our solutions both to private sector providers and to public sector health authorities.
In the machine learning domain, we deliver all sorts of risk models. One of the things you can imagine us ultimately building up to is a model that, on the basis of an assessment and patient data, will help determine, at a population health level, where the health system's resources are optimally allocated. In that use case, it's definitely a high-impact system.
I really like two things about the framework in this bill. One is that you're looking to adhere to international standards. As a developer of software looking to generate value in our society, we can't have a thousand fiefdoms, so let me start with a thank-you for that. The second thing I really appreciate is your segmentation of the actors into the people who generate the AI models, those who develop them into useful products, and those who operate them in public. I think that's a very useful framework.
On the question of bias, I think it raises some interesting questions. I think we have to be very careful about legislating against bias in the right way. In developing the model, really the only difference between a linear regression—think of what you might do in Excel—and an AI model is the black box aspect. Yes, if you're trying to figure out how to allocate health system resources, you probably don't want to put certain elements that could be bigoted into your model, because that's not how a society wants to be allocating health resources. With a machine learning model, you're going to feed a bunch of data into a black box and out comes a prediction or an optimization. Then you can imagine all sorts of biases creeping in. It might be, for example, that the model concludes that people of a certain identity—say, left-handed people—can actually get by with a bit less home care and still stay out of the hospital. That wouldn't be programmed into the algorithm, but it could certainly be an output of the algorithm.
I think what we need to be careful of is assigning the right accountability to the right actor in the framework. I think the model developers need to demonstrate a degree of care in the selection of the training data. To the previous example—and I can say this with some certainty—the reason that the facial recognition model doesn't perform as well for indigenous communities is that it just wasn't fed enough training data from that particular group. When you're developing the AI model, you need to take care, and demonstrate that you've taken care, to use a representative training set that's not biased.
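As a minimal sketch of the kind of training-data care described here, the Python snippet below checks whether each group's share of a training set falls well below its share of a reference population. The group labels, tolerance value and counts are hypothetical, chosen only to illustrate the check.

```python
from collections import Counter

def underrepresented_groups(records, group_key, population_shares, tolerance=0.5):
    """Flag groups whose share of the training data falls well below
    their share of the reference population.

    records: iterable of dicts, each carrying a demographic attribute under group_key
    population_shares: dict mapping group -> expected share (0..1)
    tolerance: a group is flagged if its data share is below tolerance * its population share
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    flagged = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if observed < tolerance * expected:
            flagged[group] = {"observed_share": round(observed, 3),
                              "expected_share": expected}
    return flagged

# Hypothetical example: a face dataset where one group is badly under-sampled.
records = [{"group": "A"}] * 780 + [{"group": "B"}] * 200 + [{"group": "C"}] * 20
population = {"A": 0.6, "B": 0.25, "C": 0.15}
print(underrepresented_groups(records, "group", population))
# -> {'C': {'observed_share': 0.02, 'expected_share': 0.15}}
```

A check like this does not guarantee an unbiased model, but it gives a developer something concrete to document when demonstrating care in the selection of training data.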
When you develop an algorithm and put it on the market, I think providing as much transparency as possible to the people who will use it is definitely something we should endeavour to do. Then, in the use of that algorithm and of its output, you have a representative training set and the right caveats. I think we have to be careful not to bring inappropriate accountability back to the model developers. That's my concern. Otherwise, you're going to be pitting usefulness against potential frameworks for bias.
What I think we have to be careful about with this legislation is not to disproportionately shift societal concerns about how resources should be allocated—you name the use case—onto the tool developer, but to sit them appropriately with the user of the tool.
That's my perspective on the bill.
:
Thank you and good afternoon, Mr. Chair and members of the committee.
I'm here on behalf of Gladstone AI, which is an AI safety company that I co-founded. We collaborate with researchers at all the world's top AI labs, including OpenAI and partners in the U.S. national security community, to develop solutions to pressing problems in advanced AI safety.
Today's AI systems can write software programs nearly autonomously, so they can write malware. They can generate voice clones of regular people using just a few seconds of recorded audio, so they can automate and scale unprecedented identity theft campaigns. They can guide inexperienced users through the process of synthesizing controlled chemical compounds. They can write human-like text and generate photorealistic images that can power, and have powered, unprecedented and large-scale election interference operations.
These capabilities, by the way, have essentially emerged without warning over the last 24 months. Things have transformed in that time. In the process, they have invalidated key security assumptions baked into the strategies, policies and plans of governments around the world.
This is going to get worse, and fast. If current techniques continue to work, the equation behind AI progress has become dead simple: Money goes in, in the form of computing power, and IQ points come out. There is no known way to predict what capabilities will emerge as AI systems are scaled up using more computing power. In fact, when OpenAI researchers used an unprecedented amount of computing power to build GPT-4, their latest system, even they had no idea it would develop the ability to deceive human beings or autonomously uncover cyber exploits, yet it did.
We work with researchers at the world's top AI labs on problems in advanced AI safety. It's no exaggeration to say that the water cooler conversations among the frontier AI safety community frame near-future AI as a weapon of mass destruction. It's WMD-like and WMD-enabling technology. Public and private frontier AI labs are telling us to expect AI systems to be capable of carrying out catastrophic malware attacks and supporting bioweapon design, among many other alarming capabilities, in the next few years. Our own research suggests this is a reasonable assessment.
Beyond weaponization, evidence also suggests that, as advanced AI approaches superhuman general capabilities, it may become uncontrollable and display what are known as “power-seeking behaviours”. These include AIs preventing themselves from being shut off, establishing control over their environment and even self-improving. Today's most advanced AI systems may already be displaying early signs of this behaviour. Power-seeking is a well-established risk class. It's backed by empirical and theoretical studies by leading AI researchers published at the world's top AI conferences. Most of the safety researchers I deal with on a day-to-day basis at frontier labs consider power-seeking by advanced AI to be a significant source of global catastrophic risk.
All of which is to say that, if we anchor legislation on the risk profile of current AI systems, we will very likely fail what will turn out to be the single greatest test of technology governance we have ever faced. The challenge AIDA must take on is mitigating risk in a world where, if current trends simply continue, the average Canadian will have access to WMD-like tools, and in which the very development of AI systems may introduce catastrophic risks.
By the time AIDA comes into force, the year will be 2026. Frontier AI systems will have been scaled hundreds to thousands of times beyond what we see today. I don't know what capabilities will exist. As I mentioned earlier, no one can know. However, when I talk to frontier AI researchers, the predictions I hear suggest that WMD-scale risk is absolutely on the table on that time horizon. AIDA needs to be designed with that level of risk in mind.
To rise to this challenge, we believe AIDA should be amended. Our top three recommendations are as follows.
First, AIDA must explicitly ban systems that introduce extreme risks. Because AI systems above a certain level of capability are likely to introduce WMD-level risks, there should exist a capability level, and therefore a level of computing power, above which model development is simply forbidden, unless and until developers can prove their models will not have certain dangerous capabilities.
Second, AIDA must address open source development of dangerously powerful AI models. In its current form, on my reading, AIDA would allow me to train an AI model that can automatically design and execute crippling malware attacks and publish it for anyone to freely download. If it's illegal to publish instructions on how to make bioweapons or nuclear bombs, it should be illegal to publish AI models that can be downloaded and used by anyone to generate those same instructions for a few hundred bucks.
Finally, AIDA should explicitly address the research and development phase of the AI life cycle. This is very important. From the moment the development process begins, powerful AI models become tempting targets for theft by nation-state and other actors. As models gain more capabilities and context awareness during the development process, loss of control and accidents become greater risks as well. Developers should bear responsibility for ensuring the safe development of their systems, as well as their safe deployment.
AIDA is an improvement over the status quo, but it requires significant amendments to meet the full challenge likely to come from near-future AI capabilities.
Our full recommendations are included in my written submission, and I look forward to taking your questions. Thank you.
:
Mr. Chair, vice-chairs and members of the Standing Committee on Industry and Technology, I am very pleased to be here once again, this time to talk about Bill C-27.
[English]
I am grateful to be able to share my time with my colleague Céline Castets-Renard, who is online and who is the university research chair in responsible AI in a global context. As one of the preeminent legal experts on artificial intelligence in Canada and in the world, she is very familiar with what is happening elsewhere, particularly in the EU and the U.S. She also leads an SSHRC-funded research project on AI governance in Canada, of which I am part. The project is directed squarely at the question you are grappling with today in considering this bill, which is how to create a system that is consistent with the broad strokes of what major peer jurisdictions, such as Europe, the U.K. and the U.S., are doing while nevertheless ensuring that we remain true to our values and to the foundations of our legal and institutional environment. In short, we have to create a bill that's going to work here, and our comments are directed at that; at least, my part is. Professor Castets-Renard will speak more specifically about the details of the bill as it relates to regulating artificial intelligence.
Our joint message to you is simple. We believe firmly that Bill C-27 is an important and positive step in the process of developing solid governance to encourage and promote responsible AI. Moreover, it is vital and urgent that Canada establish a legal framework to support responsible AI governance. Ethical guidelines have their place, but they are complementary to and not a substitute for hard rules and binding enforceable norms.
Thus, our goal is to provide you with constructive feedback and recommendations to help ready the bill for enactment. To that end, we have submitted a written brief, in English and in French, that highlights the areas that we think would benefit from clarification or greater precision prior to enactment.
This does not mean that further improvements are not desirable. Indeed, we would say they are. It's only that we understand that time is of the essence, and we have to focus on what is achievable now, because delay is just not an option.
In this opening statement, we will draw your attention to a subset of what we discuss in the brief. I will briefly touch on four items before I turn it over to my colleague, Professor Castets-Renard.
First, it is important to identify who is responsible for what aspects of the development, deployment and putting on the market of AI systems. This matters for determining liability, especially of organizations and business entities. Done right, it can help enforcers gather evidence and assess facts. Done poorly, it may create structural immunity from accountability by making it impossible to find the evidence needed to prove violations of the law.
I would also add that the current conception of accountability is based on state action only, and I wonder whether we should also consider private rights of action. Those are being explored in other areas, including, I might add, in Bill , which has amendments to the Competition Act.
Second, we need to use care in crafting the obligations and duties of those involved in the AI value chain. Regulations should be drafted with a view to what indicators can be used to measure and assess compliance. Especially in the context of regulatory liability and administrative sanctions, courts will look to what regulators demand of industry players as the baseline for deciding what qualifies as due diligence and what can be expected of a reasonably prudent person in the circumstances.
While proof of regulatory compliance usually falls on the business that invokes it, it is important that investigators and prosecutors be able to scrutinize claims. This requires metrics and indicators that are independently verifiable and that are based on robust research. In the context of AI, its opacity and the difficulty for outsiders to understand the capability and risks of AI systems makes it even more important that we establish norms.
Third, reporting obligations should be mandatory and not ad hoc. At present, the act contemplates the power of the AI and data commissioner to demand information. Ad hoc requests to examine compliance are insufficient. Rather, the default should be regular reporting at regular intervals, with standard information requirements. The provision of information allows regulators to gain an understanding of what is happening at the research level and at the deployment and marketing level at a pace that is incremental, even if one can say that the development of AI is exponential.
This builds institutional knowledge and capacity by enabling regulators and enforcers to distinguish between situations that require enforcement and those that do not. That seems to be the crux of the matter. Everyone wants to know when it's right to intervene and when we should let things evolve. It also allows for organic development of new regulations as new trends and developments occur.
I would be happy to talk about some examples. We don't have to reinvent the wheel here.
Finally, the enforcement and implementation of the AI act as well as the continual development of new regulations must be supported by an independent, robust institutional structure with sufficient resources.
The proposed AI data commissioner cannot accomplish this on their own. While not a perfect analogy—and I know some people here know that I'm the competition expert—I believe that the creation of an agency not unlike the Competition Bureau would be a model to consider. It's not perfect. The bureau is a good example because it combines enforcement of all types—criminal, regulatory, administrative and civil—with education, public outreach, policy development and now digital intelligence. It has a highly specialized workforce trained in the relevant disciplines it needs to draw on to discharge its mandate. It also represents Canada’s interests in multilateral fora and collaborates actively with peer jurisdictions. It matters, I think, to have that for AI.
I am now going to turn it over for the remaining time to my colleague Professor Castets-Renard.
Thank you.
:
Thank you very much, Mr. Chair, vice-chairs and members of the Standing Committee on Industry and Technology.
I would also like to thank my colleague, Professor Jennifer Quaid, for sharing her time with me.
I'm going to restrict my address to three general comments. I'll begin by saying that I believe artificial intelligence regulation is absolutely essential today, for three primary reasons. First of all, the significance and scope of the current risks are already well documented. Some of the witnesses here have already discussed current risks, such as discrimination, and future and existential risks. It's absolutely essential today to consider the impact of artificial intelligence, in particular its impact on fundamental rights, including privacy, non-discrimination, protecting the presumption of innocence and, of course, the observance of procedural guarantees for transparency and accountability, particularly in connection with public administration.
Artificial intelligence regulation is also needed because the technologies are being deployed very quickly and the systems are being further developed and deployed in all facets of our professional and personal lives. Right now, they can be deployed without any restrictions because they are not specifically regulated. That became obvious when ChatGPT hit the marketplace.
Canada has certainly developed a Canada-wide artificial intelligence strategy over a number of years now, and the time has now come to protect these investments and to provide legal protection for companies. That does not mean allowing things to run their course, but rather providing a straightforward and understandable framework for the obligations that would apply throughout the entire accountability chain.
The second general comment I would like to make is that these regulations must be compatible with international law. Several initiatives are already under way in Canada, which is certainly not the only country to want to regulate artificial intelligence. I'm thinking in particular, internationally speaking, of the various initiatives being taken by the Organisation for Economic Co‑operation and Development, the Council of Europe and, in particular, the European Union and its artificial intelligence bill, which should be receiving political approval tomorrow as part of the inter-institutional trialogue negotiations between the Council of the European Union, the European Parliament and the European Commission. Agreement has reached its final phase, after two years of discussion. President Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence also needs to be given consideration, along with the technical standards developed by the National Institute of Standards and Technology and the International Organization for Standardization.
My final general comment is about how to regulate artificial intelligence. The bill before us is not perfect, but the fact that it is risk-based is good, even though it needs strengthening. By this I mean considering risks that are now considered unacceptable, and which are not necessarily existential risks, but risks that we can already identify today, such as the widespread use of facial recognition. Also worth considering is a better definition of the risks to high-impact systems.
We'd like to point out and praise the amendments made by the minister, Mr. Champagne, before your committee a few weeks ago. In fact, the following remarks, and our brief, are based on these amendments. It was pointed out earlier that not only individual risks have to be taken into account, but also collective risks to fundamental rights, including systemic risks.
I'd like to add that it's absolutely essential, as the minister's amendments suggest, to consider the general use of artificial intelligence separately, whether in terms of systems or foundational models. We will return to this later.
I believe that a compliance-based approach that reflects the recently introduced amendments should be adopted, and it is fully compatible with the approach adopted by the European Union.
When all is said and done, the approach should be as comprehensive as possible, and I believe that the field of application of Bill C-27 is too narrow at the moment and essentially focused on the private sector. It should be extended to the public sector and there should be discussions and collaboration with the provinces in their fields of expertise, along with a form of co‑operative federalism.
Thank you for your attention. We'll be happy to discuss these matters with you.
I'm pleased to be here to testify as an individual.
I'm a strategic advisor in artificial intelligence. I've spent my entire career using AI technology, which became available in the early 2000s. I worked in operational research, artificial intelligence, and applied mathematics. I developed tools and software that have been used around the world. In 2016, I founded Element AI and was the company's president until it was sold to ServiceNow in 2021.
I have frequently collaborated internationally. For two years, I was the co‑chair of the working group on innovation and marketing for the Global Partnership on Artificial Intelligence. I also represented Canada on the European Commission's high-level expert group on artificial intelligence. Canada was the only country to have participated that was not in the European Union. I co‑chaired the drafting of the main deliverable on regulation and investment for trustworthy artificial intelligence.
I was involved in many events held by the Organization for Economic Co‑operation and Development and the Institute of Electrical and Electronics Engineers, in addition to many other international contributions. I was also a member of federal sectoral economic strategy tables for digital industries.
Despite Canada's track record in artificial intelligence research, and its undeniable contribution to basic research, it has gradually been losing its leadership role. It's important to be aware of the fact that we are no longer in the forefront. Our researchers now have limited resources. Conducting research and understanding what is happening in this field today is extremely expensive, and many innovations will emerge in the private sector. It's a fact. Much of the work being published by researchers has been done in collaboration with foreign firms, because that's how they can get access to the resources needed to train models and conduct tests, so that they can continue to publish and come up with new ideas.
Canada has always been somewhat less competitive than the United States, and although things have not got worse, they haven't improved. For a technology as essential as artificial intelligence, which I like to compare literally to energy, we're talking about intelligence, know-how and capabilities. It's a technology that is already being deployed in every industry and every sphere of life. Absolutely no corner of society is unaffected by it.
What I would like to underscore is the importance of not treating artificial intelligence homogeneously, just as the various regulations and statutes for oil, natural gas and electricity are not so treated. I could even start breaking it down into all the subsidiary aspects of production for each of these resources. It's very difficult to treat artificial intelligence in the same way for each of its applications. Everything is moving forward very quickly and it's highly complex, and when you put all the facts together, we feel overwhelmed. That, unfortunately, is what we hear all too often in the media. We've been here for quite a while and we've already heard words like "fear" and "advancement". There has also been talk of uncertainty about the future.
So, to return to the subject at hand, yes, it's absolutely urgent to take action. I am in no way hinting that measures ought not to be taken, but they ought to be appropriate for the situation now facing us.
We are facing a rapidly evolving complex situation that affects every sphere of society. It's important to avoid adopting a single, straightforward and overly forceful response. What would happen if we took that kind of approach? We would perhaps protect ourselves, but it would certainly prevent us from taking advantage of opportunities and promoting the kind of economic development and productivity growth that would enrich the whole country. That's simply a fact. We can't deal with every single potential situation, because it would be too complex.
If we try to do everything and cover all aspects, our regulations will be too vague, ineffective and misunderstood. The economic outcome of vague regulation—you know this better than I do—will be that investments will not flow in. If consequences are unclear or definitions left until later, companies will simply invest elsewhere. It's a highly mobile digital field. Many Canadian workers compile and train models in the United States, beyond the reach of our own rules for our companies and our universities. It's important to be aware of that.
I believe that these are the key elements. They are central to our deliberations about how to write the rules, and in particular the way that they will be fine-tuned. Not only that, but they will guide the effort required to do the work properly to come up with a clear and accurate regulatory framework that promotes investment. With a framework like that, we'll know exactly what we are going to get if we make such and such an investment, and we'll understand exactly what the costs will be to provide transparency, to be able to publish data and to check that they have been anonymized.
That would enable organizations to invest as much as they and we want. If we are clear, organizations will be able to do the computations and decide whether or not to invest in Canada and deploy their services here. It will then be up to us to determine whether the bar has been set too high and whether the criteria are overly restrictive.
Vague regulations would guarantee that nothing will happen. Companies will simply go elsewhere because it's too easy to do so. Various other elements are on my list, and I will summarize them. Please excuse me for not having done so prior to my presentation. I will send the committee all the details and recommendations with respect to the adjustments that should be made.
In this regulatory framework, I believe that transparency will be very important if there is to be a climate of trust. It's important to ensure that users of the technology are aware that they are interacting with it. Some questions and subjects arise in all industries. It's important to be able to know what we are getting.
I'm talking about the underlying principles: stating what services we can access, their parameters and their specifications. If a service changes or its model is updated, that would enable us to assess the repercussions of using it. There are also all the other principles that would ensure people are not being manipulated and that require compliance with ethical and other issues. These are fundamental principles that must be part of the regulatory framework.
One of my most serious concerns is the lack of specificity and the possibility that the law would be too broad in scope. I learned a lesson from my participation in what led to the European Union's artificial intelligence law. Europe tried to come up with exhaustive legislative measures that attempted to include almost everything. However, many of the recommendations made by the committee at the time focused on the need to work with industry, the need for accuracy and avoiding a piece of legislation that tried to cover everything.
Of course, something new always comes up. It could be generative artificial intelligence or the next generation of artificial intelligence as applied to cybersecurity, health and all aspects of the economy, services and our lives. There's always something that has to be amended or altered.
My view is that caution is needed in this respect, as well as an extremely surgical approach that would lead to the development of regulations specific to each and every industry sector, with their assistance, the automobile sector for instance.
Thanks to all the witnesses.
Earlier, Mr. Harris, I had the impression I was in a movie in which a parliamentary committee was conducting a study on an artificial intelligence bill. You were telling the people on this committee that the third world war was about to arrive and that it would be technological, by which I mean that no weapons of any kind would be used. Listening to you today, I felt like swearing, but unfortunately, I couldn't.
My greatest frustration, and I think I'm not alone around this table to feel that way, is that the bill before us includes a series of elements, underpinned by three principles, which are privacy, the courts and artificial intelligence. However, according to the testimony we heard today, artificial intelligence should have been dealt with in a separate bill.
We are being told that there have already been major advances in artificial intelligence since the start of our study, including the signing of a memorandum of understanding in England. Some countries decided to introduce a voluntary code while awaiting the adoption of various bills.
Ms. Castets-Renard, you spoke about a trialogue that would address certain issues. You are no doubt talking about Europe. Mr. Gagné, you also spoke earlier about measures that were proposed in reports you submitted to the European Union. Are you talking about the same thing? I'm not sure I've understood properly.
:
I'm coming at this with my corporate criminal liability hat on, and this statute is primarily criminal law. That was one of the astonishing things when I first read this bill. Relying on criminal enforcement comes with some costs in terms of how you prepare evidence and put things together.
What I'm concerned about is when we don't have transparency about who's involved with what decisions in relation to this technology. I can't speak to how it's actually done. I think the experts here can say something about that. What we need to insist on is transparency about who does what, because you cannot convict a corporation or an organization in this country without knowing who did what, what their status is and what their decision-making power is in the organization. I will direct you to section 2 of the Criminal Code, if you want to read it.
Even in the case of regulatory liability, where an employee can engage the liability of the organization, you don't need to have a status-based association that they're a senior officer. You still need to know who did what, otherwise you have no evidence. I think it's really important to make sure we create a regime that forces the information out so that then we can assess.
That doesn't mean we're going to convict all the time or that we're going to prosecute all the time, but if everything is hidden, then this is just window decoration. You will never, ever get a prosecution, or even administrative liability, in my view.
:
Thank you for that. It's very helpful testimony.
Mr. Harris, I want to ask you a question similar to Mr. Généreux's.
I similarly had the experience of listening to you and feeling like I was in a horror movie, a sci-fi novel or some intersection of those when I heard you talk. I know that you're bringing up these risks and potential harms as a very real thing, so I don't want to take that lightly, but it is quite scary to hear.
I want to ask you a bit of an ethical or philosophical question. You had talked about mitigating the risks. You had talked about a blanket ban on, or explicitly forbidding, certain types of AI or advanced AI systems. One question that occurs to me when we're dealing with, essentially, advanced AI, is whether it is surpassing human intelligence. I think that's what I'm hearing. You talked about the superhuman and the power-seeking behaviours as being a real risk.
I'm interested in how we develop an ethical and/or legal framework. I think that is a core challenge in this work, which I'm grappling with. A lot of our ethical and our legal concepts rely on things like reasonably foreseeable futures. They rely on concepts of duty, etc., most of which rely on humans' ability to look at what the outcomes might be, given our past experience.
You talked about how some of our national security assumptions had been invalidated. Are some of our ethical assumptions and our legal assumptions being invalidated by the advancement of AI? How do human beings create a system or a set of guidelines for something that is actually beyond our intelligence?
It's a tough question.
:
I think those are excellent questions.
I think, fortunately, we're not without tools for dealing with them. To piggyback off the testimony that Jennifer just gave, I think it's actually quite right to ask, “How can we massage this into a form that fits within our legal frameworks?” We're not going to overhaul the Constitution tomorrow. It's not going to happen.
One thing we can do is to recognize the fact that we can't predict the capabilities of systems at the next level of scale, so safety by design would seem to imply “until we can”. We're not talking about a blanket ban. We're saying, “until we can”, let's incentivize the private sector to make fundamental advances in the science of AI and to give us a scientific theory for predicting the emergence of those dangerous capabilities.
I'd also say we can draw inspiration from the White House executive order that came out recently. One of the key things they do—again, to piggyback off this idea, like sunlight is the best disinfectant, to bring this all out to the fore so that we can evaluate what's going on—is have a reporting requirement in the executive order. If you train an AI system that uses above a certain amount of computational power in the training process, you need to report the results of various audits you've performed, various evaluations. Those evaluations have to do with bioweapon design capability, chemical synthesis ability and self-replication ability. That's all baked into the executive order.
I'd like to see something like that—a tiered process that essentially mirrors what we see in the EO, where we base it on computational processing power thresholds: above this line, you have to do this, and above that line, you have to do that. It's that sort of thing.
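To make the tiered idea concrete, here is a minimal sketch in Python of obligations scaling with a model's training compute. The FLOP thresholds and obligation names are purely illustrative assumptions; they are not the figures used in the U.S. executive order or proposed for AIDA.

```python
# Hypothetical compute tiers (total training FLOPs) mapped to escalating obligations.
# Thresholds and duty lists are illustrative only.
TIERS = [
    (1e26, ["register development", "report safety evaluations",
            "third-party audit", "dangerous-capability testing"]),
    (1e24, ["register development", "report safety evaluations"]),
    (0.0,  []),
]

def obligations_for(training_flops: float) -> list[str]:
    """Return the obligations triggered by a model's total training compute."""
    for threshold, duties in TIERS:
        if training_flops >= threshold:
            return duties
    return []

print(obligations_for(3e23))   # below both thresholds -> no special duties
print(obligations_for(5e24))   # mid tier -> registration and reporting
print(obligations_for(2e26))   # top tier -> full audit and capability testing
```

The appeal of a scheme like this is that the trigger is measurable before deployment: a developer knows how much compute a training run will use, so the obligations are predictable in advance.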
Thank you to all the witnesses.
Mr. Harris, I remember when you came to tell us, as legislators, about the risks of things going wrong with artificial intelligence. If I'm not mistaken, in your address you gave a potential example. You said that if someone wanted to get to Toronto more quickly, they could use artificial intelligence to simulate a major police intervention following an accident or some kind of attack. That would clear the road for them to get there more quickly.
In a situation like the truckers convoy near the Hill last year, it would be all too easy to use artificial intelligence to show an image of the Parliament Buildings on fire, as part of a serious disinformation ploy.
Was it actually you who gave that talk?
:
I agree on the fact that it's urgent to establish a base.
You know how things work with legislation and other such matters better than I do. I don't know how long it would take to start over from scratch, but I think it would be a lengthy process. I feel that an effort should be made to come up with a version that provides a solid foundation that applies to most instances and, most importantly, is specific. That, in my view, is the way to go.
The danger arises when you start adding things. I read the amendments. I also felt bad when Mr. Généreux said that they had not been published, because I had read them on the train on my way here. I asked myself why I had been given access to the text of the amendments.
The list of high-impact artificial intelligence system categories was presented. On that, I'd like to say that there are so many applications that I was wondering why there is a separate category. It's important to be specific and more transparent, to comply with the regulations, and to factor in all the costs of implementing the infrastructure. If any thought is being given to the health, media or social media sectors, more precision is needed. If the field is too broad, it leaves room for interpretation.
If startup companies conducting research are attempting to develop products for the health field, they will need capital to put something very elaborate in place, and the costs will be high. Those are the kinds of factors that have to be kept in mind. It's important to be specific in what you're looking for.
I was gratified when I heard your testimony, because I've been reading about artificial intelligence issues for several months. My first observation is that while Canada was once a leader in AI, that is no longer the case, unfortunately.
We need to adopt the best existing approach rather than attempt to invent something ourselves. Personally, as a Quebecker, I am always concerned about preserving our cultural distinctiveness and finding a way to protect the future of our young companies. That has an economic impact.
One of the criticisms of the bill is its lack of clarity in terms of criminal liability. The bill covers industry, and if there is to be legislation, it's not going to be for those who are behaving, but rather those who are not. Are the bad guys afraid of what's in the bill? Are these regulations really binding? How can we regulate the offenders in the industry?
:
There are many ways this could materialize into something that is not beneficial for some groups. For example, predictive policing is one way we see artificial intelligence being used to predict criminal activity, but the training data that's used is historical. If you're using historical or certain types of data to train the AI system, you're going to get a compounded effect whereby neighbourhoods that are overpoliced become even more policed.
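A minimal sketch of that compounding effect: if patrols are allocated in proportion to historically recorded incidents, and patrol presence inflates the next period's recorded incidents, the gap between neighbourhoods widens each cycle. All numbers and parameter values below are invented purely to show the feedback loop, not drawn from any real system.

```python
# Hypothetical feedback loop: recorded incidents drive patrol allocation,
# and patrol presence inflates the next period's recorded incidents.
recorded = {"neighbourhood_a": 120, "neighbourhood_b": 100}  # same assumed underlying activity;
                                                             # A simply starts out over-policed
DETECTION_BOOST = 0.5  # extra recording per unit of patrol share (illustrative)

for year in range(5):
    total = sum(recorded.values())
    patrol_share = {n: count / total for n, count in recorded.items()}
    # More patrols -> more of the same underlying activity gets recorded.
    recorded = {n: round(count * (1 + DETECTION_BOOST * patrol_share[n]), 1)
                for n, count in recorded.items()}
    print(year, recorded)

# The initially over-policed neighbourhood pulls further ahead every cycle,
# even though the underlying activity was assumed identical.
```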
Another way it comes about is in hiring. Hiring agencies have used AI to search for candidates for executive positions. Unfortunately, a lot of that data is also historical, which means there's a bias against women, because traditionally, women haven't held those positions.
These are very real consequences that occur at scale, and I think the scale and the speed at which this could happen are very concerning. I believe the Edmonton police recently used a system that uses DNA to predict the facial features of a suspect in a sexual assault, and what it came up with was a 14-year-old Black boy. That's the other thing. This adultification of Black boys is another way AI manipulates what we see and whom we consider as victims and as perpetrators.
I think the problem has a lot to do with the training data, but also with the systems.... I'm not sure if the right questions have been asked or the right assumptions have been made to create the model itself.
I'd like to add something about Bill C-27. A risk-based approach would avoid treating all artificial intelligence systems in the same way, or placing the same obligations on them. Other options include the high-impact concept, and the amendments introduced by the minister, Mr. Champagne, explain what this concept means in seven different sectors of activity.
I therefore don't think it's fair to say that it would be applied everywhere, on everyone, and haphazardly. It's possible to discuss how it's going to be applied in seven different activity sectors. Some, no doubt, would say that doesn't go far enough, but it is certainly not a law that will lack specifics, because the amendments specify the details.
To return to what was said earlier, it also means that there can be a comprehensive approach with general principles, and a separate approach for each sector or field. That's what the European Union has done with its amendments. That's why statutes being adopted in other countries need to be considered.
As for what was said about the United Kingdom earlier, Canada has signed a policy declaration which has no legal or binding value. It's a very general text that adds nothing to what we have already said about the ethics of artificial intelligence. It definitely does not prevent Canada from following its own path, as the United States did when it issued its executive order right before the summit in England. The Americans were not willing to wait for England to take the lead.
Those are the details I wanted to add.
:
I just want to circle back to this notion of whether you regulate the model or the end applications. That is pretty central here. We're going to have to walk and chew gum at the same time. There are risks that, irreducibly, come from the model. Look at OpenAI's ChatGPT, for example. They build this one system, one model. I don't know if I can.... In fact, I know that I can't. I know that no one, technically, can count the full range of end-use applications that a tool like that would have. You'll use it in health care today, and you'll use it in space exploration tomorrow and software engineering the day after.
The idea that we're going to be able to take a general-purpose model like this and regulate it as if somehow we can play this losing game of whac-a-mole.... This is just not going to track reality, unfortunately. This is true for a certain subset of risks—the more extreme ones. We can look at the risks, for example, from general-purpose models that can orient themselves in the world and have high context-awareness. You have to regulate the model at that point because that is the source of the risk, irreducibly.
For other things, yes, we need to have application-level regulation and legislation. Again, you see that in the executive order—that we're doing both things. However, I just want to surface that although there might seem to be a tension between these two approaches, they are actually not at all incompatible. In fact, in some ways, they are deeply complementary.
I just wanted to put that thought forward.
:
You've given me a challenge. I'll do my best.
The idea is the following.
I'm not saying that we should imitate the Competition Bureau exactly; there are some things that we could do differently. The kind of legislation that is imagined here has some similarities to the kind of legislation that is in the Competition Act. That is to say, it's responsible for a whole array of responses: true criminal, regulatory, administrative and civil. There is a specialized tribunal with that, but we don't need to talk about that right now.
I think the point is that it has developed an expertise and it has a large permanent staff divided into directorates. It's developed a digital intelligence agency.
Those things support what I think other witnesses have been skeptical about, which is the capacity to actually deliver on this.
The U.S. has basically not made a secret of it. They've just said, “Let's use our strong antitrust institutions while we wait to create something else”. In some ways, we would be consistent with what's being done there.
The other thing I really want to insist on—sorry, it's an extra 10 seconds—is that having an agency headed by an independent commissioner will allow Canada to participate in the international arena. That is how you get around these enforceability problems: You have to work with your friends.
In Canada, we might only target local things, but we need to work with allies and we need a player at the table for that.
:
I think it's crucial. There's no question that education is fundamental, especially when we're talking about children. I think it's going to be challenging, and I don't want to understate it. Once again, I defer to the technical experts who know what is in the technology, but there is no question that education helps.
I'm going to beat my horse this entire meeting just using the example of the Competition Bureau, which does a lot of education proactively—a lot to do with misleading advertising, and so do all the consumer protection agencies of the provinces. There's quite a good collaboration there, and that's because fraud, and particularly digital manipulation, is going through the roof.
People need to be informed. We're playing catch-up. You know that has been true of the criminal law and the criminal justice system forever. That's not going to change, but we still have to try and, as best we can, keep up with what's going on.
I think the worst thing we can do is say that because it's too hard, we do nothing. That's why I'm here, and my colleague agrees 100% with me. We have to do something. It's going to be imperfect. We're going to play catch-up, but it's important.
You know what? There are a lot of people who can contribute their expertise to developing the tools. I really do believe this is not an impossible task—hard but not impossible.
:
Yes, it's a pleasure to answer.
This is close to one of the areas I work in a lot. One subset of the work I do is training for U.S. officials, especially more senior ones, in the defence and national security universe. One of the challenges with training is.... It's been commented on many times. The space is moving so fast that the training has to somehow be relevant and fresh.
There are a few core things the public should understand about the drivers of this technology—about capabilities. This idea, for example, of scaling up AI systems so we can have a rough sense.... If I tell you roughly how many computations went into building a system, you can have a rough sense. “Okay, that's a ChatGPT-level system.” Immediately, you have a comparable that you can establish. We can do basic things like that.
There are other things. I think this is a solvable problem. We've had a lot of success finding scalable ways of doing this.
Anyway, there are a lot of partners to collaborate with, in terms of the point that was just made here. I'm optimistic on that front.
:
One of the frameworks I like is, again, computing power as a kind of barometer we use to determine the level of the general capability of systems. There are asterisks galore on that. We heard that, yes, you absolutely can do—in technical terms—inference time augmentations. You can do all kinds of stuff, but the fundamental capabilities of a base model are limited by the amount of computing power you put into it.
In that sense, look at what's being done in the executive order. They're pulling on that thread. They're starting to build institutional capacity for using that as a yardstick. I think that's the best yardstick we have. It's imperfect and I wish it were not, but it is the best yardstick we have at the moment.
There's a lot of stuff we can do around evaluations and audits, depending on what level you are at on that computing-power hierarchy. The more computing power you spend to build a model, the more it costs. GPT-4 is costing, by our estimates, anywhere from $40 million to $150 million to train, just in computing power alone. I'm sorry, but if you can afford to train GPT-4, you can afford a little auditing.
That's the nice thing about this yardstick. It maps onto resourcing, as well, and we can use that to calibrate the trade-off between risk and reward.
:
I think this is the central challenge. You need to figure out....
We're having this same conversation in competition, namely, how do we adjust to the digital economy? That's a word I don't like. It's the new economy, which has a lot of digital artifacts. There's a lot of experimentation happening internationally. People are trying different things. Part of that is because we have different legal structures, institutional structures and cultures.
I think the balance that has to be struck is this: There are probably a small number of things—and you are the elected representatives who have to make that assessment for the country—that really matter to us, as Canadians, and that might be unique to us. Monsieur Lemire evoked our linguistic identity and the cultural specificity of Quebec, but there are other things that might be very important to us. Think of our indigenous communities. If those are very important, we have to bake them into our system. Then, internationally, what we try to do is make sure we're aligned on most of the big things. We can have a couple of things that are very important to us, and maybe we have special rules about these, but we need to have general alignment, because otherwise it doesn't work.
The challenge is making sure we take those general-agreement principles and translate them into operational legal rules. I guess I'm a bit of a nuts-and-bolts lawyer for that kind of thing. We have to be cognizant of the structural and legal limitations of our system.
We exist in a federation. I want to make one point about regulating everything. There is a division of powers, and a lot of regulation has to come from the provinces. Let's be very clear: This bill is centred on interprovincial and international trade, and on criminal law power that doesn't cover everything. Co-operative federalism is going to be essential.
International co-operation is important, but we also have to agree in the federation.
:
I guess you could ask yourself whether the Europeans have done it. Maybe Céline wants to say something about how well they have navigated the necessity of integrating these considerations.
I think the European system already lends itself to this necessity of dialogue. This trialogue is the three entities that represent the three basic political powers in the European Union—and I'm mangling this—and they have to get together and agree.
I think that we might have to imagine something like that. I know no one likes that idea, but I think a conversation has to occur among the provinces and the federal government. It also probably has to involve local communities. It's all hands on deck.
I take the point that we can't regulate everything with one general framework, but we do need a general framework to set things up.
There are some out-of-bounds things. Let's say, "Kids—out of bounds." Just period, right? We can do blanket prohibitions. We do it, right? It can be done, but you have to target those things, and then for other things we need to make sure that the patchwork fits together.
I'll share one concern I have and then I'll stop. My concern is this: if we are not co-operative in the federation, what is going to happen? There will be a wave of litigation founded on the division of powers, like we had 25 years ago in environmental law, where large economic interests who have the money to do it will say, “this isn't federal jurisdiction”, and then provinces will say, “this isn't provincial jurisdiction”, and it will take years to sort out.
If there's agreement, you can make sure there are no holes and that Canadians are protected.
Ms. Castets-Renard, I heard you yesterday on Radio-Canada as I was headed to Ottawa, and the topic was really interesting. You were talking about the things that could go wrong with artificial intelligence as a result of its use by law enforcement authorities, particularly in connection with facial recognition. What I understood from the case that occurred in Ireland was that the use of artificial intelligence could, for instance, place the presumption of innocence at risk.
Are current Canadian laws sufficiently advanced to protect against potential social problems? Bill may not be the solution. How can we plan for or protect ourselves from these problems, which are probably imminent?
Not only that, but the use of artificial intelligence in political face-saving endeavours might well lead to other restrictions. That's what happened, I understand. Is that right?
:
In Canada and the provinces, the use of facial recognition, generally speaking, and in particular by law enforcement agencies, is not circumscribed. Of course, without a legal framework, it becomes a matter of trial and error. As was demonstrated in the Clearview AI case, we know from a reliable source that facial recognition was used by several law enforcement agencies in Canada, including the Royal Canadian Mounted Police.
When there is no legal framework, things become problematic. Practices develop without any restrictions. That's why people might, on the one hand, fear the legal framework because its existence means the technology has been accepted and recognized, while on the other hand, it would be naïve to imagine that the technology will not be used; it can't be stopped, and it may well have many advantages for use in police investigations.
It's always a matter of striking the right balance between the benefits of AI and avoiding the risks. More specifically, a law on the use of facial recognition should ideally provide for the principles of necessity and proportionality. For example, limits could be placed on when and where the technology can be used, for specific purposes or certain types of major investigations. The use of the technology would have to be permitted by a judicial or administrative authority. Legal frameworks are possible. There are examples elsewhere and in other fields. It is certainly among the things that need to be dealt with.
I would add that Bill is not directly related to this subject, because what we are dealing with here is regulating international and interprovincial trade. It has nothing to do with the use of AI in the public sector. We can, in due course, regulate companies that sell these facial recognition AI products and systems to the police, but not their use by the police. It's also important to ask about the scope of the regulation that is to be adopted for AI, which will no doubt extend beyond Bill C‑27.
:
One of the problems is that it does tend to misidentify not only racialized people but also non-binary people. There are cases such as self-driving cars having a problem recognizing women. When these technologies start to affect larger proportions, or a significant proportion, of the population without some sort of accountability measure, we're looking at a very bad fragmentation of society on an economic level, a social level, and in ways that would fracture our politics. I think that can be minimized, to be honest.
One of the things I would like to see with AIDA is that it be its own bill. I personally think it should be spun off so that we can look at these things more clearly, because, as it stands right now, there is nothing to.... For example, if you go for a loan and AI predicts that your loan should be rejected because of a variety of factors, or maybe factors that aren't attributed to you because of race, gender, class, geographical location, religion, language, all the things.... If we're going to build these systems, we have to protect people from the negative impacts of those systems, especially when they happen at scale and especially when they happen with government agencies.
I think one of the problems with this bill is that a lot of government agencies, especially in national security and law enforcement, will be exempt. Those are some of the areas—you think of immigration too—where you will see large uses of AI.
I would say about education that a lot of the education over time should have come from journalists and journalism. We should have had a more robust journalistic tech field that could inform all of us and look into these issues with AI and tech writ large.
Mr. Chair, I'd like to get back to the bill before us.
Without prejudging the outcome of 's proposed amendments, we will assume for the time being that those amendments would be passed and incorporated into AIDA. I'd like to know from the three of you—Mr. Harris, Ms. Quaid and Mr. Gagné—whether you support AIDA's going ahead as it is right now. My understanding is that in the future it is almost certainly going to be amended, re-evaluated and recrafted, and it may come back in a different form.
We have to make a decision on the bill before us right now. You're here giving us advice.
Is it your advice for us to go ahead with this, or are there substantive amendments that you would propose?
:
I can speak for myself, to begin with.
I think the bill right now is significantly better than nothing. One of the key factors for me in evaluating this is just the timeline. Do we want to be confronted in the year 2024, 2025 or 2026 with nothing on the books? My strong impulse is to say no, we must have something.
Given the timeline, as has been explained to me by folks who are working on this bill, it seems unlikely otherwise that we would have something on the books by then. That's my understanding—it may be wrong.
If that is the case, then the bill in its current form is better than nothing. That's literally how I'm approaching this. There are things that are actually very good. I think the general purpose AI system stuff and the cessation-of-operations components to the bill are really good.
Overall for me, given the current landscape and the timelines, I would be in favour of the bill's going ahead. However, I see significant issues with it, which I highlighted, including the computational power thresholds and all of that stuff, in my testimony.
:
To share a perspective on it, why is it that OpenAI, Google DeepMind and Anthropic are all based in Silicon Valley and there is no Canadian equivalent?
I'm a start-up founder veteran. I built all my start-ups in Silicon Valley; I didn't build them in Canada. I was born and raised here, and I've lived here basically the whole time. I moved to Mountain View to build my start-ups early on, and then I moved back, but I still based them there.
There are some regulatory factors. It's nice to have a Delaware C corporation, but that's not the fundamental reason. The fundamental reason these companies are based in the Silicon Valley area is just that the best investors in the world are based there. That's it. That is literally the single most important factor by far.
When I go to Y Combinator, I hear the best advice on start-up building on planet Earth. There is no equivalent to Y Combinator in Canada. This is the world's best start-up accelerator, full stop.
The best investors are in Silicon Valley, the Vinod Khoslas and the Sam Altmans. That is why this is happening there.
There are, at the margins, regulatory things going on here, but as a start-up founder who has done this multiple times and has been faced with this exact decision many times, whether it's with AI or other things, there's a kind of talent delta there in terms of the best VCs, the best angel investors. That's the ecosystem.
Tobi Lutke from Shopify started his company here in Ottawa, but there's a reason that their cap table is filled with Silicon Valley money. It's because that's where the best investors are.
At the end of the day, it's the same story over and over. I think we're just seeing it replicated in AI. I don't think there's anything too different there.
:
I agree. To go back to the fact that we do have examples elsewhere, we have high-risk industries that exist already. The nuclear industry is one of them, but there's also finance.
I disagree slightly with Mr. Gagné, because to the extent that the Americans are doing something, they have the force to insist. That is the importance of being a player at the international table with an advocate in the form of a commissioner who is truly independent and able to make decisions. Then you can make sure you're on side with everything. I do believe that at the end of the day international co-operation is essential, but I also agree that we should look for dangers ahead of time, before systems are launched on the market. We already insist on that for other things, in product safety and elsewhere, so to me it's not new. We have to adapt it to AI.
The other thing I would add, because it sometimes comes up, is the claim that we can't force companies to share this sensitive information because there's a competitive dynamic. But government departments handle confidential information all the time. That's what they do. The Commissioner of Competition does this all the time. Yes, of course there is a risk that it gets leaked, but I don't think that risk is any higher than other risks. I think it is sometimes overstated. Government regulators can handle sensitive information and can use it in the public interest. That's why we trust them to do it, I would say.
:
Thank you very much, Mr. Lemire.
Thanks to all the witnesses for enlightening us with their perspectives on this important bill this afternoon.
I spoke to Mr. Harris before the meeting. For those who submitted briefs before the amendments were made public, please don't hesitate to send the committee a revised document reflecting any necessary adjustments made in response to those amendments.
Once again, thanks to all the witnesses for appearing in person and by videoconference.
[English]
Thank you for joining us.
Thank you to the interpreters, support staff and analysts.
We'll be back shortly after the vote.
The meeting is suspended.
:
Ladies and gentlemen, colleagues, we will continue this session and resume meeting no. 101 of the House of Commons Standing Committee on Industry and Technology.
I would like to take this opportunity to apologize for the accumulated delay resulting from the votes held following oral question period and those just held, but here we are.
Pursuant to the motion adopted on November 7, 2023, the committee is resuming its study on the recent investigation and reports on Sustainable Development Technology Canada.
I would like to welcome today's witnesses.
We have George E. Lafond, Strategic Development Advisor. Stephen Kukucha, President and Chief Executive Officer of CERO Technologies, is joining us by videoconference from Vancouver, and we also have, in person, Guy Ouimet, Engineer at Sustainable Development Technology Canada.
If you wish, each witness will have five minutes to present their remarks.
Mr. Lafond, you have the floor for five minutes.
:
Thank you very much, Mr. Chair and honourable committee members, for having me here today.
I want to begin by acknowledging that we are on the unceded and unsurrendered territory of the Anishinabe Algonquin people.
My name is George Lafond. I'm a citizen of the Saskatchewan Muskeg Lake Cree Nation in Treaty No. 6 territory. I currently serve as an adviser to businesses, educational institutions and social and cultural organizations, and I am known for successfully leading strategic initiatives requiring first nation engagement.
Previously I served two terms as the treaty commissioner of Saskatchewan, the first treaty Indian to serve in that role. I was appointed by the Harper government in 2012 and then reappointed in 2014. I served as a tribal vice-chief and then later as a tribal chief of the Saskatoon Tribal Council, a first among equals, with seven first nation chiefs and their diverse first nation communities.
My entire public service has been devoted to supporting reconciliation, wellness, economic development and innovation for my communities. Improving access and the quality of education for indigenous youth is what underpins all of my efforts, and this work is informed by my educational background and experiences as a public school teacher some 42 years ago.
In the education sector, I served as an adviser to three university presidents and also served as a university board governor. I advised them on how to ensure that indigenous students could be set up for success throughout not only their time in post-secondary education but also their future careers. It is in this role that I worked with the Saskatchewan Indian Institute of Technologies, commonly referred to now as SIIT.
It was these public service roles that led me, in 2012, to be appointed by the Harper government as an expert to examine first nations education on reserve and to bring advice forward to address a new relationship between the federal government and first nation communities with respect to education. It was there that I witnessed the fact that first nations people were doing well in primary industries but were almost non-existent in the clean tech industry.
Since I was appointed to the board of SDTC in 2015, there has been a noticeable change in how this organization has modernized to better meet the needs of the markets and the Canadian clean-tech industry. It was paramount to ensure that indigenous communities were also factored into this equation, to determine how indigenous peoples could be set up with the proper skills and training needed to participate in this critical sector and also how this sector could respond to the unique needs of our communities.
Strides have been made over the last decade, but there's no denying that the clean sector and innovation agenda present an even steeper hill to climb given the lack of access to training and education for indigenous youth. Indigenous people are at risk of being excluded from innovation in Canada. We're under-represented in STEM, with Statistics Canada reporting that the total employment in this industry is less than 2.5% for indigenous persons with post-secondary training.
During my time on the SDTC board, I had conversations about this very issue with SDTC management and I advised organizations and post-secondary institutions of their obligations to ensure that indigenous youth did not miss out on the future of innovation.
In 2020, SDTC approved funding for a maker's lodge for SIIT, Canada's first innovation accelerator dedicated to educating and empowering grassroots indigenous entrepreneurs. This pilot project was funded through the SDTC ecosystem funding stream, which encourages innovation and collaboration among diverse persons in the private sector, academia and not-for-profit organizations. This is part of SDTC's mandate and part of their contribution agreements.
I want to be clear. Although I spoke to SDTC about these important issues and about finding solutions to ensure indigenous participation and I introduced them to SIIT leadership, I was in no way part of the decision-making process with respect to funding the SIIT project. When SIIT entered conversations with SDTC, I proactively disclosed my conflict and recused myself from any and all discussions moving forward.
Following the RCGT report, I was made aware that SIIT mistakenly included my services as a part of their expenses under the guidelines of the SDTC project. This was an error. I immediately contacted SIIT, which promptly resubmitted their expense claims. I never received a payment from SIIT related to this project. My contract with SIIT is as adviser to the president and is unrelated to this project.
As I've said, I had spent years working in indigenous education and on improving outcomes for indigenous communities. Although I'm an adviser to the SIIT, this program has provided no personal benefit to me. However, it does have potential benefit for thousands of indigenous youth, giving them an opportunity to combine traditional knowledge with a new idea and to contribute to the innovation landscape of Canada.
As the committee does its study, I do not want this important work to be lost. It is important that, through the creation of innovation programs like this innovation accelerator, we help mentor indigenous leaders and entrepreneurs, and ensure that not just middle-class communities but all Canadians benefit from a meaningful contribution to a modern Canadian economy.
Thank you very much, Mr. Chair.
:
Thank you, Mr. Chair and honourable members.
My name is Stephen Kukucha, and I have served on the SDTC board since February 2021. I live in Vancouver. I'm a retired lawyer, and I'm certified by the Institute of Corporate Directors.
SDTC's work is critical to the development and success of Canada's clean-tech ecosystem. I believe that my unique perspective and positions within the clean-tech sector bring value to my role on the board. My more than 20 years of experience in clean tech give me an understanding of the challenges that companies face in acquiring capital. That struggle has been exacerbated by the market downturn in late 2021, the dramatic increase in U.S. government investment in this space, and now the pause in SDTC's work.
Whatever happens because of these hearings or other investigations, it's critical to recognize the important and unique role that SDTC plays and the importance of the organization's mandate for Canada. I ask this committee to consider its importance to all the fledgling companies it supports.
As well as my work in clean tech, I should also disclose that I have been involved in politics in the past, both federally and in British Columbia, and I'm very proud of that involvement. I believe that engagement in our country's democratic process, no matter what party one supports, is important to civil society. For example, I have a profound respect for all of your decisions to run for office and to seek careers in public service. It's one of the more important things a Canadian can do.
I would also like to disclose that I was the recipient of the whistle-blower call to the board. I'd like to put that on the record. Unknown to me, our call was surreptitiously recorded. However, I'm comfortable tabling a transcript to show the level of professionalism that this individual was afforded in good faith. On multiple occasions, the whistle-blower was asked to share their dossier and the facts that they were basing their allegations on so that the board could respond and address them in a professional manner. Unfortunately, they did not.
After my one-hour conversation with this individual, I quickly realized that the board needed to be informed, that legal counsel needed to be engaged and that a proper process needed to be followed. An immediate investigation was commenced without informing the individuals who were the subjects of the allegations. I acted in good faith and followed proper governance, and in my opinion, the board undertook its fiduciary duty.
With regard to my investments in clean-tech companies, any and all conflicts I had were disclosed prior to my appointment. In fact, I was asked to resign from the board of a company that had previously received SDTC funds, and I promptly did so. Any conflicts after joining, either real or perceived, were also disclosed. Finally, I have not had access to any files related to those conflicts, and I have recused myself from any decision-making.
With regard to payments during COVID to SDTC companies, I'd like to share my perspective as well. At my first board meeting, two weeks after being appointed, a recommendation came forward to give management discretion, within an allotted pool of capital, to make assistance payments if required. No individual companies were listed in the board documents. I'm willing to table a copy of that document to show you what the board received if required. There was also legal advice given to directors at that meeting: that if they had previously declared conflicts, they did not have to redeclare. I had declared mine two weeks prior.
Finally, I have not received a dollar from any company that has received SDTC funds, and none of the companies I'm invested in have exited or provided any return to me. I've not been compensated in any way by these companies or other organizations I'm affiliated with. I've received no payment, no dividend and no remuneration at all. In fact, my partners and I have contributed significant personal time and financial resources to keep these companies and other non-clean-tech companies contributing to the Canadian economy over these last few very challenging years.
In closing, in my experience, the team at SDTC has been professional and has delivered results. While no individual or organization is perfect and we should always strive to improve, I'm very proud of the SDTC team and the work I've done on this board.
I'm happy to take your questions.
:
Thank you, Mr. Chair and thanks to the committee members for welcoming me today.
My name is Guy Ouimet. I am originally from Montreal and still live there. I am an industrial engineer who graduated from the École Polytechnique of the University of Montreal. I hold an MBA from McGill University and am certified by the Institute of Corporate Directors.
After starting my career within multinationals, I quickly moved towards a role as an industrial investment professional, having worked for most of my career in the fields of venture capital, private placement, project financing, and mergers and acquisitions. In this capacity, I acted as a senior executive for the Société générale de financement du Québec for 10 years before launching my private practice in the form of a boutique investment bank.
This practice has developed over the years based on my multi-sector and technological expertise, particularly in the fields of energy, metals and minerals, chemicals and petrochemicals, the automotive industry, other manufacturing sectors and, among other things, the evolution of these sectors towards the decarbonization of the economy.
I have had institutional and government funds as clients, as well as numerous private companies, for 25 years. I participated in the setting up of multiple investment projects and transactions. Among other things, I acted as an external advisor for SDTC between 2006 and 2014. My combined expertise in multi-sector venture capital and the setting up of large-scale projects was then called upon, particularly with regard to SDTC's NextGen Biofuels fund, for which $500 million was allocated to SDTC in 2007 by the federal government then in power.
Since 2020, I have been exclusively a corporate director and act on six boards of directors and various committees.
After four years away from SDTC, and with new management in place, I joined the SDTC board of directors on November 8, 2018, following an appointment resulting from my application to a Governor in Council recruitment process that lasted over a year. I have no political affiliation and requested no references except those required by the validation procedures during the recruitment process, and I declared all of my background and skills, including my previous role as an advisor for SDTC. At the end of the Governor in Council process, I was recruited based on my expertise to contribute to the SDTC board of directors.
In addition to being a member of the board of directors, I am a member of the Project Review Committee, or PRC, and the Governance and Nominations Committee. Appointments to committees were subject to the approval of the Chair of the board, in this case Mr. Jim Balsillie at the time of my appointment.
During the Governor in Council recruitment process, I declared a conflict of interest with a company which I had advised, and which had been approved for SDTC funding prior to my appointment to the board. Once appointed I discussed this conflict of interest with Mr. Gary Lunn, then Chair of the SDTC Board Governance Committee. He advised me and I subsequently followed his recommendations as required, all within the framework of already established governance.
Since my appointment to the board, I have periodically declared all real, apparent or potential conflicts; I have not had access to the files in question; and I have recused myself from any decision relating to them.
Regarding the emergency COVID‑19 payments to businesses by SDTC, as already indicated, the SDTC board referred to the Osler legal opinion, which was based on the prior declaration of conflicts of interest, the urgency of the situation and the universal nature of the measure, under which no company received individual treatment. Like the rest of the board's directors, I acted in good faith, in accordance with that opinion.
SDTC's mission appeals to me because it is highly relevant to the conversion of the Canadian economy towards decarbonization, as demonstrated by its track record over more than 20 years. The relevance and effectiveness of SDTC have been recognized on several occasions through periodic performance audits. Furthermore, clean-tech entrepreneurs praise its contribution, and venture capitalists consider an SDTC contribution to be prior validation for their own investment. These facts are well known in the industry across Canada.
When joining the board, I noted the quality of SDTC's governance, as well as the stature and reputation of my fellow directors. As a finance professional and corporate director, I consider this an essential prerequisite, one that I have always applied before joining each of the 21 boards of directors on which I have served in my career.
I am available to answer your questions.
:
The conflict of interest management procedures are rigorously followed. It's important to note that the act introduced in 2001 to constitute SDTC requires that directors come from the green technology industry and that they be connected. The legislator has thus put in place a recipe for creating conflicts of interest. Accordingly, we have had thoroughly rigorous practices in place to manage them from the start.
Every time a file is submitted to the governance committee, we provide those who receive it with a list of the businesses, stakeholders, shareholders and officials involved, and we ask them whether they have any conflicts of interest. As a result, a person can immediately see whether he or she has a real, perceived or potential conflict and, if so, immediately recuses himself or herself. From that point, the individual receives no documentation and does not participate in decision-making.
The list of individuals in conflict of interest is noted at the start of every meeting of a decision-making, investment or advisory committee. From what I understand of the subsequent reports, in certain cases, there is no indication that a particular person left the meeting at a particular moment or subsequently returned. However, since the practice was known to everyone, that person declared a conflict of interest at the start of the meeting, recused himself or herself during consideration of the file in question and subsequently returned to the meeting. I have been attending board meetings in my capacity as director since 2018; I attended those meetings in another capacity starting in 2006, and I regularly witnessed recusals by all the directors of several generations.
I'll go to you first, Mr. Ouimet.
For a corporate director such as yourself, one's reputation is probably, even obviously, what's most important. In that connection, earlier you mentioned that it was normal for there to be conflicts of interest, given the way SDTC was constituted.
You have to understand the level of expertise that was required around the table. That was particularly the case when SDTC was founded, and the same is true today, 20 years later. The context required the involvement of persons who had a very clear understanding of what it meant to create a new business in emerging economies and in the green economy. It required a high level of expertise.
Could anything have been done differently than to engage individuals with confirmed experience and detailed knowledge of the sector? Could that program have been established differently?
:
Originally, in 2001, SDTC's mandate concerned much more limited fields, such as water, air, soil, processes and decontamination projects. So its mandate was very narrow but complex.
Its mandate has now become more complex because clean technologies have expanded into virtually all sectors of the economy. You need only consider the investments being made around the world, particularly in the United States, to see that.
Thorough knowledge of many sectors is therefore required. SDTC has 15 directors. The organization thus has to attract directors who have vertical sectoral skills in all fields. SDTC also requires a matrix of technological skills and knowledge of the various stages in the development of a business. Three factors must be considered: the sector, the kind of technology and the stage of the business's development.
Some of the new businesses that SDTC finances are at the bench-scale stage, others are starting up, and still others are growing. They also have completely different management and technological development dynamics depending on their stage of development. SDTC therefore requires a board that is capable of assessing the situations of those businesses because it has to consider a large volume and broad diversity of investments.
:
In 2015, 2016 and 2017, we began to realize that our organization was really beginning to respond to the marketplace. In other words, we were responding to what was given to us and to our obligations through our statutory requirements under the order in council. We recognized that we were stepping into a much higher-risk profile with high-risk profile companies, so we had to change our conflict of interest practices to keep pace. We also had to recognize the types of risks we were now moving into.
We really began to deal with conflict of interest by getting good advice from, I believe, KPMG, which would give us examples as to what was happening inside IP, inside of AI, in data and how we could make sure we protected those.
Basically what would happen is, as board members, in advance of a board meeting, we would get a list of the companies we were about to receive and would be making a decision on at the next board meeting. We were to declare any perceived or clear conflict of interest. It would be a profile from left to right, saying that this is the amount; these are the actors, whether it's a VP or...; and these are the other investors inside of it. It would list the sector—whether it was the tech sector or the oil and gas sector—and what it intended to do.
Then we would respond and say, “I have a conflict.” When the file came up in the board discussion, people would say, “There's a conflict; I recuse myself,” and leave the room.
:
You answered “yes”. I'm not sure if that was audible.
We heard about how rigorously the conflict of interest rules have been followed at the company. I want to go over that with you.
In front of me, I have meeting minutes from the Monday, March 23, 2020, meeting where a COVID payment of $192,100 was made to Lithion.
Mr. Guy Ouimet: Yes.
Mr. Michael Barrett: You've agreed “yes”.
On Tuesday, March 9, 2021, that company you have an interest in received $201,705.
Mr. Guy Ouimet: Yes.
Mr. Michael Barrett: So there was one payment of $192,000 and one of $201,000. You voted to award that to both of those companies.
Now, in attendance at both of those meetings, your name is listed. Were you in attendance at those meetings, sir? Just give a yes or no.
:
I remember receiving a briefing note that talked about where we were going to start positioning SDTC, which was in a high-risk area. We basically looked at all of what I call the "silos", whether it was the type of research that's required for technology, or whatever.
You said you were a corporate banker. As you know, when you start doing start-ups, you're really looking at the capital costs; you're looking at the tangible assets, like a factory. In Saskatchewan it would be $200 million for a canola crushing plant. What entrepreneur has that? Basically, you're looking at banks, you're looking at investors, whatever it may be. We recognized we had to be even on top of that tier. When we started talking about the types of loans or the types of grants, soft loans, whatever they may be, we knew we had to be on top of these issues surrounding the new way in which we had to help fund and support start-ups or scaling up.
I would say to you that the rigour was there, because in many ways when you see how some of the projects that were starting up then moved to scale up, it told me that we were really graduating. We were putting bets on companies that really were saying what they were going to do.
I'll continue with you, Mr. Ouimet.
From what we can understand, the oil industry generates profits of roughly $200 billion. TVA informed us this week, and we mentioned it earlier in the House of Commons, that people in that industry have had some 2,000 meetings with Liberals at the various levels of government, which amounts to an average of three meetings a day.
In all those meetings and all those investments in the political parties, both Liberal and Conservative, can't you perceive an apparent conflict of interest that might be worth pointing out?
It certainly wasn't a regular practice. I know the RCGT report pointed out some documentation issues. I think some of the minutes and documents may not accurately reflect.... I can't speak about all of them with specificity.
I know for a fact that, when directors had a conflict, the process George discussed is exactly what happens. You declared the conflict before you received materials, and you left the room. Perhaps, in the instance of a COVID payment—the second one; I can't speak on the first one—some people, including me, had an interest, but there was no list of companies that came forward to us.
To this day, to be honest, I didn't even know the companies that I [Inaudible—Editor] conflict received funds.