Notices of Meeting include information about the subject matter to be examined by the committee and date, time and place of the meeting, as well as a list of any witnesses scheduled to appear. The Evidence is the edited and revised transcript of what is said before a committee. The Minutes of Proceedings are the official record of the business conducted by the committee at a sitting.
Welcome to meeting number 129 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.
Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Tuesday, February 13, 2024, the committee is resuming its study on the impact of misinformation and disinformation on the work of parliamentarians.
I'd like to welcome the witnesses with us today for the first hour of the meeting.
As an individual, we have Mireille Lalancette, professor, political communication at Université du Québec à Trois‑Rivières
Also as an individual, we have Timothy Caulfield, professor, Faculty of Law and School of Public Health at the University of Alberta.
Ms. Lalancette, you have up to five minutes for your opening remarks. You may begin.
I would like to take a fairly broad look at disinformation and elections.
In my opinion, it's a bit like a perfect storm. Why is it a perfect storm? This can be attributed to five factors: the declining role of the media, the increase in—
Just a moment, Ms. Lalancette. We will suspend for a few seconds because the sound in the room is not loud enough. We'll turn it up a little so that people in the room can hear you properly.
It's a little better now.
Ms. Lalancette, I'll reset the clock and you can start again.
I think disinformation and elections go hand in hand right now, because of what I call a perfect storm related to the role of the media, the rise of digital social media, the decline in partisan affiliations, the rise of populism and the increased incidence of election campaigns.
More and more media outlets are in financial trouble. More and more people are getting their news on Facebook, TikTok and Instagram. Trust in traditional media is also waning. News consumption is happening less and less on traditional media; people are heading to digital social media platforms to get their news.
This perfect storm is also related to the media funding crisis. More and more media outlets are shutting down, and this is creating a media void that's being filled by digital social media. However, a whole bunch of questions could be raised about the reliability of sources and the variety of content found on digital social media. There's also no code of ethics or journalistic standards governing the social media or influencer content. It's still challenging to regulate social media platforms.
All of this is also the result of what we call the decline in partisan affiliations. Fewer and fewer people carry a party membership card and identify with one political party. That has led to electoral volatility, coupled with what we call the rise of populism. During the trucker convoy, many groups showed their dissatisfaction, Canada was split into two blocks: east and west, and regionalism made a comeback. Populism is often protest-based or identity-based. People denounce the elites or focus on certain identities. The founding people will be pitted against immigrants, for example.
Other factors in the storm are fixed-date elections, which actually don't have fixed dates, and something we call the permanent campaign in my field of research. Candidates no longer campaign only when an election is called; they campaign all the time. So disinformation can be be concentrated during elections, but it can also happen anytime.
Where does that disinformation end up? Most of it goes to digital social media, because people get their news on the Internet and because its easy to use these platforms to create content. In some cases, it's impossible to determine where the content on these platforms comes from. We're seeing more and more deepfakes and fake new. This is being spread not only by people engaging in foreign interference, but sometimes also by political parties themselves. Right now, we're seeing politicians themselves talking about fake news and criticizing the media, copying current practices in the United States, including those of the Republicans.
How can we fight disinformation in this context?
From my perspective as a researcher, I believe it's important to acquire good media literacy and show Canadians how to distinguish false information from information that might be true.
It's also important to ensure that platforms are moderated. We saw an example of this last week, when the mayor of Montreal decided to disable user comments about her posts on X, formerly Twitter.
In addition, it's important for states to draw inspiration from what's being done elsewhere, particularly in Europe, to regulate practices and digital social media platforms. Ethical issues and problems related to information and disinformation need to be raised, particularly when it comes to electoral politics.
This is a subject that I feel extremely passionate about. The spread of misinformation is one of the greatest challenges of our time. Research shows that this is something that not only experts believe but also something that people around the world believe.
It's not hyperbole to say that misinformation is killing people. Misinformation is having a tremendous impact on democracies around the world. This is certainly something that we all need to address.
The battle against misinformation itself is very controversial, even when you look at ideas about what the definition of misinformation is.
I want to emphasize to the committee that even if we focus on things that are demonstrably false—about elections, vaccines, climate change or immigrants—we can, as a society, make a real difference.
This is a topic that I've been studying for a very long time, and as you heard from our last expert, I've never seen anything like what we're seeing right now. I just want to highlight a couple of challenges that build on the points she made—a couple of challenges that have made today and what's happening right now particularly challenging.
Number one, there is social media, absolutely, but in addition to that is AI. AI is going to make the spread of misinformation more challenging. It's going to make real, rapidly produced content that is very difficult to discern from reality. Studies have shown that many people believe that they can spot AI and deepfakes, but research consistently tells us they cannot, even when they're warned that AI might be coming.
The second thing that I find incredibly challenging right now is the politicization of misinformation and the connection of misinformation with political identity and polarization. This is a trend that is increasing and is doing incredible harm. It's not only horrible for democracy, but we also know that once misinformation becomes part of a person's political identity, it becomes more difficult to change their mind.
The third challenge is the degree to which state actors are pushing misinformation. The goal of many state actors and, by the way, of many misinformation-mongers, is to create distrust. The distrust that we see in institutions today is largely—not entirely, but largely—created by the spread of misinformation. Those spreading misinformation are trying to create distrust and information chaos. Alas, they are succeeding.
How do we respond? What can we do?
This is a generational problem. You've probably heard these recommendations over and over again, but we must come at this with a multipronged approach.
What does that mean? It's teaching critical thinking skills and media literacy and doing this across.... I wrote an article in which I suggested we start in kindergarten. We have to teach these skills throughout the life cycle, as they do in many countries.
We have to pre-bunk. We have to debunk. We have to figure out the best way to set labels and warnings on things like AI. Yes, we have to work with the social media platforms and other tech companies. Yes, there are regulatory tools that can be adopted.
The other thing I want to emphasize, which I think is so relevant to this committee, is the spread of misinformation about the fight against misinformation. As I've already said, much of the distrust that we see in society has been created by fake news and by the spread of misinformation. By the way, research consistently shows that.
We also have to recognize that fighting misinformation is not just about curtailing people's voices. On the contrary, most of the tools that we can use in a liberal democracy to fight the spread of misinformation can be used within the marketplace of ideas. Pre-bunking, debunking and education are things that work within the spirit of liberal democracies.
Yes, regulating can be a challenge. It's something that I welcome questions about.
I think that this is an essential topic that we must all band together to fight.
Thank you very much. I look forward to your questions and comments.
There was a ton of media. I want to read one of the headlines. This is from CTV News. It says, “Conservatives reject online bot allegation after Poilievre rally”. This article is set up and it has Conservatives denying a charge that has been made against them.
Now, I want to fast-forward you both to August 16—no, we'll say August 28, when a Canadian Press story was published across different outlets. I'm looking at it in the National Post, where it was titled, “No evidence Conservatives were behind social media bot campaign that praised Poilievre: study”. This study was done by the Canadian Digital Media Research Network, and their findings were that there was “no evidence that indicates a political party or foreign entity employed this bot network for political purposes.”
With that precursor—before I ask my question—Mr. Caulfield, are you familiar with the reporting I'm citing from the National Post?
We're talking about the spread of misinformation. We had two political parties, the Liberals and the NDP, that went out...and the media printed as sure fact that without having verified the claims, this was something that was paid for. There were allegations that it was orchestrated through foreign state actors.
Now, look, we're at a time in our country when we have an inquiry happening into foreign interference in our democracy. We have real state actors who are spreading disinformation. It's being propagated, of course, in the political discourse, but also in media.
Here we have an example of, for once, an independent group disproving the claim that was printed without it having been proven. Then we have political parties, the Liberals and the NDP, that didn't withdraw the allegation or didn't say, “Oh, I stand to be corrected,” or “We believe this because it was again leveraged for partisan purposes.”
Let me read you a quote from the author of the report, who said, “The finger-pointing without evidence is actually quite destructive and leans into this hyper-partisan, hyper-polarized information ecosystem that we find ourselves in today in Canada.”
The study says that the initial bot campaign that was used received very little attention because it was from sock puppet accounts, and they don't have a real following, but it was amplified with millions of impressions by Canadian political actors who didn't have altruistic intentions.
I'll start with you, Madame Lalancette. Is this type of misinformation that's being propagated, in this case by the Liberals and the NDP, part of the problem? Certainly that's what the report's author suggested.
Disinformation comes not only from bots or foreign countries but also from the way political actors are communicating and want you to believe something. They are communicating that they feel this is foreign interference and that the party bought the bots in order to get attention.
Yes, this is part of the problem. It's coming from everywhere right now in the chamber. It doesn't have a party association. You can see it going in every way, I think.
Thank you to the witnesses for being here today and for your very important testimony.
Building on some of the comments my honourable colleague made, I don't think it's about which political party has or has not done anything maliciously. When we're talking about foreign interference, it's about who is vulnerable to it. The bots example that Mr. Barrett raised is a prime example of how a political party in Canada is vulnerable to being used by Russian bots in order to disrupt democracy in Canada.
I want to question you, Professor Caulfield.
You talked specifically about what tools we can use to prevent that kind of interference within our democratic systems. You talked about teaching, but I also want to talk about accountability.
When we're talking about how to prevent this from happening, other than through teaching and raising awareness, how do we build accountability within political parties to ensure that, for example, the Leader of the Opposition is not susceptible to bots taking over his political campaign? How do we build on that to ensure the Canadian democratic system is protected?
I'd like to start by highlighting a very recent study. I believe it came out yesterday or the day before. It's a study done in the United States asking people what kind of misinformation the public is most concerned about. The number one response was misinformation from politicians. I think, by the way, that this data would be replicated in Canada. The public does not want to hear misinformation emanating from politicians, even though they know it's there. They want steps taken to stop it.
The other thing that is very important to emphasize is that this is where there is some degree of agreement across the political divide—stopping the use of things like AI and bots in the context of elections and political discourse. There was a survey done by EKOS Research that found very high support, for example, for the use of some type of regulatory response to stop the use of AI in the context of a political campaign.
I think this tells us that the Canadian public really values honesty in the political domain, even though they're realistic about it. They're not naive. They welcome the potential use of regulatory measures in this space. They're less comfortable with regulatory responses—or there's more of a divide—when we talk about regulating misinformation, because that feels like infringing on freedom of expression. There are genuine legal challenges there. However, when you're talking about protecting the integrity of democracy and our elections, I think there's room for a regulatory response.
We saw, with respect to regulations, how much misinformation our online harms bill received. What is the boundary between online harms and the quelling of freedom of expression?
You mentioned, in your comment just now, the Canadian public wanting honesty. I recently came across an X account—I keep wanting to say “Twitter account”. It's called @PierreIsLying and it highlights, on a daily basis, how many times the Leader of the Opposition lies in public during question period. They outline it and put down stuff like how many lies there are per minute. It's that kind of thing.
When we're talking about accountability—about that honesty and that teaching moment for Canadians—how important is it to fact-check the data or information that politicians and political discourse are providing to Canadians?
I think it's very important. Again, it's something that you see the public say they want. The problem, of course, is that because this has become so politicized, and because the fight against misinformation has become so politicized, people trust fact-checkers less. They'll say it's partisan.
Some really interesting research has come out about the degree to which the response to fact-checking is different when an issue has become politicized. As I said earlier in my opening statements, several studies have shown that once this becomes about political identity, whether it's misinformation about vaccines or immigrants, etc., they become more resistant to fact-checking. We need to do more research on this exact topic. In fact, this is something we're researching right now to explore what kind of tools we can use when misinformation has become part of political identity.
The other problem that's happening, of course, is that once a bit of misinformation becomes part of a political platform, it becomes an ideological flag. Once that happens—we've seen that happen with, for example, vaccine misinformation, something that we study—it becomes very resistant to change.
There is some suggestion of tools that can be used, such as pointing to what the scientific consensus says, what the body of evidence actually says, and making it clear what that body of evidence is, but there's no doubt that because this has become so political, it has become more challenging.
I'd like to thank the witnesses for being with us today, particularly Ms. Lalancette from the Université du Québec à Trois‑Rivières, which is in the riding of Trois‑Rivières. I always like to be able to invite people from my constituency.
Ms. Lalancette, I'll start with you.
In your opinion, is the disinformation targeting parliamentarians the work of foreign actors or domestic actors?
I would say that disinformation can come from both.
Right now, I'm quite struck by what's happening within Canada and how people are able to spread false information. As your colleague was saying, it seems that responsibility, accountability and truth are no longer important.
Perhaps it would be a good idea to come up with a set of rules, so that people can't change the names of the parties or associate them with false information in the House, for example. People must be able to discuss the real issues, rather than having debates where everything comes down to personality or turns to personal matters, because that's where things start to go sideways.
Certainly, everything related to truth, fake news and disinformation is currently becoming more central to the ethics that political actors will have. This plays a role in the way people will want to communicate with each other, as well as with voters, of course.
You talked about a perfect storm at the beginning of your presentation. I've noticed that we're living in an era where truth doesn't get you very far. Because truth feels out of reach, people settle for likelihood, and it seems to me that is a breeding ground for disinformation.
Yes, if people keep hearing that something might be false, they will consider it to be false. We see this a lot with Trump, who denies having made statements, even though he's been recorded on video saying them. Because this information can easily be spread through digital social media, but also traditional media, people end up believing what's been repeated over and over, rather than the truth, of course.
Disinformation is deliberately conveying false information. As for misinformation, in some cases, it's when someone shares false information, but unintentionally, without knowing that it's false. Malinformation is really when someone knowingly passes on information that is false to mislead the public and create mayhem.
Yes, there are a number of definitions. I'm giving you the most commonly used ones, but the wording can vary.
You may have seen the recent report about a centre at McGill University, in which Mathieu Lavigne and his colleagues spell out definitions for these terms. The report includes key references and recommendations for misinformation and disinformation in an electoral context.
Mr. Caulfield, you talked a lot about politicians targeting the public with disinformation. However, it must be acknowledged that politicians are also victims of disinformation.
In your opinion, who is targeting politicians with disinformation?
Politicians live in the same information ecosystem that we all do, and one of the very best examples of it, I think—and I'm sorry I keep pointing to the United States, and my colleague has done the same, but there are just so many good examples emanating from that jurisdiction—was the misinformation we saw with the immigrants eating dogs and cats. That came from the community. It started on social media and then was adopted by politicians, J.D. Vance and Donald Trump.
By the way, a very recent report came out that found that over 80% of Americans have heard that misinformation and it plays to the illusory truth phenomenon, which my colleague spoke to. If you hear it enough, it starts to feel real, especially if the misinformation plays to your preconceived notions, plays to your ideological leanings. The confirmation bias kicks in and you believe it.
There has been some very interesting research that's come out by people like Stephen Lewandowsky and his colleagues that talks about the difference between belief speaking and truth speaking. This is this evolution of the notion of truth. Fact speaking is from the old school, in that it's rooted in evidence. Belief speaking is if you just say something earnestly enough—if you say something with enough conviction and it plays to your ideological beliefs—people will adopt it because they believe the gist of the point, even if they know in their hearts it's not literally true.
I think that's what's happening increasingly, unfortunately, in politics, and that's what's happened with that horrible example in the United States of the idea of immigrants eating cats and dogs. It becomes part of a political agenda, and that community adopts it, despite the fallacy that underlies the belief—and by the way, this happens across the ideological spectrum.
I'd like to welcome the witnesses here. It really is great to get back to work on what I think is a very important topic.
I want to begin with Mr. Caulfield.
In your article in the briefing note that we have here, “Politics and vaccine misinformation: A horrifyingly bad mix”, you've spoken a little bit about this in terms of political ideology. In this article, you identify that there is a bit of political partisan opportunism, but that political identity, in the case of COVID, was adopted by the right-wing online community and is associated with vaccine hesitancy.
Can you just comment a little bit about that briefly, and why you think that is the case here?
For sure, and I think this is incredibly important.
As I pointed out in that article, political identity has become.... Look, vaccination hesitancy is a very complex phenomenon. Access matters, needle phobia matters, past injustices matter. There are various culturally and socially complex phenomena.
Right now at the population level, you could argue that political identity has emerged as the single strongest variable of predicting vaccination hesitancy, and also the engagement with an embrace of vaccine misinformation. I think it's really important to highlight that history and context matter.
It hasn't always been like that. On the contrary, in the past, lots of the vaccine misinformation emanated a little bit from the left. It was kind of New Agey, right? It was whole foods and yoga and a distrust of anything that's not natural.
Now we see it very strongly on the right, and there is a large body of evidence to support what I'm saying. This isn't just me speculating. There's a lot of empirical evidence.
I want to help you on that, because I want to use an example that's closer to Hamilton. In fact, had you come into Ottawa, you may have come across some folks who still hold some conspiracy theories around the vaccine. Of course, that evolved into what was known as the Freedom Convoy and the political partisan connections there.
However, I want to reference in particular Paul Alexander, who is a Canadian independent scientist. For those who may not know, he is a former Trump administration official and was with the U.S. Department of Health and Human Services during COVID-19. His refrain was that we want them infected. He actively—or at least according to the Wikipedia article that I have here—sought to muzzle federal scientists and public health agencies to prevent them from contradicting the Trump administration's political talking points.
Those who followed the convoy and the occupation here in Ottawa would know, of course, that Pierre Poilievre marched alongside Mr. Alexander, and that was part of the rhetoric coming out of the convoy. There were lots of instances to see the leader of the official opposition walking in lockstep with this very high-profile Trump administration person.
Talk just a little bit about how this isn't necessarily just an American phenomenon, but that there seems to be a cross-pollination of conspiracy when it comes to this on the record.
Yes, absolutely. I think this is incredibly important.
If you look at why people are justifying their position on COVID, you see that they point to misinformation, right? They don't point to being concerned about the conflicting data on who should get the COVID booster. No, they point to things like.... A very large percentage of individuals in Canada point to things like the idea that the COVID vaccine has killed more people than it has saved. To be clear, the COVID vaccine has saved many millions of people around the world. The COVID vaccine lowers your risk of long COVID. The COVID vaccine improves pregnancy outcomes and lowers the risk to your heart and cardiovascular system. It lowers the hospital costs. It lowers hospitalization rates, etc., but they believe the “died suddenly” myth and the turbo cancer myth, and they'll explicitly reference those. That is misinformation that is emanating from the alt-right. It's having a real impact on what's happening in our country, and I fear that if there is a pandemic, it could have a grave impact.
I'll give one very powerful, tangible example. It's from another survey that was done in Alberta shortly after our last provincial election. Of those who are completely unvaccinated for COVID, over 91% of them voted for the United Conservative Party, and only 3% to 5% of them voted for the NDP or another party. That is incredible.
I'm looking at this, and I'm seeing the way in which Erin O'Toole was cut out of his Conservative leadership at this time. I'm referencing a tweet from Pierre Poilievre that's before me. It says, “Today I walked alongside military veteran, James Topp, who has travelled the country by foot for free choice”, so again there are these libertarian ideas about what freedom is.
Then Poilievre goes on to say, “End all mandates. Restore our freedoms. Let people take back control of their lives.” In the same photo is this conspiracy wingnut, Paul Alexander, who was talking about herd immunity, which of course led to, I think, a whole bunch of confusion around this.
In your closing statements, how can you share information on pre-bunking this type of approach to political rhetoric?
No. In fact, since, I'm going to say, 2013, I've had very little to do with the Trudeau Foundation. I was involved briefly at the beginning of the pandemic. It had a pandemic response committee that I sat on; it involved academics from across the country. I believe I sat on that committee for.... I think we had just a handful of meetings, and I received no money—
How much money have you received from the Trudeau government in the way of research grants in the past two years to study misinformation and disinformation?
From the Canadian Institutes of Health Research in 2020, I received, I believe, about $300,000 to study misinformation. This was a peer-reviewed grant, as you know, that was held by the university. I shared that grant with two other institutions, the University of Calgary and the University of Regina, and that money was used to support trainees and research.
I'm also fortunate to be an adviser now of an entity called ScienceUpFirst. We've also received funding from the government. Again, I don't receive any personal funding. We use that to—
I appreciate those answers, but I do want to ask you why it is you didn't disclose that up front—your connections to the Trudeau Foundation, the fact that you have received a significant amount of research money specifically on the subject matter of misinformation and disinformation from none other than the Trudeau government—and you have come to this committee and presented yourself—
My point of order is on relevance. I doubt this line of questioning deals with the expertise that Professor Caulfield brings to the table on this very important subject.
Look, members know that I allow a lot of latitude here, and attitude, when it comes to questioning. It's the member's time. Mr. Cooper has the right to ask whatever questions he wants, as uncomfortable, perhaps, as those questions may be, or perhaps some members may not like those questions.
Mr. Caulfield, I'll let Mr. Cooper finish here, and then you can answer, sir.
I am fortunate to be an academic at the University of Alberta. As an academic, I have a responsibility to try to bring in research grants to do independent research. Like all academics, I receive grant funding from various sources. It's independent. I don't receive it personally. That's how research is funded in Canada.
—taking back my time, I think it's highly germane that you were a fellow at the Trudeau Foundation. You received $200,000-plus from the Trudeau Foundation. You sat on a Trudeau Foundation committee as recently as four years ago and you've been funded by the Trudeau government, yet you come out and present yourself as an objective expert when you have written hit pieces on Pierre Poilievre and the Conservatives and submitted a brief that singles out, or largely singles out, Conservatives as spreading misinformation and disinformation. I think this committee deserves better.
I am fortunate to have received most of my research grants, the largest research grants, under the Harper government—and by far, by the way. I suppose I should have disclosed that also.
I endeavour, when I write pieces about the politics of misinformation, to make it very clear that misinformation happens historically and contextually across the ideological divide. I do my best to represent exactly what the research says. I think it's very important to understand the role of politics and ideology in the spread of misinformation right now. It isn't easy to hear that stuff. I do try as much as possible to not be partisan about this, but this is a very important topic. We have to be open-minded about the role of politics.
Thank you to the witnesses. I appreciate the clarification in terms of funding to universities from various governments and how important it is to maintain science and research on an ongoing basis in order to stay ahead of the curve.
I say that with all due respect, because when I was in the Baltics last year, I visited Estonia and Latvia. I saw from discussions with them, and with American and Canadian forces dealing in NATO in those countries, the high degree of cyber-threats and cybersecurity, the relevance of ensuring that we are on top of misinformation appropriately, and the targeted effects from, in this case, the Russian government and the Chinese government playing a role in western society. I was very encouraged by the reactions or the defence measures being taken, but I was also extremely concerned by the degree to which it exists now in Canada as well.
Professor Caulfield, are there lessons learned from other countries that are leading the way, especially given the high degree of hostilities that are now before us?
I think there are lessons to be learned. I'll just draw on one that I think is more politically palatable and politically neutral, which is the teaching of critical thinking skills and media literacy.
This is a strategy that can be content-neutral, because what we'd be doing is giving our citizens the tools to discern and cut through the noise themselves. A very good example of that is Finland, where they teach critical thinking skills extremely early, as early as kindergarten. It's hard to study this well, because there are so many relevant variables, like the quality of the education system, socio-economics, etc., but there have at least been some studies that have shown the strategy adopted by Finland has made them particularly resilient to the spread of misinformation.
One of the reasons they've adopted that strategy and are ahead of the curve compared to us, to touch on your comment, is their proximity to Russia and the role that Russian misinformation has played in their lives. I think that is a very telling lesson and something we can build on here in Canada.
Thank you for that. I do tend to worry about the degree of...that the public is almost being callous and is disregarding these activities of misinformation and lies being propagated by elected officials, no less—or the exaggerations of truth, to put it that way—and is seeing them as being somewhat acceptable.
Ms. Lalancette, do you feel that elected officials are, in fact, promoting misinformation and, especially in the preambles of their questions, prejudging the situation and creating some of the threat that exists in misinformation?
I'm thinking here of situations where politicians make comments and add a lengthy preamble to the various issues, including bias, attacks and criticism of an individual or their personality, for example. In this context, they're not discussing political issues, but rather engaging in a battle of personalities by stating half-truths or partial truths. It's the principle of negative political communication. The information is never completely false, but it's used out of context.
Perhaps we should think about a way of naming things, create a code of conduct for the National Assembly of Quebec, the House of Commons or elsewhere, that would encourage people to address people's projects or ideas and their viability rather than engaging in personal attacks. This would keep people from spreading half-truths and falsely or negatively conveying information.
Professor Caulfield, do you agree that name-calling or allegations of wrongdoing by certain officials in the House toward other officials actually lessens the degree of integrity in the political discourse and, worse, creates threats and dangers in the system?
Yes, I do, and there's evidence to back this up, as my colleague noted. There's no doubt that this kind of polarized discourse heightens tension.
By the way, it also makes it very difficult for those of us who are trying to do research when we have our integrity questioned and our reputations smeared.
Again, there is evidence that this is one of the strategies used by those pushing misinformation to delegitimize voices that are trying to counter misinformation, and unfortunately it's very effective.
First, we need to regulate what goes on in the House of Commons so that no falsehoods can be uttered there.
Then, with regard to the public, as my colleague was saying, we need to run awareness campaigns to increase people's knowledge of the media.
Finally, we need to regulate digital social media platforms so that they are accountable for what is said on them. In the U.S., there's the issue of freedom of expression, but this can be defined differently in Canada. My colleague, who specializes in law, can explain it even better than I can.
Be that as it may, those are my three recommendations, which affect elected officials, citizens and media companies respectively.
Very briefly, I agree we need strong national support for critical thinking and media literacy courses that are available to all Canadians and in all of our school systems.
I think we need to support independent, trustworthy fact-checking, debunking and pre-bunking entities that are doing those in an independent, positive way.
Yes, I think we should consider, at the national level, a target of regulation. I'm actually a very strong supporter of freedom of expression, so I think this has to be done very cautiously and in a very targeted manner—for example, around elections and hate speech. However, I also think this is something else that needs to be in the tool kit.
Ms. Lalancette, I'm turning to you again, briefly.
As you know, there are fewer and fewer philosophy courses in Quebec. And yet, this afternoon, two of you have said that we need to increase critical thinking. How are we going to do that?
I think it can be done in different ways. It can be done in schools, but also elsewhere, in other organizations. We could also provide funding for programs that address these issues. At Radio-Canada, for example, there's already a program called Les décrypteurs, that deals with disinformation. Working on these issues, making them fun and interesting for the population at large, could be a way for traditional media to regain legitimacy.
I want to thank Mr. Cooper for providing a textbook example of a growing anti-intellectualism, one that seeks to attack the expertise of subject matter experts.
I want to allow Mr. Caulfield to comment from a public health perspective.
How can parliamentarians better identify and counter misinformation and disinformation of the kind we witnessed here today, especially during times of crisis, in order to protect trust in science-based policy decisions?
Well, I think we need to try to respectfully counter misinformation with the body of evidence.
There's a very interesting growing literature. There's been some interesting research done in both the United States and Europe that talks about representing what the scientific consensus actually says on a topic. That also goes to explaining what the scientific consensus is. Science is hard. It's messy. It's always contested. That's healthy. However, there's often a body of evidence that policy-makers and politicians can turn to.
I think part of the critical thinking skills we need to impart to Canadian citizens is understanding the scientific process—what the scientific process is, how science is done, how science is funded, and how that funding is obtained and used in order to maintain that trust.
Look, the funding process is not perfect. In fact, my forthcoming book talks about all the problems with how science is done and the knowledge production crisis we have. We need to make trustworthy science and have trustworthy sources of facts. We also need to have collegial discussions about how relevant those facts are to our policies.
In the time remaining—I have close to a minute left—could you elaborate on pre-bunking? You started to talk about this. You talked about “belief speaking” and how that's different from evidence-based decision-making.
What is a pre-bunking approach, and how can parliamentarians implement this strategy to proactively address misinformation and disinformation in their communications with constituents, particularly during election periods?
There's a lot of fascinating research on pre-bunking by people like Gordon Pennycook, Sander van der Linden and others that shows that if you highlight to people what misinformation might look like and what kinds of strategies might be used to push it—such as relying on an anecdote instead of the body of evidence—they'll be prepared to see it. If you do that, they're less likely to spread the misinformation or internalize it.
We can hopefully figure out ways to do that at scale. However, it's also something you can remind yourself to do, and remind your friends and family to do.
Thank you to the witnesses. I appreciate your being here on this very important issue.
I've reviewed testimony on this issue from the past. Professor Caulfield, I think you'd agree with me that government has what we would call a positive obligation to act when it comes to disinformation and misinformation. Is that correct?
That positive obligation goes right down to the bottom person working the line at Elections Canada, and it goes right up to the top, to the Prime Minister.
If the government fails to act when it has what I certainly would call a moral duty—and I would actually say is a legal duty—to act, that is what we in law would call an omission. It's a duty to act and a failure to act.
My point is this: We have a situation where one of our former colleagues, Kenny Chiu, is no longer here, and there is clear evidence of misinformation. We also know that 11 people have been either wittingly or semi-wittingly—those aren't my words, but the words of a report—assisted by hostile states. I'm paraphrasing here.
In situations when the Prime Minister knows or Elections Canada knows, obviously a failure to act by them is propagating the very thing that we should be shining the light on, is it not?
I feel so passionately about this issue. I agree that we need an incredibly high standard for public officials and for politicians when it comes to issues of misinformation, for explaining when it has happened or when misinformation perhaps has not been clarified for the public in a way that can ensure trust.
I think that we are on the same page. I think it's very important for public trust—
My point is this: When the government has information, it should act. When it doesn't act, it not only fails the people who've built their lives on running for office, but it denigrates and degrades the very democracy that we are standing here purporting to defend.
I am not going to use my time in any partisan way to try to dig in. I don't consider either of you to be hostile witnesses or witnesses needed to meet political points. I actually have a real question that I would like to ask. I'm going to try to divert it away from Canadian to U.S. politics so that I don't feel that I'm acting in a partisan way here.
Donald Trump is probably one of the biggest liars in the world. Donald Trump, for example, said that he won the 2020 election and that it was rigged.
When I get to my question, you'll understand it. I want to understand where misinformation comes.
Certainly that high-level comment is not true, but he may, for whatever reason in his head, believe it to be true. However, when he gets to saying that Dominion Voting machines switched the votes; when he gets to claiming that in Georgia, this number of people who voted were dead, which he knows isn't true; and when he starts claiming that people in the voting stations were taking boxes and taking ballots out of boxes when they were handing each other candy, this is where you really get into misinformation, because it's a direct lie.
Professor Caulfield, at what point is it misinformation?
All of these things to me are misinformation, but is it only when you get to the linear, direct lies that you're at a misinformation point that we should be fighting against—because otherwise you're taking away a larger idea that they may believe in—or is the larger idea that the election was a fraud also clearly a lie and we should be handling that as well?
I do understand. I think it's important to recognize that 60% to 70% of Republicans believe the big lie, which is absolutely astounding. It's going to have an impact on the election.
I think there is a way to deal with the big lie at a meta level—in which case you point out that the big lie is wrong—and also to carefully outline all of the things that you highlighted and why they're factually in error. This really goes to that idea of belief speaking versus fact speaking. I actually think that many people don't believe that it was stolen, but they believe the gist of the point. They're very willing to accept the broader lie.
You have to do both. You have to talk about the error of the meta lie and then you have to unpack each specific lie.
By the way, again, this happens across the ideological spectrum. I can give you examples from the left.
We've had other witnesses here. We've been studying this for several meetings now. I've asked this question of the other witnesses, and I'm going to ask each of you to answer the question.
We've seen Facebook pull back on the application of links on their website. Oftentimes, people would rely on Facebook. It would be simply a copy and paste of a story, whether it was to The Globe and Mail, whether it was to BarrieToday, or other newspapers. That's been lost, so that source of credible information, fact-checked information, has also been lost.
In your opinion, given the fact that Facebook has taken this action as a result of a dispute with government, do you believe it fills in a vacuum of misinformation, or the potential of a vacuum of misinformation, that could be put online in absence of this fact-checked information from credible news organizations?
I actually wrote a short piece on exactly this. I think it highlights how challenging it is to regulate this space.
My short answer is that I am worried. I am worried that a vacuum was created by removing reputable sources and it's been filled with information that is less credible, and I think that is a real problem. However, the larger goal of that policy, of supporting journalism, is so important.
The value of journalism to a liberal democracy can't be overstated, and it's in jeopardy, as my colleague pointed out in her opening statements. We've seen how this played out in other jurisdictions, such as Australia. I am worried about how it's playing out right now. I wish I had a better answer for you, other than saying this is complex.
As I explained in my opening remarks, this situation is the result of a perfect storm. Social media forced traditional media to go to these platforms in order to survive, and they used their content for free. Now that the government is trying to save traditional media by charging digital social media platforms, we're no longer able to transmit credible information from media.
This does create a huge problem. As research shows, it was thanks to digital social media platforms like Facebook that people read The Globe and Mail, The Gazette, Le Devoir or the National Post, for example. Now that this content is no longer available, people don't read it. Instead, they rely on influencers, who don't have a code of ethics, media code or code of conduct like journalists do.
It's often the case, Mr. Chair, as you will know, that these conversations provide an opportunity for witnesses to perhaps respond in writing for answers that they may not have been able to put into their brief time.
Through you, to the witnesses, I would like to encourage them that if there were topics raised—facts, figures, studies, recommendations—they feel would be for the good and public welfare, to put it to us in writing for consideration at the report-writing stage.
Mr. Caulfield, Madam Lalancette, if there's anything at all, in addition to what you've spoken about today—I know an hour goes by really quickly—that you can provide the committee to help in its deliberation and presentation of this report to government, I would appreciate if you could send that to us.
Typically I like to put a deadline on things, so if you could have it here by next Friday at five o'clock, that would be helpful for the clerk, and the parliamentary analysts as well, to possibly include that information into whatever report we come up with. Thank you.
Welcome back, everyone, to hour number two. I expect that we are going to have two full rounds, and I would like to welcome our witnesses for the second hour.
From DisinfoWatch, we have Mr. Marcus Kolga, who is the director.
Welcome to the committee. I know we had a bit of an issue last time, but we worked it out, and we're so pleased to have you here with us today.
From Mila, which is the Quebec artificial intelligence institute, we have Yoshua Bengio, founder and scientific director.
Mr. Kolga, we're going with you to start. You can address the committee for up to five minutes.
We expect that we're going to be here for the full hour. I hope you guys are okay with that. I see thumbs-up, which is excellent.
Thank you, Mr. Chairman and members of the committee, for the privilege to speak about the threat of disinformation, specifically Russian information and influence operations targeting our democracy.
Over the past decade, we've witnessed a disturbing increase in the Kremlin's efforts to undermine our democratic institutions, erode our social cohesion and advance its geopolitical interest through state media, proxies and collaborators within Canada. While significant attention has been directed toward Chinese government interference in recent years, we've not fully addressed the equally dangerous and sophisticated information campaigns waged by the Kremlin.
The threat to Canada is real, and it cannot be ignored, as recent actions to disrupt these operations by the U.S. government have highlighted. The U.S. Department of Justice recently indicted two employees of Russia Today, RT, a Russian state entity that operates not merely as a media outlet but, as the U.S. Department of State and Global Affairs have noted, as an important component of Russia's intelligence apparatus. This indictment, which implicates Canadians in RT's operations and as its targets, is nothing less than a smoking gun. Canada is a key target of Russian information warfare.
An FBI affidavit that was published at the same time as the indictment details Kremlin documents and minutes from meetings with one of Vladimir Putin's top advisers, highlighting the regime's commitment to weaponizing information. The tactics exposed include developing and spreading lies and conspiracies, manipulating social media algorithms and using Russian and North American influencers to amplify them in efforts to destabilize democratic societies, and that includes Canada.
The objectives are clear: to ignite domestic conflicts, to deepen social divisions, to weaken nations that oppose Russian aggression in Ukraine and to erode public support for Ukraine.
Canadian parliamentarians have also been targets of these operations over the past decade. Following our government's strong stance against Russia's illegal annexation of Crimea in 2014 and our leadership of NATO's enhanced forward presence mission in Latvia, we witnessed Russian information and influence operations targeting Canadian officials and policies. Prime Minister Harper's government was an initial target, including during the 2015 federal election. This was followed by the targeting of then foreign minister Chrystia Freeland and other outspoken parliamentarians.
It's important to note that the Kremlin does not favour any specific Canadian political party. Instead, as the Kremlin documents clearly outline, they seek to exploit existing divisions and create conflict to undermine our democracy and further their interests. This includes diminishing support for Ukraine and weakening international alliances like NATO that oppose Russia's aggression. We've now learned that RT invested $10 million in a company founded by two Canadians aimed at advancing Russian narratives in the U.S. and within our own borders.
A recent poll we conducted at DisinfoWatch with the Canadian Digital Media Research Network indicates that most Canadians have in fact been exposed to Russian disinformation about Ukraine and are vulnerable to it. Canadian influencers play a key role in advancing the Kremlin's narratives in Canada and in the U.S., as do the Canadian academics and activists who collaborate with Kremlin-controlled think tanks like Vladimir Putin's Valdai Club and the Russian International Affairs Council, which are on Canada's sanctions list.
To disrupt and deter these well-documented Kremlin operations and to protect Canadians, the Canadian government, law enforcement and the intelligence community must acknowledge the seriousness of the threat they pose to our democracy and society. We must conduct thorough investigations into Russian collaborators and proxies operating within Canada and hold them to account under our laws. This includes any sanctions laws that may have been violated, including the foreign influence transparency registry and Bill C-70.
Given Russia's ties to foreign intelligence services, Canada must follow Europe's lead in banning all Russian state media from public airwaves and the Internet. This should be extended to Chinese and Iranian state media and state-controlled outlets as well. We should also introduce new legislation based on Europe's Digital Services Act, holding social media companies accountable for the content on their platforms and the algorithms that amplify it.
By enforcing transparency, content moderation and reporting requirements, we can make it significantly harder for hostile actors to weaponize these platforms to spread disinformation in Canada.
Finally, we need to acknowledge and address the rise of foreign authoritarian transnational repression targeting Canadian activists, journalists, diaspora communities and, indeed, parliamentarians.
(1650)
The persistent efforts of foreign authoritarian regimes to undermine our democracy and social cohesion must be met with equally persistent measures and resources to confront, disrupt, deter and, ultimately, prevent them from succeeding.
Thank you again for this privilege. I look forward to your questions.
I am grateful for the opportunity today to share with the Standing Committee on Access to Information, Privacy and Ethics my thoughts on misinformation and disinformation based on artificial intelligence.
The past few years have seen impressive advances in the capabilities of generative artificial intelligence, starting with the generation of images, speech and video. More recently, these advances have extended to natural language processing, which the public witnessed with the release of OpenAI's ChatGPT model.
[English]
Since the end of 2022, nearly two years ago, this last element brought us into an unprecedented technological reality, one in which it is becoming increasingly complex for the average citizen to determine whether they are conversing with a human or a machine when they interact with these models. This state of affairs, by the way, is commonly known in computer science as “passing the Turing test”: We can't distinguish between AI and a machine through a text interaction, and so the boundaries between human and artificial conversations are getting more blurred as these systems become more powerful and advanced after each release.
All of this is controlled by a handful of companies—all foreign—that have the required financial and technical resources. We're talking about over $100 million to train the latest models—and growing—so it's going to be billions pretty soon.
When analyzing the progress and acceleration of AI trends, we see that AI capabilities don't seem to be about to plateau or slow down. Between 2018 and today, every year, on average, “training compute” required to train these systems has quadrupled; the efficiency by which they exploit the data has increased by 30%—in other words, they don't need as much data for achieving the same efficiency of answers; the algorithmic efficiency has tripled—in other words, they are able to do the same computation faster; and the investments in AI have also been rising exponentially, increasing by over 30% per year, and in the last few years were an average of $100 billion, growing quickly towards the trillion.
There was a recent study carried out in Switzerland that I think is very important to the discussion of this committee. It showed that GPT-4, the latest version you can find online, has superior persuasive skills to humans in written form. In other words, they can convince somebody to change their mind better than humans.
What's interesting, and maybe scary as well, is that this advantage of the machine over humans is particularly strong when the AI has access to the user's Facebook page, because that allows the AI to personalize the dialogue. That's just now, so you can expect future generations of models to become even stronger, potentially superhuman in their persuasive abilities, and in ways that can disrupt our democracies. They could be much stronger than what we've seen with deepfakes and static media, because now we're talking about personalized interactive connections between AI and people.
I trust that most large organizations that develop these models make some efforts to ensure that they are not used for malicious purposes, but there are currently no regulations forcing them to do so anywhere in the world—well, I guess China is leading on this—and models, especially when they are open-sourced, such as Meta/Facebook, can easily be modified by malicious individuals or groups to change those models.
For example, they would be stronger at persuasion, helping more to build bombs, perpetrating all kinds of nefarious actions and providing information that can help terrorists or other bad actors. In the absence of a regulatory framework and mitigation measures, the deployment of such malicious capabilities would certainly have many harmful consequences for our democracy.
To minimize these pitfalls, the government needs to do a few urgent things. We need to pass Bill C-27, in particular to label AI-generated content. We need privacy-preserving authentication of social media users so they can be brought to justice if they violate rules. We need to register the generative AI platforms so governments can track what they're doing and enforce labelling and watermarking.
(1655)
We need to inform and educate Canadians about these dangers to inoculate them with examples of disinformation and deepfakes.
[Translation]
Thank you for this opportunity to share my perspectives. This is an important exercise. Artificial intelligence has the potential to generate considerable social and economic benefits, but only if we govern it wisely rather than endure it and hope for the best. I often ask myself: will we be up to the scale of this challenge?
I'd like to thank the witnesses for being here. It's appreciated that you have taken time out of your schedule, especially when we're dealing with something so important.
I'm going to focus my time on Mr. Kolga. I've reviewed your testimony.
I'm not sure whether our two witnesses were online for the last round. Can you just give me a thumbs-up or thumbs-down for yes or no? No, you were not. Okay. I apologize.
I'm going to cover some ground that I covered during the last round.
Mr. Kolga, I'm going to present a few things that I think are probably fairly straightforward. I've reviewed your prior testimony and I've reviewed the testimony of a former colleague of ours, Kenny Chiu, on foreign interference. To me, that is one of the central issues of misinformation and disinformation.
Obviously, it's my view—and I hope you will agree, Mr. Kolga—that the government has what we would say in law is an obligation to prevent electoral interference. That obligation starts from the bottom, at Elections Canada and any other enforcement agencies, and goes straight to the top to the Prime Minister.
I'm not trying to trap you or anything. To me, that's really trite.
The reality is that most of us worked very hard to get here. It's not easy. Therefore, when there is interference in an election and there is uncertainty as to whether or not it affected the election—I'm talking about both at the national level and at the riding level—that's going to be a win for any hostile state actors that are intervening. Is that correct?
Clearly, there was foreign interference in the election in Steveston—Richmond East. The government knew about this and did nothing.
At what point should the government be acting? Should it not be acting immediately and informing Canadians by shining light on this, because that is the best disinfectant—the best antidote, if you will—against electoral interference and the misinformation that comes with it?
I should note that at DisinfoWatch, we were made aware of these efforts about two to three weeks into the writ period, and it took us maybe four or five days to analyze the data we received from various different sources, including sources within the Chinese community in Canada. We produced an election alert, which we posted to our website about a week and a half before election day. I think that as an independent civil society organization, we have the ability to act a bit more nimbly and quickly.
Now, did that alert prevent the interference from happening? No. It had already happened at that point. I don't know whether we were able to have any impact. However, I think that in future elections, the government and the organizations being set up inside of that to monitor elections should be much quicker to report instances of interference.
There were very simple instances that could have been reported quickly, including the Global Times piece that was published in Chinese state media, attacking the Conservative Party and its leader, Mr. O'Toole. That could have been reported very quickly.
In future elections, we need to improve those timelines to get that reporting out there. Any instances of interference and efforts to interfere in our elections should be reported much more quickly than they were during the last election.
Well, certainly. I mean, now we have 11 people who have been identified as either wittingly or semi-wittingly having been assisted by hostile states. We don't know their names. If light is the best disinfectant—we have been slow to react when it comes to foreign interference, and I think we can agree on that—are we not just perpetuating the exact same pattern of the government?
This goes right to the highest level, because the decision is with the Prime Minister. Are we not just perpetuating that more and more by failing to act in this instance right here, right now?
I would agree. I think it's at all levels. For quite some time, I've been advocating greater transparency when it comes to collaboration with foreign governments, especially authoritarian foreign governments. I think Bill C-70 will go a long way to helping with that, so yes, I would agree with you—
I apologize for cutting you off. I just want to end with this quote from Kenny Chiu.
The Chair: Please be quick, Mr. Caputo.
Mr. Frank Caputo: This is what he said about foreign interference on April 30, 2024:
...what I heard during the hearings shook that a little bit because it looks like there are some Canadians who are more valuable and worthy of protection than others.
I hope every Canadian, regardless of what party they run with, is worthy of protection in the future.
Mr. Kolga and Mr. Bengio, we have a short period of time, six minutes, for questions here. Oftentimes members will reclaim their time or maybe cut you off before you're able to answer, but it's the member's time. They're just trying to maximize that. Please don't take it personally.
I have Ms. Khalid next, for six minutes. Go ahead, Ms. Khalid.
Thank you to the witnesses for being here today and for your opening statements.
One thing that I think is quite clear with misinformation, disinformation, hate speech and its role by state actors and non-state actors is the point that the strength of any country or any state is its people. That whole war of using misinformation and disinformation to create disruption, create agitation and undermine the democracy of a state like Canada is troubling. It leaves a lot of the communities that are impacted very vulnerable.
I'll start with you, Mr. Kolga. You mentioned local influencers being used to amplify messages. You used Russia as an example. I'll say that we've also seen stories and articles of Indian influencers who have called upon the Government of India to put money into political parties here in Canada to ensure that a certain political party wins the upcoming election, for example. We've seen Russian bots trying to influence a certain political party and its perception here in Canada as well.
How do we regulate that? How do we hold people to account while also maintaining the sensitivity around local communities that become victims on both sides of the situation, ultimately?
We have Bill C-70 coming into place now. Ultimately, it comes down to ensuring that it is implemented properly to ensure that we are enforcing the legislation that's already in place.
I mentioned during my opening remarks that Canadians were working directly with RT. The U.S. DOJ indictment about that case clearly indicated that Canadians received payment into Canadian bank accounts. The timelines that were presented in that indictment would indicate that those payments were made during a period when RT was under Canadian sanctions. Services were delivered to RT and payment was made from RT to Canadian bank accounts. That, to me, would indicate a potential violation of our sanctions legislation.
My question would then be this: What is the Canadian government doing about that? Is there an investigation into sanctions violations? Are we going to enforce the legislation that we already have in place? If we don't do that, that will send a message to all of these foreign regimes that engage in foreign influence operations and information operations that it's the Wild West: They can do anything in this country.
We need to begin right now by enforcing the current legislation that we have in place.
I will continue down that line. What is the responsibility of political parties in this? For example, the Conservative Party was recently targeted with Russian bots to claim that many people had attended a rally that happened, etc. What is the accountability, then, or the responsibility of the Conservative Party, just to use them as an example—there are examples across all parties on this—to make sure that they're not victims or targets, whether wittingly or unwittingly, of this kind of interference?
There has been research into the instance that was mentioned and the allegation about Russian bots supporting the Conservative Party after a specific rally. In fact, that has been proven to be untrue. Those bots, according to researchers at McGill University, were experimental; a student was just playing around with bots and happened to latch on to that specific case. Any sort of blame that has been laid or any allegations regarding the Conservative Party, or even Russians, in that case should be discounted. That did not happen.
Of course, I think all parliamentarians have a moral obligation to reject any sort of disinformation and misinformation and to not engage in the use of it. More needs to be done in terms of educating parliamentarians. I strongly support annual training for all parliamentarians and their staff on misinformation, disinformation and influence operations to know what they look like so that they're able to detect them when they see them and run across these sorts of narratives. That's very important.
I've long advocated the creation of a committee inside Parliament where representatives of each of the major political parties would be in attendance. They would get briefed on a weekly or biweekly basis on emerging narratives so that everyone was on the same page and to avoid anyone mistakenly or inadvertently amplifying some of these narratives.
Those are a couple of fairly simple steps that we can take to help parliamentarians. I do believe that most parliamentarians—I would say all parliamentarians—want to do the right thing and fulfill that moral obligation in not amplifying these sorts of narratives.
Thank you very much for that. I really appreciate your clarification.
I'll turn to Mr. Bengio for a quick second.
You spoke about the use of artificial intelligence. There was a recent article talking about deepfakes of Justin Trudeau used for passive income ads. There's a video—
Those tools are very easy to use; a kid could do it on their laptop. You don't need to be a state actor to use deepfakes. Some of the things I've been talking about regarding using bots for persuasion are more advanced, but as everything gets more advanced, it's going to be easier for people who are not even state actors and terrorists to use them.
Mr. Bengio, thank you for being with us for the second time. We had an unfortunate experience the first time. It's always a pleasure to talk with you.
Our study focuses on the effects of disinformation on parliamentarians. Could you tell us about the risks parliamentarians face in relation to disinformation, misinformation and so on?
My expertise lies in artificial intelligence, so I'll confine myself to that aspect.
As I said earlier, people can now make deepfakes using very easy-to-use software. In the case of public figures for whom it's easy to obtain images or voice samples, it becomes easy to imitate them. You can reproduce their voices very convincingly. As for video, it's not always perfect, but it's becoming more and more effective.
What also worries me is that these systems are leading us in a direction where we'll be able to impersonate someone we know interactively. It's like someone committing phone fraud. Using artificial intelligence, they pretend to be someone else, and the caller on the other end of the line actually believes they're talking to the person in question. So you could receive a phone call from a supposed political leader and think it's really him.
All this is developing. So, we absolutely must put in place regulatory safeguards to minimize the risks and be able to prosecute people who, under the cloak of anonymity, cheat on the Internet.
One of the important things to do would be to better monitor systems that can be used for dangerous purposes. As I said, these systems cost hundreds of millions of dollars. The companies that make them should be obliged to make statements to the government, as well as to show what they're doing to prevent their systems from being used for purposes that are dangerous to democracy, such as the situations we're concerned about today. For example, they should show what kind of tests they carry out. Civil society should also be able to take a look at all this. That's a minimum.
As I mentioned, there would also be things to do regarding the way social media are organized. There are technologies that would keep users anonymous, but allow the government to find them if they were doing something illegal. Today, the government doesn't have that option. However, companies won't voluntarily use this kind of technology, because it creates friction when creating user accounts, and they don't want to put themselves at a disadvantage compared to other companies. If governments decided to do something like this, we'd create a level playing field for all companies, and that would be good for society in general.
Stills on the artificial intelligence front, are there rogue actors or rogue countries that are more likely to use artificial intelligence for bad purposes?
The country that is most advanced in artificial intelligence, after the United States, is China. It has a very large critical mass of researchers and companies, not to mention military or national security organizations, in particular, which can do all sorts of things and have a lot of resources.
However, it's not just this kind of player that's worrying. There are also smaller players who can use software like Meta's Llama, which is available online. They can use all these cutting-edge systems without anyone knowing. They can even adapt these systems so that they are specialized to carry out a task that is dangerous for democracy, or even humanity.
A lot of people are working on this. There are systems that try to do fact-checking, but they're not perfected yet. I think that today, we have to depend on human beings to do this.
But this is the kind of research we should be funding. States, together, should invest in developing artificial intelligence in such a way that it is beneficial to democracy rather than harmful. But it's not necessarily profitable, so it should probably be up to governments to build a defence system against attacks from evil actors using artificial intelligence.
Given the advances in artificial intelligence, particularly in lie detection, which is an aspect I'm very interested in, can we consider that privacy should become a public good that we should protect?
It's a choice we can make. I think this choice has been made in Europe. Here, we're moving in that direction. I don't really want to take a position on that. There are pros and cons, and it's not all black and white.
What worries me more are the dangerous uses of these systems. What worries me is that today we're legislating on the basis of systems that exist today. This is a mistake, because researchers in artificial intelligence companies are working on systems that will be released in one, two or three years' time. But developing laws and regulations takes time. So we need to be proactive, think things through and try to predict where artificial intelligence will be in two, three or five years' time.
One of the privileges we have as members is to engage with subject matter experts. Mr. Bengio, I know that you are certainly that.
I have some questions following up from my good friend Mr. Villemure, as we seem to be on the same wavelength.
Given the increasing accessibility of technologies capable of producing deepfakes, as you referenced, in synthetic media, what specific regulatory measures would you recommend to safeguard democratic institutions in Canada from the potential weaponization in spreading misinformation and disinformation?
Unfortunately, there is no silver bullet, so it's going to be a lot of little things.
Unfortunately, a lot of the power to reduce those risks is in the hands of the Americans, so it could be their federal government—or California, these days.
However, I think there are things that the Canadian government can do.
First of all, one of the most important things is that those companies that are building those very powerful AI systems need to do tests—which the U.K. and the U.S. AI Safety Institute, for example, are helping with—that try to evaluate the capabilities of the system. How good is the AI at doing something that could be dangerous for us? It could be generating very realistic imitations or it could be persuasion, which is one thing we haven't seen used that much yet, but I'd be surprised if the Russians are not working on it using open-source software.
We need to know, basically, how a bad actor could use the open-source systems that are commercially available or downloadable in order to do something dangerous to us. Then we need to evaluate that, so we basically force the companies to mitigate those risks or even prevent them from putting out something that could end up being very disruptive.
I think you referenced that because it's not necessarily a commercially viable research track, nation states are going to have to invest in this to give immunity. I think about this, in some ways, as a form of national defence spending.
What opinion do you have, if any, around the possibility of creating international regulations? For example, there could be treaties that would deal with specificity around AI and would provide some kind of international pressure or culpability should there be evidence of state actors.
I'll let you answer that first, and then I'll ask my follow-up question.
It turns out that I'm very much involved in these kinds of efforts in the international community. I'm chairing an international panel modelled more or less after the IPCC, but for AI safety. I'm also involved with the UN and the OECD on discussions about harmonization and coordination of AI regulation and treaties.
There's still a long way to go. It's also important that each country moves forward.
We have a proposed bill here in Canada, and we should do our share. It's very similar in spirit to some of the things that the Americans have done with the executive order or what the Europeans are doing with their AI act.
Then we need to play a very important role on the international scene. Canada, being a medium power, is in a way less threatening than the U.S., which also has very strong commercial interests. I think we can agree more easily with the Europeans, for example, and even with developing countries that also have issues with the way things are progressing.
I think we can also really play an important, positive role in the geopolitical battle that's coming along between the U.S. and China, which is not making things easy for finding international solutions.
I hear often that we have a digital axis of evil. We talk about China. We talk about Russia. We sometimes talk about India.
Is it not safe to say that all countries are engaging in this in some form or another and that it's often a matter of degrees of rhetoric about what is an influence campaign and having international influence, versus international interference?
Going back to this idea of treaties and of internationally recognized law being established, you've mentioned the United States and Europe. Are they not also involved in this? If so, are the regulations they're putting forward only considering the short-term and domestic interests?
I think there are ethical red lines. We do have legal protections that don't exist in other countries, so there are differences. You're right. Everyone's trying to use technology to their advantage in different degrees.
We should make sure that we have more transparency in how these tools are used, whether it is by corporations or by state actors. Our governments eventually might be tempted to use AI in order to influence their own people. We need to have guardrails against that as well. Of course, we need to protect against state actors who are clearly intent on destabilizing our democracies.
You have to think that in a few years from now—maybe a decade, I don't know—we're going to build machines that are going to be as smart as humans. At least it's very plausible, and the majority of AI researchers think we're going to get there. How is that going to be used? There's a chance that there's going to be an abuse of that power by whoever controls these machines.
Mr. Kolga, when you appeared before the procedure and House affairs committee back in November of 2022 as part of its study on foreign election interference, you gave testimony. I asked you some questions about Beijing's disinformation campaign in Steveston—Richmond East. Since then, a lot of new information has emerged from the public inquiry and in other fora, including this committee.
Last week, the office of the Commissioner of Canada Elections tabled a report at the public inquiry into foreign interference in federal electoral processes and democratic institutions. The report confirmed that the PRC targeted former MP Kenny Chiu in Steveston—Richmond East as part of a disinformation campaign designed to drive voters away from him, using fairly sophisticated efforts, including through the amplification of such disinformation on social media platforms.
Madam Justice Hogue, in her initial report back in the spring, noted that, “there is a reasonable possibility that these narratives could have impacted the result in this riding.” In other words, the disinformation was significant enough that it could have tilted the balance in terms of the outcome of the election in that particular riding, resulting in the defeat of Mr. Chiu.
I find that alarming. Do you? It seems to me that it should sound the alarm on how serious the threat of foreign interference from hostile foreign states, including the PRC, is to our democracy and the integrity of our elections.
I have no doubt that that operation did impact the result.
As to whether it tilted the result one way or the other in any significant way, we don't know that. There's no doubt that it had some form of impact.
What to do about this in the future? Again, I will go back to my statement earlier. We tried to raise the alarm about that operation as it was happening or shortly after we observed it happening. This is something that needs to be done in the future.
I'm not sure that it's the government necessarily that has a role in doing that. Certainly we should be empowering civil society organizations, those that are monitoring the information space, who understand it and who understand where to look: Empower those groups and individuals who are doing that work so that they can alert Canadians and the Canadian media to those sorts of efforts when they're happening. That will help to build resilience and strengthen our democracy.
Well, you indicated that perhaps it's not the role of government, but there were structures and processes in place at the time of the 2021 election.
I would note that in May of 2021, an IMU, an issues management note, was sent to the Minister of Public Safety that specifically noted that Kenny Chiu's riding was of high interest to the PRC. That IMU went into a black hole. The minister claims he never saw it, even though it went to him, Minister Blair, his chief of staff and the deputy minister of public safety.
Following the Global Times article of September 9, 2021, the rapid response mechanism at Global Affairs Canada highlighted the disinformation that was being spread. That was provided to the election committee that had been set up by the government to counter foreign interference and to bring awareness to interference activities.
Through it all, Kenny Chiu was left to drown in a sea of disinformation. Not only did he drown; in fact, there is evidence that the Liberal Party actually amplified the disinformation being spread by the Beijing-based regime. Not only is it the case that the PRC intervened in the Steveston—Richmond East riding; one of the major political parties, namely the Liberal Party, actually amplified the disinformation, perhaps resulting in the defeat of Kenny Chiu.
Would you agree that the system failed Kenny Chiu and failed the voters of Steveston—Richmond East?
I would absolutely say that there's room for improvement within that system to ensure that when these instances are detected in upcoming elections, they are brought to the attention of those individuals who are being targeted.
That goes beyond elections; that goes to the spaces between elections. Whether it's elected officials, candidates or activists being targeted by information operations, they need to be notified. There is lots of room for improvement.
Thank you to our witnesses for joining us on this very important study.
I'll draw your attention to Bill C-70. Are you familiar with motion 112, which was also introduced in the House? What we're talking about here is information.
Some important recommendations came out of this study. On the misinformation piece, what Mr. Cooper ended with in his question was that there was misinformation. The Liberal Party did not amplify the claims that he's talking about. We've seen evidence of him drawing different conclusions earlier in this study today as well.
However, I will stick to the important matters here—namely, the solutions and what you're talking about in terms of what we can do better. On the issue of information that's being shared, we can look at the Security of Information Act. In fact, a former director of CSIS came into the study and talked about how important it was to use that as an enforcement tool. I talked about motion 112 and sharing information with hostile nations and some agreements we have that already exist and that need to be reviewed. Motion 112 is something I co-authored with MP Dhaliwal.
Can you expand on how important the Security of Information Act is, within your scope of knowledge on that? What impact can the fact that it hasn't been updated for over 20 years have?
I think it will have a tremendous impact if it's properly implemented, and, again, if it's enforced. That update in Bill C-70 to the Security of Information Act will go a long way to helping defend vulnerable communities, activists, journalists and of course parliamentarians who are being targeted with transnational repression. I think that's positive.
There's the update to the CSIS Act and the fact that the update will allow CSIS to now communicate with non-governmental organizations when it's relevant, perhaps to warn individuals and groups when they are being targeted by these sorts of operations and to let civil society organizations know when they detect these sorts of operations, so that we can be better prepared to defend Canadian democracy and society against them. That's a major improvement.
As part of Bill C-70, we also saw the adoption of the foreign influence transparency registry, which will also be critically important. The implementation we'll have to keep an eye on, but this will also be an incredibly important measure in defending against these sorts of operations in the future. I hope that the legislation will be implemented very quickly and that it will be enforced.
I'm going to ask a different question and take a different path.
Do you think there needs to be more vigorous background checking of people who are interested in running for office and who may have had previous relationships with foreign entities and foreign countries?
For example, in 2020, the current member for Calgary Heritage helped produce a controversial report by former CBC reporter Terry Milewski alleging that Pakistan secretly created a Sikh separatist movement. This report was then amplified by official Indian government accounts. Later, the same member led the Macdonald-Laurier Institute's partnership with the New Delhi-based Observer Research Foundation, an Indian think tank set up with funds from Indian oil giants, and is now a member.
Do we need to look at some relationships members like this may have had with previous governments that are now showing hostilities and interfering domestically?
Absolutely. That process of vetting any candidates at all three levels of government should be far more rigorous. I don't think anyone would have anything against greater transparency as part of that process. I would completely support that.
I'd like to thank you for asking me this question, because I don't think we look far enough into the future.
We have to understand that human intelligence knows no absolute limits. It's almost certain that we'll be able to build machines that will surpass us in many areas. We can't know for sure whether this will be in a few years or a few decades, but we need to be prepared for it.
What I find perhaps most worrying is that this means that those who will control these systems will have immense power, whether they be states, companies or others. I mention this because today we're concerned with protecting democracy. We're going to have to set up safeguards to make sure we don't have too much power concentrated in one place, whether it's in the hands of one person, a company director, any other organization, or even a government. The greater the possibilities of these systems become, the more important the question of governance will become.
It's a bit like creating entities or a new species whose intelligence might surpass our own. It's a very dangerous thing. We need to exercise control over this to ensure that artificial intelligence remains a tool, and not something that could compete with humans. We're talking about something much further away in time, but people at companies like OpenAI and Anthropic think it could happen as quickly as five years from now. So we need to start worrying about it today.
We indeed have a legislative role here today. What can we do right now, in addition to Bill C‑27, so as to understand and meet these challenges of the future that are rushing towards us?
I think we need to give incentives to companies to develop better protections. The government needs to fund research so that artificial intelligence systems tend towards public protection. Earlier, we were talking about fact checking, for example. We need to design tools that will help our intelligence and national security communities protect themselves against attacks that could come from other countries using artificial intelligence. With the help of our international partners, we need to develop ways of making artificial intelligence secure.
However, all these issues are more a matter for governments than for the companies that make gadgets. The latter want to maximize their profits. They're not in it for the collective good. They compete with each other to sell us as many things as possible, which is perfectly normal in our market system. However, this means that the responsibility lies with governments. They're the ones who have to deal with it.
Mr. Bengio, I've heard folks who are familiar with AI downplay some of the comments around what the potential is.
You've touched on some things, and I want to talk about technological singularity, or the idea that there is a point in time in the future when technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences.
What are your thoughts on that? Is that an overstatement of the possibility, or is that something we should be aware of?
Let me put it this way: Nobody has a crystal ball.
The AI researchers, I think, as you alluded to, disagree among themselves about the different scenarios, so the rational way of thinking about this is that there are different scenarios. Some are incredibly, fantastically good, and others are terrible. People talk about human extinction and many things in between.
The responsibility of public policy here is to invest, to make sure we see through this fog better as we move forward and to make sure that we avoid the catastrophic cases of risks of upending our democracies or even creating monsters that could turn against humans. For all of these, there are computer science arguments explaining how it could happen.
You can certainly send them to us, but I have about a minute and 30 seconds left, so I want to ask about some specificity, given your subject matter expertise.
We spoke about the international obligation Canada has. I want to now turn to domestic regulation. I believe government has a role to play. There's not a free market answer to this, because obviously free markets will incentivize some of the worst basic behaviours.
With AI, ethics and accountability, how can we develop an ethical framework for AI developers to ensure accountability when AI technologies are used to propagate disinformation, particularly in a context that can impact democratic processes?
I think we can look at what has been proposed recently in California, where the way to make sure companies are going to behave well with these systems is through transparency and liability. These are really great, because the government doesn't need to specify what they have to do. Because of transparency, now they are showing more of what they are doing in terms of making sure those systems aren't going to be dangerous, and they want the public to look at them positively. Now with liability, they have to be honest about potential harms that they could create with those systems and that third parties could create.
It's not that one of these companies is going to do something directly to harm people, but if they make it easy for a terrorist to do something or to create a monster, as I said, which may have huge costs for society, they should understand that they're going to be financially responsible for that.
To provide the witnesses with some context about a situation that unfolded over the last week, on September 22, CTV News aired a segment about a confidence vote that was going to come before the House, and in that segment CTV deliberately misrepresented the comments of the leader of His Majesty's loyal opposition. This is incredibly serious. We're talking about CTV News: They're owned by Bell, a media giant in this country.
What happened is shocking. CTV News spliced together different parts of the Leader of the Opposition's comments to create a false impression. First of all, that created a false statement, something that he never said, but the intention was to create a narrative that the opposition day motion was not about having a carbon tax election—which it was about—but was instead about opposing the Liberals' dental care program, as opposed to being about the carbon tax.
This isn't a situation in which there was an error, a misunderstanding during the editing process or some kind of technical issue. This isn't something that can be communicated away. This was very clearly an effort by a media company, a news organization, to manipulate the statements of the Leader of the Opposition on the eve of a confidence vote in the House of Commons, in a minority parliament. We're talking about misinformation here, and the need to trust, and whom we can trust.
We worry about what we see online, but here we have CTV News. We all know what CTV News is. They created a statement and spliced together multiple sentences to say something that the leader of the opposition did not say. Conservatives were calling for a carbon tax election; they made it about something else. How damaging is this to Canadians' confidence in trusted sources if they can't trust that a major news outlet will just simply report on what's actually being said by the leader of the opposition, but instead deliberately edit a clip to have him say something that he never said?
I'm an expert on foreign information and influence operations, not on CTV's editorial policies, but I agree that it's very important that Canadians be able to trust their media sources, especially in these times, when trust is declining in media.
Again, I'm an expert on foreign influence and information operations, not on these sorts of domestic situations.
I think we have a real problem in our country. We're looking outside to examine the effects of interference, misinformation and disinformation, and this is an example of disinformation. This is something that demonstrably did not occur, that they edited together.
We talk about deepfakes. They didn't use AI; they used an editing suite to create a lie, and it's shocking. It's really low tech, actually. It's low-tech domestic disinformation.
I think that the fact that this organization is heavily subsidized by the Trudeau government and then took to undermining the Leader of the Opposition on the eve of a confidence vote tells us something very scary about interference in our democratic institutions by the powerful who favour Justin Trudeau.
I want to thank both of our esteemed witnesses for not only being here today and their testimony, but for being patient with our committee, because I know you've been here before and we had votes and some things going on. Thank you so much for being here today.
On artificial intelligence, foreign interference, misinformation, disinformation and deep fakes, in the last hour, the witnesses were talking about trust lost in media and the lack of critical thinking skills. People just don't know what to believe or whom to believe and whom to trust. I had a conversation with a neighbour one time, and she told me she no longer watches news at all, whether she sees it on social media or whether she sees it on TV. This is kind of heartbreaking. She consciously goes out of her way to not watch any news because she's been duped by misinformation, disinformation or malinformation in news.
In the last hour, we asked our witnesses to give us some recommendations. How do we get back to a place where people in my constituency can feel that they can trust the news again?
I would ask Mr. Bengio first, and understanding, Mr. Kolga, that this may not fall within your exact level of expertise, I would value your opinion as well.
I am going to say the same thing I said earlier that I told the U.S. Senate a year ago, which is that we need to stop—with our international partners, but we can do our share here—the practice of anonymous social media accounts that allow foreign interference, that allow even AI to be using thousands of accounts in a way that makes it difficult not just to trace them and take them down but also eventually to send to jail the people who are doing things that go against our laws.
The reason they're not asking what your bank asks when you open an account is that they don't want to make it difficult for you to create an account, because they make money on having more people. They want to make it easy and they compete with other companies. If we had laws, it would be technically possible to protect privacy so that other users wouldn't know necessarily who you are, but the government with the appropriate mandate could.
There are a number of researchers in the world who are thinking about how to do that. There are technical solutions, and we should go in that direction.
I would only add that we should look to Europe. We should look to the Digital Services Act. It is very effective in Europe—I wouldn't say “very effective”; it is effective. It is a step in the right direction. Europe is making progress in terms of holding these social media companies to account, specifically for the content that's posted on their sites. We need to basically replicate that in this country.
We should also look to our European allies like Finland, which has done an extremely good job of ensuring that future generations do have the digital media literacy skills they need in order to enable that sort of critical thinking. They inject digital media literacy into every course in every year within the school curriculum. It's not just one course a year or one class a year; it's throughout the entire education of a child, from kindergarten to high school. They are learning about digital media literacy. This is something that we should also be doing. We need to start disrupting these sorts of activities, especially when it comes to foreign influence and information operations. We need to figure out ways to disrupt these activities.
Our European allies are doing this. We need to look to them again and learn from them how they're doing it and replicate those efforts here. If we're not disrupting these operations, if we're not holding to account those who are collaborating with these foreign regimes, then we're not going to move towards deterrence of them.
We have 20 seconds left, Ms. Khalid. I looked over at Mr. Fisher.
I do have a question I want to ask, so I hope you'll indulge me here.
First of all, let me say, Mr. Kolga and Mr. Bengio, that despite the technological problems that we had in the past, this was worth the wait. You provided some valuable information to the committee.
Whether we like to think so or not, sooner or later we are going to have an election in this country. The election is set for almost a year from today or sooner.
I want to hear from both of you, first with respect to foreign interference, and then, Mr. Bengio, with respect to artificial intelligence on the concerns that political parties and Canadians should have, and maybe some warning signs as we head into the next election.
What are some of the things that we may be seeing down the pipe, if you will?
Mr. Kolga, I'll start with you, and then we'll conclude with you, Mr. Bengio.
Again, we have a smoking gun that came out of the U.S. with the DOJ indictment. Ten million dollars was used to create a new platform, with the help of Canadians, to try to influence Canadian and American discourse. We see it happening. It's not a question of something that is going to happen at the time when the writ drops. It's already happening right now.
I think one of the problems we have in this country is this belief that these foreign authoritarian regimes only activate themselves when there's an election. They don't. China and Russia are sophisticated. They engage in these sorts of operations well in advance of any election. It's 24-7 for them.
The Russian government, for example, spends $3 billion per year on these operations. We're not even close. Even if you combine all NATO countries and their resources in terms of spending to challenge and defend against these operations, we're nowhere close. We see it's happening.
I think we need to step back and acknowledge the fact that it's not just at election time; it's a full-time operation. How are we going to defend against that?
From an artificial intelligence perspective, now and into the future during an election, what are some of the things we should be concerned about as we head into an election cycle, Mr. Bengio?
We have to worry about the technology that already exists that can be used to create deepfakes of various kinds and imitate people, their voices, their visual appearances and their movements. I think we need to start preparing against tools on the horizon that could be coming out in six months or something like this.
Again, AI is not a static thing. It's getting better as new systems and companies are coming up with new ways of training it that make it more competent.
I'm going to go into a little bit of a technical thing here, which is that once one of these very large systems that cost over $100 million has been trained, it's fairly cheap to take it—especially if it's open source—and do a little bit more work to make it really good at one particular task. This is called fine tuning.
You could imagine, for example, that the Russians might be taking Facebook's LLaMA. They might make it run on social media and interact with people to see how well it works, and then they might take that data in order to make the system even better at convincing people to change their political opinion on some subject.
As I said earlier, there are already studies showing that GPT-4, as it stands, is already better than humans, but only slightly, especially when it has access to your Facebook page. However, it can get a lot worse without any new scientific breakthrough, just with a bit of engineering of the kind that it could easily do.
What that would mean is that they can now unleash bots that would be talking to potentially millions of people at the same time and trying to make them change their opinions. It's a kind of technology that we haven't seen, or maybe it is already happening and we're not aware of it. It could be a game-changer for elections in a bad way.