Skip to main content

ETHI Committee Meeting

Notices of Meeting include information about the subject matter to be examined by the committee and date, time and place of the meeting, as well as a list of any witnesses scheduled to appear. The Evidence is the edited and revised transcript of what is said before a committee. The Minutes of Proceedings are the official record of the business conducted by the committee at a sitting.

For an advanced search, use Publication Search tool.

If you have any questions or comments regarding the accessibility of this publication, please contact us at accessible@parl.gc.ca.

Previous day publication Next day publication
Skip to Document Navigation Skip to Document Content






House of Commons Emblem

Standing Committee on Access to Information, Privacy and Ethics


NUMBER 134 
l
1st SESSION 
l
44th PARLIAMENT 

EVIDENCE

Tuesday, October 22, 2024

[Recorded by Electronic Apparatus]

(1550)

[English]

     Good afternoon, everyone. I'm calling the meeting to order.

[Translation]

    Welcome to meeting number 134 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.
    Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Tuesday, February 13, 2024, the committee is resuming its study of the impact of disinformation and misinformation on the work of parliamentarians.
    I would like to welcome the witnesses we have with us for the first hour of the meeting. Both are participating by video conference.

[English]

    I would like to welcome, first of all, Mr. Jacob Suelzle, who is a correctional officer federally.
    Mr. Michael Wagner is online with us today. He's a professor and William T. Evjue Distinguished Chair for the Wisconsin Idea at the University of Wisconsin-Madison.
    Mr. Suelzle, you'll have up to five minutes for your opening statement, followed by Mr. Wagner.
     I apologize for the delay. We've had some speeches in the House that have precipitated this delay.
    Go ahead, please, Mr. Suelzle.
    I was asked here today to speak about my experience with misinformation from Correctional Service Canada. I am and have been a correctional officer with the CSC since 2007. This opportunity gives me a chance to address issues I've experienced and observed, and which are experienced by correctional officers all over the country who are not able to speak or to draw attention to these issues because of fear of retribution and punishment.
    It is widely known amongst correctional officers that the message of the state of our penitentiaries, as represented by Correctional Service Canada, is a very inaccurate representation of what is happening in our prisons. Violence in our prisons is one of the most pressing issues faced by correctional officers. The levels of violence within a prison are at a level I've never seen in my career. The violence against correctional officers is at a level I never would have assumed it could or would ever be allowed to get to. I classify this as a measure of misinformation because the issue is so discarded by the service. Officers are often ridiculed by management for reporting assaults, and are often coerced to not document or write reports regarding these assaults.
    Correctional officers understand that working in Canada's federal prisons comes with inherent risks, but the injuries they incur are widely discredited through many means, not the least of which is the general Correctional Service Canada's refusal to allow assaults and threats against correctional officers to be documented and reported through occupational health and safety procedures. Incidents that are documented seldom result in any change in routine or procedure that would alter the likelihood of these happening again.
    Correctional officers struggle against the service while trying to recover from injuries sustained at work. They are pressured to suck it up, to grow up, to not report and to not miss work after an injury or an incident. It is a general cliché that rings true within prison that someone has to die before a safety concern regarding protection from inmates is actioned. Life-threatening incidents and murders are the generally accepted threshold for taking a situation seriously. Why do I classify this as misinformation? Because a picture is painted by the CSC that does not take this reality into account, and by doing so further belittles the struggles of those on the front line in prison.
    An initiative like the needle exchange program is an easy example of a response to a very inaccurately presented problem within prisons. Canada's prisons are filled with drugs. Correctional officers across the country will unanimously agree that the only change in the amount of narcotics in prison year after year is the increase. Substances that were seldom seen are now so prevalent that they draw little to no attention when confiscated. Officers have become proficient in administering naloxone to overdosing inmates, sometimes multiple times a shift. Needles within Canada's prisons have always been a rare piece of contraband to find. They were, generally, crudely made and ineffective. Probably for this reason, drugs within prison are very seldom used intravenously. In prisons, drugs are smoked or snorted. The introduction of the prison needle exchange program has and is introducing an injection drug problem that did not exist in our prisons. The service's rhetoric that this is a harm reduction measure is actually creating a new problem that we on the front lines had never had to deal with.
    Besides the introduction of injecting drugs in prison, we are also presenting weapons to a violent inmate population and creating an economy for these needles to be used and distributed through the populations. The use of medium and maximum security inmates unsupervised outside of a perimeter fence, and the terms used to get around policies that would not allow these to happen, are standard practice. Terms like “perimeter work clearance”, “on-site TAs”—temporary absences—or “positions of trust” are often used at sites for inmates who are not eligible for forms of release into the community or away from security measures. Inmates in these positions often introduce contraband into the institution, most commonly in the forms of drugs and cell phones. Memos are often written, directing officers to not perform regular search procedures on these inmates once they return to the institution, because of their positions of trust or exempt status. Inmates on perimeter exception are constantly using these opportunities to visit community restaurants, coffee shops like Starbucks, etc., and, of course, to introduce contraband and participate in other security compromising activities, including escapes.
    It is demoralizing and insulting to frontline correctional officers to see the organization they work for misrepresent their workplace and the dangers they face, and further contribute to those dangers by not properly responding to issues, fostering a culture that does not allow accurate reporting and minimizing the physical and mental injuries often incurred in this environment.
    That's my opening statement.
(1555)
     Thank you, Mr. Suelzle.
    I go to Mr. Wagner. Mr. Wagner, you have up to five minutes to address the committee. Go ahead, sir.
    I'm happy to offer some general thoughts about misinformation and misinformation correction before answering your questions to the best of my knowledge and experience.
    My name is Michael W. Wagner. I have a Ph.D. in political science from Indiana University. I'm the William T. Evjue distinguished chair for the Wisconsin Idea, and a professor in the school of journalism and mass communication, where I direct the center for communication and civic renewal at the University of Wisconsin-Madison.
     It's well established that Russia's Internet Research Agency, or IRA, operated thousands of Twitter accounts, posing as individuals, to weigh in on political discussions on social media in the United States and other countries, including Canada. Beyond driving some social media conversations witnessed and engaged with by users of social media platforms like Twitter—now called X—these IRA accounts also found their way into legitimate news coverage, being quoted as examples of the person on the street, further amplifying IRA messages about issues like support for Russia's war with Ukraine. This greatly amplifies the reach of its messages, as more people consume legitimate news sources than use social media to learn about and discuss politics. It also increases the likelihood that lawmakers could be affected by IRA posts, as research also demonstrates that parliamentarians use legitimate news sources as a way to read public opinion—something lawmakers can then choose to use in their own decision-making calculus about how to represent their constituents.
    In terms of another aspect of misinformation online, it's useful to think about what factors are most associated with inaccurate content going viral and spreading widely and quickly. Posts with more emotional resonance are more likely to get shared online. Posts published at times that people are habitually more likely to be on social media make things go viral as well. Perhaps most importantly, key influencers in politics and the news media sharing or spreading that information are often critical amplifiers to virality.
     In terms of misinformation correction, fact checks can work to help people come to believe things that are verifiably true. Labelling stories as a fact check tends to motivate audiences to think about the accuracy of information, while they're consuming it. People willing to admit what they don't know are also more likely to benefit from fact checks. However, fact checks come at a cost, the cost of people believing that the fact checkers are biased, which could affect long-term, trusting relationships the audience has with more legitimate news sources.
    Another promising strategy to correct misinformation on social media is the use of a strategy called "observed correction". Rather than engaging with the person making a misinformation or disinformation claim, simply correcting the claim without focusing on the person and linking to the accurate information is useful. Research shows that observational correction occurs when seeing misinformation shared by others being debunked on social media. It reduces misperceptions or beliefs in misinformation among the audiences witnessing the exchange, even if it doesn't affect the opinion of the person who created the false post to begin with. This strategy is shown to be more effective in some circumstances than pre-bunking misinformation, and there's some evidence that logic-based interventions perform better than fact-based interventions as well.
    I'm happy to answer questions about these factors or other factors related to misinformation and the health of democracies.
    Thank you.
(1600)
     Okay. Thank you both for being under time. That will allow for more questions.
    We have six-minute rounds, starting with each party.
     I'm going to begin with Mr. Caputo.
    Mr. Caputo, you have six minutes. Go ahead.
    Thank you, Professor Wagner and Mr. Suelzle, for being here. I appreciate it.
    There's no disrespect meant to you, Professor Wagner, but I am going to focus my questions on the other witness. That's not to deny your qualifications or insight in any way, shape or form.
    Mr. Suelzle, to be clear here, you're representing yourself. You're not representing your union or anything like that. Is that correct?
     That's correct.
     I'm going to run a few scenarios by you. Before we do that, what I'd like to delve into is the culture of Correctional Service Canada.
    Correctional Service Canada reports to parliamentarians through the Minister of Justice and is accountable. In fact, we have the commissioner appear at committees. When we're talking about misinformation and disinformation, particularly as it relates to parliamentarians, in my view, this is actually quite germane.
    Would you say that CSC has what I would call—these are my words—a culture of secrecy? What would you say about that?
     I think that would be a fair statement to make.
     Can you elaborate on what you've seen, generally? We don't want you to breach confidentiality, or anything like that. However, how does the culture of secrecy manifest itself, and what are the consequences?
     My experience has been that the service is inclined to answer questions to the bare minimum, in order to not expose themselves to anything that would portray them in a way counter to how they want to be looked at by parliamentarians and the public.
     How do they want to be looked at, in your view, by parliamentarians and the public?
    In my view, they want to be regarded as progressive. I think they would very much prefer, to be honest, to be left out of any spotlight they can. Certainly, it's to be viewed as a progressive organization at the forefront of changing the perception of corrections.
     How does CSC deal with anything negative?
     In my experience, they don't, or they quickly clamp down on those who are drawing attention to any negativity.
     In those situations, when we in Parliament are trying to evaluate how well our correctional system is operating—this is just a comment—it seems as if we don't get the unvarnished truth.
    Now, I'm not going to ask you to comment on this specific scenario. I did some work and produced a video on it. Correctional Service Canada put out a press release after somebody escaped. The release said the person escaped from “institutional property”. The person was in medium security. For those who don't know, that means two very large fences surround the institution. It's the same as maximum, in fact. It's very difficult to escape from. The service put out a bulletin saying he escaped from the grounds of medium security. That sounds fine. However, the information I had was that this person was actually allowed to be outside the fence. That was a pretty material omission, in my view. If you say that someone escaped from medium security, I picture them jumping over two razor-wire fences about 20 feet apart and 12 feet high. That's a pretty material omission. It's kind of like what we sometimes see in Parliament—telling half the truth. “It's true. He was on the grounds.”
    Can you comment not specifically on this case but on whether this type of thing is a surprise, given your experience?
(1605)
     In my experience, that is absolutely no surprise. Those are more common than I think most people would ever believe.
    In that case, would the service be trying to...? I'm just inferring this. It doesn't want to say this person was actually outside the fence.
    You don't have to answer that. I'm just saying that it seems they don't want to acknowledge that this person was outside the perimeter fence and simply walked away. Yet, parliamentarians and the public get a half-truth.
    I also exposed Paul Bernardo's situation. I said to the public, very clearly, that I came “eye to eye” with that offender. CSC came out and was quoted as saying that I had no interactions with that inmate. Yet, I hadn't said I had any interaction. They framed it as though I was lying about it.
    Does that type of reaction from CSC surprise you, given your experience?
    Based on my experience, it doesn't surprise me at all.
     Jail has an underground economy.
     Is that correct?
     Yes.
     There's value in drugs, weapons, information and cellphones. Everything has value.
    Is that right?
    That's correct. We phrase it this way: “It's what makes jail go round”—that underground economy.
     I have—

[Translation]

    Mr. Chair, I have a point of order.
    I'm having a bit of trouble seeing the connection with disinformation as it pertains to parliamentarians. I don't often bring this up, but it seems to me that we're completely off topic.
    Thank you for your comment, Mr. Villemure.
    This study is about the impact of disinformation and misinformation on the work of parliamentarians. I think Mr. Caputo is going to bring it back to that.

[English]

    You have a minute left, Mr. Caputo. Go ahead, please.
     Candidly, we're talking about how a government agency communicates with the public and parliamentarians. I don't know what could be more germane to misinformation and disinformation.
    I will move forward.
    The government has spoken with parliamentarians and the public about needle exchanges.
    What kind of picture have they painted about needle exchanges? I have about 10 or 20 seconds left. Could you answer that in about 15 seconds, please?
     It's that it's a harm reduction measure.
     What kind of danger are you seeing? Is that being communicated to parliamentarians?
    I don't believe that's being accurately communicated at all. The dangers we're seeing are the introduction of weapons into the facilities and the feeding of this underground economy. We're creating elements to the economy that we haven't dealt with before.
     Mr. Caputo, we're out of time in this round.
    Ms. Khalid, you have six minutes. Go ahead, please.
     Thank you very much, Chair.
    Thank you to the witnesses for appearing today.
    Professor Wagner, if it's okay, I'll start with you.
    I'm sure you've been studying this for the past couple of years now. It's been awhile that the Conservative Party leader has been attacking mainstream media and journalists with the intention to mislead and make Canadians believe that the news networks they have trusted for many years are no longer trustworthy.
    Can you comment on the danger this presents to the state of Canada's information ecosystem and our democracy?
    There's a long line of research in the study of political communication that researchers call “blaming the referees”. It's a strategy that political elites can often use to try to diminish trust in verifiably accurate news sources.
    There's a distinction to be made between news sources that have things like corrections policies and that punish journalists when they get facts wrong versus other organizations that also sometimes frame themselves as being news organizations, but are primarily opinion organizations.
    When it comes to the more trusted places—the places where they're engaging in what we would think of as more legitimate journalism, which doesn't mean they're always right, but it means that they correct themselves when they're wrong—it's a danger to diminish trust in those organizations without evidence.
    A lot of times, a strategy that political elites use is to try to diminish the amount of trust in mainstream news sources. The purpose of that diminishing trust is then to not face consequences from voters at the ballot box, as an example, for behaviours that lawmakers or others may engage in.
(1610)
     I appreciate that.
     I will tack on the second half of that question to my next question.
    On inflammatory language and material, you spoke about this in your opening remarks with respect to emotions getting more engagement than facts. We've seen dog whistles, etc., by political parties generating a lot of engagement. That engagement does lead to ad revenue as well.
    Is it fair to say that media platforms are actually benefiting financially when and if they allow this dissemination of misinformation and disinformation on their platforms?
     I think it's fair to say that social media platforms benefit from that, and some news media platforms benefit from that as well. It partially depends upon the major purpose of the platform and why people use it.
    If it's a place where people are trying to learn what's true, there might not be as much of a financial benefit as for those who might seek out a news source because that source tells them they are right and the other side is wrong.
    There are advantages to following that kind of model, but not for all media in a blanket way.
    I appreciate that.
    I'll go back to my original question, partly.
    Is there any correlation between a person's distrust of mainstream media sources and their participation in the democratic process? If so, how?
     There is, in some ways. People who distrust mainstream sources tend to be more supportive of political violence as an avenue to exact political preferences. Many people who are distrustful of mainstream news sources are also highly participatory. In fact, many people who believe in conspiracy theories are extraordinarily knowledgeable and have very low trust in political institutions like mainstream news media sources or elements of government.
    There are lots of different ways that one's distrust of news could foster participation. Sometimes it encourages more violence, but often it encourages more participation in voting, political donations, posting online and things like that.
    I appreciate that.
    You spoke about the role that western social media platforms plays.
    Do you think there is an obligation for these social media platforms with respect to the algorithms and how information is disseminated, whether it is truthful information, misinformation, disinformation or hate speech?
    Do you think that social media companies have a responsibility to control how those algorithms are impacting what an individual Canadian is seeing within their feed and how it's impacting their participation in the democratic process?
     There are responsibilities that social media platforms have when it comes to the unfettered amplification of things that are known to be false. Those can often be very dangerous and have violent consequences, or other consequences that might be political consequences but are related to believing in things that are verifiably not true. Much of politics operates in a grey area where some things that are said are true and some things are not, so it can be dangerous for social media platforms to regulate with too heavy of a hand and stifle speech.
    When it comes to de-amplifying statements that are known to be verifiably false, social media platforms have an obligation, in my view, to not try to share that information widely with their users.
    You spoke a little bit about fact-checking as a method of controlling misinformation and disinformation and how it's disseminated. You also talked about the bias as to who, exactly, is fact-checking. I've seen some people, if they're sold on an idea, who Google something and find 20 articles countering their idea. All they need to do is to find that one article to confirm what their belief is.
(1615)
    Ms. Khalid, could you finish up quickly, please.
     Absolutely.
    How do you think that plays into what kinds of regulations and partnerships governments need to have with social media companies in that dissemination of information?
    I need a very quick response, please.
    That's a very difficult question to answer, because it's very difficult to figure out the volume there. It's something that platforms need to discuss with regulators, but I don't have a quick answer to that question, I'm sorry to say.
    Okay. Thank you, sir.
     I'm just wondering if the witness can, perhaps, think about it and give us a written response to that question.
     I'll deal with that at the end of the meeting, like we typically do with those requests.

[Translation]

    It is now Mr. Villemure's turn.

[English]

    Witnesses, make sure that you're on the French translation channel, please.

[Translation]

    Mr. Villemure, you have the floor for six minutes.
    Thank you, Mr. Chair.
    Thank you to the witnesses for being with us today. It is a pleasure to hear their very informative comments.
    I'm going to start with Mr. Wagner.
    In general, do social media companies really care about disinformation, or are they just pretending to?
    We know, as my colleague said earlier, that revenue is based on the number of clicks.
    When it comes to disinformation, are their concerns real or just for show?

[English]

     First, I think that "they" is a term.... Social media platforms operate really differently, but, in general, social media platforms vary in how aggressive they are at de-amplifying false claims and amplifying other kinds of claims.
    As an example, in the United States in 2016, a high percentage of things that were not true were exposed to people over Facebook. Facebook was then deeply criticized for that, and in 2020, that percentage dropped precipitously because Facebook engaged in more aggressive diminishment of inauthentic behaviour on its platform. When the criticism died down, it stopped doing that and that percentage is increasing again.
    They're not static in how they engage in these kinds of behaviours. It often is in response to criticism and perceptions of potential regulation.

[Translation]

    You mentioned a little earlier that extreme, banal and trivial content generates more clicks than matters of public interest.
    For the most part, social media has become a vector for entertainment rather than news, even though people claim they get their news from social media. What can we do, as politicians or parliamentarians, to send what I would call a serious message, at a time when people are looking for jokes and entertainment?

[English]

    That's a great question. A lot of it has to do with the audience that individual social media users cultivate. Some people—most people, I think—cultivate audiences around their interests relating to entertainment, in some sort of way, or sports or those kinds of things. Others cultivate audiences based upon commenting on public affairs, sharing evidence about politics or trying to organize and persuade people or to sow chaos in the social media ecosphere. I think it really depends upon what the purpose is, to answer that question.

[Translation]

    As you know, X had a problem in Brazil recently. It was initially blocked there, but then the government unblocked access to the platform.
    Did X change anything to have its access reinstated in Brazil?

[English]

     I'm not directly aware of changes that Twitter made with respect to what happened to them in Brazil. I can say that, in general, they often alter their behaviours in response to governments regulating or fining them.

[Translation]

    As you said a little earlier, when an issue comes up and gets attention, people improve their practices when they're in the spotlight, but then go back to the same behaviour.

[English]

     That at least happened with Facebook. We know to some degree that this happened after the ownership change of Twitter and its renaming to X. There's been a difference in the content moderation behaviours of that platform since then.
(1620)

[Translation]

    Could you give us more details on X's content moderation?

[English]

     There is far less content moderation. Most of the staff who were doing that were either fired or quit after Twitter was taken over by Mr. Musk. Part of it is that—but it's not that there isn't any. There's a feature, I think now called “community notes”, that will sometimes append a post. Usually the thing that has to happen to get to that feature being enacted is that multiple sides of a political debate have to agree that the claim was false. If one side is pushing a false claim, that's not usually enough to invoke the community notes feature.

[Translation]

    What do you think is the worst platform when it comes to disinformation?

[English]

     That's a really hard question. I think probably Truth Social would be the single worst. There's this kind of unfettered access to saying things that aren't true. Very little behaviour from the platform lets the audience know that claims being shared widely are not true.

[Translation]

    For now, I'm going to exclude Truth Social from the equation, because it cultivates users who are mainly from one particular segment of the population rather than the general public.
    Other than Truth Social, which platform is the worst actor when it comes to disinformation?

[English]

     It's hard to say with accuracy, because we don't know the denominator of how many posts are made on all of the different platforms as compared to how many things aren't true. My impression is that X has now become that leader, but I don't know that as an empirical fact.

[Translation]

    Thank you, Mr. Wagner.
    Thank you, Mr. Villemure.

[English]

    Mr. Green, you have six minutes. Go ahead, please.
    Thank you very much.
    Mr. Wagner, can you please state your subject matter expertise on this topic once more for the record?
    Sure. I have a Ph.D. in political science. I conduct research on individual engagement in information ecologies, including news, social media and individual conversation. I look at outcomes related to what people believe to be true, what they want from their government and how they participate civically and politically.
    Mr. Suelzle, can you please state your subject matter expertise on misinformation and disinformation?
     I'll state my experience as a correctional officer since 2007 within the federal penitentiary system.
    So you have no subject matter expertise on misinformation or disinformation? Okay.
    Mr. Wagner, with that being said, who are the primary actors spreading misinformation, disinformation and malinformation? I know you're coming from an American context, but perhaps you might have some insight into the Canadian political system.
    A lot of misinformation and disinformation that is shared in western democracies originates from Russian sources in the IRA, the Internet Research Agency, and other bot farms in different parts of Europe that exist to sow false claims and try to spread them. In the American context, a key spreader of misinformation that also gets a lot of attention is former President Donald Trump, with an account that spreads a lot of misinformation.
    I would say the primary organization that is really attempting to influence elections in Canada and the United States is the Russian IRA.
     Are certain parliamentarians more at risk than others of being targeted and affected by disinformation campaigns?
     It seems that the IRA, or the Russian government, has candidates it would prefer to see win elections and candidates it would prefer to see lose elections. Those it would prefer to see lose have a greater likelihood of being targeted with negative information.
     However, sometimes, candidates in different parties get positive and negative information from these agencies as an effort to just sow chaos and be confusing, which is often one of the objectives of this kind of organization.
(1625)
     Is it fair to say this goes well beyond Russia? I would put it to you that in my experience, we love to oversimplify this as Russian or Chinese, or maybe Indian sometimes, but is it not fair to say there are various state-sponsored Internet propaganda machines being used to spread this?
    I would reference Operation Earnest Voice in the United States of America. You talked about Donald Trump. Obviously, the United States was a prime propagator of the Chinese virus during COVID and a lot of vaccine misinformation and disinformation. Israel, a so-called ally, has the hasbara. It has a whole ministry of strategic affairs that deals with targeting political actors, and I know that's come out in the United States.
    Can you maybe just take a step back, zoom out and talk a bit more beyond just Russia in terms of state-sponsored Internet propaganda machines out there?
     There are lots of state-sponsored efforts to try to influence elections in other countries and in their own countries. There are lots of non-state-sponsored organizations that are also trying to do the same, and do so by trying to spread mis- and disinformation.
    It's certainly not any one actor. If asked to name the primary one, I would say Russia, as I did when I was asked earlier, but in general, there are as many opportunities as there are users of the Internet in some respects.
    What advice would you have, if any, for us on dealing with attempts, times or scenarios that become prevalent when our so-called allies are actively engaged in presenting misinformation and disinformation?
     Do you view it as a national security threat when foreign state actors are actively engaged in spreading disinformation and misinformation?
     That's outside my area of expertise, but as a citizen, I worry about how state-sponsored mis- and disinformation from or toward allies or adversaries.... Yes, I worry about all of it. It's certainly something that, in my view, governments should be talking with each other about, and parliamentarians should be talking with their constituents about it.
    In terms of a risk assessment in national security, how high would you put misinformation and disinformation as a threat to free and fair elections when it comes to western democracies?
    I'd put them much higher than they used to be, at least in the United States context. Mis- and disinformation on social media platforms have been tied to encouraging the January 6 atrocities at the United States Capitol. That's an example of how mis- and disinformation about the results of an election can foment political violence.
    There are also a high number of mis- and disinformation behaviours that occur around elections, such as trying to target particular populations, telling them the wrong day for an election or telling them rules about what they have to do when they vote that aren't accurate, and those sorts of things. Those are all dangers to free and fair elections.
    Mr. Chair, how much time do I have left?
    You're out of time, Mr. Green.
     I'll get to you in the next round.
    That concludes our first round. We are going to have two five-minute rounds and a two-and-a-half-minute round. That will conclude our first panel. We want to try to keep it on time here.
    I'm going back to Mr. Caputo for five minutes. Go ahead, sir.
     Thank you, Mr. Chair.
    Mr. Suelzle, before I begin, this idea of expertise is an interesting one, because sometimes we have.... People can chuckle, but sometimes we have a—
    An hon. member: [Inaudible]
    Mr. Frank Caputo: Okay. I apologize. I thought it was a chuckle to my question.
    People can have different views on expertise, but in my view, 17 years of real-world expertise or real-world knowledge is highly beneficial to this committee, so I thank you for being here.
    Sir, do you fear any repercussions for what you are saying to this committee here?
     I do.
    I should say, do you feel fear?
     Yes, I anticipate some kind of punitive response to this.
     Would you be prepared to let this committee know if there is any sort of response or follow-up as a result of your testimony here today, given that you were invited by parliamentarians and were simply answering questions by parliamentarians truthfully?
(1630)
     I have no concern with doing that.
     I'm going to ask you about gangs in jail, how that is communicated to the public and parliamentarians by the CSC, and what's really happening.
     Do you have any comment on gangs in jail, how that's being communicated and what's really going on?
     The CSC prides themselves in having successfully eliminated gangs from Canadian prisons a number of years ago. I believe it was 2014 or 2015. That is a celebrated accomplishment amongst the brass of the CSC, which was accomplished by changing the name “gangs” to “security threat groups”.
     What does that mean, "security threat groups"? Was it simply just a change in the name and, therefore, we say that no gangs exist? Do I have that right?
     Yes. It was a name change that, out of one side of the mouth, eliminated gangs and, on the other hand, created a very new problem with a very newly named group.
     If somebody were to testify before Parliament and say that gangs are not an issue, but we didn't know that and would have to say that say that "security threat groups" are, that would certainly be misleading in my view.
    I want you to talk about this as well. We are told that in Corrections there's safe, secure, humane control, and people are transferred—when I say "transferred", their security status is transferred down from maximum to medium, or from medium to minimum—in accordance with their risk to public safety and things like that.
    Do you have any comment on that, sir?
     It's a very hot issue amongst correctional officers about what we perceive and believe is the inaccurate security classification of inmates being inappropriately downgraded or de-escalated down through security levels outside of where that inmate is fit to be operating in.
    When we have a maximum security inmate positioned into a medium or minimum security environment, of course that produces all kinds of threats to us and to the public.
     Is this type of thing routinely communicated to parliamentarians and the public?
     It's very rarely communicated to anyone internally. I would be shocked if it were communicated externally at all. It's usually done through the form of what's called "overrides".
     An override is when a manager essentially says, “I disagree with what the computer has spit out, and I'm going to change that on my own.”
    That's a kind of crude way of putting it, but is that accurate?
     Yes, essentially.
     How much time do I have, Mr. Chair, please?
     You have a minute and 10 seconds.
    When somebody is incarcerated, there's a perception that the person is put behind fences, especially in medium and maximum institutions, but people still perceive "minimum", which in fact has no fences, as being behind fences.
    There are times when people have work clearances or are unofficially escorted out of jails, in your experience, contrary to policy. Is that ever communicated?
     Absolutely. That happens on a daily basis, I would suggest, in most institutions across the country.
     Without breaching any confidentiality or anything like that, can you describe a scenario that is common knowledge to somebody in your industry?
     We have inmates serving life sentences who are not eligible for any forms of parole but are given fence clearance to work off site and out of perimeter during the business day.
     Is that communicated to the public or to parliamentarians in annual reports or anything like that?
     No, it's not communicated anywhere.
     I assume that's my time, Mr. Chair?
     It's close to it.
    Thank you, Mr. Caputo.
    Mrs. Shanahan, go ahead for five minutes.
     Thank you very much, Chair.
     I thank the witnesses for being here today.
    Professor Wagner, I find the remarks you've made very interesting, as is the fact that you have been undertaking research in this area, which, we must admit, is fairly new.
     I know that you co-wrote and published a paper in July 2024 entitled, “Slant, Extremity, and Diversity: How the Shape of News Use Explains Electoral Judgments and Confidence”. It introduces measures that capture individuals' partisan slant, partisan extremity and overall diversity of news media used to understand how people interact with contemporary news ecology.
    One of the conclusions—and I don't pretend to have read this paper, but our analysts have, and I thank them for this question—of this paper is that a diverse news consumption style can moderate misinformation beliefs.
    What do you mean by that? What is a "diverse news consumption style"?
(1635)
     “A diverse news consumption style” is using a wide range of sources, some of which will oftentimes include sources that tend to favour a political ideology other than that of the individual of interest. That would be a liberal who also gets some news from conservative outlets or a conservative who also gets some news from liberal outlets. They also engage in a higher number of sources to learn from rather than a smaller number. So, it's the volume, the diversity, and then the slant, the direction, of the information from a political perspective.
     When you say “diversity”, you actually mean in points of view. I was wondering if it actually had to do with types of media, just knowing that watching television news, for example, is a very different experience from looking at Facebook news. Is that a diversity that's important?
     In that particular paper, we don't use social media. Social media aren't news sources. They don't produce news in the way that a journalist does. They share news, and a lot of what gets shared on social media is links to news. That particular paper is looking at news sources: print, broadcast, digital, radio, those kinds of sources. There's some evidence that different sources have different.... Broadcast news, broadcast television, tends to help those with less education learn more about politics, whereas newspapers and digital news tend to help folks with more education learn about politics. So, there are some differences in how people benefit from the different sources they might consume.
     Professor, help me out here. When we think about different news sources and how they would have different points of view.... I consider myself a well-read person. I consume a lot of news. I would think that CNN, for example, is considered liberal. As for The Washington Post, I don't know. CTV I would have considered conservative, but now apparently it's liberal. How do we identify different news sources, or is it even useful to be identifying different news sources that way?
     It's a real challenge. Any way that anyone, including myself, makes those labels is open to criticism.
    One strategy is to use what we did in that paper—a set of scores called the Faris scores that have an array of news sources based upon the ideological orientation of their users. That is one way to measure how liberal, conservative or centrist a source might be.
     Another strategy would be to compare the kinds of sources that news organizations quote to the kinds of sources that lawmakers of different political parties quote, and look for correspondence between those. If a newspaper quoted sources that more Liberal parliamentarians refer to in speeches, that might be an example of that paper's being more liberal, and vice versa.
    All of these, of course, have their problems. None of them are infallible.
     Do you have a recommendation, before my time is over, for what you would consider the most well-balanced news sources or credible news sources?
    Often they are sources that are publicly sponsored. In the United States, it would be National Public Radio and public television, as an example, and the BBC is another. These are often high-quality news sources.
    It's also the case, though, that any government-sponsored source does run the danger of bias that might favour the government, since they are signing the cheques.
    I'd like you to check out CBC/Radio-Canada here in Canada in your next study.
     Thank you, Mrs. Shanahan.

[Translation]

    M Villemure, you have the floor for two and a half minutes.
    Thank you, Mr. Chair.
    Mr. Wagner, I'll go back to you.
    We live in a time when we've become used to living with fake news, when truthiness often replaces truth and when, as you said yourself, people believe that social media produce news, even though they don't. Isn't it too late to counter disinformation?

[English]

     I don't think it's too late. Among the fastest-growing legitimate news sources in the west are fact-checking news organizations. There is an audience for reporting that does more than just say, “Here's what our leaders said,” and asks, “Were these things verifiably accurate, and how do we know that or not?”
    I don't think it's too late, but it's definitely an uphill battle.
(1640)

[Translation]

    To your knowledge, are Canadian laws up to the task of countering disinformation?

[English]

    I am, sadly, not enough of an area expert to speak to that. I wish I could, but that's not my area of expertise.

[Translation]

    In that case, what broad measures would you recommend we take as soon as possible?

[English]

    It would have to do with information that's time-dependent, such as when there is a natural disaster and there's dis- or misinformation about the government response to it, or when there is an election and voting ends at a certain time and there's mis- or disinformation floating around. Things with a time deadline are probably the most important to engage in action about.

[Translation]

    From your perspective as a political scientist, is it not strange that, in today's world, we see the word “lie” every day in the national media and accept the fact that some people brazenly lie without consequence? Does that have an impact on democracy?

[English]

     I think it does. Most folks don't know very much about what's going on in their country and around the world. Most people don't wake up and ask themselves how they will hold their government accountable today. Instead, they rely upon legitimate sources of information to create penalties for lying, penalties such as not being re-elected, censure or sanction from their colleagues or a public embarrassment for things they say that are not true.
    We see a decrease in many western countries with respect to those kinds of behaviours, both from lawmakers and from news organizations, in some respects, over time. As societies polarize and divide, we see an increasing willingness in people to forgive the sins of their side and focus on the sins of the other side.

[Translation]

    Thank you.
    Thank you, Mr. Villemure.

[English]

    Mr. Green, you have two and a half minutes.
    Go ahead.
    Thank you.
     I want to explore some of those concepts, perhaps in English, for the good and welfare of the people who are watching.
    We have had several witnesses appear before this committee in the context of the study to talk about misinformation and disinformation and how it can negatively impact Canadians' trust in public institutions. The current information ecosystem has seen an erosion in the trust that some people have in them, as well as in traditional media. Obviously, the same impacts are happening in the States.
    Have you done research on the impact of misinformation and disinformation on public trust in institutions and traditional media?
    Yes, but not in Canada.
    What are the main take-aways from the American example that we might be able to learn from?
     One is that when ideological sources attack the referees, which is to say they attack legitimate news sources time and time again and then want to rely upon those sources to fact-check things that they care about, they learn quickly that their audience no longer trusts them.
    We've seen very prominent people who've engaged in polarized communication on talk radio and cable television in the United States sow distrust in legitimate news sources, and then, when a candidate from their own party who they didn't like came along and they told their audience, “We can't trust this person because they lie all the time,” their audience said, “But you told us not to trust these sources.”
    When there are no referees, it's very difficult to maintain the integrity of the game or the integrity of the—
     In terms of ideological ecosystems, we know that that continuum ends in a place of ideologically motivated violent extremism.
    We know that in the States, or at least it's been reported—and I'll leave it to you to comment, as a subject matter expert—that the two attempts on Donald Trump's life were in fact from ideologically motivated right-wing extremists. Is that not correct?
     I am confident that at least one of those is correct, and I believe that the most recent reporting I've seen is consistent with your characterization.
    In that space, you quite rightly identified that when this ecosystem of political violence is unleashed in a world that is absent of fact and completely disassociated from basic civil norms, political violence will impact everybody. Is that a safe assumption to make?
    It certainly could, and it's the case that individuals feel it. In survey research that we do, when we ask those who don't participate in politics why they don't, a non-trivial percentage say it's because there are too many dangerous people out there who are often believing some of the things that you're talking about that aren't true, which can often lead to political violence.
(1645)
    Okay.
    In short, that undermines democracy at its foundation if people don't even want to engage because they're afraid of ideologically violent people.
    You can't have a free and fair election if everyone doesn't feel safe to participate.
     Thank you, Mr. Green.
     Thank you, Mr. Wagner. I want to thank you and Mr. Suelzle for being here today.
    That concludes our first panel.
    Going back to what Ms. Khalid had said earlier, I invite both of you to submit to the committee in writing any other thoughts you may have, because oftentimes you'll walk away and think that you should have said this or that. I invite you to do that through the clerk, who contacted you to be part of this meeting today. I would ask, if you're going to do that, doing so by Friday at five o'clock would be a good time to set as a deadline.
    I'm going to suspend for a couple minutes before we move to our next panel.
    The meeting is suspended.
(1645)

(1650)
    Welcome back, everyone.
    We're going to move to our second panel now, and I'd like to welcome our witnesses.
    First, we have Samantha Bradshaw, an assistant professor in new technology and security, who is here by video conference. From The Dais, Toronto Metropolitan University, we have Karim Bardeesy, the executive director.
     I want to welcome you both to the committee.
    Ms. Bradshaw, you have up to five minutes to address the committee. Go ahead, please.
    My name is Samantha Bradshaw. I'm an assistant professor in new technology and security at American University, where I also direct the Center for Security, Innovation, and New Technology.
    For the past eight years, my research has examined questions around how state actors co-opt and weaponize social media for achieving political goals. Some of this research has focused on Russian interference, so I'll spend most of my time discussing this work here today.
    There's no doubt that emerging and digital technologies have expanded the scope, scale, reach and precision of disinformation campaigns. State actors like Russia have learned to use these technologies to reach across their borders and influence individuals in ways that can undermine democracy and the expression of human rights.
    Since 2017, platforms have been taking down multiple campaigns. It's in the hundreds now. We've seen state-backed disinformation campaigns removed by Facebook, Twitter and YouTube. These activities have also been documented across other kinds of platforms, such as chat applications like WhatsApp or alternative platforms like Parler. Disinformation and propaganda on these platforms, of course, are used to influence online audiences in ways that advance Russia's geopolitical ambitions.
    Sometimes they rely on more covert tactics, such as the use of fake social media accounts, bots and online troll forums to spread false information or other harmful narratives discreetly. Other times, they rely on more overt propaganda strategies that come from state-sponsored media outlets like RT and Sputnik, which openly disseminate pro-Kremlin narratives.
    Many of the strategies we see today reflect the longer history of Cold War strategies, wherein Soviet leadership undertook many efforts to alter audience attitudes, opinions and perspectives on events and issues around the world. Back in the day, in addition to promoting overt and attributable content on social media, Soviet entities employed news agencies and sympathetic newspapers abroad, and courted journalists as sources to spread unattributable messages. Today we're seeing a lot of these strategies play out in the development of fake websites and fake journalist personas, the development of front media organizations, and the co-opting of social media influencers.
    Some of my more recent work looks at Russian state-backed media coverage of the Black Lives Matter protests in the U.S. over the summer of 2020. We investigated elements of this Russian-affiliated media landscape and its digital presence. We found that a lot of these front media organizations often developed and tailored content to different segments of English-speaking users. A lot of this content was about playing both sides and emphasizing the racial divides in American politics, with some outlets expressing support for the Black Lives Matter protesters and others emphasizing support for the police and the Blue Lives Matter movement.
    By tracking a lot of the ownership of these media companies, and the movement of staff and journalists affiliated with known Russian news agencies, we found lots of connections in the incorporation, funding and personnel working for media outlets that claim to be independent from the Russian government. While things like editorial independence can of course be subjective, funding and ownership relations are key criteria in any evaluation process.
    A lot of strategies around state media, influencers and front organizations have appeared in information operations in other countries around the globe. This includes countries as far away as those in Africa and across the Sahel states, where I worked on platform data investigating Russian activities there. In those examples, we saw the co-opting of local influencers, who are often paid by Russian actors to generate this veneer of legitimacy around the content being produced and amplified on social media. While the specific goals of any influence operation will vary, many are designed with the intent of disrupting the social fabric of society. In the context of the Sahel, Russian disinformation campaigns often highlighted anti-Western and anti-colonial narratives that fed into localized and generational memory to amplify divides within and across society.
(1655)
     This brings me to my final point that I want to make in my opening remarks about contemporary Russian information operations, and it's that many don't really rely on what we consider, traditionally, to be disinformation. A lot of the more effective campaigns that we're seeing don't rely on false information, things that can be easily fact-checked, but on identity-based disinformation and tropes around racism, sexism, xenophobia or even who we are as political citizens. These tropes, really, are then used to polarize, suppress and undermine our institutions of democracy.
    Thank you, Ms. Bradshaw. The toughest part of my job is having to cut somebody off when they're on a roll, and you were on a roll.
     Honestly, I was right at the end.
     Wonderful—it was an abrupt ending.
    Mr. Bardeesy, you're up for five minutes, sir. Go ahead, address the committee.
    Thank you, Chair, for the opportunity to appear before you and for doing this important work.
    I'm Karim Bardeesy. I'm the executive director of The Dais, a policy and leadership think tank at Toronto Metropolitan University, working on the bold ideas and better leaders Canada needs for more shared prosperity and citizenship. We work in areas of economic, education and democracy policy.
    I'll be drawing on my remarks from two studies we've done recently: one supported by the Privy Council Office's democratic institution secretariat as part of our annual DemocracyXChange summit, and another one supported by the Department of Canadian Heritage's digital citizen initiative, the "Survey of Online Harms".
    I make three points.
    First, the state of the threat of foreign or external misinformation and disinformation is real, ever-changing and points, as Professor Bradshaw said, at specific communities triggering specific identities. Canada's national cyber threat assessment describes online foreign influence activities as a “new normal”, and some of this is difficult to detect. For instance, disinfo and misinfo on private messaging platforms are more likely to reach specific cultural communities or identity groups, and they're harder, by their very nature, to study. The design of these platforms also makes it more difficult for the users, who are concerned that there may be misinfo or disinfo on those platforms...to be flagged for content concern.
    There are also a number of new vectors, and some came to the public's attention only through judicial actions in other countries. Professor Bradshaw mentioned Russian disinfo, so you're probably aware that the U.S. justice department recently charged two employees of RT, a Russian state-controlled media outlet, not for its own content but in a U.S. $10-million scheme to create and distribute content with hidden Russian government messaging. Some of these payments, as you're probably aware, went to prominent Canadian YouTubers, but the extent of this deception was only revealed thanks to the discovery that accompanies criminal proceedings.
    Prominent online actors can also play an important role in spreading foreign misinfo and disinfo. A recent study by Reset Tech shows that Elon Musk's personal engagement with content can amplify, 250 or morefold, the audience that a piece of foreign misinfo or disinfo receives out in the real world.
    Another new vector are deepfakes, again, with some of the old techniques but now fuelled by powerful AI algorithms that are available to many at low or no cost. Our recent study of online harm showed that 60% of Canadian residents said they have seen a deepfake online, with 23% reporting seeing deepfakes more than a couple of times a week. That kind of exposure to deepfakes is correlated with the use of social media platforms like Facebook, YouTube, X, TikTok as well as ChatGPT.
    Second, how do we respond to the threat? It's real, it's coming in multiple forms and those forms are constantly evolving. On this, our report has a number of recommendations for policy-makers and institutions, civil society and individual citizens—and I'll be sure to table that report with you—although I caution this group, your committee, against expecting too much on behalf of citizens to equip them. They need to be equipped with media and digital literacy skills, but the power of these platforms and their ubiquity really require a policy response.
    We at The Dais join dozens of other civil society and research organizations to urge timely passage of Bill C-63, the online harms act. Although misinfo and disinfo isn't an explicitly prescribed harm under the act, misinfo and disinfo helps fuel the harms that are identified in the act, and so we urge timely passage of that.
    Third, I will address misinfo and disinfo, not foreign influence, as it relates to the Canadian media ecosystem generally. How Canadians consume media makes them more vulnerable to some of the...and those consumption trends make them more vulnerable to some of the phenomena that you are studying. We know that more Canadians are getting their news online, specifically from social media, and that fewer are participating in a shared space and consuming information produced by organizations that have strong or identifiable journalistic standards or standards of review, evidence, and context, to begin with. We also know that the effects of recent corporate decisions and policies can make the media ecosystem weaker. For instance, 25% of Canadians get news from Meta/Facebook—which is a source of news according to the Reuters digital study—and 29% get it from YouTube. Well, the recent decision by Meta news to throttle...on Facebook and Instagram, means that, in our study, 41% of respondents say that it has had a negative effect on their ability to stay current with the news.
    Thank you for this opportunity to speak to you, and I look forward to your questions.
(1700)
     Thank you both for your opening statements.
    We're going to do two rounds of six minutes, starting with Mr. Caputo.
    Go ahead for six minutes.
     Thank you very much, Mr. Bardeesy and Professor Bradshaw. I appreciate you both being here.
    Professor Bradshaw, it sounds like you're currently a tenure-track professor in the U.S.
    Is that right?
    Yes, that's correct.
    Just out of curiosity—and this isn't bad or good or anything; I'm just wondering—have you lived in Canada, stayed here during an election campaign or anything like that?
    I have. Even though I live and work in the U.S., I'm actually Canadian.
     Okay, perfect. That's good. You'll be up to date, I'm sure, on a lot of Canadian political happenings.
    You each have your area of expertise and sometimes we get very nuanced areas of expertise, but I always like to ask this question.
    If you could change one thing that would inhibit misinformation and disinformation.... You've talked a lot about social media. If you were responsible in this area for the Canadian government, what would you do first thing tomorrow?
    Professor Bradshaw.
     I think I would be working around issues that have more to do with platform transparency. I say this because I think, especially in the Canadian context, we actually don't have a very good empirical understanding of how the activities we see state actors engage in translate to changes in behaviour, and particularly voting behaviour.
    I think there are some real, measurable consequences of these kinds of campaigns when they, for example, attack activists or female journalists because there's clear political suppression happening, with very measurable consequences that appear in the literature. However, getting somebody to change their mind or alter their voting behaviour.... These kinds of things are very ingrained and embedded in our identities. Being able to actually get access to better data to study social media's immediate effect on those kinds of attitudinal changes over time is really important to enhance a lot of the concerns raised by the field.
    We're starting to develop more empirics and evidence around this, but we need better access to data. That's where I would really start if I wanted to see changes.
    Unfortunately, I don't think there is a silver bullet solution to misinformation and disinformation. There isn't something that we can immediately do to make this problem go away because it's something that's really at the conflux of human behaviour. The real technical design of platforms that might incentivize certain kinds of information to go viral over others is also socially shaped by the people who are interacting with it. It's something that will need a lot more long-term attention.
    I think that starting with the empirics and getting a better grounding and understanding of the causal mechanisms will be really important.
(1705)
    It sounds like we're looking at a 10-year to 20-year project here, based on the way you said it.
    Mr. Bardeesy, please go ahead.
     I definitely endorse every single thing that Professor Bradshaw said. To create that fact base for policy-makers is really important.
    I will maybe answer with two quick points.
    First is a longer term project, which is to have an all-of-system education system response that brings in the media companies and those who are collectively responsible for creating a shared space for debate and factual presentation. That, I believe, is actually a shared responsibility between educators and the media sector.
    Second, I think I'll come back to the passage of the online harms act. Bill C-63 would have a positive effect on some of these phenomena.
    One thing that I'm struck by as a parliamentarian.... I came in partway through this study. You talk about deepfakes and I have my idea of what that is.
    One thing that really concerns me is just how easy it is to create such content and, secondarily, how easy it is to disseminate such content.
    Given those two factors, are we really in a situation where as opposed to prevention, we're really looking at management?
     Yes, but I also think that, in the context of misinformation and disinformation, and coordinated efforts to manipulate election processes, there's a much longer “kill chain”—in technical platform speak—around an information operation.
    In order to get the deepfake on the platform and its going viral, you have to be able to create a fake account. In order to create the fake account, you have to deceive a lot of the internal systems within these platforms. Even though there is an easier ability to create and disseminate deepfake-related content, if we're talking about it in the context of an information operation, we still need to consider the broader life cycle that these operations have to go through.
    A lot of the mitigation measures have nothing really to do with the AI side of things. They still rely much more on the old school IP detection and all of the tricks that platforms play to figure out if this is a real person or a fake account.
(1710)
     Thank you, Professor Bradshaw and Mr. Caputo.
    We're going to go to Mr. Bains for six minutes.
    Go ahead, sir.
    Thank you, Mr. Chair, and to both of our witnesses for joining us today.
    I'm going to start with Ms. Bradshaw. You mentioned tropes and other methods—tactics employed by Russia specifically—and we've recently seen a rise in other hostile actors trying to manipulate western voices.
    Then there's the use of domestic commentators who amplify these tropes.
    Can you describe a little bit how it is being translated here, and then having domestic commentators amplifying these things? How can we combat some of those things?
    Definitely.
    In the more effective disinformation campaigns that I've studied, a lot of them aren't relying on false information. Instead they're drawing on harmful identity tropes. They're using these ideas around racism, sexism and xenophobia to polarize society and to suppress certain kinds of people from participating, or even to incite violence against particular groups or individuals within societies.
    When we're thinking about combatting this kind of identity-based disinformation, it's really a tricky challenge because you just slap a label on something that is sexist on the Internet, and you can't simply fact-check racism away. It's very much a long-term human bias problem, so it's going to take a long-term strategy to manage that.
    Drawing attention to the fact that these are the tactics and strategies of influence operations today is really important. Platforms can do more, particularly on the political violence side of things. When we're going to more extreme and egregious cases—I'm thinking about Myanmar and the coordinated campaigns against the Rohingya population by the government there—where we see violence and even a genocide against a particular group of people, having platforms do appropriate human rights assessments and making sure they have enough content moderators who have a local language understanding and local contextual understanding of any given society is really important.
    You're putting it back onto the platform providers to do a little more work as well.
     I met with the Ukrainian Canadian Congress yesterday. They advocated for bans on Russian state-led media on the Internet. Canada has banned certain media. It has also taken measures to look at other social media platforms—WeChat and TikTok and others like that. I'm wondering if bans are effective and if you can talk a little bit about that.
    Maybe I'll switch to Mr. Bardeesy to engage in the conversation as well.
    Can you shed some light on that? Why are these methods working?
     Banning on broadcast channels is a bit easier than banning Internet sites or online content. That's why we think at The Dais that the online harms act, which doesn't look so much at specific platforms as it does specific behaviour and content, is the way to go, coupled with some of that algorithmic transparency and coupled with some of that availability to have data that Professor Bradshaw mentioned.
     I'll also bring in a piece that we haven't really talked about yet. This entire conversation exists in a context of trust or mistrust. Where there is a trusted messenger, that is where the misinformation or disinformation is more likely to land. Where there's a context of mistrust, then a messenger can fill that vacuum and generate trust.
    I think that's the main concern with some of these propaganda outfits. It's not that people around this table don't see them as propaganda; it's more that there are people who perhaps have lower trust in some of the mainstream media institutions or some of the institutions of society more generally. In Canada the Reuters digital news survey shows that trust in mainstream media and news overall has fallen 20 percentage points over the last few years. It's in that context that some of these actors can weaponize some of those platforms or associate themselves with ideas that are harmful to Canada.
     It's very difficult, both legislatively and in other ways, to actually ban platforms or sites in Canada. Countries have tried to do this, including some western countries and some in the global south with very large populations. We've seen those bans. TikTok is banned in India. There was an attempt to ban X in Brazil. It's difficult enough to ban at the individual outlet level. It's very operationally difficult and may not be meeting the interests of what we're trying to pursue here.
(1715)
    Thank you, Mr. Bains.

[Translation]

    Before giving the floor to Mr. Villemure, I want to make sure the interpretation is working properly.

[English]

    Can I have a thumbs-up from both our witnesses if you heard that in English?
    Good.

[Translation]

    You have the floor, Mr. Villemure.
    Thank you, Mr. Chair.
    Thank you to both witnesses for being with us today.
    You both paint a rather bleak picture of the current situation. I'm going to ask you both the same question, starting with you, Ms. Bradshaw.
    Often, the goal of rogue states, as I will call them, is to sow chaos or division, and disinformation can be one of the tools to do that.
    Is this the beginning of a cognitive war, Ms. Bradshaw?

[English]

     I don't know if it's necessarily a start. I think a lot of these strategies go back and have a very, very long history. A lot of the current Russian playbook for information operations reflects a lot of the Cold War strategies of the past.
    I wouldn't say we're necessarily at the start, but we do need to think about responses that Professor Bardeesy highlighted and rebuild trust to create that cognitive defence against these kinds of attacks.

[Translation]

    Yes, the discussion revolves around trust. Philosophically speaking, trust means you don't need to prove something. Nowadays, fact checkers are in charge of maintaining trust.
    I'm going to correct myself: We aren't just at the beginning of a cognitive war.
    Mr. Bardeesy, are the tools that are being used to sow chaos and division part of a cognitive warfare strategy?
    They may be, but it's really up to us to decide how to respond. As you and Ms. Bradshaw said, it's a matter of trust within a society. We need to increase people's trust in institutions.
    As you know, the political tactic of sowing chaos and undermining trust is being used not only in foreign countries, but also here at home. It's up to us to decide how stringent the measures we put in place should be. We need to have vigorous political debate without it being a means for foreign actors to sow chaos to harm us and make them think they can get away with it.
    A vigorous debate requires us to weigh freedom of expression on the one hand and the public interest on the other. Where would you draw the line between the two?
    At The Dais, we work to train the leaders of the future. One of the ways we do that is by encouraging youth leadership.
    It's up to you, as members of Parliament, to get information from all sources.
    I'm very sorry, but I'm going to switch to English.
(1720)

[English]

     It's for you to model the kind of political space that you want to be in. I believe, from a public policy matter, that these bills, like the online harms act and the foreign interference projet de loi, form effective guardrails and it's incumbent on leaders, not just political leaders but leaders across society, to show the norms for us to have a good debate, but that don't let the foreign actors take confidence in creating more misinformation around the debate we do have.

[Translation]

    Thank you, Mr. Bardeesy.
    Mr. Chair, how much time do I have left?
    You have one minute and forty-five seconds left, Mr. Villemure.
    I'm going to use that time to turn to you, Ms. Bradshaw.
    It's in the financial interest of digital platforms to get people to click. We know that controversy generates more clicks than matters of public interest.
    As a government, we have a duty to protect free speech on the one hand and keep free enterprise going on the other. How can we reconcile these two things?

[English]

     I think this is really the million-dollar question of the century, how do we create business models that are going to support democracy rather than ones that are going to incentivize hate and anger and fear and frustration? I don't have a good answer to that question, but I do want to acknowledge that these things are in tension.
    Coming back to the platform transparency angle, I do think that when we have insight into how platforms make the trade-offs between freedom of expression and other interests, we can better evaluate how they are doing content moderation, whether that's good for democracy or not.
    The questions they are tackling sometimes are very difficult questions that don't have a right or wrong answer. Things and initiatives like the Facebook Oversight Board that are creating a public record of very difficult cases to set precedent for how these decisions are made, I think are really positive steps. I'd want to see more transparency initiatives and more efforts going into those kinds of applications.

[Translation]

    It's also important to note that one person's view of the public interest isn't necessarily the same as another's.
    Thank you, Mr. Chair.
    Thank you, Mr. Villemure.

[English]

    Mr. Green, you have six minutes. Go ahead, sir.
     Thank you very much. I'd like to welcome both subject matter experts. It's nice to have subject matter experts here today, but I do know that you bring with you your own unique political experiences.
    Mr. Bardeesy, I do know you from your previous life. Without putting you on the spot, I am wondering if if you're comfortable talking about your political experience in campaigns at all.
     Yes, I'm here as an expert witness, but I do have that background, both as a political staffer and more recently as a candidate in the provincial election in 2022. Perhaps this is familiar to other members, but as I mentioned in the third part of my remarks about the Canadian media ecosystem more generally, I observed in a very acute way that my campaign—maybe like a lot of local campaigns—just didn't get any local coverage.
    I actually have a more specific question, although I'm sure your local campaign was exhilarating. I want to go back to the Kathleen Wynne election from the outside looking in. There was the emergence of Ontario Proud under Jeff Ballingall, whom we know to be a Conservative staffer and who created Canada Proud and is an owner in the Post Millennial and has been at the centre of particularly egregious examples of misinformation and disinformation. I know there was a fervour, second only to what I would see from this recent iteration of the F*** Trudeau culture that has been created on the far right.
    From your experience, can you speak in whatever way you think appropriate to this discussion about the ways in which you may have watched Ontario Proud use Facebook—I think it was primarily Facebook—and other avenues to spread misinformation, disinformation, and I would say malinformation, if I could.
(1725)
     I was the director of policy and the deputy principal secretary to the Premier of Ontario from 2011 to 2016. I was there for former premier Dalton McGuinty and former premier Kathleen Wynne. I was the platform lead on the 2014 campaign.
    At that time, frankly, these issues were not as pronounced. There was a media ecosystem that was out there. There was online campaigning. People appeared to be getting information from a variety of sources. Mistrust and the toxic authoritarian populism that started to emerge in North America was not as apparent then. Primarily after I left the premier's office, I know that former premier Wynne faced a certain amount of personally-directed hate, as did caucus members, as did people from a variety of political partisan persuasions as elected officials.
    While I wasn't involved in the 2008 campaign, with some of the events that you described, we actually did do a study in 2019 as part of the Facebook ad transparency work that we were trying to bring. We allied with a researcher who was bringing disclosure to whom was being targeted by Facebook ads. That, along with a website called Who Targets Me, is still a project that's under way with our collaborators over in the U.K. and Ireland.
    We did a study at that time, not focused on the provincial election, which prompted, with other efforts, a pretty good Facebook political ad transparency registry to be created. The entity that you described and a number of others with either generic names or names that weren't quite reflective of what they were actually promoting were some of the main buyers of online ads on Facebook in the 2009 campaign.
    The names of the organizations would not tell you, with much specificity, what they were actually about.
    I do know that there was a National Observer story in 2022, I think, that linked phone numbers with people such as Angelo Isidorou, whom corporate filings identified as a writer for The Post Millennial. He made headlines when he resigned from Vancouver's NPA Party after allegedly flashing a hand gesture associated with white extremism; I would say that it was likely a Nazi salute. He has, through these findings, direct connections with Ballingall's company, Mobilize Media Group.
    Can you talk a little about the way there is a threat of third party advertisers and social media, and about the ecosystem of misinformation and disinformation, as it relates to electoral politics, particularly given the consideration that we've just seen in an election in B.C.? We have them in the Prairies, and it's going to be coming to us federally. If you could just comment on that, I have about 45 seconds left.
     There are a number of news outlets, or outlets that appear to be news outlets, that are primed and designed to appeal to a younger demographic. The Buffalo News was mentioned in Commissioner Hogue's initial report. There are a number of outlets.... With the Meta, Facebook and Instagram news ban, there are new ones popping up, and 6ixBuzz is a popular one among my students in the greater Toronto area. I'll observe that there are a number of sites that don't have the kinds of editorial controls you would expect, that masquerade or appear to be news sites, or that have an agenda, which are reaching larger audiences.
     Ms. Bradshaw, I will come to you with that same question in my next round. It looks like I've run out of time.
    Thank you.
    Thank you, Mr. Green.
    That does conclude our first round.
    We're going to do five, five, two and a half, and two and a half. Then we're going to conclude the meeting.
    We're going to start with Mr. Barrett for five minutes.
     Go ahead, sir.
     I'd like to refer to an ABC News article dated October 18, 2023, titled “US says initial independent review shows no evidence of bomb strike on Gaza hospital”. I want to read the first paragraph of this article.
A day after the Hamas-led Gaza Health Ministry claimed Israel had attacked the Al Ahli Arab Hospital in Gaza City, saying some 500 Palestinians had been killed, Israeli and U.S. officials, explosive experts, and President Joe Biden said Wednesday an available evidence shows the destruction was caused instead by a failed Palestinian terrorist rocket launch.
    I'm going to end that quote from that story there. However, I would then like to refer to a tweet on October 17, 2023, posted on X by Canada's foreign affairs minister. It says:
Bombing a hospital is an unthinkable act, and there is no doubt that doing so is absolutely illegal.
    Now, that was viewed 2.7 million times, and I want to draw to your attention the context that existed at the time. That goes back to this ABC article's opening graph that talks about that initial claim, and that initial claim was that Israel had perpetrated an attack on a civilian site, killing 500 civilians, innocents, in that war. That was reported by mainstream news outlets across the west, including here in Canada, and many, if not all, issued some form of correction.
    However, with regard to the tweet that I read to you from Canada's foreign affairs minister, I just read that and wrote it down in the last five minutes, so that tweet is still up, viewed 2.7 million times. Now, how harmful is this? Frankly, it's recklessness from a government verified account. Now, on the platform X, there are different types of verification, and one of them is from a government official or an elected official. This has the gray check mark. Talk about the disruption that foreign or hostile state actors seek to create. It's not necessarily to favour one political ideology or another.
    I think, Professor Bradshaw, you talked about a very contentious time in 2020 in the U.S., the BLM riots, and the same hostile states were sponsoring or fomenting both supporters of BLM but also supporters of Blue Lives Matter, trying to create discord between two groups in a very tense situation, a tense time.
    So, with regard to this example here, do you believe that this type of failure to act after a hot take, which can happen on social media, enables hostile foreign states to create the kind of division that, frankly, we've seen in Canada in the face of this ongoing war in the Middle East?
    I'll start with you, Professor Bradshaw.
(1730)
     I don't remember the fact-checked story or anything like that, so I'm going to focus my response mainly on the role that I think influencers can play in the dissemination of misinformation and disinformation when we're looking at things like COVID misinformation, for example.
    It's really a small number of people who tend to generate the most engagement around the conspiracy theories. Thinking about the role of people who have large audiences on social media, you'll find that there's almost a greater incentive or a greater reason to—you know, with great power comes great responsibility—kind of take more steps to ensure that fair, accurate and good information is going out to audiences. However, forums in particular have not traditionally taken that path. There have been a lot of whistle-blower documents that have shown specific kinds of white lists of accounts from people who are influencers who have large followings and can get past a lot of the moderation systems because it generates engagement.
     For me, I think the problem is really there, and we should not be so worried about a Russian Twitter account that's not generating as much engagement and not reaching mainstream public attention. However, we should also be thinking more broadly about the role of influencers.
    I also think that's why we see a lot of Russian information operations pivoting to hiring and co-opting local voices and people who already have audiences. If we're thinking about policy responses to, you know, generating that kind of trust, building a culture for influencers to do some defending of democracy could be a potential positive route, but you also don't want to be paying them to do that kind of stuff. You know what I mean.
(1735)
     Thank you.
     I'm sorry, Mr. Bardeesy. We didn't have time to go to you.
    Mr. Housefather, you have five minutes in this round.
     Thank you very much.
    Thank you very much to the witnesses for being here.
     Professor Bradshaw, I read the paper that you co-wrote with some others about the 2020 election. One thing that struck me was the way that different platforms, and you looked in particular at X and Facebook, dealt with disinformation related to the election. Could you just walk us through the issue where 30% of the posts were not treated consistently by the platforms and why that was?
     Yes. This was a paper that used a really interesting data set of misinformation narratives that we had identified and reported to the platforms. We then assessed whether or not they did or did not take action against the content that was reported to them and checked by researchers to see whether this was a misinformation narrative or not. Some of the things that explained the differences were really simple technical fixes. For example, if we had reported a narrative of things that were on a certain date, things that were published before weren't necessarily backdated with a label, but things going forward were. We also noticed differences across the different kinds of media. If things were screen grabbed or cut or edited slightly, the automated detection tools didn't always do a great job identifying similar kinds of misinformation narratives. You have to remember that a lot of this kind of content takedown is done automatically by automated systems.
    So there were a couple of problems there, but for the most part, we saw about 70% of the content being enforced. A lot of the decisions to not enforce were relatively arbitrary, based on these small technical problems or problems with the automated systems that I think could easily be fixed.
    [Technical difficulty—Editor] the allegation that there were prefilled ballots. When you had a picture of the prefilled ballot, you would then have the content noted as being misinformation, but you wouldn't necessarily if they put up a video saying this. Is that kind of correct?
     Yes. It really depended on the particular narratives that we were looking at. I don't remember if that was an exact example, but that is a good characterization of some of the differences we were seeing.
     The Americans put together the election integrity type of approach from the different social media companies in 2020. When we're looking at the outcome in Canadian elections, what lessons should we learn that the Americans did right in 2020 or that the social media companies did well as they applied it to the 2020 election? As well, what did they do wrong?
    In terms of really great lessons, I think a real success was lot of the partnerships with academic institutions. It not only led us to be able to detect and brief the public in real time on misinformation narratives that were occurring; it also allowed us to do really interesting and novel research on the back end to audit platform content moderation, something that's really, really difficult to do.
    In terms of things that could be done better, I think there was so much public and far right outlash against those kinds of initiatives, and universities didn't do well at necessarily protecting the researchers and the groups who were doing this really important work. I don't think the platforms really did a lot too to protect those partnerships. They're taking a step back from doing those kinds of collaborations now. I think that was a real, real harm, and something that we could protect and create a more positive multi-stakeholder culture around.
    Do I have any more time, Mr. Chair?
     You have 30 seconds.
    I'll pass that on to my Bloc and NDP colleagues, if anybody wants to take it.
    I actually may take that time at the end, Mr. Housefather.
    That's fine. I'll pass it on to you.
    Thank you, Mr. Housefather.

[Translation]

    Mr. Villemure, over to you for two and a half minutes.
    Thank you, Mr. Chair.
    Thank you, Mr. Housefather.
    Mr. Bardeesy, I'd like to draw on your experience, which you mentioned a little earlier.
    You said that Bill C‑63 and Bill C‑70 were very useful measures for countering disinformation and foreign interference. However, as you know, Bill C‑63 hasn't been passed. Bill C‑70 is not yet in force.
    A federal election is expected in less than 12 months. What can be done in terms of those measures since they may not be in effect by then?
(1740)
    As Ms. Bradshaw mentioned, partnerships between companies and researchers do not depend on bills. Right now, companies can connect with researchers and give them information about their algorithms or any other information that can help to keep the public in the know. That's what's happening right now. You might think that's misinformation, but it's not. The truth is that it's important to be able to have institutions outside government and social networks that people trust or can potentially trust. That's the first thing.
    Second, we need standards. Members seeking election or re-election must be able to tell the people in those ridings that their local media in whatever city are not the enemy. It is very important that, as community leaders, you tell people that media that rely on principles and standards are there for everyone, even if certain people aren't as aligned with those principles and standards right now.
     Just as an aside, Mr. Bardeesy, in Trois-Rivières we have a lot of media outlets. However, since Facebook blocked news access, newsrooms have been struggling. We still have a lot of media outlets, but few journalists. As a result, the news may not be as reliable and people don't trust it as much. This goes a bit beyond the subject before us today, but it is still a problem that we should look into.
    Mr. Chair, I understand that my time is up. I will end on that note.
    Thank you.
    Thank you, Mr. Villemure.

[English]

     Mr. Green, for two and a half minutes, go ahead, please.
     Thank you very much.
    Ms. Bradshaw, you probably listened to my exchange there. I know you're Canadian, so you have context for this.
    My friend Mr. Villemure talked about cognitive warfare. Steve Bannon, chief far-right extremist strategist with connections to Canada's far-right extremist movement, talked about cognitive warfare, in essence, flooding the zone with a word that I can't say because it's unparliamentary.
    Could you comment on the ecosystem of third party political actors and fake news, quite literally these fake online platforms that pop up? You can probably list them, and there are probably a dozen of them that I can think of off the top.
    The online platforms look like news. You talked about Buffalo News. There's also the Western Standard. There's a whole bunch of these far-right extreme outlets. How do average people sift through all of that stuff in an electoral cycle in order to make informed decisions based on the facts?
    First, maybe I'll talk a little bit about the disinformation for hire kind of groups and how there is an industry backing a lot of disinformation campaigns, which reminds us that we should not only focus on political and cognitive warfare but also recognize that a lot of these actors are incentivized. Not just platforms, but the creators of disinformation are incentivized to do so because they can generate advertising revenue or business deals with governments.
    Thinking about policy responses could also think about raising the costs of engaging in these kinds of activities by making them less profitable, taking the typical kind of scam and fraud approaches and applying them to disinformation and these groups that try to generate advertising revenue by creating fake news stories and getting people to visit websites that show them ads. This is all part of the broader ecosystem of challenges.
    When it comes to what citizens should do to navigate this complicated environment, I do think that we don't always give citizens enough credit for the diversity of media they already consume. We don't just get our news from social media. It does play an increasingly important role, but so does what we read in newspapers, what we watch on TV, who our social circles are and who's immediately around us in our community. All of these things play really important roles in shaping our political knowledge and then, therefore, our behaviours.
    When we're thinking about solutions, we can focus and hone in on the social media angle, but we can also think about building more robust social institutions to empower people through other kinds of media to have that diverse knowledge and to be able to generate their own political knowledge and opinions.
(1745)
     Thank you, Mr. Green.
    Thank you, Ms. Bradshaw.

[Translation]

    Before I ask you a question, Mr. Bardeesy, I want to apologize for asking it in English, even though your French is very good.

[English]

    I want to congratulate you for that.
    I have a question. None of this has been touched on today, but we heard from prior witnesses about the impact that artificial intelligence is going to have on the propagation of disinformation and misinformation, so I'm going to give you both an opportunity, in a minute or less, to share your thoughts with the committee on that.
    I'll start with you, Mr. Bardeesy, if you don't mind.
     Sure.
     Kevin Kelly, who was this Internet guru back in the day, said that the Internet is fundamentally a giant copying machine, and AI has the ability to create copies of things at incredible speed, at incredibly low cost and in incredible volume.
     I'm specifically concerned about AI-generated audio content versus visual content. There's some evidence that audio content is harder to discern as being a deepfake. One thing to be aware of as we prepare people around this issue is having a specific line of inquiry about audio-related deepfake content.
    I also commend to this group Bruce Schneier, a Harvard Kennedy School researcher who has these great little articles on 16 ways that AI can be useful or interesting for democracy. It's not all bad. There may be some specific uses around the edges of AI that could help us and could help you do your job, but it is definitely an area of concern.
    Thank you.
     Ms. Bradshaw, go ahead.
     I will plus one everything Professor Bardeesy has said.
    For me, the greatest challenge with AI is the way we talk about it in the public and in the media. Coming back to the idea of trust, when we're constantly telling people that they can't trust anything they see, read or hear, we're not creating resilient citizens who are able to then effectively participate in society. Going forward, it will be really important to be able to create digital literacy programs that don't build too much overt skepticism and that need to not trust anything that we read, see or hear anymore.
    We do need some level of trust, so I'm all in there.
    That's interesting, because one of the things we have heard is exactly what you've talked about: critical thinking and digital literacy. Finland has been used as the model for education in grade school. I suspect, though I can't preclude any conclusions or recommendations of this committee, that may be a big part of what we provide by way of a recommendation going forward.
    I want to thank you both for being here today. I invite you, if you have any afterthoughts, to submit them to the clerk, because oftentimes, as I said earlier, you walk away and you're sitting there in bed at night and you think, “Ah, I should have said this,” so I'm giving you that opportunity to provide that to the committee. If you could do so by five o'clock on Friday, that would be helpful. We like to have deadlines at this committee.
    For the sake of committee members, before I conclude, I want to let you know that Thursday we have TikTok, Google, Meta and X coming for this study, so prepare your questions.
    That's it for today. Thank you, everyone: our technicians, the clerk, the analysts and our witnesses.
    The meeting is adjourned.
Publication Explorer
Publication Explorer
ParlVU