:
I call this meeting to order.
Good afternoon, everyone.
Welcome, everyone, to meeting 135 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.
[Translation]
Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Tuesday, February 13, 2024, the committee is resuming its study of the impact of disinformation and misinformation on the work of parliamentarians.
I'd like to welcome today's witnesses.
[English]
From Google Canada we have Shane Huntley, senior director, threat analysis group, who's joining us by video conference, and Jeanette Patell, who is the director of government affairs and public policy, Canada.
From Meta Platforms Inc., Rachel Curran is here. She is the head of public policy for Canada. We also have Lindsay Hundley, the global threat intelligence lead, who is appearing by video conference.
From TikTok, we have Steve de Eyre, director of public policy and government affairs for Canada, and Justin Erlich, who is the global head of policy development. They are appearing by video conference.
Also, from X Corporation, we have Wifredo Fernández, who is the head of government affairs, United States of America and Canada.
I welcome you all to the committee for this very important study. As you know, you all have up to five minutes to address the committee.
I will start with Mr. Huntley. Mr. Huntley is online. You can go ahead, sir. You have five minutes to address the committee.
:
Thank you very much, Mr. Chair.
Members of the committee, my name is Jeanette Patell. I'm responsible for government affairs and public policy at Google in Canada.
[English]
I'm pleased to be joined remotely today by my colleague Shane Huntley, a senior director of Google's threat intelligence group.
Earlier this year, as part of our ongoing commitment to protect elections, Google created the Google Threat Intelligence group, which brings together the industry-leading work of our threat analysis group and the Mandiant intelligence division of Google Cloud.
Google Threat Intelligence helps identify, monitor and tackle threats ranging from coordinated influence operations to cyber-espionage campaigns across the Internet. On any given day, TAG, the threat analysis group, tracks and works to disrupt more than 270 government-backed attacker groups from more than 50 countries. It publishes its findings each quarter. Mandiant similarly shares its findings on a regular basis and has published more than 50 blog posts this year alone, analyzing threats from Russia, China, Iran, North Korea and the criminal underground. We have shared some of our recent reports with this committee, and Shane will be happy to answer your questions about these ongoing efforts.
Google's mission is to organize the world's information and make it universally accessible and useful. We recognize this is especially important when it comes to our democratic institutions and processes. We take seriously the importance of protecting free expression and access to a range of viewpoints. We recognize the importance of enabling the people who use our services to speak freely about the political issues most important to them.
When it comes to the integrity and security of elections, our work is focused on three key areas. First and foremost is continuing to help people find helpful information from trusted sources through our products, which are strengthened through a variety of proactive initiatives, partnerships and responsible safeguards. Beyond designing our systems to return high-quality information, we also build information literacy features into Google Search that help people evaluate and verify information, whether it's something they saw on social media or heard in conversations with family or friends.
For example, our About This Image feature in Google Search helps people assess the credibility and context of images they see online by identifying an image's history and how it has been used and described on other web pages, as well as identifying similar images. We also continue to invest in state-of-the-art capabilities to identify AI-generated content. We have launched SynthID, an industry-leading tool that watermarks and identifies AI-generated content in text, audio, video and images. On YouTube, when creators upload content, we now require them to indicate whether it contains altered or synthetic materials that appear realistic, which we then label appropriately.
We will soon begin to use C2PA's Content Credentials, a new form of tamper-evident metadata, to identify the provenance of content across Google Ads, Google Search and YouTube and to help our users identify AI-generated material.
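To illustrate the general idea behind tamper-evident provenance metadata, here is a loose sketch in which a keyed signature binds a claim about a file's origin to a hash of its bytes. This is not the real C2PA format or API: the standard uses certificate-based signatures and a much richer manifest, and the fields and shared key below are simplified assumptions for demonstration only.

```python
# Conceptual sketch of tamper-evident provenance metadata, in the spirit of
# C2PA Content Credentials. NOT the actual C2PA schema: an HMAC with a shared
# demo key stands in for the certificate-chain signatures the standard uses.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real C2PA relies on X.509 certificates

def make_manifest(content: bytes, generator: str) -> dict:
    """Bind a claim about the content's origin to a hash of its exact bytes."""
    claim = {"generator": generator,
             "content_sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "signature": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()}

def verify(content: bytes, manifest: dict) -> bool:
    """Tamper evidence: editing either the bytes or the claim breaks verification."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = manifest["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash

image = b"...image bytes..."
m = make_manifest(image, generator="example-ai-model")
print(verify(image, m))         # True: provenance intact
print(verify(image + b"x", m))  # False: content altered after signing
```

The property the sketch demonstrates is the one the testimony describes: a consumer of the media can check where it came from and whether it has been modified since the claim was made.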
When it comes to our own generative AI tools, out of an abundance of caution we're applying restrictions on certain election-related queries on Gemini and connecting users directly to Google Search for links to the latest and most accurate information.
The second area of focus is working to equip high-risk entities, like campaigns and elected officials, with extra layers of protection. Our advanced protection program and Project Shield are free services that leverage our strongest set of cyber protections for high-risk individuals and entities, including elected officials, candidates, campaign workers and journalists.
Finally, we focus on safeguarding our own platforms from abuse by actively monitoring and staying ahead of abuse trends through the enforcement of our long-standing policies regarding content that could undermine democratic processes.
Maintaining and enforcing responsible policies at scale is a critical part of how we protect the integrity of democratic processes around the world. That's why we've long invested in cutting-edge capabilities, strengthened our policies and introduced new tools to address threats to election integrity. At the same time, we continue to take steps to prevent the misuse of our tools and platforms, particularly attempts by foreign state actors to undermine democratic elections.
The Google Threat Intelligence teams, including the threat analysis group founded by my colleague Shane Huntley, are central to this work. They often receive and share important information about malicious activity with national security agencies and local law enforcement, as well as our industry peers, so that they can investigate and take appropriate action.
Maintaining the integrity of our democratic processes and institutions is a shared challenge. Google, our users, industry, law enforcement and civil society all have important roles to play, and we are deeply committed to doing our part to keep the digital ecosystem safe and reliable.
We look forward to answering your questions and continuing our engagement with this committee as you study these important questions.
:
Thank you for the opportunity to have us appear before you today.
My name is Dr. Lindsay Hundley, and I am the global threat intelligence lead at Meta. My work is focused on producing intelligence to identify, disrupt and deter adversarial threats on our platforms. I've worked to counter these threats at Meta for the past three years, and my work at the company draws on over 10 years of experience as a researcher focused on issues related to foreign interference, including in my doctoral work at Stanford University and during research fellowships at both Stanford University and Harvard Kennedy School.
I'm joined today by Rachel Curran, the head of public policy for Canada.
At Meta, we work hard to identify and counter foreign adversarial threats, including hacking and cyber-espionage campaigns as well as influence operations—what we call coordinated inauthentic behaviour, or CIB. Meta defines CIB as any coordinated effort to manipulate public debate for a strategic goal in which fake accounts are central to the operation. CIB occurs when users coordinate with one another and use fake accounts to mislead others about who they are and what they are doing.
At Meta, we believe that authenticity is a cornerstone of our community. Our community standards prohibit inauthentic behaviour, including by users who seek to misrepresent themselves, use fake accounts or artificially boost the popularity of content. This policy is intended to protect the security of user accounts and our services and create a space where people can trust the people and communities that they interact with on our platforms.
We also know that threat actors are working to interfere with and manipulate public debate. They try to exploit societal divisions, promote fraud, influence elections and target authentic social engagement. Stopping these bad actors is one of our highest priorities, and that is why we've invested significantly in people and technology to combat inauthentic behaviour at scale.
The security teams at Meta have developed policies, automated detection tools and enforcement frameworks to tackle deceptive campaigns, both foreign and domestic. These investments in technology have enabled us to stop millions of attempts to create fake accounts every day and to detect and remove millions more, often within minutes after creation. Just this year, Meta has disabled nearly two billion fake accounts, and the vast majority, over 99%, were identified proactively.
Our strategy to counter these adversarial threats has three main components. First, there are expert-led investigations to uncover the most sophisticated operations. Second, there is public disclosure and information-sharing to enable cross-societal defences. Third, there are product and engineering efforts to take the insights derived from our investigations and turn them into more effective, scaled and automated detection and enforcement.
A key component of this strategy is our public quarterly threat reports. Since we began this work, we've taken down and disclosed more than 200 covert influence operations from 68 countries that operated in 40 languages, from Amharic to Urdu to Russian to Chinese. Sharing this information has enabled our teams, investigative journalists, government officials and industry peers to better understand and expose Internet-wide security risks, including ahead of critical elections.
We've also shared detailed technical indicators linked to these networks in a public-facing repository hosted on GitHub, which contains more than 7,000 indicators of influence operations activity across the Internet.
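As an illustration of how an outside defender might consume such a public repository of indicators, the sketch below loads domain-type indicators from a CSV and flags URLs whose hosts match. The file name and column layout here are hypothetical, not the actual schema of the published repository.

```python
# Minimal sketch: consuming a published list of influence-operation indicators.
# "indicators.csv" and its "type"/"value" columns are hypothetical stand-ins.
import csv
from urllib.parse import urlparse

def load_domain_indicators(path: str) -> set[str]:
    """Collect domain-type indicators from a CSV of published IOCs."""
    domains = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("type") == "domain":   # hypothetical column names
                domains.add(row["value"].lower())
    return domains

def flag_url(url: str, bad_domains: set[str]) -> bool:
    """True if the URL's host matches a known indicator or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in bad_domains)

# Usage sketch:
# iocs = load_domain_indicators("indicators.csv")
# print(flag_url("https://news.example-frontsite.com/story", iocs))
```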
Before I close, I'd like to touch on a few trends that we're monitoring in the global threat landscape.
To start, Russia, Iran and China remain the top three sources of foreign interference networks globally. We have removed nearly 40 operations from Russia that target audiences around the world, including four new operations in just this past quarter. Russian-origin operations have become overwhelmingly one-sided over the past two years, pushing narratives to support those who are less supportive of Ukraine.
Likewise, China-origin operations have evolved significantly in recent years to target broader, more global audiences, including in languages other than Chinese. These operations have continued to diversify their tactics, including targeting critics of the Chinese government, attempting to co-opt authentic individuals and using AI-generated news readers in an attempt to make fictitious news outlets look more legitimate.
Finally, we've seen threat actors increasingly decentralize their operations to withstand disruptions from any singular platform. We've seen them outsource their deceptive campaigns increasingly to private firms. We are also seeing them leverage generative AI technologies to produce higher volumes of original content at scale, though their abuse of these technologies has not impeded our ability to detect and remove these operations.
I would be happy to discuss any of these trends in more detail.
I want to close by saying that countering foreign influence operations is a whole-of-society effort, which is why we engage with our industry peers, independent researchers, journalists, government and law enforcement.
Thank you so much for your focus on this work. We look forward to answering your questions.
:
Good afternoon, Mr. Chair and committee members. My name is Steve de Eyre. I'm the director of public policy and government affairs for TikTok Canada. I'm joined today by my colleague Justin Erlich, the global head of policy development for TikTok's trust and safety team. He's joining virtually from California.
Thank you for the invitation to return to your committee today to speak about the important issue of protecting Canadians from disinformation. The topic of today's hearing is important to us, to the foundation of our community and to our platform.
TikTok is a global platform where an incredibly diverse range of Canadian creators and artists have found unprecedented success with global audiences; where indigenous creators are telling their own stories in their own voices; and where small businesses like Hamilton's DSRT Company, Mississauga's Realm Candles, and of course Smiths Falls' McMullan Appliance and Mattress are finding new customers, not just across Canada but also around the world.
Canadians love TikTok because of the authenticity and positivity of the content, so it's important, and in our interest, to maintain the security and integrity of our platform. To do this, we invest billions of dollars into our work on trust and safety. This includes advanced automated moderation and security technologies and thousands of safety and security experts around the world, including content moderators here in Canada. We also employ local policy experts who help ensure that the application of our policies considers the nuances of local laws and culture.
When it comes to misinformation and disinformation, TikTok takes an objective and robust approach. To start, our community guidelines prohibit misinformation that may cause significant harm to individuals or society, regardless of intent. To help counter misinformation and disinformation, we work with 19 independent fact-checking organizations to enforce our policies against this content. In addition, we invest in elevating reliable sources of information during elections, during unfolding events and on topics of health and well-being.
We relentlessly pursue and remove accounts that break our deceptive behaviour rules, including covert influence operations. We run highly technical investigations to identify and disrupt these operations on an ongoing basis. We have removed thousands of accounts belonging to dozens of networks operating from locations around the world. We regularly report on these removals in our publicly available transparency centre.
Addressing disinformation is an industry-wide challenge that requires a collaborative approach and collective action, including both platforms and government. At the heart of this collaboration lies transparency and accountability, which we believe are essential to fostering trust. We're committed to leading the way when it comes to being transparent in how we operate, moderate and recommend content, empower users, and secure our platform. As part of this commitment, TikTok regularly publishes transparency reports to provide visibility into how we uphold our community guidelines; how we respond to law enforcement requests for information, or government requests for content removals; and attempts at covert influence operations that we have disrupted on our platform.
Our commitment to transparency is also guiding our work with Canadian officials, including in the national security review of TikTok under the Investment Canada Act. We have been working with officials to ensure that they understand how our platform operates, including how we protect Canadians' user data and defend against things like disinformation and foreign interference. As part of this process, last year we offered Canadian officials the opportunity to review and analyze TikTok's source code and algorithm. While the government has not yet taken us up on this opportunity, we are hopeful that they will do so. We will continue to work collaboratively with the government in the best interest of Canadians.
Such collaboration will be critical as we approach the next federal election. In 2021 TikTok worked with Elections Canada to build an in-app hub that provided authenticated information on when, where and how to vote. That year we were also the only new platform to sign on to PCO's Canada declaration on electoral integrity online. As we approach the next election, we will be building upon these efforts and leveraging learnings and best practices from other elections taking place around the world, including in the U.S.
Finally, I'd be remiss not to mention that today's meeting is taking place during Media Literacy Week, an annual event promoting digital media literacy across Canada. As well, yesterday was Digital Citizen Day, a day that encourages Canadians to engage and share responsibly online. Education plays a critical role in empowering Canadians to be safe online and build resilience against misinformation and disinformation.
In Canada these events are led by MediaSmarts, a Canadian non-profit and a global leader in this space whose work TikTok is very proud to support.
We look forward to sharing more with you about how we are addressing these important issues.
Thank you again for the invitation to speak with the committee today.
:
Thank you, Mr. Fernández.
Thank you to all our witnesses for their opening statements.
Members of the committee, we are fortunate that we have all four of the major players on social media here today, which poses its own problems. I'm going to ask every member to direct their questions specifically to an individual. That will save us some time in guessing who's going to answer.
It's been common practice at this committee that we reset after the first set of questions to allow Mr. Villemure and Mr. Green the opportunity to establish those six-minute questions in the second round. Is it the will of the committee to do that?
Some hon. members: Agreed.
The Chair: Thank you.
We're going to start with six minutes of questions.
Mr. Cooper, you have the floor. Go ahead, sir.
:
Chairman Brassard, Vice-Chairs Fisher and Villemure and members of the committee, thank you for the opportunity to be with you here today. It's an honour.
My name is Wifredo Fernández, and I have the pleasure of leading government affairs and public policy at X in the U.S. and Canada.
We know that X is a critical platform in the public debate around elections. Through September this year, there were over 850 billion impressions, 79 billion video views and four billion posts related to politics globally. We are proud that our platform powers democratic discourse around the world. For us, authenticity, accuracy and safety are fundamental to our approach to elections.
Our consideration of authenticity has two principal dimensions: accounts and conversations. Our safety team proactively monitors activity on our platform and employs advanced detection methodologies to enforce our rules related to authenticity, such as platform manipulation, spam, and misleading and deceptive identities. Whether they are state-affiliated entities engaged in covert influence operations or generic spam networks, we actively work to thwart and disrupt campaigns that threaten to degrade the integrity of the platform.
Through our verification program, we have profile labels that signal the authenticity of accounts, including brands and governments. The grey check mark helps the public know when they are hearing from or interacting with a verified government actor, whether they're an elections official, law enforcement or their representatives.
We want X to be the most accurate source of information on the Internet. That's why we have deeply invested in the development and expansion of Community Notes, which now empower over 800,000 contributors in 197 countries and territories to add helpful context to posts, including advertisements.
A recent study from the University of Giessen in Germany found that across the political spectrum, Community Notes were perceived as significantly more trustworthy than traditional, simple misinformation flags. It also found that Community Notes had a greater effect on improving people's identification of misleading posts. Separate studies from the University of Giessen and the University of Luxembourg show that posts with notes are shared 50% to 61% less and deleted 80% more. We'd be happy to submit these studies for the record.
Deepfakes, shallowfakes, AI-generated photos, out-of-context media and similar content are a source of public concern. This past year, we put a new superpower into contributors' hands, allowing them to write notes that are automatically shown on posts with matching media. To give you a sense of the multiplying effect this has, the roughly 6,800 media notes written so far are now showing on over 540,000 posts and have been seen nearly two billion times.
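For a sense of how one note can propagate to posts with "matching media", here is an illustrative near-duplicate image check built on a simple average-hash. X's actual media-matching technology is not public; this stand-in only demonstrates the general idea of detecting re-uploads of the same image.

```python
# Illustrative near-duplicate image matching via average-hash (assumes the
# Pillow library is installed). A note attached to one image could then be
# surfaced on re-uploads whose hashes are within a small Hamming distance.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to 8x8 grayscale; each bit records 'brighter than the mean'."""
    px = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(px) / len(px)
    bits = 0
    for p in px:
        bits = (bits << 1) | (p > mean)
    return bits

def same_media(hash_a: int, hash_b: int, max_distance: int = 5) -> bool:
    """Small Hamming distance between hashes suggests the same underlying image."""
    return bin(hash_a ^ hash_b).count("1") <= max_distance

# Usage sketch: a note written on original.jpg would also show on repost.jpg if
# same_media(average_hash("original.jpg"), average_hash("repost.jpg")) is True.
```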
We've also introduced, due to popular demand, the ability for anyone to request a Community Note. With enough requests, top contributors will be alerted and can propose notes. For everyone on X, it's a way to help. For contributors, it's a way to see where help is needed. Posts with a Community Note are also demonetized.
We strongly believe that freedom of speech and safety can and must coexist. The election context brings a diverse set of challenges covering abuse and harassment, violent content, deceptive identities and impersonation, violent entities, hateful conduct, synthetic and manipulated media, and misleading information about how to participate and vote.
At X, every year is an election year, and our policies and procedures are constantly being revised to address evolving threats, adversarial practices and malicious actors. For us, planning begins well in advance of these elections. All relevant working groups internally collaborate to lend their expertise and experience in planning and to participate in enforcing these rules before, during and after elections. We continue to invest in our team and our technology to strengthen our capabilities.
Our efforts extend well beyond content moderation and include proactive initiatives to direct those on our platform to authoritative and reliable sources around election participation. We engage directly with regulators, political parties, campaigns, candidates, civil society, law enforcement, security agencies and others to ensure that clear lines of communication are established to broaden our visibility into the threat landscape and ensure that external partners have a resource here at X.
For example, on multiple occasions over the last year, we engaged productively with Canada's rapid response mechanism and as a result took down networks of accounts, including those linked to the Chinese information operation called “spamouflage”. We appreciate the helpfulness of the mechanism and will continue to maintain open lines of communication in the lead-up to the next federal election in Canada.
Thank you again for the opportunity to be with you today. I look forward to any questions you may have.
:
We've been enforcing against spamouflage since 2019. Last year, we did a really large enforcement under our coordinated inauthentic behaviour policy.
Spamouflage is a long-running, cross-Internet operation with global targeting. We removed thousands of accounts and pages after we were able to connect different clusters of activity together as part of a single operation and were able to attribute that operation to individuals associated with Chinese law enforcement.
We've identified over 50 platforms and forums that spamouflage has used, including Facebook, Instagram, X, YouTube, TikTok, Reddit, Pinterest, Medium, Blogspot, LiveJournal, VKontakte, Vimeo and dozens of other smaller platforms and forums.
As with other China-origin operations, we have not found evidence of spamouflage getting significant engagement among authentic communities on our services. As it is a global operation, we have seen audiences in Canada targeted as part of it. Researchers at the Australian Strategic Policy Institute, for instance, have described the operation's use of generative AI audio and doctored YouTube videos that were shared on other platforms with zero or minimal engagement from real users.
We've engaged a couple of times with the rapid response mechanism, including just yesterday, about spamouflage activity. I'm happy to report that in that instance, they found that we had been able to proactively remove the vast majority of activity that they were tracking.
:
Thank you very much, Mr. Chair.
Thank you all for being here today.
I'm going to start by asking Ms. Curran my first question.
We've already had the opportunity to exchange views on this subject. A number of experts have told us that disinformation in Finland has been defeated, if you like; at least, it has been greatly reduced because education has been provided. Secondly, the country has very strong media, which are independent and free. As you know, local media in Canada have been greatly affected by Facebook's decision not to sign on to the recent law. I see all the efforts Meta is making to counter disinformation, but, according to our specialists, one of the biggest recommendations is the presence of strong, free media, which you do not subscribe to.
I'd like to know where you stand on this issue.
:
Thank you very much, Ms. Curran.
Mr. Fernández, the social media business model is based on the number of clicks; that's no secret to anyone. The algorithm makes its own choices. Personally, I go to the platform X regularly, and it seems to me to be a hostile environment. I've noticed that brutality, banalities and other forms of falsehood generate more clicks than anything of public interest. You can't not know that.
In your opinion, how might we resolve the paradox between the need for clicks for revenue purposes and the hostile environment this currently creates? I have to say, every time I finish my visits to your platform, I get depressed.
:
I'll continue to read from it. It says that it reviewed over a thousand cases that involved what they consider to be peaceful content in support of Palestine that was censored or otherwise unduly suppressed, while one case involved the removal of content that was in support of Israel, so essentially there were 1,049 cases of Palestinian suppression and one case in support of Israel.
Human Rights Watch found that censorship of content related to Palestine on Instagram and Facebook was systemic and global and that Meta's inconsistent enforcement of its own policies led to erroneous removal of content about Palestine.
In fact, I believe Meta publicly apologized. They had received some recommendations on patterns of undue censorship; removal of posts, stories and comments; suspension or permanent disabling of accounts; restriction on the ability to engage with content—so shadow banning—and restrictions on the ability to follow or tag.
In response to that, it appears that Meta took responsibility, publicly apologized, and then engaged in business social responsibility by commissioning an independent entity to investigate this. They came back with findings that there appeared to be adverse human rights impacts on the rights of Palestinian users.
Then what has Meta done since to ensure that Meta's practices don't unduly harm the basic freedom of expression for people posting about the question of Palestine?
:
I'm going to reference a New York Times article by Sheera Frenkel. It states that Israel secretly targeted U.S. lawmakers with an influence campaign on the Gaza war.
This was actually a campaign that Meta did uncover, so I want to give you the opportunity to talk a little bit about this. It began in October. It remains active on X. At its peak, it used hundreds of fake accounts on Facebook and Instagram to post pro-Israel statements. The accounts focused on U.S. lawmakers, particularly those who were Black and Democrat—and I take a specific interest in that—such as representatives like Hakeem Jeffries.
Then a further report out of NBC News said that Meta and OpenAI had disrupted influence operations linked to an Israeli company. Your tech company announced that Project Stoic, a political marketing and business intelligence firm based in Tel Aviv, used their products nefariously to manipulate various political conversations online.
As a Canadian lawmaker, then, what assurance do I have that these same tactics, these nefarious tactics linked to this Israeli firm, weren't used to target parliamentarians such as myself?
:
I have a question for Meta about the impact of the Online News Act and the effect it's having on local journalism.
I've heard, in my community and from Canadians across the country who work in the news space, that when this law was passed, they then saw their traffic, which was coming to them for free from Facebook, hit a wall. It dropped right off. In some cases it caused outlets to lay off journalists. In some cases it caused them to close.
I'm curious about whether you have measured that impact. I would also like to know about the space opened up when reputable, accredited and independent journalists left and other actors moved in to fill it, and about the potential for misinformation to spread in place of the news that was previously sought out and shared on your platform.
I want to come back to you, Mr. Fernández, for a second.
At the end of the last round of questioning for Ms. Khalid, you did acknowledge that you were funding the lawsuit of Mr. Strauss. You said that X would fund lawsuits of people who lost their jobs related to comments they made on the platform.
If I were a member of Parliament and made foolish comments on the platform attacking, for example, a specific ethnic group, and as a result the voters of my riding decided to not re-elect me in the next election because I said very stupid things on your platform, would I be able to get X to fund a legal challenge on that?
:
They have, exactly. Mr. Finkelstein was here at our committee on this very study.
As you know, the NCRI did a study on Chinese influence on TikTok. I'm going to read their conclusions to you. They say:
The conclusions of our research are clear: Whether content is promoted or muted on TikTok appears to depend on whether it is aligned or opposed to the interests of the Chinese Government. As the summary data graph below illustrates, the percentages of TikTok posts out of Instagram posts are consistently range-bound for general political and pop-culture topics, but completely out-of-bounds for topics sensitive to the Chinese Government.
Those would be, for example, the Uyghurs.
I read the 26 pages of the report, and it's pretty troubling.
What is your response to the idea that TikTok is essentially muting all of the voices that are against the Chinese Communist Party?
:
I can only understand the methodology of the report from what the report says is its methodology.
The report states, “On November 13, 2023, TikTok issued a letter defending itself against accusations of anti-Israel and anti-Jewish bias”—which is something that I've actually been in discussions with TikTok on, and I agree that TikTok is doing its best to try to confront that.
The report continues, “TikTok prolifically compares relative hashtags between its platform and Instagram to buttress its argument. We have replicated TikTok’s methodology”—that is in this letter of November 13, 2023—“to assess whether anomalies exist regarding the relative representation of issues on TikTok vs. Instagram.”
The methodology they used in the study that you say is debunked is actually the same methodology that you used in that letter that was sent to disprove something else.
You were just talking about Community Notes, so it seems timely to reference a few.
This one is from the , dated October 13: “Last year, Cenovus raked in $37 billion in profits. And a whopping $64 billion in 2022.” There is a Community Note on that, with a link to Yahoo! Finance news.
This is from August 17: “ told Canadians things would be better, instead, they've gotten worse. Families are losing their homes”, and it goes on.
The Community Note says, “Justin Trudeau is in government because of a confidence and supply agreement with the NDP.”
The Community Note goes on. Again on August 17, said, “ built people's hopes up, only to let them down.” Then he goes on to talk about rent prices, and there's a Community Note on that.
On March 18, he said:
The vote is in and we have forced the Liberals to:
Stop selling arms to the Israeli govt,
Place sanctions on extremist settlers,
The Community Note says, “The motion in question does not 'force' the Liberals to do anything.” It goes on.
On March 7, there's another Community Note.
Then this is one from February 27. It's a very interesting one: “80% of the grocery market is controlled by 5 corporations...Sobeys, Metro—and you guessed it, Loblaws.” Then he goes on to say, “Both Liberal and Conservative Party campaigns receive donations from the three.”
It's very interesting that Metro is added there, given that his brother lobbies for them.
The Community Note says, “The claim in the post is false. Corporate donations to federal political parties have been forbidden by law in Canada for over 15 years.” It's similar to the Liberals saying, “assault-style weapons”, which have been illegal for 40 years, and they know it.
There's another one on conflicts of interest.
What do we have? We have eight Community Notes. Have you ever seen a political leader in Canada get this many Community Notes?
:
In your opening statement, you talked about the desire of X to be forthright and honest. You said—and I sort of quote you, but I might have it wrong—“a critical platform as it pertains to elections”.
Your CEO, Elon Musk, has been trafficking disinformation on X as it relates to the 2024 U.S. presidential election coming up this November.
On October 4, he retweeted a false claim stating that as many as two million non-citizens had been registered to vote in Texas, Arizona and Pennsylvania. On October 19, he retweeted a post suggesting that the state's voter rolls were likely to contribute to widespread fraud. These were all debunked, and there are numerous other instances of election disinformation.
This is the CEO. Clearly your in-house fact-checking is not working.
My question to you is this: How can Canadians be sure that Elon Musk and other X employees will not spread disinformation about Canadian federal, provincial and municipal politicians, especially given that he's already opined on the platform X about Canadian affairs previously?
:
That is where Community Notes comes into play. We have a network of 800,000 contributors around the world, including over 30,000 Canadians who have enrolled in the program. In order to become a contributor, you need an account in good standing that is at least six months old and a verified phone number.
Then you apply, and we onboard folks every week in a fair and randomized process. They have the ability to start rating notes for their helpfulness, whether the note contains a high-quality citation, whether it directly addresses the post's claim, whether it's easy to understand and whether it contains neutral or unbiased language.
Then we use what's called a “bridge ranking algorithm”: a note is shown on a post only when contributors who have historically disagreed in their ratings of past notes agree that this particular note is helpful.
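To make that idea concrete, here is a minimal sketch of bridging-based ranking in the spirit of the publicly documented Community Notes approach: ratings are factorized so that a note earns a "helpfulness" intercept only after a viewpoint factor has absorbed agreement driven by shared perspective. The simulated data, dimensions and threshold below are hypothetical illustrations, not X's production values.

```python
# Illustrative bridge ranking: fit rating ~ mu + user_bias + note_bias +
# user_factor * note_factor on observed ratings. A note whose bias (intercept)
# stays high after the factor term soaks up viewpoint-driven agreement is
# "helpful" across camps. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_notes = 60, 8
# Simulate two viewpoint camps and sparse helpful/not-helpful ratings.
camp = np.where(rng.random(n_users) < 0.5, -1.0, 1.0)
note_slant = rng.uniform(-1, 1, n_notes)    # how partisan each note reads
note_quality = rng.uniform(0, 1, n_notes)   # genuinely helpful or not
obs = rng.random((n_users, n_notes)) < 0.3  # which (user, note) pairs were rated
raw = note_quality + 0.8 * camp[:, None] * note_slant[None, :]
noise = 0.3 * rng.standard_normal((n_users, n_notes))
ratings = np.where(obs, (raw + noise > 0.7).astype(float), np.nan)

# Fit mu + b_u + b_n + w_u * v_n by gradient descent on observed entries only.
mu, bu, bn = 0.0, np.zeros(n_users), np.zeros(n_notes)
w = 0.1 * rng.standard_normal(n_users)
v = 0.1 * rng.standard_normal(n_notes)
lr, reg = 0.05, 0.03
users, notes = np.where(obs)
for _ in range(400):
    for u, n in zip(users, notes):
        err = ratings[u, n] - (mu + bu[u] + bn[n] + w[u] * v[n])
        mu += lr * err
        bu[u] += lr * (err - reg * bu[u])
        bn[n] += lr * (err - reg * bn[n])
        w[u], v[n] = (w[u] + lr * (err * v[n] - reg * w[u]),
                      v[n] + lr * (err * w[u] - reg * v[n]))

# "Bridging": only notes whose intercept survives the viewpoint factor show.
for n in range(n_notes):
    status = "SHOW" if bn[n] > 0.35 else "needs more ratings"  # threshold is hypothetical
    print(f"note {n}: intercept={bn[n]:+.2f} factor={v[n]:+.2f} -> {status}")
```

The production system is open source and uses many more signals, but the core property the witness describes, requiring agreement across raters who usually disagree, is what the factorization captures.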
I'm going to go to either Dr. Hundley or Ms. Curran.
I first got into politics in 2009, and I ran against six other people municipally. I think that the reason I won was that I was early on Facebook. None of the candidates I ran against even had a Facebook page, and I had somewhere around a thousand friends on Facebook at the time.
In your opening comments—or maybe it was Dr. Hundley who said it; I can't remember—you talked about finding millions of fake accounts and deleting them. Is that a drop in the bucket? Are there billions of fake accounts? It seems to me that an aunt of mine has had about 700 fake accounts impersonating her, and they're still there.
Fake accounts have used my picture to pose as an immigration lawyer. I've been just about every possible profile out there, and that's just me. A lot of them are still there, and when we report them, they don't come down.
I agree that if you're able to take down millions of fake accounts, that's great, but do you need more capacity? Do you still see that as a problem? Are there billions out there?
Mr. Fernández, the government has introduced Bill C-63, known as the online harms act. It has been characterized as Orwellian by Margaret Atwood. The Atlantic has published an article in which it labelled the bill as “Canada's Extremist Attack on Free Speech”. The bill has been characterized this way: “The worst assault on free speech in modern Canadian history”.
Among other things, the bill will establish a so-called digital safety commission, a massive new bureaucracy of censors who will have the power to impose penalties on any person or social media service found to have permitted what it deems to be “harmful content”, whatever that is. The penalties will be established by the Trudeau cabinet, not Parliament.
Do you have concerns about this so-called digital safety commission and the effect it will have on the free speech of Canadians online?
I'll go back to a question I asked at the end of my last round.
The Prime Minister's department was very quick to get in touch with Facebook when it identified an article that contained disinformation about , but during the 2021 election, the Prime Minister's department, the PCO, as far as Ms. Curran was aware, made no contact with Facebook in the face of a wave of disinformation by the Beijing regime directed at Kenny Chiu and other Conservative candidates to defeat them and to help re-elect Justin Trudeau.
To the other witnesses representing the other social media platforms, were you ever contacted by the Prime Minister's department, the PCO, during the 2021 election about Beijing's disinformation efforts?
I'll ask Mr. Fernández.
:
Thank you very much, Mr. Chair.
The Châteauguay Facebook page is very popular with my fellow citizens. However, I find it disappointing that people put all kinds of personal information on Facebook. I'm thinking of my mother, for example, or friends or relatives. Privacy may be a little-known issue, but I find it worrying.
The question I'm going to ask Ms. Curran concerns the unanimous decision handed down by the Federal Court of Appeal on September 9, 2024.
[English]
The decision is Canada (Privacy Commissioner) v. Facebook, Inc., 2024 FCA 140.
[Translation]
It overturned the Federal Court's decision and found that Facebook's practices between 2013 and 2015 had contravened the Personal Information Protection and Electronic Documents Act, because the company had failed to obtain informed consent from its users and failed to protect their personal data. The Federal Court of Appeal asked the parties to report back within 90 days of the date of the decision to indicate whether an agreement on the terms of the remedial order had been reached.
The Privacy Commissioner of Canada said he expects Facebook to now outline how it will ensure compliance with the court's decision. Meta has not indicated whether it intends to seek leave from the Supreme Court of Canada to appeal the decision.
I assume you are aware of this situation, Ms. Curran.
What does Meta intend to do about the Federal Court of Appeal's unanimous decision in the case between the Privacy Commissioner and Facebook?
:
Thank you for the question, MP Shanahan.
As I understand it, this decision is under appeal, but I don't have more detail than that, so I'll avoid commenting on the case specifically.
We have always maintained that there was no evidence that Canadians' information was shared with any external actor, including Cambridge Analytica, and the Federal Court agreed with the finding that there was insufficient evidence that Canadians' data was ever shared externally.
More importantly, in the last few years we have transformed our privacy practices at Meta and built one of the most comprehensive privacy programs in the world, and we look forward to continuing to build the services that people love and trust, with privacy at the forefront.
I won't comment any more on that decision, but I can say we agree with the decision of the Federal Court that there was no evidence that Canadians' data was ever shared with Cambridge Analytica.
:
I can't speak to the details of that particular court case. My understanding is that a decision is being appealed, but I don't have more detail than that.
I agree with you, MP Shanahan: If people don't trust us to keep their data safe, we know they won't choose to use our products and our services.
Our business uses data to connect potential customers and users with relevant and interesting content. We can only do that if our users trust us with their data and trust us to ensure their privacy. That's why privacy is really a core priority across our company.
We have dozens of teams now, both technical and non-technical, that focus on the issue of privacy and that look at how data is protected and shared across the company—how it's collected, how it's used and how it's stored. I think it's safe to say that our privacy practices have evolved significantly in the last decade. We are confident now that privacy is really at the core of everything we do and everything we build.
:
Thank you very much, Mr. Chair.
My question is for Mr. de Eyre, from TikTok.
In a March 15 article published by Reuters and reprinted in the Chinese edition of Forbes, we read that 60% of ByteDance shares were held by institutional groups such as Carlyle Group, General Atlantic and Susquehanna, 20% were held by employees, and the rest by Mr. Zhang Yiming, who is the founder. It is also said that, although he owns 20% of the capital, he still holds 50% of the votes in ByteDance.
What is the link with China?
:
I'll give you some background.
On July 29, 2024, there was a mass stabbing at a children's dance class in Southport in the United Kingdom, and three children died.
Immediately following news of the attack, false information about the attacker's identity spread on social media, alongside calls for action and violence. The next day, hundreds of people gathered outside a Southport mosque and hurled petrol bombs, bricks and anti-Muslim abuse, motivated by false information spread online that named the attacker—I won't repeat the name, because I don't want to boost it any more—and claimed he was both a Muslim and an asylum seeker.
Acts of violence and public disorder, much of it featuring anti-Muslim and anti-migrant sentiment, soon spread around the country. Posts containing the fake name were amplified by platform algorithms and recommendation features. The Institute for Strategic Dialogue found that X featured the false name in its “trending in the U.K.” promotions, suggesting it to users in the “what's happening” sidebar.
Far-right figures with millions of followers capitalized on false claims that the attacker was an asylum seeker, spreading the falsehood further into the massive bases of followers.
One platform stood out. It was yours. It was X, and the owner, whom we identified already, Mr. Elon Musk, shared false information about the situation with his 195 million followers and made a show of attacking the U.K. government's response to the outbreak of violence. Rather than ensuring risky and illegal content was mitigated on his platform, Musk recklessly promoted the notion of an impending civil war in the U.K., Mr. Fernández, and yet your company, X, refuses to sign on to a declaration on the practice of disinformation.
What do you have to say about that, Mr. Fernández?
:
Sure. Thanks for the question. This is something we talk a lot about with Canadian creators.
The creator fund, or the creativity fund as it's called now, is specific to the U.S. and a few other countries. We haven't rolled it out globally yet. We're always looking at ways we can continue to help creators monetize their content and earn a living from their content. There are quite a number of other ways, though, that Canadian creators are thriving and making money by using TikTok.
You have a constituent in your riding, whom I mentioned in my opening statement, Corey McMullan. He uses it for brand partnerships and to sell his own items directly to his followers. I think he has an audience of over 400,000 or 500,000 followers.
Live gifting is another major way that Canadian creators are able to monetize their platforms. They go live and receive virtual gifts.
We're constantly looking at ways we can help our community leverage their audience and earn money or even make a living through TikTok.
:
I appreciate that response.
I have a question that I'd like to put to each member of the panel, and it deals with the responsibility that verified users on your platforms have when it comes to the dissemination of disinformation.
Mr. Fernández, you talked about the grey check mark and the trust that users of your service can have when they recognize that the grey check mark means that this person is an elected official or a government official.
I want to read to you a post on X from October 17, 2023. Canada's Prime Minister posts, “Bombing a hospital is an unthinkable act, and there is no doubt that doing so is absolutely illegal.” That post was viewed 2.7 million times. It's still live on your site today.
I want to juxtapose that with an ABC News story from October 18, 2023. I'm just going to read you the first paragraph:
A day after the Hamas-led Gaza Health Ministry claimed Israel had attacked the Al Ahli Arab Hospital in Gaza City, saying some 500 Palestinians had been killed, Israeli and U.S. officials, explosives experts, and President Joe Biden said Wednesday that available evidence shows the destruction was caused instead by a failed Palestinian terrorist rocket launch.
How difficult is it for your users—and also your services—to manage this when we have this type of recklessness not only from an elected official but from the Prime Minister of a G7 country, who is spreading what is demonstrably fake news, false information—call it what you will?
We talk about misinformation. The chair has pointed out before that that's a clever term for when people lie. What kinds of challenges does it create when this type of actor is posting this type of misinformation?
Thank you to all our platform guests for joining us today.
I want to talk to Mr. Fernández regarding bots. Bots, we know, are a source of misinformation on social media platforms. Since X was sold, bot activity on X has become worse than ever, according to experts like Timothy Graham at the Queensland University of Technology. There is an article from The Washington Post in July 2018 that reads:
The rate of account suspensions, which Twitter confirmed to The Post, has more than doubled since October, when the company revealed under congressional pressure how Russia used fake accounts to interfere in the U.S. presidential election. Twitter suspended more than 70 million accounts in May and June, and the pace has continued in July.
However, according to your statements earlier today, Mr. Fernández, X has removed 60,000 spamouflage accounts in the last year. Why is there such a discrepancy between the scale of suspensions I referenced from 2018, before Mr. Musk purchased the platform, and the numbers today? Does that gap not suggest a huge number of bots are still active?
:
You know, I've suspended my use of TikTok subsequent to inquiries about foreign interference that I take very seriously. I'm waiting for investigations to unfold.
I was an avid user of TikTok. I know many people who are. I give credence to what you're saying about people enjoying it. I would put to you that in fact the reason people spend so much time on TikTok is the power of the algorithms. It's the ability to profile people and continue to provide content to them that will essentially monopolize their time on the platform.
There still remains a concern, regardless of how you're answering it. You don't want to characterize ByteDance as a Chinese tech giant; I would. You don't want to say that it's Beijing-based; I would suggest that it is, yet here we are.
We have countries around the world, and I'll say specifically in the west, that are investigating the use of the algorithms that TikTok has. What do you have to say to people like me who have suspended their accounts because of the fears of the potential for foreign interference?
Ms. Khalid, before I go to you, I just want to circle back to your earlier intervention about asking for documents.
I've looked back into the book. It's clear that committees.... We've had several requests throughout the course of this meeting. I know that Mr. Housefather has made a request. There have been others that the clerk has noted. We'll follow up with whomever that request has been asked of, but it does say that we usually obtain papers simply by requesting them from their authors or owners. If the request is denied after the ask has been made, however, and the standing committee believes there are specific papers that are essential to its work, it can use the power to order the production of papers by passing a motion to that effect. Typically, the method is to ask. If we're not satisfied after that, we can move a motion.
I just wanted to make that very clear before you started.
If there is any additional data that you can provide that would confirm what you've said today, I'd really appreciate that.
Mr. Fernández, I'll go back to you quickly on the blue check marks and on the apparent ability now on X to buy legitimacy for a sum of money and amplify your voice, regardless of how accurate or truthful—or not—that voice is. It could be misinformation, disinformation, hate speech, etc., but you can purchase the blue check mark that then amplifies your voice.
Has there been any study done within X as to whether that blue check mark and the accounts associated with it have any correlation with misinformation or disinformation campaigns or with fact-checking expeditions on your platform?
:
Perfect. Thank you, Ms. Khalid.
I want to thank all our witnesses. I'm not going to name you all; there are just too many of you. I really appreciate the fact that you've made yourselves available to the committee for this important study.
The clerk has noted some of the requests that have come in from committee members for providing more information. She will review the blues and then get back to you.
I expect that over the coming weeks we are going to be providing our analysts with some drafting instructions. Once the clerk follows up with you, if you could get those answers back to the committee through the clerk as quickly as possible, I would appreciate it as chair. We'll probably give you a deadline, if that's okay.
Thank you to everybody who's been here on Zoom and to everybody who's been here in person, including our technicians, clerks and analysts.
That's it. Have a great weekend, everybody.
The meeting's adjourned.