:
I call this meeting to order.
Welcome to meeting number 23 of the House of Commons Standing Committee on Public Safety and National Security.
We will start by acknowledging that we are meeting on the traditional, unceded territory of the Algonquin people.
Today's meeting is taking place in a hybrid format, pursuant to the House order of November 25, 2021. Members are attending in person in the room and remotely using the Zoom application. Members and witnesses participating virtually may speak in the official language of their choice. You have the choice at the bottom of your screen of floor, English or French.
Pursuant to Standing Order 108(2) and the motions adopted by the committee on Thursday, February 17, 2022, the committee is resuming its study of the rise of ideologically motivated violent extremism in Canada.
With us today are Vidhya Ramalingam, co-founder of Moonshot, and Adam Hadley, executive director of Tech Against Terrorism.
You will each be given up to five minutes for opening remarks, after which we will proceed with rounds of questions.
Mr. Hadley, you now have the floor for up to five minutes for your opening remarks, sir, whenever you're ready.
:
Good morning, and many thanks for the invitation to speak at the committee hearing today.
I'm Adam Hadley, executive director at Tech Against Terrorism. Over the next few minutes, I'd like to explain more about who we are at Tech Against Terrorism and what we do, and provide some clarity about our position on some of the discussion points.
Tech Against Terrorism is a not-for-profit based in the U.K. Ours is a public-private partnership. We were established with UN CTED, the United Nations Counter-Terrorism Committee Executive Directorate, in April 2017. Our mission is to work with the global tech sector, in particular smaller tech platforms, to help them tackle the terrorist use of their services while respecting human rights. Our work is recognized in a number of UN Security Council resolutions, including resolution 2354 and resolution 2395. As a public-private partnership, we work with the major democracies—governments such as the Government of Canada, the U.S., the U.K., Australia and New Zealand—alongside the tech sector, which includes big tech and smaller tech platforms.
The reason we focus on smaller technology platforms is that many of these platforms have limited capacity and capability to deal with terrorist use of their services. Our mission is to support these smaller platforms, free of charge, to improve their response to terrorist activity and terrorist content. In particular, over the past two or three years, we've seen a significant increase in migration from the use of very large platforms to smaller ones. This migration represents a strategic vulnerability in the response to terrorist use of the Internet.
Tech Against Terrorism monitors over 100 tech platforms on an hourly basis. We also monitor around 200 terrorist-operated websites. Overall, we work with 150 platforms, providing a number of services to help improve their response. We also work alongside other organizations focused on online counterterrorism, such as the Global Internet Forum to Counter Terrorism.
In detail, our work at Tech Against Terrorism focuses on understanding the nature of the threat. This is based on open-source intelligence, in order to understand the detail of how terrorists use particular platforms. We use this intelligence and insight to establish relationships with these platforms, reach out to them and evaluate the extent to which we can provide support.
This results in a mentorship service that we offer free to platforms. The mentorship service is designed to build capacity. We do this alongside the GIFCT. Of note, we've developed some software, called the terrorist content analytics platform, which alerts small platforms to the existence of terrorist content. The TCAP, the terrorist content analytics platform, has so far been funded by the Government of Canada. This has resulted in 30,000 URLs—individual items of terrorist content—being referred to platforms, with more than 90% of this content on smaller platforms removed. We've also built a knowledge-sharing platform, which is designed to share best practice information and guidance with smaller platforms. We actively work to have terrorist websites removed from the Internet.
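To make the referral mechanism concrete, here is a minimal Python sketch of how a URL-referral service along these lines might be structured. Every name, field and contact mechanism below is hypothetical; the TCAP's actual design is not described publicly in this detail.

from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Referral:
    url: str          # an individual item of verified terrorist content
    platform: str     # the hosting platform, derived from the URL
    removed: bool = False

class AlertService:
    def __init__(self, platform_contacts):
        # platform_contacts: domain -> alert callback (e-mail, API, etc.)
        self.platform_contacts = platform_contacts
        self.referrals = []

    def refer(self, url):
        """Alert the hosting platform to a verified item of content."""
        domain = urlparse(url).netloc
        notify = self.platform_contacts.get(domain)
        if notify is None:
            return None  # unknown platform: manual outreach comes first
        referral = Referral(url=url, platform=domain)
        notify(referral)  # the alert itself
        self.referrals.append(referral)
        return referral

    def removal_rate(self):
        """The kind of figure behind 'more than 90% removed'."""
        if not self.referrals:
            return 0.0
        return sum(r.removed for r in self.referrals) / len(self.referrals)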
I should stress that we focus on violent Islamist extremist organizations and, of course, the extreme far right. The basis of our work is typically focused on designation. In upholding the rule of law, we believe that designation is a critical mechanism to ensure that platforms remove content in a timely fashion. Therefore, we applaud the Government of Canada for its pioneering work in designating organizations from across the terrorism and violent extremism spectrums.
In summary, we call for governments to focus on the rule of law and how they regulate, with a focus on providing definitional clarity to tech companies so that they can improve their action. We believe that designation is a crucial tool that can be used to help provide that clarity, so that small tech platforms get better at dealing with terrorist activity.
Finally, we would stress that proportionate measures are important. Often, regulation in this area is primarily focused on big tech. We understand the concern here. However, the current threat picture is such that there is a significant amount of terrorist activity from across the spectrum on smaller platforms. Often, regulation fails to take this into account, and it fails to take into account the nature of adversarial shift—in other words, the way terrorist activity changes or adapts in response to the measures being used against it.
In closing, many thanks for the invitation to speak today. I look forward to participating in the session.
Thank you.
:
Thank you, Chair and members of the committee.
My name is Vidhya Ramalingam. Eleven years ago, when a far-right terrorist murdered 77 people in Norway, I led the EU's first intergovernmental initiative on far-right terrorism. It's in that role that I first started working with Public Safety Canada and saw first-hand the resilience and strength of Canadian practitioners working to ensure that no more Canadians take a violent path.
I now lead Moonshot, an organization working with the governments of Canada, the U.S., the U.K., Australia and other global partners to build online prevention capabilities fit for the challenges of the 21st century.
The threat posed by IMVE actors and groups is undoubtedly growing more sophisticated both online and off. Moonshot started studying Canadian engagement with this content on search engines in February 2019. In a little over a year, we tracked over 170,000 individual searches for IMVE content across Canada. As Canadians spent more time online as a result of the COVID-19 pandemic and lockdowns, the engagement increased. Searches for far-right content increased 19% weekly during lockdown measures. In Ottawa we tracked a 35% increase after Ontario's state of emergency was declared.
We have seen greater engagement with conspiracy theories. Over a year we tracked over 25,000 searches across Canada for white supremacist conspiracy theories such as the Kalergi plan, the great replacement and white genocide.
In partnership with Public Safety Canada, we also produced the first systematic online study of the Canadian violent incel community online. The Canadian incel ecosystem is spread across both niche and mainstream platforms, including Twitter, YouTube, Telegram and Reddit. Canadian users on incel sites were 65% more likely than global users to post news stories about incels and were especially celebratory of incel violence that occurred in Canada.
However, we are not without tools to respond. Perhaps the greatest challenge for governments today is how to bring our prevention models into the 21st century. We have to intervene where extremist groups are seeking to recruit: online. In 2022, every prevention model needs a robust digital component. This must be delivered safely, ethically and responsibly, with user privacy at its heart.
Our recommendations for Canada are as follows. First, strengthen pre-existing behavioural health and other wraparound services for prevention, specifically mental health support and community outreach, as well as adjacent fields such as suicide prevention. Frontline practitioners such as Équipe RAPS and CPN-PREV in Quebec, OPV in Alberta and Yorktown Family Services in Ontario are best positioned to intervene.
Our second recommendation is to adapt the entire suite of prevention services for online delivery. In a 2017 study, Moonshot found that only 29% of Canadian practitioners were using social media in their prevention work. We need to build the digital literacy and capacity to deliver their work online. There is an abundance of online tools and methodologies we can use. For example, from 2019 to 2020, we worked to ensure that every Canadian searching for extremist content online would be offered a safer alternative to terrorist content. We used advertising tools to safeguard approximately 155,000 violent far-right searches and around 16,000 Daesh and al Qaeda related searches. The natural evolution of this work should see these tools used to connect Canadians with prevention services that can work with them to change their paths.
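As a rough illustration of the search-advertising approach described above, the following sketch shows the core logic: match a query against risk-indicator keywords and, on a match, serve an ad pointing to a safer alternative. The keyword lists and URLs are placeholders, not Moonshot's actual indicators or resources.

# Hypothetical risk-indicator lists; real lists are curated and not public.
RISK_INDICATORS = {
    "far_right": ["<indicator keyword>", "<indicator phrase>"],
    "daesh_aq": ["<indicator keyword>"],
}
# Placeholder destinations; in practice these would be prevention services.
SAFER_ALTERNATIVES = {
    "far_right": "https://example.org/exit-support",
    "daesh_aq": "https://example.org/counselling",
}

def safeguard(query: str):
    """Return a safer-alternative ad for an at-risk query, else None."""
    q = query.lower()
    for category, keywords in RISK_INDICATORS.items():
        if any(k in q for k in keywords):
            return {"category": category,
                    "ad_url": SAFER_ALTERNATIVES[category]}
    return None  # no risk indicators matched; no ad is triggered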
Third and finally, signpost terrorism prevention services such as hotlines, counselling and exit offers online. Evidence shows us that this works. Moonshot found that audiences at risk of far-right extremism in the U.S. were 48% more likely than the general public to take up offers of psychosocial support services online. In the last year alone, Moonshot has channelled over 150 individuals at risk of violent extremism across the U.S. into text message counselling sessions via online engagement. Now we're working with the U.S. government to launch state-level models to off-ramp at-risk Internet users into local support programs, starting with New York state.
Here in Canada, we need to signpost local services to Canadians engaging with extremist content online. To do this, local providers and networks like CPN-PREV need sustained investment to run interventions, extend their service hours and support the professional and mental well-being of staff. These organizations fill a critical gap in Canada's public safety infrastructure. The government should invest in these models and support efforts to take their interventions online, where their services are needed the most.
Thank you for your time today.
Thank you to the witnesses for taking the time to be with us today.
My question is for Mr. Hadley.
In 2017, your organization launched a knowledge-sharing platform, which was a collection of tools that start-ups and small tech companies can use to better protect themselves from terrorists' exploitation of their services.
Could you provide this committee with some more in-depth information about how this platform works and some of the results you have seen?
:
Of course. Many thanks for that.
The knowledge-sharing platform is designed as a tool that's free to access for tech platforms. Its objective is to improve the understanding that those running small platforms have of the terrorists' use of the Internet. It spans the spectrum of terrorism and violent extremism. Within the scope are violent Islamist extremism, the extreme far right and a number of other terrorist organizations that are designated by other international organizations.
In detail, the KSP provides information on logos associated with designated groups, the terminology associated with them and phraseology that may be typical of the content that appears. There's also detail on workflow in order to support platforms in making better content moderation decisions. There is also a significant amount of information about designation lists at the international level and a summary of global online regulatory efforts and many other elements. For more information, the website is ksp.techagainstterrorism.org.
:
We are careful to vet access in everything that we do. In fact, in everything I will say during this committee meeting, I will assume that terrorists and violent extremists are aware of what we're saying, so there is always concern about not disclosing too much.
Tech Against Terrorism is distinctive in that much of our work is done confidentially and privately. In order to build trust and confidence with smaller platforms, much of this must be done in private. In particular, there are grave concerns about access to the methodology and information that small platforms have. We know that terrorists and violent extremists are extremely adept at changing their use of the Internet. The more information they have about content moderation, the easier it is to change their methodology and therefore subvert mechanisms designed to stop that activity, so we have to be careful.
In detail, for every individual who applies for access to the knowledge-sharing platform, we will ensure that they belong to a real platform. We will email them, call them and ensure that the knowledge that's being shared is appropriate for that audience.
:
Of course. Automation can cover a number of separate activities. Often we might discuss algorithms, which certainly are part of automation. However, in our experience, the biggest challenge that small platforms have isn't in the basics but in the workflow. Content moderation is a simple mechanism in principle: identifying content that may fall afoul of the law or of terms and conditions; assessing whether the content does cross those thresholds; taking action, recording that action and reporting on it; and providing an opportunity for the user to appeal the decision. These workflows can be complex for smaller platforms in particular, and most of our activity in supporting platforms is with that basic infrastructure.
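As a sketch of that basic workflow, assuming nothing about any particular platform's systems, the steps listed above can be expressed roughly as follows in Python.

import datetime

AUDIT_LOG = []  # every decision is recorded; this makes reporting possible

def moderate(item, flagged_by, violates_policy):
    """Run one piece of content through the basic workflow:
    identify -> assess -> act -> record.

    violates_policy is whatever decides whether the item crosses the
    platform's legal or terms-of-service thresholds (a human reviewer,
    a classifier, or both).
    """
    decision = "remove" if violates_policy(item) else "leave_up"
    record = {
        "item": item,
        "flagged_by": flagged_by,  # user report, hash match, keyword hit
        "decision": decision,
        "time": datetime.datetime.utcnow().isoformat(),
        "appealed": False,
    }
    AUDIT_LOG.append(record)
    return record

def appeal(record, review):
    """Give the user an opportunity to contest a decision; `review` is a
    second, independent assessment, ideally by a human."""
    record["appealed"] = True
    record["decision"] = review(record)
    return record

def transparency_report():
    """Summarize what was removed and what was left up."""
    removed = sum(1 for r in AUDIT_LOG if r["decision"] == "remove")
    return {"total": len(AUDIT_LOG), "removed": removed,
            "left_up": len(AUDIT_LOG) - removed}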
You could argue that this is all about automation. It's about trying to ensure that small platforms are able to accurately identify and moderate content in a scalable way. Unlike big platforms, smaller platforms have very small teams. They often have limited or no revenue or profitability, and they tend not to have particularly sophisticated technical infrastructure. That partly explains why terrorists and violent extremists will often use smaller platforms: they know it's so much harder for those platforms to remove the material.
When we're working with smaller platforms, we provide a number of recommendations about how they can best use technology and automation to make the content moderation process more accurate and more successful as a result. Automation can include various other mechanisms, such as hashing or hash-sharing. It can also include searches of keywords and terminology, and more sophisticated mechanisms to detect whether a symbol appears in an image or a video.
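For illustration, here is a minimal sketch of hash-based matching. Production systems typically use perceptual hashes that survive re-encoding and cropping; exact SHA-256 matching is shown here only to convey the principle.

import hashlib

shared_hash_list = set()  # hashes of known terrorist content, e.g. shared
                          # through an industry consortium such as the GIFCT

def register_known_content(data: bytes):
    """Hash a verified item once and add it to the shared list."""
    shared_hash_list.add(hashlib.sha256(data).hexdigest())

def matches_known_content(upload: bytes) -> bool:
    """Check a new upload against the shared list; cheap enough to run
    on even a very small platform's infrastructure."""
    return hashlib.sha256(upload).hexdigest() in shared_hash_list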
However, most small platforms rarely have the capacity or capability to build complex automation. The automation that we typically help with is fairly simple; it's about helping them make the right decisions and record the decisions they're making. An important principle in all content moderation, at least in our view, is transparency. Therefore, we recommend that platforms of all sizes invest in transparency reporting, and for that, automation is required to understand what has been removed and what has been left up.
:
Thank you very much, Mr. Chair.
I thank both witnesses for taking the time to appear before the committee today.
My first question is for Ms. Ramalingam.
Ms. Ramalingam, in your opening remarks, you mentioned the tragic event in Norway. To enlighten the committee, I would like to know what you think about the recently passed European legislation on illegal content online. What can we learn from that?
I'd also like to hear what you have to say specifically on the issue of liability for technology companies.
:
Thank you very much for the question.
We really admire the Canadian government's forward planning around this threat. This is an emerging threat that not only the Canadian government but also global governments need to be concerned about.
I mentioned some of our main findings around the Canadian incel movement in my briefing, but I want to talk a bit about prevention here. One of the main findings from the early stages of that work is that incel communities are open to mental health and behavioural health interventions. This is actually no different from other forms of violent extremism, really across the spectrum. Whether we're talking about al Qaeda and Daesh inspired violent extremism or whether we're talking about the far right or the far left, we have consistently found, across the spectrum, that these audiences are open to behavioural health interventions.
With the violent incel community, in part because we found high levels of discussions around their mental health and well-being already on platforms, there is an opening for us here to use mental health interventions as a way of starting a conversation with people who are at risk of violence.
We would really encourage the Canadian government to invest heavily, as I mentioned, in behavioural health models, in building on the existing prevention and social service provision organizations across the country, and also equipping them to be able to handle cases coming from this violent misogynistic movement as well.
:
Thank you for your question.
I think Canada is very well placed, actually, to take the long-standing programs that the Canadian government has been investing in for the last 10 years and start to build their digital capacity to deliver their work online.
I mentioned some of our findings from a study that we ran five years ago, which was looking at what was then the current level of digital capacity among Canadian prevention practitioners, and it was very low. We need to work to improve that, so I would suggest that the Canadian government work to deliver training and capacity building to organizations that need to start using social media to signpost their services online.
I would also suggest that we start to look at large-scale programming across the entire country, and not just focus on the few territories and provinces that have been heavily invested in and already have these programs on the ground but also really start to look at parts of the country that don't have these programs—in particular, Manitoba, Saskatchewan, the Atlantic provinces and the territories. We need to build up specialist teams that can cater to audiences that are at risk in those regions and start to bring services for those audiences online.
We are in a moment of prolonged crisis. Domestic extremist movements, IMVE movements across the ideological spectrum, thrive on moments of crisis, and they basically use these moments to turn anxiety and fear in society into an opportunity for them to grow. That is what we saw on January 6 in America. We saw extremists grasping onto the insecurity and anxiety following the U.S. presidential election, and that's what we saw with the convoys in Canada. We saw extremist groups taking advantage of social polarization and using that moment to manipulate and to grow in Canada.
We need to be a bit more front-footed and looking ahead at the crises on the horizon. We need to ensure that our prevention programming is equipped to pre-empt those crises so we're not just reactive and dealing with violence after the fact, but we're pre-emptively going out to individuals who may be at risk in our community and working with them to ensure they know violence isn't the way.
:
Thank you for your question.
Whenever I talk about gender, I think it's really important for us not to go in with assumptions. I tend to hear, and I've often heard across the policy spectrum internationally, the notion that only men are really getting involved here, not women. I do want to say that we have evidence to show the contrary. Globally, in fact, across the United States, Canada, the U.K., Australia and New Zealand, we tend to find that 25% of the audience engaging with right-wing extremist content is actually women, people who self-identify as women. That's not to diminish the fact that, on average, 75% of the folks engaging with this content online are men.
In addition to that, we need to recognize the real intersections between the misogynistic violent movements—I mentioned violent incels—and far-right extremism communities. We've also seen violent misogyny intersect with other forms of extremism, including al Qaeda and Daesh inspired extremism and across the ideological spectrum.
I would encourage us to really look at the data here as we're designing prevention mechanisms but to recognize the gender-specific interventions that are required.
Thank you to both of our witnesses for your testimony today.
My first question is for Ms. Ramalingam from Moonshot.
We've been trying to get you here for quite some time. I want to thank you for the work you're doing and for being here today.
Since 2014, CSIS has identified 10 plots—seven attacks and three disrupted plots—that killed 26 people and wounded 40 on Canadian soil. Four of these were incel-related, and all of them involved far-right or incel ideology.
When NSICOP tabled their report, they mentioned that in the last two years, “CSIS has uncovered extensive ideologically motivated violent extremism...(notably right-wing extremist groups)...through online activity and physical attacks. The sizable increase in this activity throughout 2020 suggests [that] the terrorist threat landscape is shifting. The primary physical threat to Canada remains low-sophistication attacks on unsecured public spaces.”
Given what independent agencies like CSIS are reporting, does it not make sense that the Government of Canada would be funding your research on those threats?
:
At the time, some of my main recommendations were based on the reality that far-right extremism so often falls in a policy gap between the community safety initiatives and counterterrorism. Counterterrorism practitioners and the counterterrorism community across Canada needed to be equipped at the time with the skills to engage with far-right terrorism. I think that has dramatically improved in the last 10 years, both to Canada's credit as well as that of the international government community.
That said, I think where the threat has evolved since 2011 is in the online space. There is this worrying risk that members of the wider public are coming into contact with this content that was once relegated to very niche spaces online, or even to niche communities off-line.
My major concern is that the content that's being pushed by violent far-right groups and also violent incel groups is suddenly emerging into mainstream communities online. This is where we need to invest not only in prevention but in broader programs, to build, as I mentioned, critical media consumption skills amongst the wider public to prepare them for the possibility that they will encounter this.
For my second round of questions, I would like to return to some of what Mr. Hadley said.
Mr. Hadley, it has been shown that smaller and medium-sized companies face a bigger challenge in terms of being well protected against online risks and threats. You've explained why very well, but I'd like to know a little bit more about how exploitation of their sites by terrorists affects small technology companies.
Can you give any other examples to help us to better understand this reality?
Recognizing the short time available, there's one particular Canadian messaging app, which I won't name, that became totally inundated by terrorist activity. We estimate that at one point, a number of years ago, 80% of its user base was associated with ISIS. As a result, that platform was simply unable to operate in any functional way, because it had been taken over by terrorist activity.
Increasingly, we see that terrorist-operated websites are a big issue. We're talking about hundreds of terrorist-operated websites, the majority of which, based on our assessment, are owned or operated by extreme far-right actors. The reason these stay online for so long is that the legal infrastructure guiding governments on how to go about taking down these websites is very unclear.
The private sector does co-operate to some extent on terrorist-operated websites. I believe that only recently, a website that was highly likely owned or operated by American Futurist, which is an organization closely linked to designated NSO and James Mason, was removed. There are some successful efforts to have terrorist-operated websites removed. However, a lot more needs to be done. It's not just about smaller platforms but also terrorist-operated websites.
Thank you.
Ms. Ramalingam, I'd like to continue with you.
I really appreciated your recommendations for our committee about strengthening mental health and community intervention and making sure that we adapt those services for online use. Our committee recently completed a study into gun smuggling and gang warfare. We heard a lot of testimony about the effectiveness of community-based programs to help vulnerable populations avoid a life with gangs. I think we can use the same model on this.
I want to ask you specifically about the subject of deplatforming.
We had Mr. Imran Ahmed before our committee last week. He is with the Center for Countering Digital Hate. I'll read a quote from his testimony. He said, “Deplatforming these people and putting them into their own little hole, a little hole of anti-Semites, anti-vaxxers and general lunatics, is a good thing, because [actually] you limit their capacity to infect other people. Also, for trends such as the convergence and hybridization of ideologies”.
You're proposing a set of recommendations where it's a positive intervention. Do you have any comments on the concept of deplatforming to try to, I guess, cauterize the wound and prevent some of these crazy ideologies and violent extremism from spreading to vulnerable groups?
:
Thank you for your question, sir.
Deplatforming works. There's plenty of evidence to suggest that deplatforming does work in limiting the spread of terrorist content on platforms, but it's not enough on its own. In order to effectively prevent terrorist abuse of online platforms, we need to accept two things. First, there will always be some content that falls into the grey zone and will not be liable for removal; these groups walk the line very carefully.
Second, there will always be some spaces on tech platforms that are not liable for moderation. I've mentioned “search” a few times now—that's a great example here. Search engines don't prevent you from entering anything you'd like into the search engine box. That search engine box is a great moment to intervene with someone who is searching actively for terrorist content.
For these kinds of cases, in addition to moderation efforts, we need to be thinking about how we deliver safer alternatives to users who might be at risk of getting involved in violence. You can delete the user and you can delete the account or the video, but that person still exists in the community around us.
Thank you.
:
That gives me a chance to thank them even more robustly.
To the witnesses, thank you so much for the insight. This is fascinating, timely and so important to the country. On behalf of all members of our committee and all parliamentarians, thank you for sharing this last hour with us. It's been very valuable.
Colleagues, this is a reminder that the next meeting is the final meeting of the IMVE study. Departmental officials will appear in the first hour, and only two witnesses will appear in the second hour to allow time for instructions to the analysts on drafting the IMVE report. This portion of the meeting will take place in camera.
Thank you. We will now suspend.
The clerk will do his magic and line up the witnesses for the next panel. I don't think that's going to take anything more than a minute or two. We're almost there.
:
I now call this meeting back to order.
With us on this second round, as an individual, we have Navaid Aziz, imam. From the Canadian Race Relations Foundation, we have Mohammed Hashim, executive director; and from MediaSmarts, Dr. Kara Brisson-Boivin, director of research.
Each of our guests will have up to five minutes to make an opening comment. I will start with Mr. Navaid Aziz.
Please, sir, you have five minutes for an opening comment.
:
Thank you so much, honourable Chair and members of the committee. I appreciate this opportunity to share with you today.
As mentioned, my name is Navaid Aziz, and I am a classically trained Muslim scholar. I have served as an imam for over 10 years in Calgary. From 2012 to 2015, we saw a surge of young Muslims travelling overseas to join extremist groups and factions, and it was at this time that I began my own personal study of violent extremism, to develop as much expertise as I possibly could.
I have served as an expert witness with the Supreme Court of British Columbia in a terrorism-related case. I have mentored and helped in the rehabilitation of several individuals charged with terrorism offences, and I've published two papers, one on the reintegration and rehabilitation of Canadian and foreign fighters and a second on a brief guide to right-wing extremism in Canada.
I'm hoping that my perspective today will be unique in the sense that it will be primarily focused on a community-focused point of view.
Starting with 2012 to 2015, an immense amount of pressure was applied to the Muslim community over why these problems were happening in the Muslim community, why Muslims were not better integrated, and what the Muslim community was doing to solve this problem. A community that is not homogeneous or monolithic was asked to deal with an issue that it was not responsible for. It was not given any support other than being told what to do, and it had no prior experience in dealing with such issues.
Law enforcement and policy-makers had securitized the relationship with the Muslim community. They infiltrated mosques with informants, which created a sense of distrust. Relationships were built on the basis of collecting information to facilitate prosecution, and no support was provided when needed. It also created a perception of good Muslims and bad Muslims: those who co-operated were good, and those who didn't were bad. The average community member was not afforded any neutrality.
Multiple experts have also pointed out throughout the years that there has been a disproportionate number of terrorism-related prosecutions of the Muslim community within Canada.
I share this introduction, my dear committee, to point out that, in what we have seen from 2016 onwards with the rise of populism and right-wing extremism, the Muslim community was a primary target. In 2017, we witnessed the Quebec mosque massacre, and in 2021 the Afzaal family in London, Ontario, was murdered in cold blood. May we never forget these people.
We did not see the same questions being posed to other communities. Why was this happening? What are they doing to solve their own problems?
We did not see the securitization of relationships in the sense that informants were proposed and put forth in very high numbers, nor did we see a dichotomy created of people being labelled as good people or bad people. This is not to say that this is the response that should be expected, but this is to point out that we have some serious problems at an institutional level that need to be addressed.
What am I proposing and what do we need to look at? With regard to my proposal, I suggest that when we look at funding, we look at three approaches.
Number one, with regard to the security infrastructure program, we need to understand that not all minority groups will be able to access this grant or bursary, because they have very little history of actually applying for such grants, and the necessary support is not provided. It is very difficult for them.
Number two, with regard to sustainable funding for CVE initiatives across Canada: particularly in the province of Alberta, the Organization for the Prevention of Violence saw an influx of cases, particularly in March and April of 2020, and after January 6 and after the freedom convoy. Oftentimes we may think that such moments only increase right-wing radicalization, but they also created an opportunity for introspection, where people were seeking support for themselves and their family members when they saw them go down a dangerous path. These programs do not have sustainable funding; they are dependent on grants and bursaries as well.
My last proposal for funding is with regard to research: to look deeply into the environments that create such forms of violent extremism. This needs to be the primary research.
My last recommendation is that, when we look at relationships, we need a community-focused addition, so that as we pursue equity, diversity and inclusion, it is not just physical representation that is increased; representation in terms of thoughts, ideas and sources also needs to be included in our infrastructures and our boards.
That is what I wanted to share with you in my five minutes. Thank you so much.
Thank you for having me here today. I want to acknowledge that I'm speaking to you from the traditional territories of the Mississaugas of the Credit First Nation in Mississauga, Ontario.
My name is Mohammed Hashim. I'm the executive director of the Canadian Race Relations Foundation. The CRRF was born out of an apology to Japanese Canadians who were wrongfully imprisoned in internment camps during World War II. Part of their redress agreement involved the creation of the CRRF as an independent federal Crown corporation in 1996, which now lives within the Department of Canadian Heritage.
Our organization does research and community engagement, hosts policy discussions, provides funding to community groups and is currently supporting the creation of Canada's renewed anti-racism strategy, new anti-hate strategy and the strategy on combatting online harms with the government.
When we think about the ecosystem around IMVE, what ends in violence isn't always the full story. There was a journey that preceded the violence. We see many actors over time who start by being involved in hate incidents, then move up into hate speech, sometimes go further and commit hate crimes, and even commit violence as part of that journey.
We are not experts on IMVE, but we think the story starts far before the violence, specifically with hate, and that is where our work is primarily focused. It's work we know we can't do alone, and that's why we, along with the RCMP, are co-chairing a national task force on hate crimes. We are bringing together some of the brightest minds across law enforcement to improve training, increase public awareness and build standards for the police and community.
Hate is a growing concern in Canada and globally, and its targets are always changing. Racialized communities have been ringing the alarm bells for years. The night the Quebec City massacre happened, I was speaking to a friend who told me she was not surprised by what had happened because of the ongoing hate that had been targeted at Canadian Muslims and other minorities for years in this country.
The anticipation of violence towards that community was constant and is being felt by many today. There have been consistent failures on the part of institutions to take these harms seriously, which brings us to this moment. While it is crucial that we are here, it is equally important to note that this discussion is long overdue. When we look at hate and the administration of justice, it is hard to have faith that the system will right the wrongs.
For far too long, online platforms have provided safe environments in which hateful rhetoric has been able to spread without recourse. Those spewing such hate feel powerful, above the law or consequences, and those targeted are left feeling helpless and alone. According to the StatsCan survey on victimization, there were over 200,000 hate incidents, almost half of them of a violent nature. Hate incidents reported to the police over the past few years represent only a fraction—probably about 1%—of that number. There is a major gap between what people are saying they're experiencing and what is actually coming to the justice system's attention. There are real impacts on individuals and communities when there is so little faith in the system, even when the system actually works.
There was a recent case presided over by Judge Cidalia Faria. In this case, there was a woman who stepped in to intervene in a situation in which another woman and a child were being mistreated by a man. The man then focused on the intervenor, ripped off her hijab and assaulted her by hitting her in the face while yelling hateful rhetoric. The victim, who was known for being a strong community volunteer, said her voice was taken away from her and that the man said that if she spoke up there would be some horrible consequence for standing up. She is a very outspoken person and she doesn't feel as though she has been herself since then.
I share this with you because I think we failed the victim. I'm not going to question the judge's decision to let the guilty party off with a suspended sentence because of mitigating factors, but I do know that the victim in this case did not receive adequate support to restore her faith in this community.
She isn't alone. Victims of hate are often let down in this country, and, by extension, so are their communities. Canada needs a robust system to support victims of hate. We need this system not only to help individuals recover but also to ensure that communities feel supported through the process—from reporting a hate crime to getting support through a trial and afterwards to finding help to get back on their feet. We know that hate crimes are message crimes. It is time we sent a counter-message to the victims that they are seen and heard and will be supported.
I focused my remarks on victims today because far too often we look at hate crimes and IMVE with a focus solely on the perpetrator, while mostly ignoring victims. We must address prevention, investigation and prosecution as we are doing through our work on the national task force on hate crimes. We must realize what is at stake if we don't address the reverberating harms left on victims. When we leave victims, either individuals or whole communities, without faith that their concerns are being heard, we see people lose faith in democratic systems.
Good afternoon, committee members, and thank you for this opportunity to speak with all of you.
MediaSmarts has been working in the field of online hate for nearly two decades. Our research has consistently found that Canadian youth are frequently exposed to racist and sexist content online and that they feel it is important to do something about it, but also that they are not prepared to critically engage with hate content or to push back when they encounter it.
Our research with youth examined their attitudes and experiences with hate online—specifically, why they do or don’t intervene. We found that what’s more common than overt hate are cultures of hatred, communities in which racism, misogyny and other forms of prejudice are normalized. When hate online goes unchallenged, users may believe that intervention is overreaction. A community's norms are largely set by the most committed 10% of members.
When cultures of hatred are masked as consensus and the behaviour is not seen as harmful, the majority of witnesses may not believe intervention is worth the risk of social exclusion. Youth are particularly vulnerable because they are worried about disrupting social harmony, losing their social capital or status with their peers and drawing unwanted attention to themselves.
Hate groups take advantage of this as well as the digital architecture of online spaces, working to make hate appear more mainstream and acceptable to expand their pools of potential recruits and create an online environment hostile to their targets. Our most recent study with young Canadians shows that 2SLGBTQ+ youth are almost twice as likely to report having been bullied and to have seen racist and sexist content online.
Our study on algorithmic awareness highlights how design, defaults and artificial intelligence are shaping our online spaces. Recommendation algorithms can diminish our capacity to verify whether or not something is true online, as users may perceive content that is delivered algorithmically and curated for them as more trustworthy.
Online hate has the power to change what we know, and how we know it, about scientific and historical facts, social norms and even our shared reality. As youth overwhelmingly turn to the Internet as a source of information, they run the risk of being misled by hate content. If that misinformation is not challenged and users do not have the critical thinking skills to challenge it, some youth may come to hold dangerously distorted views.
Youth need to be supported in developing the skills and knowledge to be able to recognize online hate. This means learning general critical thinking and digital media literacy skills, as well as the techniques and ideologies of hate. In order to talk about controversial topics and have healthy debate, users need to be able to distinguish between arguments based on facts and those that appeal to dehumanization and fear of the other.
Youth also need clear examples of how they can respond when they encounter hate and prejudice online. Interventions should emphasize that even small efforts to push back against online hate can have profound impacts on motivating others to intervene. They need to feel that their opinions and experiences matter and will be considered by those with decision-making capacity.
Youth believe platforms and technology companies have a responsibility to set clear rules and community standards to make it easier for users to report hate and then respond to those reports through publicized enforcement metrics. They also feel that policy interventions should give youth and the trusted adults in their lives more opportunities to learn digital media literacy in Canadian classrooms, homes and communities.
I'll conclude my comments by expanding on that final point.
The value of an educational approach to online hate cannot be overstated. While governments and online platforms have important roles to play, we cannot legislate, moderate or design our way out of these challenges. We need to ensure that all people in Canada have the tools and critical capacities to safely and positively engage as ethical digital citizens.
In this way, digital media literacy is a preventative measure and a harm reduction approach to ideologically motivated violent extremism. This approach does not let either platforms or regulators off the hook by laying the burden of the challenge on the shoulders of individual users. Rather, what’s needed is a whole-of-society approach that holds platforms and governments accountable, both in their role in combatting online harm as well as in supporting digital media literacy.
MediaSmarts has been advocating for a national digital media literacy strategy for over 15 years, a recommendation consistently endorsed by key stakeholders and community partners and reconfirmed in our report on building a national “Digital Media Literacy Strategy for Canada”, released last month. This strategy would provide experts, advocates and service providers with a unified but flexible approach for preventing and responding to online harm—
:
Thank you very much for the question.
MediaSmarts is Canada's centre for digital media literacy. Part of our mandate as an organization and a national non-profit has been to focus on youth. A lot of the work we do is in the K-to-12 sector, although in the last five years we have engaged in much broader public service campaigns for all Canadians.
As for online hate, in the work we have done, we have focused on the young Canadian experience. That has been part of our mandate. We do believe that is a unique experience that deserves to be studied in its own right. The research we have done does suggest that there are interventions that need to be built and designed for young people in particular, because of some of the things I mentioned in my opening remarks, particularly the weight they give to peer supports and their relationships with other young people.
Of course, we see suicides across many ages, but it's not uncommon to hear about suicides among young people, particularly in high school and middle school, as a result of bullying and peer pressure. Those reasons for suicide are not as common, it would seem, among adults, for example.
Can you comment on that, the impact of the online universe on the mental health of young people and how your services support that?
We know that one of the benefits of the online community is also one of our biggest challenges, and that is anonymity.
I can use the example that we know from our work in the 2SLGBTQ+ community. In that context, young people have told us that the online environment, and in particular being able to remain anonymous in spaces, is a huge benefit as they can, again, engage in identity play and find community in ways that they may not be comfortable doing in a face-to-face context.
However, we know that it also poses a great challenge because for many perpetrators, from bullying all the way up to hate groups, anonymity is a huge tool that those groups can use to their advantage, both to test the waters in various communities that young people are engaging in—for example, in gaming communities—and as an attempt to recruit potential new recruits to movements.
:
In the context of online hate with young people, the biggest factors we found for why young people do not intervene are, one, that they struggle to recognize when something is definitively online hate and, two, that they don't know how to respond. This is compounded by what I mentioned about young people being understandably concerned with maintaining social harmony among their peers.
However, at the same time, we know that the norms or community morals, if you will, within an online community are typically set and driven by the loudest 10%. What we found was that even a very small action within an online community to demonstrate that there wasn't consensus around, let's say, a particular viewpoint was incredibly motivating and encouraged others to respond as well.
Young people responded to this sort of peer-to-peer.... They had the opportunity to recognize and realize that other young people—or anyone in the community—were responding to the contrary. That pushed the dial within the community and demonstrated how valuable it is to let the community know that this was not the consensus.
At the same time, I want to mention that we also want to make it clear to young people that we need to set parameters around what kinds of content we should engage with, because suggesting that a particular subject is worthy of debate is something hate groups can utilize to their advantage as well. Part of the resources, tools, lessons and critical thinking capacities we provide are there to help young people distinguish fact from fiction, or to distinguish arguments based on fact from those that are attempting to sow doubt and denialism, for example.
The point you raised about the quiet bravery, nudging things forward online and giving some other perspective is really powerful.
I'd like to shift gears for a moment and go to Imam Aziz. I found your work around deradicalization within the mainstream Muslim community really interesting. I want to touch base on some remarks you made in your opening statement around the narratives and framing of this community in particular.
I want to put forth this question. Do you find that terms like “Islamic terrorism”, “Islamism” and “Islamist” are accurate? That's number one. Number two, do you find that the use of these terms is harmful in seeking our objective as a country to mitigate and reduce extremism or a movement towards it?
I'd like your thoughts on those terms in particular, please.
:
Thank you so much for your question.
I'll break things down into two separate parts. With regard to the terms used, such as "Islamic terrorism" and "Islamic extremism", I believe they're very detrimental to the Muslim community and other minority groups in general. The onus and the blame are put on the religion itself, but studies have shown that this couldn't be further from the truth. This has been proven in theory and in practice. The vast majority of Muslims are law-abiding citizens and contributing members of their communities and societies. It's the same thing at a theoretical level. If you study Muslim texts and the literature of Muslim scholars, you see that they are always pushing Muslims toward a balanced way of life.
The challenge here comes from an academic perspective. For the longest period of time, terms like “Islamism”, “Islamist”, “Islamic terrorism” and “jihadism” have been used. They have become mainstream and a part of the vernacular in this field of study. Trying to change the language is a very uphill battle, but I believe it is detrimental and that an effort should be made to come up with more inclusive language that does not blame a religion or a particular community altogether. As we've seen in previous testimonies, there are underlying issues that need to be addressed, and further research needs to be done on more accurate terms to use.
:
Thank you so much for your question, and thank you to the interpreter for facilitating that.
Trust needs to stem from a place of non-heightened emotion. Oftentimes, engagement with law enforcement comes at a time of heightened emotions. My approach to this is to recommend that there should be community advisory boards with law enforcement at all times—when emotions are heightened and when they are not—to guide them and facilitate their conversation with communities. That is the first thing to do.
The second thing is reconciling and apologizing for mistakes that have been made. We have to understand that communities are constituted of human beings with human emotions. If people are hurt, progress cannot be made. Mistakes that have been made need to be recognized, and apologies should be issued for that.
The third thing is education. It's very easy to say, “This is what you need to do in order to report a hate crime,” but in terms of the actual process, people need to be guided through that. Training sessions for community members and community leaders on how to report hate crimes should be there.
The fourth thing is the soft bedside manner that is needed. Oftentimes, people who have gone through a traumatic experience are unable to articulate what they have gone through, or they may forget what actually happened. Police officers need to remember that. You're dealing with someone who's just been through a traumatic experience, or they may not remember all the details right away. Try your utmost not to treat them like the perpetrator. Treat them, rather, like the victim. Oftentimes, because people feel as if they are the perpetrator when they are the victim, they shy away from reporting. The way they're treated goes a very long way.
Those are some of the recommendations I would make regarding the law enforcement question. Thank you so much.
With regard to surveillance, there are two points to keep in mind over here. Number one is the high cost of surveillance: it is very expensive and not cost-effective. We have to look at other avenues in order to get information when needed.
Number two, when a relationship is built on information being shared both ways and support being provided both ways, then naturally, information that may be imperative for law enforcement will be provided. Communities will recognize that it is in their best interests to provide information to law enforcement and to agencies. It will only serve their interests and their own protection. That information needs to come from a place of safety and from a place of equal platform.
One of the examples I like to give in my presentations is of a bus being driven. In a pre-criminal space, the community leads and drives the bus. Law enforcement takes the back seat and just supports the Muslim community, or rather communities in general. In a post-criminal space, when criminality has taken place, law enforcement leads the way. They drive the bus. The community is there in a supportive role, as needed.
That collaborative approach, where everyone is equal and on the same page, is very important, but that can only be done with relationships being built on an equal platform. The key over here is the collection of information and not so much the focus on surveillance itself.
:
Thank you very much for the question.
Yes, we've done a number of surveys with regard to online hate, in particular on whether or not Canadians are in favour of online hate legislation and on their experiences of facing online hate. I think some of the more striking numbers from that research are around who the victims are. The number one victims of online hate, according to the research that we have, are women, women of colour and youth between 18 and 30 years old. They experience more hateful content, more misogynistic comments and more racist comments than anybody else across the spectrum.
There's also a tremendous sense of disappointment in terms of what our communities expect the online experience to look like and what they are experiencing. There's a lack of confidence that a safe space can be provided. However, there is significant support to see greater legislation in this environment, because people want that space to become safer.
:
Thank you very much, Mr. Chair.
Thank you to all of our witnesses for helping guide our committee through this study.
Mr. Aziz, I would like to turn to you for my first question, if I can. I was present as a member of Parliament in the 42nd Parliament, and I remember the furor over the debate involving motion 103, which was using the term “Islamophobia” and calling it out for what it is. It always struck me as very strange that we have a general acceptance of what the term “anti-Semitism” means, but the word “Islamophobia” created just such an uproar and furor.
I guess what I want to know from you, sir, is what the legacy has been of that very charged debate on Islamophobia for the Muslim community. Where are we at now in the years that have passed since that debate?
:
Thank you so much for your question.
I think it's important to highlight the different perspectives with regard to this debate. One perspective of this debate is that we need to call it out for what it is, which is anti-Muslim hatred. It's not this fear that people have. It's clearly targeted against the Muslim community and it should be called anti-Muslim hate. Another perspective of this is that there is a fear that if we start deeming things to be Islamophobic, then one cannot criticize the religion or criticize the religious texts, which is a right that people have.
That being said, I believe this debate is ongoing. I don't see a resolution coming any time soon. That is at a theoretical level. On a practical level, what I think needs to be understood is that all citizens and all human beings deserve those equal rights. They deserve the rights and freedoms that everyone has.
What we label it in particular is not as relevant. What are we doing to keep everyone safe, to keep everyone included and to make sure that everyone has the opportunity and freedom that everyone is afforded? That's what needs to be looked at.
Unfortunately, I don't have the good news of sharing that there's a resolution to this debate any time soon.
:
I appreciate that. Thank you very much for your answers on that.
I'd like to turn to MediaSmarts and Dr. Brisson-Boivin.
I have your printout here, the recommendations for platforms. You mentioned that creating and implementing rules that help to set the values of a community can change how people behave, and that if platforms don't set clear rules and standards, the norms of the community will be set by users.
Throughout this study we have found a conflict. Social media platforms make a lot of their money through advertising revenue, which is driven by user-generated content, and the more exciting or extreme that content is, the more engagement it gets. Social media companies say they have clearly written terms and conditions, but that didn't stop people like Pat King from using Facebook to livestream on his way to the occupation of Ottawa.
I don't have a lot of time, so my question to you is this: What is the federal government's role here in ensuring accountability and transparency from these companies in setting those clear rules and standards?
:
Thank you to all the witnesses for spending time with us here today and for sharing your wisdom and knowledge as we seek to develop a report on ideologically motivated violent extremism.
I'm going to start with Dr. Brisson-Boivin.
Thank you for the very important work that your organization MediaSmarts is undertaking.
I'm reading from a publication by your organization called “From Access to Engagement”. There's a great working definition, which I'm going to read into the record. It says, “Digital media literacy is the ability to critically, effectively and responsibly access, use, understand and engage with media of all kinds.”
To narrow it down a bit, this is a study about the rise of violent extremism. Your work is particularly with young people, bringing MediaSmarts into their lives. Perhaps you could tie those two together: your research and the rise of violent extremism in our communities.
:
Thank you very much for that question.
I would say that the two are related insofar as we see digital media literacy as the crux, even though it is oftentimes thought of as an afterthought or a response. We really do see it as a preventative, harm-reduction approach for both young people and the trusted adults in their lives.
The report you're referencing focuses in particular on how Canada needs to take a stance on digital media literacy, one in which we view it as a lifelong learning process. We are talking about supports from pre-K through to seniors' facilities.
Many jurisdictions across the world are in the process of developing strategies for digital media literacy. These strategies include some of those key critical thinking skills I was mentioning around authenticating and verifying information, and recognizing online hate in terms of the cultures of prejudice and some of the ideologies and tactics of hate, including the use of misinformation and disinformation.
The strategy report you're referencing is one in which we advocate for the federal government to bring people together to support Canadians in their digital media literacy journey, which is a lifelong journey.
:
It's a lifelong journey, not just for young people. I think adults as well could benefit a lot from MediaSmarts education.
You said in your testimony that we can't regulate our way out of dangers on the Internet. The Internet is a great gift, but it's also full of dangers. I'll use the example I've used with my children. We expect our police to keep our streets safe, but at the same time we don't walk down dark alleys on our own because it's dangerous.
I am looking for your expert opinion on the allocation of responsibilities among schools, educators, parents, communities, government and the individuals themselves.
:
Thank you very much for the question.
I think this is one of those big questions. It's the big messy challenge we are facing today. How do we create that whole-of-society response that I was mentioning?
First and foremost, we need leadership at the federal level, with the federal government taking ownership of building a strategy that guides other government departments, for example in setting budgets that include digital media literacy as a key objective.
We also need to map the field of digital media literacy in Canada. That has yet to be done. There are hundreds of organizations doing this work on the ground, including MediaSmarts. We need to better understand who they are, what they are doing and how we can work together.
That also informs my comment about budgets. We need to create funding for this work that doesn't pit these civil society organizations against one another but allows us to work together in synergy to combat these various issues, which include the variety of online harms we've all been talking about as well as ideologically motivated violent extremism.
We see the education sector as a player in that, but not the only player. We see regulation as pivotal and important, but not the only solution. The same goes for platform responsibility and technological design: each is part of the solution here, but not the only approach.
Thank you very much.
:
To be honest, I think that a lot of the conversation we're having right now is a bit backwards. We hear a lot from academics. We hear a lot from law enforcement. However, the victims are not at the centre of this conversation.
Twenty years ago, I used to work for Mothers Against Drunk Driving, and I could see how centring victims' voices supports not only the social services sector but also the policing environment and the justice system. The intelligence the system as a whole gains by centring the voices of victims is pivotal.
Who do we see as victims? As I said, in terms of online hate, women, women of colour and particularly young people are being targeted the most, but the targets of hate evolve. Sometimes you see Muslims. Sometimes you see members of the Jewish community. The highest numbers have typically been Black and indigenous communities, and now you see anti-Asian racism rising to very high levels. I think it evolves over time.
:
With that, then, I think it would be great for all of us, as we try to think about how to improve the situation, to focus on victims.
How do we, as participants in the political conversation, depoliticize this a bit in order to really focus on and acknowledge that this is a very real problem? In your view, from the work that you do, what should we be doing to lean in and address this problem head-on?
Could you, in a nutshell, describe what we could do, where we could invest, what would help to solve this problem once and for all, so that we can get to the type of society that allows everyone to feel as though they can live their best self, their most authentic self, without worrying about these types of challenges?
:
I think we need to stop saying to some people, “We believe you,” and to others, “We don't.” We need to focus strictly on victims, hear their perspectives and actually believe them. That doesn't come from just saying “I believe you” or “I don't believe you”; it comes from creating a system to support victims.
We've seen really good examples across the world, particularly in Germany, which are now being exported to the OECD. Understandably, their context is significantly different. However, I think there are ways we can reframe our social services infrastructure to support victims of hate. That could have tangible impacts, not only in determining who a victim is, what a victim feels and what the supports need to be, but also in having the system as a whole acknowledge the wrong that is happening.
I see a number of components to this. Mr. Aziz pointed towards the sensitive and respectful treatment of hate crime victims. There's a real need to understand that. If you want people to go to the police to report hate crimes, there need to be specialists to address them. Victim support is a pivotal part of that response.
:
I think there's a real divide. To be frank, when we look at police forces responding to hate, those in urban areas have more exposure to racialized communities and more racialized police officers, so their ability to understand the impact of what's happening is typically better than that of forces in rural areas.
I can give you the example of a rural police agency where an individual was targeted and murdered. Within 24 hours, the police said it wasn't a hate crime. After listening to the community, they went back and said they would investigate it as a potential hate crime, but the harm was already done by what the community was told in haste, which was, “Look, yes, one person was targeted, and yes, one person was killed, but we don't think it was hateful.” That had real repercussions on that community.
In terms of rural versus urban responses, I think the urban ones are more developed. Part of the work we're doing with our task force is to create national standards around investigations and to help those rural agencies that don't have the resources or hate crime units. I will give you the example of London, Ontario, where four people were murdered. Today, they have a one-person hate crime unit, and I'm not even sure whether that job has been filled yet.
There's a huge divide across Canada in terms of rural and urban responses to hate. I think creating national standards and being able to support the small local police jurisdictions is an important intervention that hopefully our work will contribute towards.
:
I'm taking us all home. Okay. Thank you, Mr. Chair.
I will direct my last question to you, Dr. Brisson-Boivin. You talked about how it's really important for websites or apps or social media platforms to have those clear and easy-to-use tools for reporting unacceptable behaviour.
We have also seen examples of this. I remember that during the height of the COVID-19 pandemic, there was a lot of misinformation being spread about the nature of the pandemic, its source and whether it was even a serious pandemic. On Facebook in particular, whenever COVID-19 was mentioned, a little disclaimer was posted at the bottom of each post directing people to factual information.
Can you comment a little on that specific example? Do we need similar ones here, for example, when a post alludes to white supremacy or is anti-Semitic in nature? Do we need little educational tools that point people to a verified, trustworthy source on those things? Could you perhaps expand on that point, please?
:
Thank you very much for the question. I will try to keep my comment brief.
Yes, when we think about technological responses, platforms tend to respond in terms of affordances, which is exactly what you mentioned: design fixes such as labelling something a piece of misinformation. We could entertain the idea of labelling something hateful or against community standards. I think that's a good starting point. However, we need to ask ourselves this: What then?
What happens when we tell a person that this is a piece of misinformation or hateful content? The user then needs other skills and tools, for example, to find the original source of the misinformation and feel confident in verifying it. Similarly, with online hate, what then? What happens after we have flagged for them that this is a piece of hateful content?
From our perspective, again, those are helpful, but they aren't the end of the story. I think we need other tools and critical thinking skills that will allow people to verify and authenticate information and/or respond to online hate.
:
Thank you, Mr. MacGregor.
Thank you to the witnesses for a very interesting and important hour of reflection drawn from your experiences, both as individuals and as leaders of organizations immersed in this subject. On behalf of my colleagues on the committee, I want to thank you for your insight and your time.
Colleagues, I will remind you that on Thursday we will have witnesses and a panel. We will go in camera for the last bit of the meeting to give drafting instructions on this study to our analysts.
We look forward to that, and to the glorious weather between now and then. This meeting is now adjourned.