:
I'm going to call this meeting to order.
I want to welcome everyone to meeting number 95 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.
Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Tuesday, January 31, 2023, the committee is resuming its study of the use of social media platforms for data harvesting and unethical or illicit sharing of personal information with foreign entities.
[Translation]
Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders of the House. Members are participating in person, in the room, and virtually using the Zoom application.
[English]
I would like to remind all members not to put their earpieces near the microphones, because it could cause injury to our interpreters.
I would now like to welcome our witnesses for the first hour. As individuals, we have Brett Caraway, associate professor of media economics, University of Toronto; and Emily Laidlaw, associate professor and Canada research chair in cybersecurity law, University of Calgary.
Before we begin, the bells are ringing. I received unanimous consent from the committee to begin this meeting for the opening statements in advance of the votes. I appreciate the indulgence of committee members for allowing that to happen, so that we can listen to our witnesses.
Mr. Caraway, you have five minutes, followed by Ms. Laidlaw.
:
I would like to thank the members of the committee for the opportunity to speak today.
I'm an associate professor of media economics at the University of Toronto. I appear here today in a personal capacity, so the views expressed are mine and mine alone.
I want to speak about the risks posed by the underlying incentive structure of social media platforms. In doing so, I hope to convey some sense of the changes that have transpired in our media landscape and why there are, too often, divergences between public and private interests.
Digital platforms are major features of the information economy because of their capacity to reduce market frictions and lower transaction costs. To understand what I'm talking about, imagine how a social media app like Instagram might make a particular group of users, such as amateur photographers or travel enthusiasts, accessible to advertisers who want to target them with commercial messages.
In this scenario, there are actually three market actors. There are the users, the advertisers and the platform operator, and they each have their own set of incentives. Instagram has a financial incentive to maximize the number of users and their level of engagement. This makes the platform more attractive to advertisers. Advertisers want as much information as possible about the platform's users so they can minimize uncertainty, and users just want to enjoy the functionality of the platform with as little disruption as possible.
Multisided markets like these are nothing new. They've been a feature of mass communication systems since the earliest newsletters began selling advertisements in the 1600s. However, terms like “niche marketing” and “targeted advertising” only begin to scratch the surface of what actually transpires every time you enter a search query on Google, watch a video on TikTok, like someone's post on Facebook or retweet something. Information is gathered, auctions take place and commercial messages are delivered.
My concerns are not driven primarily by escalating geopolitical tensions or foreign threat actors, though foreign interference, misinformation, disinformation and radicalization are all genuine concerns. My concern is that these platforms, even when functioning exactly as intended, have adverse impacts on the public sphere. My concern is that the economics of platforms all but guarantees the propagation of disinformation, efforts to influence behaviour and the erosion of individual privacy.
My concern is born out of the realization that, in the economics of platforms, there is no effective upper limit to the exploitation of human attention. “Attention” might refer to the ability to concentrate on something, but from the perspective of society, it speaks to our collective ability to recognize problems and opportunities, to the horizon of our imagination and creativity, and to our ability to rise to the occasion to meet the world's most pressing problems. Attention is a renewable resource, but it isn't like any other resource. You can't hoard it like a precious metal. You can only direct it at something. In fact, that's the economic function of advertising: the allocation of scarce attention among its competing uses. How we choose to allocate our attention is important, both for individuals and for society. Our attention shapes who we are, who we might be and where we might go.
Economists often speak of “the tragedy of the commons”. The origin of the concept is problematic. As a metaphor, however, it can be quite useful. It alerts us to the propensity for overuse and exploitation of finite resources when we allow unfettered access to them. Digital platforms don't merely attempt to measure attention. They seek to modify it—to make it conform to commercial imperatives. Today's attention economy looks less like AMC's Mad Men and more like the speed-of-light trading that takes place in financial markets. The fundamental economics of this system is inconsistent with robust privacy protections.
The overriding economic imperative is to maximize data collection. It's not just the PRC or Russia. It's U.S. firms like Alphabet, Meta, Amazon and a host of data brokers you have never even heard of. As a consequence, our attention is exhausted. Its quality is diminished.
We have protections to safeguard other resources, such as water, air and habitat. We must manage this renewable resource in the same sustainable manner.
We are at an inflection point in Canada. It's my hope that we can take concrete steps to empower Canadians by creating a comprehensive regulatory framework for all digital platforms.
Thank you.
:
Thank you, Mr. Chair, for the invitation to address this committee.
I am honoured to speak to you from Calgary and the traditional territory of the people of the Treaty 7 region and the Métis Nation of Alberta.
I've had the opportunity to listen to some of the witnesses and the discussion leading up to my appearance. With my time, I would like to pull us back to look at the broader legal issues at play.
My key message is that this is not just about privacy. Privacy is one piece of the pie. For example, Discord does not use tools to detect child sexual abuse content, and it does not monitor or offer a tool for reporting livestreamed content. That's a recipe for disaster. This is a safety design problem, not only a privacy one.
This is about platform regulation. The health of our information ecosystem depends on privately owned platforms and the choices they make in the design of their products, corporate governance, culture and content moderation systems. In short, platforms have tremendous power.
Canada is currently a laggard in regulating platforms. Much of what this committee has discussed would be addressed by online harms legislation, which we do not yet have in Canada. Europe, the U.K. and Australia all have laws to address these issues. In some cases, they are on their second-generation or third-generation law. Canada has zero federal laws that apply generally to platform regulation. We can learn from the good and the bad of these other laws, but it is time to act now.
What do we need, and what are the areas we must be careful about?
First, platform regulation is a field like protecting the environment, and multiple areas of law must work in concert to protect our safety and rights. In particular, privacy law and online harms legislation are mutually reinforcing, so we need both. For example, algorithms that push harmful content do so by harvesting personally identifiable information, which is covered by privacy law. However, the algorithm can also draw from anonymized aggregate data, which falls outside of privacy law.
Online harms legislation can better target the choices that platforms make about their product designs and content moderation systems. Social media mines data to determine likes and interests, but it is what it does with this that online harms laws can address—such as Meta amplifying emotive and toxic content on Facebook by treating angry and love reactions as five times more valuable than likes. This fuelled the spread of misinformation and disinformation.
Second, platforms are part of the solution. They can be important collaborators and innovators in solving problems. There is, however, a friction when they are almost state-like in their role. Some have their own national security teams, essentially setting national security policy.
We also depend on platforms to go above and beyond the law in addressing hateful content, disinformation and violent extremism, all of which are not necessarily illegal. However, that is not a substitute for law to set industry standards. Standards are needed. The examples I gave were platforms with relatively sophisticated governance structures. There are many popular platforms that minimally govern the risks of their products.
Third, when we talk about the risks of harm, we should be clear that not all risks are the same. Child protection, hate and terrorist propaganda, disinformation, and violence all have different dynamics and should not be distilled to one legal rule, except for the basic idea of corporate due diligence.
Further, when we talk about the risks of harm, these include risks to fundamental rights: the rights to freedom of expression, to privacy and to equality. Any analysis of solutions in law or governance must be through the lens of protection and promotion of rights. This is particularly challenging when it comes to addressing misinformation and disinformation because, except in narrow circumstances, it is lawful to believe and share false information.
I will leave you with this: What are the basic components needed in online harms legislation?
Platforms should have a duty to manage the risks of harm of their products and a duty to protect fundamental rights. There should be transparency obligations matched with a way to vet transparency through audits and access to data by vetted researchers. There should be the creation of a regulator to investigate companies and educate the public, and there should be access to recourse for victims, because this is a collective harm but also an individual one.
Thank you, and I welcome questions.
:
Thank you very much, Chair, and thank you to our witnesses.
I appreciate, as well, that in your statements you provided a number of practical suggestions. As always, feel free to follow up if there's anything additional.
Mr. Caraway, I found some of your work very interesting, because the economics of social media is certainly a fascinating subject. It's that balance between a service that is perceived to be free versus the cost associated with something that is quite expensive to run, like a social media platform. When talking about regulations and managing that, how does that get balanced, the consumer's desire not to have to pay for a service versus the demands associated with running a massive web operation?
:
That's a great question.
The thing I always tell my students is that there's no such thing as a free lunch. Even though it appears that you're getting these services for free, if it's an advertising-supported model of some kind, you're paying for that when you purchase goods or services later on.
Most of the platforms we're talking about in the social media realm run as multisided marketplaces. It is quite difficult to keep everybody happy. As I alluded to, you're trying to keep the advertisers happy, but the advertisers want as much information as possible about the users. The users just want to be left alone to use the platform, but they also don't necessarily want to pay for it. That's never a popular thing, except maybe in some online streaming contexts when you're looking at services like Spotify or Netflix. However, even in the subscription-based models, a company like Netflix, which isn't necessarily doing the same sort of data harvesting that companies like Meta or Alphabet are doing, is also gathering data on how the users use the platform and deploying AI for recommendation systems, etc.
There's always an economic imperative for the advertisers to demand more data; therefore, the platform operators will harvest more data. I think that speaks to the need for the government to step in and say, “Well, here are the enumerated rights that we consider, such as privacy for citizens.” I don't just mean including it in a preamble, but actually putting those in legal tests, so—
:
Thank you very much for that. I would just emphasize that “to help parents out” is a great line there.
Ms. Laidlaw, we're talking about protecting young people. There's a range of harms on social media, from things like bullying all the way up to the most heinous types of exploitation, things associated with human trafficking, child exploitative material and that sort of thing.
In the context of social media and young people, what's the government's role in terms of developing regulations? What is the role of social media platforms in terms of trying to create frameworks that deal with the massive range of possible challenges that we face here?
There's about a minute and a bit left, and I know it's a big question. Hopefully, that's enough time for you to give some feedback to the committee.
It's an enormous question, but it's the money question.
I will keep it brief and state that it's crucial that government play a role, because thus far we've mostly relied on corporate self-governance and it hasn't worked. I mean, we're seeing all kinds of harms happen online.
What we do need is a regulator, because a regulator can be more agile in dealing with this. It's too cumbersome for some of these concerns to work through the courts. We need help to sort of set practices. Each platform is different, so the platforms really do need to come up with solutions for their spaces. It's just that there needs to be a method to hold them accountable for it. They need to demonstrate to some regulator the steps they're taking to protect children.
I think we need to divvy up the harms. If you're talking about specific child protection measures—looking at child sexual abuse images, intimate image abuse, trafficking—these are crimes, and there are the primary actors who, to the extent they can be found and prosecuted, should be the targets, but there is a separate responsibility and special duties that should exist for platforms.
When it comes to child—
First, this is a legal question and we do need the government to pass online harms legislation, because it needs to set the duties for companies. Basically, it needs to set minimum standards. The companies themselves, though, can start taking more seriously protecting children from harms.
I think one of the issues is that a lot of the transparency we're seeing now tends to be more of a marketing exercise. I think it's not as upfront about what some of these practices are. This is a key aspect, of course, that Dr. Caraway has talked about: the attention economy.
Specifically for children, I think we need to think about this as mind manipulation. Historically, there were interventions in areas of advertising to protect children from mind manipulation. You didn't have certain ads at certain times of day and with children's shows. This is the same thing that's happening on social media: pushing suicide content, eating disorder how-to content and so on.
It is critical that these platforms, from a design basis.... How are we designing this platform? How are algorithms pushing content? How are we nudging certain behaviours? They need to address that and account for that, so I think there should be special duties for children.
:
One thing that I think is really important...well, there are kind of two things.
I think we need to pay very careful attention to what constitutes informed consent. What is problematic to me is the way in which not just children but also everyday users are confronted with end-user licence agreements that require someone like Dr. Laidlaw to make sense of them because they are so convoluted. They require so much expertise and are subject to change almost on a daily basis. I think it's important to revisit what actually counts as consent here.
Then there is transparency and the way in which the data is used. This is something where I do think that you need to be able to have something like a Privacy Commissioner, who can send in a third party auditor to see what's actually happening behind the scenes.
Lastly, I would say that the penalties have to have bite to them. Yes, $25 million sounds like a lot, but maybe not to Meta or Alphabet, while 5% of global revenues sounds a little more serious. I like that sort of approach too.
I will say that freedom of expression is foundational. If you pass a law that just incentivizes a focus on harms, you incentivize companies to put in rudimentary solutions that, in fact, backfire. There's been a lot of evidence of backfiring, where what ends up being silenced is racialized and other marginalized voices.
For the requirement on companies, if we care about harms, we care about harms to rights, so it needs to be a dual focus that social media companies have. They need to focus on how they protect and promote freedom of expression and show that to a regulator. They need to demonstrate the steps they are taking that are contextual and bespoke to their services.
:
That's a great question. Thank you.
I think we have seen, just in the last year, the way AI has transformed our society, and we're just at the beginning of that journey. The problem with the AI act, as it stands right now, is that it's not sufficiently developed to be able to actually cope with the different problems we're going to face. It needs to be carved out so that we can actually sit down and have a proper discussion about the ways in which AI can be used that fundamentally will disrupt democracy, interfere with our ability to make decisions and create physical risks to us individually or collectively.
We need to break down those various risks and opportunities and draft legislation that reflects them. I think we do have a model, as well, in Europe that can help us. However, as it stands, the AI act must be amended.
:
Thank you, Mr. Green. I'm glad you were keeping track because we hit the wrong button on the phone and lost track. You could have had another two minutes if you wanted.
First of all, I want to thank our guests for appearing today. As I mentioned earlier, if there's any other information that you would like to submit to the committee in consideration of this report, please do so, to the clerk.
I want to apologize, first and foremost, for the disruptions today and for the disruption last week. I wasn't feeling well. I appreciate your patience in coming back to committee this week and sharing the information that you did.
Thank you, Ms. Laidlaw and Mr. Caraway.
We're going to suspend for a couple of minutes. We're going to come back with our new panel and provide opening statements. We have a bit of time for that, so let's suspend for a minute or two.
Thank you.
:
We're going to resume the meeting. I'd like to welcome everyone back.
I'd now like to welcome our witnesses for the second part of our meeting today.
As an individual, we have Mr. Matt Malone, assistant professor at Thompson Rivers University. Welcome, Mr. Malone. From The Dais, we have Sam Andrey, who is the managing director; and Joe Masoodi, who is a senior policy analyst.
Just to advise you, we have an indication that votes are coming. We have about 27 minutes, so we're going to start with opening statements. We'll suspend the meeting and then come back for questions and answers. I appreciate your patience with this.
Mr. Malone, you have up to five minutes to address the committee.
Go ahead, sir, please.
My name is Matt Malone, and I am an assistant professor at Thompson Rivers University faculty of law in Kamloops. Today I am attending the meeting in a personal capacity.
I am going to use my opening remarks to share my thoughts using a case study, which is specifically regarding the selective ban of TikTok on government-issued devices that was announced in February 2023. As the committee might recall, that selective ban was accompanied by a statement about concerns relating to privacy and security.
These stated concerns do not explain several things. First of all, they do not explain why the government waited five months to act on the underlying intelligence brief that warned about TikTok's practices. Second, they do not explain why the government continues to buy advertising on TikTok itself. Finally, they do not explain why the government has ignored that TikTok is not the only app that retains user data in foreign jurisdictions and potentially shares it with foreign regimes.
As the Treasury Board Secretariat confirmed to me a couple of days before this hearing, none of the following apps are banned from download and use on government-issued devices: the Russian-affiliated VKontakte social media app, the Russian-affiliated Yandex app, and the Russian-affiliated Mail.ru app, as well as other social media apps, like Facebook, Instagram, Tinder, Snapchat, Bumble, Grindr, Truth Social, Gab and Discord, which was implicated in the 2022-23 Pentagon leaks and which Dr. Laidlaw noted does not have child safety protection measures in place.
As I recommended in a recent article—and as I'll take this opportunity to recommend again now to the President of the Treasury Board—I believe that a better privacy and security baseline would see the government ban all social media apps on government-issued devices, unless there is a strong business justification otherwise. It's crazy to me that the apps I just listed are not banned on government-issued devices. I also believe that the government should stop buying ads on all social media services.
Even with such bans in place, it is worth noting that federal privacy law places no meaningful constraints on data transfers to jurisdictions like Russia and China. An internal government brief that I obtained through the Access to Information Act notes that Bill C-27, the proposed privacy legislation currently before Parliament, avoided putting into that bill any new or European-style restrictions on the transfer of personal information across borders, specifically out of deference to commercial interests. It's very telling that the privacy bill before Parliament is being stewarded by the industry portfolio in cabinet, not a portfolio responsible for human rights, public safety or national security.
Like many social media apps, TikTok does deserve opprobrium for its privacy violations, data harvesting and narrative control practices, and for granting access to data despite assurances otherwise. Like other social media apps, it is a vector for online harm visited on young people. Its business model is focused on privacy-invasive, targeted advertising that exacerbates the mental health crisis affecting young people. The app's safety features for children are all easy to bypass.
Through various access to information requests, I have seen several internal briefings where Canadian government actors repeatedly identified these problems. I'm happy to talk about these.
However, it's important to note that the real culprit here is Canadian law, because it does not stop these practices for TikTok or any other social media service. As TikTok lobbyists appearing before this committee repeatedly underscored, TikTok's handling of Canadians' user data is governed by Canadian law. That's the problem. Canada's privacy laws fail to respect the rights and interests of individuals and collectives in the digital age. Enforcement is basically non-existent. At the federal level, the Office of the Privacy Commissioner has become skilled at making fanfare announcements about its investigations, but it is very slow at investigating, as I learned in my own complaint about the ArriveCAN app, which was ultimately sustained.
Law enforcement has struggled to adapt to the new digital landscape as well. The RCMP's national cybercrime and fraud reporting system, which this committee recently heard about in glowing terms as part of this study, is actually two years behind schedule and still in beta testing. Its website says that it accepts only 25 complaints per day nationwide.
To give members another illustrative example, as I learned in a recent access to information request, the RCMP's cybercrime investigative team has only eight employees in all of Alberta. Here in British Columbia, where there was a recent tragic sextortion case involving a young person that was carried out over social media, there are only four employees on the cybercrime investigation team for the entire province. There are none in Saskatchewan, Manitoba or any of the maritime provinces.
With privacy and data protection legislation that deprives citizens of meaningful protection, government funding priorities deeply out of alignment with stated values and actual needs, and gaps in law and policy that the government shows no urgency to fill, the federal government's policies and practices pose significant challenges to addressing the real types of harms that we are seeing perpetuated these days on social media.
To wrap up, I want to thank the committee for its unexpected invitation.
I also want to give a particular shout-out of appreciation to the MP for Mississauga—Erin Mills for her leadership on this very important issue. I've been very impressed with her work on this file.
I look forward to answering, to the best of my abilities, any questions that the committee members might have.
Thanks.
Thanks for the invitation to share our perspectives on this important issue.
Good evening. I'm Sam Andrey, and I'm the managing director of The Dais, a policy think tank at Toronto Metropolitan University. We work to develop the people and ideas that we need to advance an inclusive and innovative economy, education system and democracy for Canada.
I have my colleague Joe Masoodi here with me. Together with our former colleague Yuan Stevens, we published a report three years ago called “Home Ice Advantage”, which examined the subject before the committee today, the transborder data security of social media platforms. While a lot has changed in the last three years, the core challenge of inadequate protection for Canadians remains.
:
In our annual survey of online harms, we found that Canadians have very low trust in social media platforms, both to keep their data secure and to act in the best interests of the public, ranking them well below other technology companies and a variety of other organizations. In fact, trust in TikTok, specifically, fell significantly last year, to last place. Only 7% of Canadians say that they have a high degree of trust in the platform, despite its rapid growth, with nearly 30% of Canadians using the platform.
TikTok has been the subject of particular scrutiny, given its corporate structure. As was pointed out earlier in the committee, prior to 2019, TikTok's privacy policy was transparent in stating that it shares people's information “with any member or affiliate of [its] group” in China. This line was later updated to remove that specific location reference, but the sharing provision remains. That same provision is also in the privacy policy of WeChat, which is used by 6% of Canadians. As our colleague Mr. Malone has pointed out, it is true of many others.
Canada's current privacy law does not prohibit companies from transferring personal data to third parties or outside of Canada in this way. We think that there is an opportunity before parliamentarians to respond to these risks through the proposed Bill C-27. However, as it currently stands, Bill C-27 would, in some ways, allow for even easier data sharing to take place between corporate actors by eroding what limited consent provisions do exist. Proposed section 18 of the CPPA creates new, large carve-outs for companies to share data without either knowledge or consent through the inclusion of language like “business activities” and “legitimate interest”.
We don't think that it should be the exclusive responsibility of Canadians to educate and protect themselves online. We would propose that there be more precise requirements added to the bill to ensure that equivalent levels of protection are provided for data when it's transferred outside of Canada. We would also suggest requirements that mirror the EU's GDPR: obtaining explicit informed consent from Canadians for the transfer of their personal data to jurisdictions that do not provide equivalent levels of protection, and providing information about both the specific countries involved and the specific data. While a lot of people have pointed out to this committee that there's consent fatigue, we, at least, think that transparency with respect to data transferred to countries outside of Canada is important.
We'll end by saying that Canadians overwhelmingly support such a change. A representative survey that we conducted found that 86% of Canadians support requirements to keep Canadians' data in Canada, with only 3% disagreeing.
Thanks for your time. We look forward to your questions.
I believe one of the problems that Canada faces is that we're not a large power and we're stuck between approaches to privacy and data protection among large powers that are diametrically opposed. Failing to act soon will lock us into one of those approaches. The Europeans have adopted a more restrictive approach. Ever since the drafting, passage and implementation of the GDPR, we've seen an array of restrictive measures, which are leading to things like data localization, stricter requirements around data transfers, and a robust equivalency test.
The United States is taking a diametrically opposed approach with its regulatory framework, in which it has not updated its privacy legislation, and there's no uniform privacy legislation in the United States. At the same time the U.S. is doing that, it's exporting, through trade treaties and governance bodies worldwide, a view of data governance and privacy that locks in what Canada can do.
Discussions about data transfers have to take into consideration the fact that the Canada-United States-Mexico agreement has a prohibition on restricting cross-border dataflows, and it has other restrictions that are relevant as well. The CPTPP has similar restrictions.
One of the problems with Canada's failure to act is that we're getting locked into one of these approaches. Unfortunately, we show no urgency around acting. The Privacy Act, which regulates government conduct, hasn't been updated in over 40 years. PIPEDA is badly in need of a meaningful update, not just tweaks. I personally don't believe that Bill C-27 is the appropriate way to do that.
I'll let the other panellists chime in.
:
When you look at the resources that are available, they're not meeting the demand. In 2018, when Public Safety went through a cybersecurity update and threw a lot of money at the RCMP to get more serious about online cybercrime, that was when the initial announcement was made about NC3, the national cybercrime coordination centre.
I wrote about this three years ago and said that we were already waiting a long time to get this rollout happening, but fast-forward three years, and that reporting system is two years behind schedule. If you visit the website right now, it will tell you that the system is still in beta testing and that it accepts only 25 cybercrime complaints a day for the entire country, which is really low. In a series of access to information requests regarding the number of resources that were devoted in terms of personnel, I discovered that there are several provinces that don't have any cybercrime investigators, which is a really shocking statistic. Here in B.C., the third-largest province in the country, we have only four full-time people on the cybercrime team.
I believe these tools need to be rolled out more rapidly. There should be more transparency around them, and legislation should be crafted around what we're seeing, because these tools allow us to understand what types of harms are being perpetuated. There are all kinds of analyses you can run based on the reporting data that comes in, and NC3's data shows that more than half the reports it receives are about ransomware. It's really interesting that Canadian legislation ignores ransomware, which is the biggest cybercrime threat we're facing.
One thing that's interesting to take into consideration when we talk about Bill C-27 is also Bill C-26, which would regulate things like ransomware for critical industries.
:
Thanks for the question.
I think it's really important to identify the TikTok representatives who spoke as lobbyists. They're registered lobbyists, and they do lobbyist work. I think it's important to talk about how a lot of the claims they made were very disingenuous. There are easy bypasses around a lot of the safety controls for children that they vaunted.
TikTok has been caught—to respond more directly to your question—engaging in all kinds of worrying conduct with respect to user data. There is public reporting that talks about TikTok accessing physical locations of journalists who are using the app, in order to track down their sources. That's in the public domain. There is public reporting about TikTok directing user data from the United States through China despite assurances otherwise, and there's a raft of other reporting.
There's internal government reporting from Canadian government actors like the Privy Council Office's intelligence assessment secretariat that identifies all kinds of other problems around the type of data and the persistent collection of data that occurs through the app. There are also materials that I've seen from the cyber-threat intelligence unit at the Canadian Forces intelligence command at the Department of National Defence that identify a series of concerning problems around censorship and so forth.
One of the really difficult issues here is that Canadian law is very permissive when it comes to data transfers. Even if you look at the proposed privacy legislation, Bill C-27, there's essentially nothing that would stop data transfers outside of Canada. Certainly, the privacy notice for TikTok states that by using TikTok you accept the terms and conditions, which are that the subsidiary TikTok can share that data with its corporate parent, ByteDance, and Canadian law lets that happen. Even the proposed Canadian law would let that happen. Proposed section 19 and proposed subsection 11(1) of Bill C-27 specifically permit this type of data transfer.
Canadian data transfer law is essentially premised on the idea that organizations can send data to other organizations if they deem the protections are sufficient or adequate, as they would be in Canada. This approach is really different from the European approach, which is jurisdictionally grounded—country to country. You can't transfer data outside of a country unless you're satisfied that the protections would be essentially equivalent. There's a really big difference in Canadian data transfer law compared to the European data transfer law. Once data gets out of Canada, there's really no telling what happens to it. They don't take basic safeguards like you do.
For this meeting, I asked the chief information officer of the House of Commons where the data was being localized and processed for Zoom, which I would be using, and I was told—and I was very happy and impressed by this—that the data would be processed in Canada. Your in camera meetings are even more secure, so good on you. That is not the case for the users of TikTok.
Joe, feel free to jump in here.
We do an annual survey of a representative group of Canadians to track, basically, Canadians' experiences online with harmful or illegal content. At a high level, we start with hate speech: 40% of Canadians say they see hate speech at least monthly, and about 10% of Canadians say they have personally been targeted by online hate speech. Those rates are about double or triple for a variety of marginalized and racialized communities. For 2SLGBTQ Canadians, for example, exposure is about double that rate, and three times as many, or 30%, say they have personally been targeted with hate speech. We track that over time.
We also track exposure to, and belief in, misinformation and disinformation. We have Canadians take a quiz, basically, with a series of true and false statements. We assess about 15% of Canadians as having a high degree of belief in misinformation. Those Canadians are more likely to say they consume their news on social media and are less trusting of mainstream media sources.
:
I think the word mirage accurately captures the current state of affairs.
[English]
I think informed consent, which is what all Canadian privacy laws are currently based on, doesn't serve the ends that we really need data protection and privacy law in this country to serve. The reality is that Bill C-27 has perpetuated this idea that the instrument will still work and still serve its ends, even with the legitimate business exceptions and the rules around implied consent. That really won't take us to a place where we have robust privacy and data protection law in this country.
I think you need to fundamentally shift the paradigm so that possessing, retaining, using or disclosing personal information becomes a liability, as opposed to a profitable way to run a business, which is what we have let these ad exchanges/social media companies do.
:
I think Canada has an opportunity to reclaim a bit of the traditional role that we like to see Canada have, which is serving as a middle power with allied states.
Several ideas have been floated around creating safe dataflow zones that map onto the security alliances that already exist, like NATO for example. We already have a commitment to mutual defence with our NATO allies. It would seem logical that we might feel comfortable sharing our data, our personal information, with these allies in a free cross-border dataflow zone. There are opportunities for Canada to certainly create a niche role when it comes to regulation and the creation of regulatory frameworks for cross-border dataflows.
I think the more appalling concern that I have is with the state of the current law. The fact is that much of Canadian law, and certainly the priority of legislators right now, is privacy law that applies only to the private sector. I think one of the real problems we've seen—and we saw this through the pandemic as well—is that we need robust privacy and data protection laws that also apply to government. I've been really upset at the fact that the artificial intelligence and data act does not apply to government actions, which is really concerning when you think about the deployment of AI-fueled and AI-driven technologies such as the ArriveCAN app.
I've also been really concerned about the fact that the priorities with Bill C-27 have not focused on government. To me, it's disturbing that this effort has been led by the industry portfolio and that Bill C-27 would create new regulatory instruments answerable to the Minister of Industry. It's really hard to say that we're approaching privacy from a human rights or law enforcement or national security perspective when the bodies we're creating are not truly independent. Not only are they not truly independent, but they're subservient to an industry portfolio whose mandate is to grow the economy.
:
I see a role for a digital regulator.
Currently, there's the idea of having an AI data regulator in Bill C-27, but it's an ISED department official. This, I think, is unacceptable, especially given that the minister will have the competing roles of championing the economic benefits of AI and regulating its risks. At a minimum, they should be appointed by the GIC. Ideally, it would be a separate parliamentary appointment.
I think you could task the same regulator with the online harms portfolio. It could be two, but that's a lot of digital regulators. That regulator would have the power to do audits and to perform ombudsman-type functions to support individuals. They would also have a transparency function.
I know there was a shout-out to start the round of testimonies. In the spirit of shout-outs, I want to give one to Christelle Tessono. I understand she is now in policy and research at The Dais. I know her work has been reflected in previous committees, as well as in some of the deep dives I have taken into this field. The technology is often far ahead of the scope of our subject matter expertise, so having subject matter experts like yourselves is incredibly important. I appreciate your being here today. I appreciate any contributions that she may have made, as well.
I want to begin with Mr. Malone.
In a September 2023 article, you mentioned you reviewed a federal government document entitled “Economic Security and Technology: TikTok Takeover”. Are you able to highlight the concerns raised in that report, and do you share those concerns?
:
Yes. I believe it's unethical to advertise with social media companies if we have real concerns about data harvesting and illicit foreign interference.
Last year, the government spent a record $141 million on advertising, which was more than twice what the government spent on the administration of the Access to Information Act, and that included almost $2 million on TikTok.
It's really difficult to attend a committee hearing where there are all these concerns about TikTok's practices but then see the government throw money at TikTok, which, in my view, is an implicit endorsement of those practices that we're seeking to critique. I do believe, just to clarify and to make this point clear, that this concern applies to all social media companies.
I was very pleased that the government stopped advertising on Meta over the summer, but that was in retaliation for Meta's conduct with the Online News Act, so that was a bit of a different measure. Even as a retaliatory one, though, I support it.
:
Thanks for the question, and thanks for the shout-out to Christelle, who is a wonderful member of our team, as well. I hope she is invited one day to this committee or to INDU on AI.
To answer your question, yes. Joe and I and another colleague, Yuan, wrote a paper that looked at the data storage practices in cross-border data transfers of social media platforms, which is called “Home Ice Advantage”, and we appreciated the shout-out to that report a few meetings ago, as well.
Joe, I don't know if you want to jump in here.
The question even came up a few rounds ago, about TikTok saying it stores its data in Singapore and the U.S. Yes, that's true, but that is an incomplete picture. There can still be remote access to those servers from any country in the world.
:
Thank you, Mr. Green and Mr. Andrey.
Anything our witnesses have been asked to provide in writing, can you do that by this Friday, please? That would help the analysts. We have to put a timeline on it, so Friday at five o'clock, if you don't mind.
We have Mr. Barrett, and then Mr. Kelloway, Mr. Bains, Monsieur Villemure and Mr. Green, who will bring us home. You each have two and a half minutes.
Go ahead, Mr. Barrett.
I certainly don't think it would be harmful, but I think the logistics around age verification are tricky. I probably don't have time to get into all that right now, but I think, in principle, yes.
I might even suggest going further, since we're talking about transborder data storage, which is to say that you could also ban the transfer of minors' data to countries that do not provide equivalent protection, if you wanted to.
Yes, I think that would be a fine thing to add.
:
I would not support that.
I understand the intent behind the proposal. I think it's well-intentioned, and I considered it seriously, but I think it would have adverse effects that may not be what is intended.
The reality is that we need a privacy law that protects children by default. It shouldn't be the responsibility of a parent. There are mixed harms and benefits with these technologies, and I don't believe that parents or older generations are the ones who are always the best at navigating these technologies. I've seen lots of surveys from within the Privy Council Office itself that show young people are the ones who use these technologies; 30% of teens get their news from TikTok, and a lot of older generations don't use them at all. One concern I would have is that I wouldn't feel comfortable entrusting that responsibility to all parents, but that's just my personal view.
What I would say, though, is that I do believe children should be explicitly referenced as a vulnerable population within Bill C-27. I think it's unacceptable that children and youth, in particular, have been omitted from Bill C-27. That was a deliberate choice by the Ministry of Industry. I have an internal brief that talks about the reasons behind that, and I'd be happy to share it with you.
Thank you to our witnesses for joining us today.
It's clear we have a generational risk or danger with online harms. Two tragedies have hit close to home for us in British Columbia. You may recall Amanda Todd's suicide in 2012, which was linked to online harassment, and the recent news of the 12-year-old boy whose suicide was linked to online sexual extortion.
Coincidentally, I have a 15-year-old daughter and a 12-year-old boy, so it's chilling to hear some of these stories that come out.
I'll go to Mr. Malone first.
You've noted in a previous interview that the collection of vast amounts of data creates “power imbalances” with users. Could you please expand on what you mean by that?
:
Thank you for the question.
There was a question on surveillance capitalism, which is a concept that was introduced by Shoshana Zuboff. It was introduced a couple of times during the hearings. The previous question was on what we can do to try to at least mitigate the impacts of surveillance capitalism, which was really initiated, if we look back, by Google. It was Google, through its machine-learning techniques, that facilitated that process. It was the inadequate regulatory and legal regimes that were in place that allowed that to happen.
If I were to provide some key recommendations or suggestions in terms of takeaways, I would say we need robust privacy laws. We've heard that over and over again, and I'd like to emphasize it once more. We need to have robust privacy measures in place, specifically with regard to cross-border data transfers. I think Bill C-27 could use a provision that specifically identifies cross-border data transfers as an area for robust protections.
:
My understanding is that folks are very concerned about back doors that China might potentially have into the type of data that's being collected by TikTok, and about the opportunities those back doors would provide for China to operate as a threat actor.
I would say that Canada needs to show rather than tell in this area. By that I mean that we need to hold our own government to account to make sure that it is transparent and accountable and that it protects human rights and democracy online.
You earlier had the Communications Security Establishment and the head of CCCS speak at this committee on this study. CSE will neither confirm nor deny that it's using spyware against foreign adversaries as part of its work. You're not going to get an answer about whether China is doing that when Canadian authorities won't provide a clear answer either.
I would also just say that I would really like to see whatever information CSE gave to PCO or TBS in the lead-up to banning TikTok, because it's really strange that this social media app was selectively banned. The timing is notable, because it was obviously 10 days after an explosive report came out.
:
Thank you, Mr. Villemure.
[English]
First of all, I want to thank all of our witnesses—Mr. Malone, Mr. Andrey, and Mr. Masoodi—for being here today.
If you have any written documents you'd like to provide to the clerk and the committee, please do so by Friday at 5 p.m. You've provided some pretty valuable information today, and I really appreciate it. I also appreciate your patience as we went through votes, and your patience for coming back this week.
I'm going to dismiss the witnesses, as I have a couple of things for the committee. It's just an update.
I am in receipt of an emergency meeting request. We are going to do that on Wednesday. The notice should be out shortly.
I will tell you, as well, that we have received confirmation from Google and Meta that they will appear before the committee as part of this study on Wednesday, December 13. We will have both those entities here next week.
There being no other business, I am going to adjourn the meeting.
Thank you all for being here. Thank you to our analysts, our clerk, and our technicians.
The meeting is adjourned.